{"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwrites the specified message to the console stdout.", "response": "def write_to_console(self, message: str):\n \"\"\"\n Writes the specified message to the console stdout without including\n it in the notebook display.\n \"\"\"\n if not self._step:\n raise ValueError(\n 'Cannot write to the console stdout on an uninitialized step'\n )\n interceptor = self._step.report.stdout_interceptor\n interceptor.write_source('{}'.format(message))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef render_to_console(self, message: str, **kwargs):\n rendered = templating.render(message, **kwargs)\n return self.write_to_console(rendered)", "response": "Renders the specified message to the console using a Jinja2 template."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a list of operations that can be performed on the current site.", "response": "def get_operations():\n \"\"\"\n This will break things if you upgrade Django to 1.8 having already applied this migration in 1.7.\n Since this is for a demo site it doesn't really matter (simply blow away the DB if you want to go to 1.8)\n\n Our demo site is unusual in that we want to run its tests (for integration testing) in multiple Django versions.\n Typical sites don't have to worry about that sort of thing.\n \"\"\"\n compatible = (1, 8) <= DJANGO_VERSION < (1, 10)\n if not compatible:\n return []\n\n return [\n migrations.AlterField(\n model_name='user',\n name='groups',\n field=models.ManyToManyField(related_query_name='user', related_name='user_set', to='auth.Group', blank=True, help_text='The groups this user belongs to. 
A user will get all permissions granted to each of their groups.', verbose_name='groups'),\n ),\n migrations.AlterField(\n model_name='user',\n name='last_login',\n field=models.DateTimeField(null=True, verbose_name='last login', blank=True),\n ),\n ]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a new component.", "response": "def create(project: 'projects.Project') -> COMPONENT:\n \"\"\"\n :return:\n \"\"\"\n\n try:\n from bokeh.resources import Resources as BokehResources\n bokeh_resources = BokehResources(mode='absolute')\n except Exception:\n bokeh_resources = None\n\n if bokeh_resources is None:\n environ.log(BOKEH_WARNING)\n return COMPONENT([], [])\n\n return definitions.merge_components(\n _assemble_component(\n project,\n 'bokeh-css',\n ['bokeh', 'bokeh.css'],\n bokeh_resources.css_files\n ),\n _assemble_component(\n project,\n 'bokeh-js',\n ['bokeh', 'bokeh.js'],\n bokeh_resources.js_files\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the path where the step report is cached between sessions and runs.", "response": "def results_cache_path(self) -> str:\n \"\"\"\n Location where step report is cached between sessions to\n prevent loss of display data between runs.\n \"\"\"\n if not self.project:\n return ''\n return os.path.join(\n self.project.results_path,\n '.cache',\n 'steps',\n '{}.json'.format(self.id)\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef clear(self) -> 'Report':\n self.body = []\n self.data = SharedCache()\n self.files = SharedCache()\n self._last_update_time = time.time()\n return self", "response": "Clear all user data stored in this instance and reset it to its original state."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nappending the specified HTML-formatted DOM string to the current report body.", "response": "def append_body(self, dom: 
str):\n \"\"\"\n Appends the specified HTML-formatted DOM string to the\n currently stored report body for the step.\n \"\"\"\n self.flush_stdout()\n self.body.append(dom)\n self._last_update_time = time.time()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef read_stdout(self):\n try:\n contents = self.stdout_interceptor.read_all()\n except Exception as err:\n contents = ''\n\n return render_texts.preformatted_text(contents)", "response": "Reads the current state of the print buffer and returns its contents as a preformatted string."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nflush the standard out redirect buffer and render the contents to the body as a preformatted text box.", "response": "def flush_stdout(self):\n \"\"\"\n Empties the standard out redirect buffer and renders the\n contents to the body as a preformatted text box.\n \"\"\"\n try:\n contents = self.stdout_interceptor.flush_all()\n except Exception:\n return\n\n if len(contents) > 0:\n self.body.append(render_texts.preformatted_text(contents))\n self._last_update_time = time.time()\n\n return contents"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef import_class(klass):\n '''Import the named class and return that class'''\n mod = __import__(klass.rpartition('.')[0])\n for segment in klass.split('.')[1:-1]:\n mod = getattr(mod, segment)\n return getattr(mod, klass.rpartition('.')[2])", "response": "Import the named class and return that class"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create(\n project: 'projects.Project',\n include_path: str\n) -> COMPONENT:\n \"\"\"\n Creates a COMPONENT instance for the project component specified by the\n include path\n\n :param project:\n The project in which the component resides\n :param include_path:\n 
The relative path within the project where the component resides\n :return:\n The created COMPONENT instance\n \"\"\"\n\n source_path = environ.paths.clean(\n os.path.join(project.source_directory, include_path)\n )\n if not os.path.exists(source_path):\n return COMPONENT([], [])\n\n if os.path.isdir(source_path):\n glob_path = os.path.join(source_path, '**', '*')\n include_paths = glob.iglob(glob_path, recursive=True)\n else:\n include_paths = [source_path]\n\n destination_path = os.path.join(project.output_directory, include_path)\n\n return COMPONENT(\n includes=filter(\n lambda web_include: web_include is not None,\n map(functools.partial(to_web_include, project), include_paths)\n ),\n files=[file_io.FILE_COPY_ENTRY(\n source=source_path,\n destination=destination_path\n )]\n )", "response": "Creates a COMPONENT instance for the project component specified by the include path."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_many(\n project: 'projects.Project',\n include_paths: typing.List[str]\n) -> COMPONENT:\n \"\"\"\n Creates a single COMPONENT instance for all of the specified project\n include paths\n\n :param project:\n Project where the components reside\n :param include_paths:\n A list of relative paths within the project directory to files or\n directories that should be included in the project\n :return:\n The combined COMPONENT instance for all of the included paths\n \"\"\"\n\n return definitions.merge_components(*map(\n functools.partial(create, project),\n include_paths\n ))", "response": "Creates a single COMPONENT instance for all of the specified project include paths."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_web_include(\n project: 'projects.Project',\n file_path: str\n) -> WEB_INCLUDE:\n \"\"\"\n Converts the given file_path into a WEB_INCLUDE instance that represents\n the deployed version of this file 
to be loaded into the results project\n page\n\n :param project:\n Project in which the file_path resides\n :param file_path:\n Absolute path to the source file for which the WEB_INCLUDE instance\n will be created\n :return:\n The WEB_INCLUDE instance that represents the given source file\n \"\"\"\n\n if not file_path.endswith('.css') and not file_path.endswith('.js'):\n return None\n\n slug = file_path[len(project.source_directory):]\n url = '/{}' \\\n .format(slug) \\\n .replace('\\\\', '/') \\\n .replace('//', '/')\n\n return WEB_INCLUDE(name=':project:{}'.format(url), src=url)", "response": "Converts the given file_path into a WEB_INCLUDE instance that represents the deployed version of this file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef attempt_file_write(\n path: str,\n contents: typing.Union[str, bytes],\n mode: str = 'w',\n offset: int = 0\n) -> typing.Union[None, Exception]:\n \"\"\"\n Attempts to write the specified contents to a file and returns None if\n successful, or the raised exception if writing failed.\n \n :param path:\n The path to the file that will be written\n :param contents:\n The contents of the file to write\n :param mode:\n The mode in which the file will be opened when written\n :param offset:\n The byte offset in the file where the contents should be written.\n If the value is zero, the offset information will be ignored and\n the operation will write entirely based on mode. Note that if you\n indicate an append write mode and an offset, the mode will be forced\n to write instead of append.\n :return:\n None if the write operation succeeded. 
Otherwise, the exception that\n was raised by the failed write action.\n \"\"\"\n try:\n data = contents.encode()\n except Exception:\n data = contents\n\n if offset > 0:\n with open(path, 'rb') as f:\n existing = f.read(offset)\n else:\n existing = None\n\n append = 'a' in mode\n write_mode = 'wb' if offset > 0 or not append else 'ab'\n try:\n with open(path, write_mode) as f:\n if existing is not None:\n f.write(existing)\n f.write(data)\n return None\n except Exception as error:\n return error", "response": "Attempts to write the specified contents to a file and returns None if successful, or the raised exception if the write operation failed."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwrites the specified contents to the specified file.", "response": "def write_file(\n path: str,\n contents,\n mode: str = 'w',\n retry_count: int = 3,\n offset: int = 0\n) -> typing.Tuple[bool, typing.Union[None, Exception]]:\n \"\"\"\n Writes the specified contents to a file, with retry attempts if the write\n operation fails. This is useful to prevent OS related write collisions with\n files that are regularly written to and read from quickly.\n \n :param path:\n The path to the file that will be written\n :param contents:\n The contents of the file to write\n :param mode:\n The mode in which the file will be opened when written\n :param retry_count:\n The number of attempts to make before giving up and returning a\n failed write.\n :param offset:\n The byte offset in the file where the contents should be written.\n If the value is zero, the offset information will be ignored and the\n operation will write entirely based on mode. Note that if you indicate\n an append write mode and an offset, the mode will be forced to write\n instead of append.\n :return:\n Returns two arguments. The first is a boolean specifying whether or\n not the write operation succeeded. 
The second is the error result, which\n is None if the write operation succeeded. Otherwise, it will be the \n exception that was raised by the last failed write attempt.\n \"\"\"\n error = None\n for i in range(retry_count):\n error = attempt_file_write(path, contents, mode, offset)\n if error is None:\n return True, None\n time.sleep(0.2)\n\n return False, error"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef attempt_json_write(\n path: str,\n contents: dict,\n mode: str = 'w'\n) -> typing.Union[None, Exception]:\n \"\"\"\n Attempts to write the specified JSON content to file.\n \n :param path: \n The path to the file where the JSON serialized content will be written.\n :param contents: \n The JSON data to write to the file\n :param mode: \n The mode used to open the file where the content will be written.\n :return: \n None if the write operation succeeded. Otherwise, the exception that\n was raised by the failed write operation.\n \"\"\"\n try:\n with open(path, mode) as f:\n json.dump(contents, f)\n return None\n except Exception as error:\n return error", "response": "Attempts to write the specified JSON content to the specified file."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwrites the contents of a dictionary to a JSON-serialized file.", "response": "def write_json_file(\n path: str,\n contents: dict,\n mode: str = 'w',\n retry_count: int = 3\n) -> typing.Tuple[bool, typing.Union[None, Exception]]:\n \"\"\"\n Writes the specified dictionary to a file as a JSON-serialized string, \n with retry attempts if the write operation fails. 
This is useful to prevent \n OS related write collisions with files that are regularly written to and \n read from quickly.\n \n :param path:\n The path to the file that will be written\n :param contents:\n The contents of the file to write\n :param mode:\n The mode in which the file will be opened when written\n :param retry_count:\n The number of attempts to make before giving up and returning a\n failed write.\n :return:\n Returns two arguments. The first is a boolean specifying whether or\n not the write operation succeeded. The second is the error result, which\n is None if the write operation succeeded. Otherwise, it will be the \n exception that was raised by the last failed write attempt.\n \"\"\"\n error = None\n for i in range(retry_count):\n error = attempt_json_write(path, contents, mode)\n if error is None:\n return True, None\n time.sleep(0.2)\n\n return False, error"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nformats the source string to strip newlines on both ends and dedents the entire string", "response": "def reformat(source: str) -> str:\n \"\"\"\n Formats the source string to strip newlines on both ends and dedents the\n entire string\n\n :param source:\n The string to reformat\n \"\"\"\n\n value = source if source else ''\n return dedent(value.strip('\\n')).strip()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_environment() -> Environment:\n\n env = JINJA_ENVIRONMENT\n\n loader = env.loader\n resource_path = environ.configs.make_path(\n 'resources', 'templates',\n override_key='template_path'\n )\n\n if not loader:\n env.filters['id'] = get_id\n env.filters['latex'] = get_latex\n\n if not loader or resource_path not in loader.searchpath:\n env.loader = FileSystemLoader(resource_path)\n\n return env", "response": "Returns the jinja2 templating environment updated with the most recent\n cauldron environment 
configurations\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef render(template: typing.Union[str, Template], **kwargs):\n\n if not hasattr(template, 'render'):\n template = get_environment().from_string(textwrap.dedent(template))\n\n return template.render(\n cauldron_template_uid=make_template_uid(),\n **kwargs\n )", "response": "Renders a template string using Jinja2 and Cauldron templating."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef render_file(path: str, **kwargs):\n\n with open(path, 'r') as f:\n contents = f.read()\n\n return get_environment().from_string(contents).render(\n cauldron_template_uid=make_template_uid(),\n **kwargs\n )", "response": "Renders a file at the specified absolute path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrendering the template with the given filename from within Cauldron's base template folder.", "response": "def render_template(template_name: str, **kwargs):\n \"\"\"\n Renders the template file with the given filename from within Cauldron's\n template environment folder.\n\n :param template_name:\n The filename of the template to render. 
Any path elements should be\n relative to Cauldron's root template folder.\n :param kwargs:\n Any elements passed to Jinja2 for rendering the template\n :return:\n The rendered string\n \"\"\"\n\n return get_environment().get_template(template_name).render(\n cauldron_template_uid=make_template_uid(),\n **kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncleans the specified path by expanding shorthand elements, redirecting to the real path for symbolic links, and removing any relative components.", "response": "def clean(path: str) -> str:\n \"\"\"\n Cleans the specified path by expanding shorthand elements, redirecting to\n the real path for symbolic links, and removing any relative components to\n return a complete, absolute path to the specified location.\n\n :param path:\n The source path to be cleaned\n \"\"\"\n\n if not path or path == '.':\n path = os.curdir\n\n if path.startswith('~'):\n path = os.path.expanduser(path)\n\n return os.path.realpath(os.path.abspath(path))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn an absolute path to a file or folder within the cauldron package containing the relative path elements specified by the args.", "response": "def package(*args: str) -> str:\n \"\"\"\n Creates an absolute path to a file or folder within the cauldron package\n using the relative path elements specified by the args.\n\n :param args:\n Zero or more relative path elements that describe a file or folder\n within the reporting\n \"\"\"\n\n return clean(os.path.join(os.path.dirname(__file__), '..', *args))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrequest confirmation of the specified question and return that result", "response": "def confirm(question: str, default: bool = True) -> bool:\n \"\"\"\n Requests confirmation of the specified question and returns that result\n\n :param question:\n The 
question to print to the console for the confirmation\n :param default:\n The default value if the user hits enter without entering a value\n \"\"\"\n\n result = input('{question} [{yes}/{no}]:'.format(\n question=question,\n yes='(Y)' if default else 'Y',\n no='N' if default else '(N)'\n ))\n\n if not result:\n return default\n\n if result[0].lower() in ['y', 't', '1']:\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the last opened project path if such a path exists.", "response": "def fetch_last(response: Response) -> typing.Union[str, None]:\n \"\"\" Returns the last opened project path if such a path exists \"\"\"\n\n recent_paths = environ.configs.fetch('recent_paths', [])\n\n if len(recent_paths) < 1:\n response.fail(\n code='NO_RECENT_PROJECTS',\n message='No projects have been opened recently'\n ).console()\n return None\n\n return recent_paths[0]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef before_save(file_or_dir):\n dir_name = os.path.dirname(os.path.abspath(file_or_dir))\n if not os.path.exists(dir_name):\n os.makedirs(dir_name)", "response": "Create the dedicated path if it does not exist"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef categorical2pysbrl_data(\n x,\n y,\n data_filename,\n label_filename,\n method='eclat',\n supp=0.05,\n zmin=1,\n zmax=3):\n \"\"\"\n Run a frequent item mining algorithm to extract candidate rules.\n :param x: 2D np.ndarray, categorical data of shape [n_instances, n_features]\n :param y: 1D np.ndarray, label array of shape [n_instances, ]\n :param data_filename: the path to store data file\n :param label_filename: the path to store label file\n :param method: a str denoting the method to use, default to 'eclat'\n :param supp: the minimum support of a rule (item)\n :param zmin:\n :param zmax:\n :return:\n \"\"\"\n\n # Safely cast data 
types\n x = x.astype(np.int, casting='safe')\n y = y.astype(np.int, casting='safe')\n\n labels = np.unique(y)\n labels = np.arange(np.max(labels) + 1)\n # assert max(labels) + 1 == len(labels)\n\n mine = get_fim_method(method)\n\n x_by_labels = []\n for label in labels:\n x_by_labels.append(x[y == label])\n transactions_by_labels = [categorical2transactions(_x) for _x in x_by_labels]\n itemsets = transactions2freqitems(transactions_by_labels, mine, supp=supp, zmin=zmin, zmax=zmax)\n rules = [itemset2feature_categories(itemset) for itemset in itemsets]\n data_by_rule = []\n for features, categories in rules:\n satisfied = rule_satisfied(x, features, categories)\n data_by_rule.append(satisfied)\n\n # Write data file\n # data_filename = get_path(_datasets_path, data_name+'.data')\n before_save(data_filename)\n with open(data_filename, 'w') as f:\n f.write('n_items: %d\\n' % len(itemsets))\n f.write('n_samples: %d\\n' % len(y))\n for itemset, data in zip(itemsets, data_by_rule):\n rule_str = '{' + ','.join(itemset) + '}' + ' '\n f.write(rule_str)\n bit_s = ' '.join(['1' if bit else '0' for bit in data])\n f.write(bit_s)\n f.write('\\n')\n\n # Write label file\n # label_filename = get_path(_datasets_path, data_name+'.label')\n before_save(label_filename)\n with open(label_filename, 'w') as f:\n f.write('n_items: %d\\n' % len(labels))\n f.write('n_samples: %d\\n' % len(y))\n for label in labels:\n f.write('{label=%d} ' % label)\n bits = np.equal(y, label)\n bit_s = ' '.join(['1' if bit else '0' for bit in bits])\n f.write(bit_s)\n f.write('\\n')\n return rules", "response": "This function extracts the data from a categorical data set and stores it in a file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef categorical2transactions(x):\n # type: (np.ndarray) -> List\n \"\"\"\n Convert a 2D int array into a transaction list:\n [\n ['x0=1', 'x1=0', ...],\n ...\n ]\n :param x:\n :return:\n \"\"\"\n assert 
len(x.shape) == 2\n\n transactions = []\n for entry in x:\n transactions.append(['x%d=%d' % (i, val) for i, val in enumerate(entry)])\n\n return transactions", "response": "Convert a 2D int array into a list of categorical transactions."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a logical array representing whether entries in x satisfied the rules denoted by features and categories.", "response": "def rule_satisfied(x, features, categories):\n \"\"\"\n return a logical array representing whether entries in x satisfied the rules denoted by features and categories\n :param x: a categorical 2D array\n :param features: a list of feature indices\n :param categories: a list of categories\n :return:\n \"\"\"\n satisfied = []\n if features[0] == -1 and len(features) == 1:\n # Default rule, all satisfied\n return np.ones(x.shape[0], dtype=bool)\n for idx, cat in zip(features, categories):\n # Every single condition needs to be satisfied.\n satisfied.append(x[:, idx] == cat)\n return functools.reduce(np.logical_and, satisfied)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef of_project(project: 'projects.Project') -> dict:\n source_directory = project.source_directory\n\n libraries_status = [\n {} if d.startswith(source_directory) else of_directory(d)\n for d in project.library_directories\n ]\n\n return dict(\n project=of_directory(source_directory),\n libraries=libraries_status\n )", "response": "Returns the status information for every file within the project source directory and its shared library folders."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef of_file(path: str, root_directory: str = None) -> dict:\n\n slug = (\n path\n if root_directory is None\n else path[len(root_directory):].lstrip(os.sep)\n )\n\n if not os.path.exists(path) or os.path.isdir(path):\n return dict(\n size=-1,\n 
modified=-1,\n path=slug\n )\n\n size = os.path.getsize(path)\n modified = max(os.path.getmtime(path), os.path.getctime(path))\n\n return dict(\n modified=modified,\n path=slug,\n size=size\n )", "response": "Returns a dictionary containing the status information for the specified file at the specified path."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a dictionary containing status information for all files within the specified directory and its descendants.", "response": "def of_directory(directory: str, root_directory: str = None) -> dict:\n \"\"\"\n Returns a dictionary containing status entries recursively for all files\n within the specified directory and its descendant directories.\n\n :param directory:\n The directory in which to retrieve status information\n :param root_directory:\n Directory relative to which all file status paths are related. If this\n argument is None then the directory argument itself will be used.\n :return:\n A dictionary containing status information for each file within the\n specified directory and its descendants. 
The keys of the dictionary\n are the relative path names for each of the files.\n \"\"\"\n\n glob_path = os.path.join(directory, '**/*')\n root = root_directory if root_directory else directory\n results = filter(\n lambda result: (result['modified'] != -1),\n [of_file(path, root) for path in glob.iglob(glob_path, recursive=True)]\n )\n return dict([(result['path'], result) for result in results])"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run_local(\n context: cli.CommandContext,\n project: projects.Project,\n project_steps: typing.List[projects.ProjectStep],\n force: bool,\n continue_after: bool,\n single_step: bool,\n limit: int,\n print_status: bool,\n skip_library_reload: bool = False\n) -> environ.Response:\n \"\"\"\n Execute the run command locally within this cauldron environment\n\n :param context:\n :param project:\n :param project_steps:\n :param force:\n :param continue_after:\n :param single_step:\n :param limit:\n :param print_status:\n :param skip_library_reload:\n Whether or not to skip reloading all project libraries prior to\n execution of the project. By default this is False in which case\n the project libraries are reloaded prior to execution.\n :return:\n \"\"\"\n skip_reload = (\n skip_library_reload\n or environ.modes.has(environ.modes.TESTING)\n )\n if not skip_reload:\n runner.reload_libraries()\n\n environ.log_header('RUNNING', 5)\n\n steps_run = []\n\n if single_step:\n # If the user specifies the single step flag, only run one step. 
Force\n # the step to be run if they specified it explicitly\n\n ps = project_steps[0] if len(project_steps) > 0 else None\n force = force or (single_step and bool(ps is not None))\n steps_run = runner.section(\n response=context.response,\n project=project,\n starting=ps,\n limit=1,\n force=force\n )\n\n elif continue_after or len(project_steps) == 0:\n # If the continue after flag is set, start with the specified step\n # and run the rest of the project after that. Or, if no steps were\n # specified, run the entire project with the specified flags.\n\n ps = project_steps[0] if len(project_steps) > 0 else None\n steps_run = runner.complete(\n context.response,\n project,\n ps,\n force=force,\n limit=limit\n )\n else:\n for ps in project_steps:\n steps_run += runner.section(\n response=context.response,\n project=project,\n starting=ps,\n limit=max(1, limit),\n force=force or (limit < 1 and len(project_steps) < 2),\n skips=steps_run + []\n )\n\n project.write()\n environ.log_blanks()\n\n step_changes = []\n for ps in steps_run:\n step_changes.append(dict(\n name=ps.definition.name,\n action='updated',\n step=writing.step_writer.serialize(ps)._asdict()\n ))\n\n context.response.update(step_changes=step_changes)\n\n if print_status or context.response.failed:\n context.response.update(project=project.kernel_serialize())\n\n return context.response", "response": "Execute the run command locally within this cauldron environment."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef loadalldatas():\n dependency_order = ['common', 'profiles', 'blog', 'democomments']\n for app in dependency_order:\n project.recursive_load(os.path.join(paths.project_paths.manage_root, app))", "response": "Loads all demo fixtures."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef project_status():\n r = Response()\n\n try:\n project = cauldron.project.get_internal_project()\n if 
project:\n r.update(project=project.status())\n else:\n r.update(project=None)\n except Exception as err:\n r.fail(\n code='PROJECT_STATUS_ERROR',\n message='Unable to check status of currently opened project',\n error=err\n )\n\n r.update(server=server_runner.get_server_data())\n return flask.jsonify(r.serialize())", "response": "Returns a JSON representation of the current status of the currently opened project."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nclean the current step.", "response": "def clean_step(step_name: str):\n \"\"\"...\"\"\"\n r = Response()\n project = cauldron.project.get_internal_project()\n\n if not project:\n return flask.jsonify(r.fail(\n code='PROJECT_FETCH_ERROR',\n message='No project is currently open'\n ).response.serialize())\n\n step = project.get_step(step_name)\n\n if not step:\n return flask.jsonify(r.fail(\n code='STEP_FETCH_ERROR',\n message='No such step \"{}\" found'.format(step_name)\n ).response.serialize())\n\n step.mark_dirty(False, force=True)\n\n return flask.jsonify(r.update(\n project=project.kernel_serialize()\n ).response.serialize())"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nappend a timestamp integer to the given slug ensuring that the result is less than the specified max_length.", "response": "def _insert_timestamp(self, slug, max_length=255):\n \"\"\"Appends a timestamp integer to the given slug, yet ensuring the\n result is less than the specified max_length.\n \"\"\"\n timestamp = str(int(time.time()))\n ts_len = len(timestamp) + 1\n while len(slug) + ts_len > max_length:\n slug = '-'.join(slug.split('-')[:-1])\n slug = '-'.join([slug, timestamp])\n return slug"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrun jobs popping one after another", "response": "def run(self):\n '''Run jobs, popping one after another'''\n # Register our signal handlers\n self.signals()\n\n with self.listener():\n for job in 
self.jobs():\n # If there was no job to be had, we should sleep a little bit\n if not job:\n self.jid = None\n self.title('Sleeping for %fs' % self.interval)\n time.sleep(self.interval)\n else:\n self.jid = job.jid\n self.title('Working on %s (%s)' % (job.jid, job.klass_name))\n with Worker.sandbox(self.sandbox):\n job.sandbox = self.sandbox\n job.process()\n if self.shutdown:\n break"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninitializing the cauldron library by confirming that it can be imported by importlib.", "response": "def initialize():\n \"\"\"\n Initializes the cauldron library by confirming that it can be imported\n by the importlib library. If the attempt to import it fails, the system\n path will be modified and the attempt retried. If both attempts fail, an\n import error will be raised.\n \"\"\"\n\n cauldron_module = get_cauldron_module()\n if cauldron_module is not None:\n return cauldron_module\n\n sys.path.append(ROOT_DIRECTORY)\n cauldron_module = get_cauldron_module()\n\n if cauldron_module is not None:\n return cauldron_module\n\n raise ImportError(' '.join((\n 'Unable to import cauldron.',\n 'The package was not installed in a known location.'\n )))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef run(arguments: typing.List[str] = None):\n initialize()\n\n from cauldron.invoke import parser\n from cauldron.invoke import invoker\n\n args = parser.parse(arguments)\n exit_code = invoker.run(args.get('command'), args)\n sys.exit(exit_code)", "response": "Executes the cauldron command"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn either the linked or not-linked profile name.", "response": "def author_display(author, *args):\n \"\"\"Returns either the linked or not-linked profile name.\"\"\"\n\n # Call get_absolute_url or a function returning none if not defined\n url = getattr(author, 
'get_absolute_url', lambda: None)()\n # get_short_name or unicode representation\n short_name = getattr(author, 'get_short_name', lambda: six.text_type(author))()\n if url:\n return mark_safe('<a href=\"{}\">{}</a>'.format(url, short_name))\n else:\n return short_name"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nechoes the project steps.", "response": "def echo_steps(response: Response, project: Project):\n \"\"\"\n :param response:\n :param project:\n :return:\n \"\"\"\n\n if len(project.steps) < 1:\n response.update(\n steps=[]\n ).notify(\n kind='SUCCESS',\n code='ECHO_STEPS',\n message='No steps in project'\n ).console(\n \"\"\"\n [NONE]: This project does not have any steps yet. To add a new\n step use the command:\n\n steps add [YOUR_STEP_NAME]\n\n and a new step will be created in this project.\n \"\"\",\n whitespace=1\n )\n return\n\n response.update(\n steps=[ps.kernel_serialize() for ps in project.steps]\n ).notify(\n kind='SUCCESS',\n code='ECHO_STEPS'\n ).console_header(\n 'Project Steps',\n level=3\n ).console(\n '\\n'.join(['* {}'.format(ps.definition.name) for ps in project.steps]),\n indent_by=2,\n whitespace_bottom=1\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nexplodes a filename into a dictionary containing the name of the file without extension and the extension of the file.", "response": "def explode_filename(name: str, scheme: str) -> dict:\n \"\"\"\n Removes any path components from the input filename and returns a\n dictionary containing the name of the file without extension and the\n extension (if an extension exists)\n\n :param name:\n :param scheme:\n :return:\n \"\"\"\n\n if not scheme:\n return split_filename(name)\n\n replacements = {\n 'name': '(?P<name>.*)',\n 'ext': '(?P<extension>.+)$',\n 'index': '(?P<index>[0-9]{{{length}}})'\n }\n\n scheme_pattern = '^'\n empty_scheme_pattern = ''\n\n offset = 0\n while offset < len(scheme):\n char = scheme[offset]\n next_char = 
scheme[offset + 1] if (offset + 1) < len(scheme) else None\n\n if char in r'.()^$?*+\\[]|':\n addition = '\\\\{}'.format(char)\n scheme_pattern += addition\n empty_scheme_pattern += addition\n offset += 1\n continue\n\n if char != '{':\n scheme_pattern += char\n empty_scheme_pattern += char\n offset += 1\n continue\n\n if next_char != '{':\n scheme_pattern += char\n empty_scheme_pattern += char\n offset += 1\n continue\n\n end_index = scheme.find('}}', offset)\n\n contents = scheme[offset:end_index].strip('{}').lower()\n\n if contents in replacements:\n scheme_pattern += replacements[contents]\n elif contents == ('#' * len(contents)):\n addition = replacements['index'].format(length=len(contents))\n scheme_pattern += addition\n empty_scheme_pattern += addition\n else:\n addition = '{{{}}}'.format(contents)\n scheme_pattern += addition\n empty_scheme_pattern += addition\n\n offset = end_index + 2\n\n match = re.compile(scheme_pattern).match(name)\n\n if not match:\n parts = split_filename(name)\n comparison = re.compile(empty_scheme_pattern.rstrip('-_: .\\\\'))\n match = comparison.match(parts['name'])\n if not match:\n return parts\n\n parts = match.groupdict()\n index = parts.get('index')\n index = int(index) if index else None\n\n return dict(\n index=index - 1,\n name=parts.get('name', ''),\n extension=parts.get('extension', 'py')\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a new COMPONENT instance for the given project.", "response": "def create(project: 'projects.Project') -> COMPONENT:\n \"\"\"\n :param project:\n :return:\n \"\"\"\n\n source_path = get_source_path()\n if not source_path:\n return COMPONENT([], [])\n\n output_slug = 'components/plotly/plotly.min.js'\n output_path = os.path.join(project.output_directory, output_slug)\n\n return COMPONENT(\n includes=[WEB_INCLUDE(\n name='plotly',\n src='/{}'.format(output_slug)\n )],\n files=[file_io.FILE_COPY_ENTRY(\n source=source_path,\n 
destination=output_path\n )]\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns whether or not the current working directory is a Cauldron project directory which contains a cauldron. json file.", "response": "def in_project_directory() -> bool:\n \"\"\"\n Returns whether or not the current working directory is a Cauldron project\n directory, which contains a cauldron.json file.\n \"\"\"\n current_directory = os.path.realpath(os.curdir)\n project_path = os.path.join(current_directory, 'cauldron.json')\n return os.path.exists(project_path) and os.path.isfile(project_path)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading a shared data from a JSON file stored on disk", "response": "def load_shared_data(path: typing.Union[str, None]) -> dict:\n \"\"\"Load shared data from a JSON file stored on disk\"\"\"\n\n if path is None:\n return dict()\n\n if not os.path.exists(path):\n raise FileNotFoundError('No such shared data file \"{}\"'.format(path))\n\n try:\n with open(path, 'r') as fp:\n data = json.load(fp)\n except Exception:\n raise IOError('Unable to read shared data file \"{}\"'.format(path))\n\n if not isinstance(data, dict):\n raise ValueError('Shared data must load into a dictionary object')\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef run_version(args: dict) -> int:\n version = environ.package_settings.get('version', 'unknown')\n print('VERSION: {}'.format(version))\n return 0", "response": "Displays the current version"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrun a batch operation for the given arguments", "response": "def run_batch(args: dict) -> int:\n \"\"\"Runs a batch operation for the given arguments\"\"\"\n\n batcher.run_project(\n project_directory=args.get('project_directory'),\n log_path=args.get('logging_path'),\n 
output_directory=args.get('output_directory'),\n shared_data=load_shared_data(args.get('shared_data_path'))\n )\n return 0"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run_shell(args: dict) -> int:\n\n if args.get('project_directory'):\n return run_batch(args)\n\n shell = CauldronShell()\n\n if in_project_directory():\n shell.cmdqueue.append('open \"{}\"'.format(os.path.realpath(os.curdir)))\n\n shell.cmdloop()\n return 0", "response": "Run the shell sub command"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrunning the specified command action and returns the return code for exit.", "response": "def run(action: str, args: dict) -> int:\n \"\"\"\n Runs the specified command action and returns the return status code\n for exit.\n \n :param action:\n The action to run\n :param args:\n The arguments parsed for the specified action\n \"\"\"\n if args.get('show_version_info'):\n return run_version(args)\n\n actions = dict(\n shell=run_shell,\n kernel=run_kernel,\n serve=run_kernel,\n version=run_version\n )\n\n if action not in actions:\n print('[ERROR]: Unrecognized sub command \"{}\"'.format(action))\n parser = args['parser'] # type: ArgumentParser\n parser.print_help()\n return 1\n\n return actions.get(action)(args)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_running_step_changes(write: bool = False) -> list:\n project = cd.project.get_internal_project()\n\n running_steps = list(filter(\n lambda step: step.is_running,\n project.steps\n ))\n\n def get_changes(step):\n step_data = writing.step_writer.serialize(step)\n\n if write:\n writing.save(project, step_data.file_writes)\n\n return dict(\n name=step.definition.name,\n action='updated',\n step=step_data._asdict(),\n written=write\n )\n\n return [get_changes(step) for step in running_steps]", "response": "Returns a list of all running steps."} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse(\n args: typing.List[str] = None,\n arg_parser: ArgumentParser = None\n) -> dict:\n \"\"\"Parses the arguments for the cauldron server\"\"\"\n\n parser = arg_parser or create_parser()\n return vars(parser.parse_args(args))", "response": "Parses the arguments for the cauldron server"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_parser(arg_parser: ArgumentParser = None) -> ArgumentParser:\n\n parser = arg_parser or ArgumentParser()\n parser.description = 'Cauldron kernel server'\n\n parser.add_argument(\n '-p', '--port',\n dest='port',\n type=int,\n default=5010\n )\n\n parser.add_argument(\n '-d', '--debug',\n dest='debug',\n default=False,\n action='store_true'\n )\n\n parser.add_argument(\n '-v', '--version',\n dest='version',\n default=False,\n action='store_true'\n )\n\n parser.add_argument(\n '-c', '--code',\n dest='authentication_code',\n type=str,\n default=''\n )\n\n parser.add_argument(\n '-n', '--name',\n dest='host',\n type=str,\n default=None\n )\n\n return parser", "response": "Creates an argument parser populated with the arg formats for the server\n command."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmatches a Tree structure with the given query rules.", "response": "def match_rules(tree, rules, fun=None, multi=False):\n \"\"\"Matches a Tree structure with the given query rules.\n\n Query rules are represented as a dictionary of template to action.\n Action is either a function, or a dictionary of subtemplate parameter to rules::\n\n rules = { 'template' : { 'key': rules } }\n | { 'template' : {} }\n\n Args:\n tree (Tree): Parsed tree structure\n rules (dict): A dictionary of query rules\n fun (function): Function to call with context (set to None if you want to return context)\n multi (Bool): If True, returns all matched contexts, else returns first 
matched context\n Returns:\n Contexts from matched rules\n \"\"\"\n if multi:\n context = match_rules_context_multi(tree, rules)\n else:\n context = match_rules_context(tree, rules)\n if not context:\n return None\n\n if fun:\n args = fun.__code__.co_varnames\n if multi:\n res = []\n for c in context:\n action_context = {}\n for arg in args:\n if arg in c:\n action_context[arg] = c[arg]\n res.append(fun(**action_context))\n return res\n else:\n action_context = {}\n for arg in args:\n if arg in context:\n action_context[arg] = context[arg]\n return fun(**action_context)\n else:\n return context"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncrosses product of all contexts in a list of contexts", "response": "def cross_context(contextss):\n \"\"\"\n Cross product of all contexts\n [[a], [b], [c]] -> [[a] x [b] x [c]]\n\n \"\"\"\n if not contextss:\n return []\n\n product = [{}]\n\n for contexts in contextss:\n tmp_product = []\n for c in contexts:\n for ce in product:\n c_copy = c.copy()\n c_copy.update(ce)\n tmp_product.append(c_copy)\n product = tmp_product\n return product"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef match_rules_context_multi(tree, rules, parent_context={}):\n all_contexts = []\n for template, match_rules in rules.items():\n context = parent_context.copy()\n if match_template(tree, template, context):\n child_contextss = []\n if not match_rules:\n all_contexts += [context]\n else:\n for key, child_rules in match_rules.items():\n child_contextss.append(match_rules_context_multi(context[key], child_rules, context))\n all_contexts += cross_context(child_contextss) \n return all_contexts", "response": "Recursively matches a Tree structure with rules and returns a list of all contexts that are matched by the rules."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking if match string matches Tree structure", 
"response": "def match_template(tree, template, args=None):\n \"\"\"Check if match string matches Tree structure\n \n Args:\n tree (Tree): Parsed Tree structure of a sentence\n template (str): String template to match. Example: \"( S ( NP ) )\"\n Returns:\n bool: If they match or not\n \"\"\"\n tokens = get_tokens(template.split())\n cur_args = {}\n if match_tokens(tree, tokens, cur_args):\n if args is not None:\n for k, v in cur_args.items():\n args[k] = v\n logger.debug('MATCHED: {0}'.format(template))\n return True\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef match_tokens(tree, tokens, args):\n arg_type_to_func = {\n 'r': get_raw_lower,\n 'R': get_raw,\n 'o': get_object_lower,\n 'O': get_object,\n }\n\n if len(tokens) == 0:\n return True\n\n if not isinstance(tree, Tree):\n return False\n\n root_token = tokens[0]\n\n # Equality\n if root_token.find('=') >= 0:\n eq_tokens = root_token.split('=')[1].lower().split('|')\n root_token = root_token.split('=')[0]\n word = get_raw_lower(tree)\n if word not in eq_tokens:\n return False\n\n # Get arg\n if root_token.find(':') >= 0:\n arg_tokens = root_token.split(':')[1].split('-')\n if len(arg_tokens) == 1:\n arg_name = arg_tokens[0]\n args[arg_name] = tree\n else:\n arg_name = arg_tokens[0]\n arg_type = arg_tokens[1]\n args[arg_name] = arg_type_to_func[arg_type](tree)\n root_token = root_token.split(':')[0]\n\n # Does not match wild card and label does not match\n if root_token != '.' 
and tree.label() not in root_token.split('/'):\n return False\n\n # Check end symbol\n if tokens[-1] == '$':\n if len(tree) != len(tokens[:-1]) - 1:\n return False\n else:\n tokens = tokens[:-1]\n\n # Check # of tokens\n if len(tree) < len(tokens) - 1:\n return False\n\n for i in range(len(tokens) - 1):\n if not match_tokens(tree[i], tokens[i + 1], args):\n return False\n return True", "response": "Check if a stack of tokens matches the Tree structure"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_object(tree):\n if isinstance(tree, Tree):\n if tree.label() == 'DT' or tree.label() == 'POS':\n return ''\n words = []\n for child in tree:\n words.append(get_object(child))\n return ' '.join([_f for _f in words if _f])\n else:\n return tree", "response": "Get the object in the tree."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_raw(tree):\n if isinstance(tree, Tree):\n words = []\n for child in tree:\n words.append(get_raw(child))\n return ' '.join(words)\n else:\n return tree", "response": "Get the exact words in lowercase in the tree object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef initialize(spark_home_path: str = None):\n\n if not spark_home_path:\n spark_home_path = os.environ.get('SPARK_HOME')\n\n spark_home_path = environ.paths.clean(spark_home_path)\n\n if not os.path.exists(spark_home_path):\n raise FileNotFoundError(\n errno.ENOENT,\n os.strerror(errno.ENOENT),\n spark_home_path\n )\n\n spark_python_path = os.path.join(spark_home_path, 'python')\n\n if not os.path.exists(spark_python_path):\n raise FileNotFoundError(\n errno.ENOENT,\n os.strerror(errno.ENOENT),\n spark_python_path\n )\n\n spark_pylib_path = os.path.join(spark_python_path, 'lib')\n\n if not os.path.exists(spark_pylib_path):\n raise FileNotFoundError(\n errno.ENOENT,\n 
os.strerror(errno.ENOENT),\n spark_python_path\n )\n\n lib_glob = os.path.join(spark_pylib_path, '*.zip')\n lib_sources = [path for path in glob.iglob(lib_glob)]\n\n unload()\n\n for p in lib_sources:\n if p not in sys.path:\n sys.path.append(p)\n\n spark_environment.update(dict(\n spark_home_path=spark_home_path,\n spark_python_path=spark_python_path,\n spark_pylib_path=spark_pylib_path,\n libs=lib_sources\n ))", "response": "Initializes the modules and the PySpark libraries for the current notebook."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef is_remote_project(self) -> bool:\n project_path = environ.paths.clean(self.source_directory)\n return project_path.find('cd-remote-project') != -1", "response": "Whether or not this project is remote"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef library_directories(self) -> typing.List[str]:\n def listify(value):\n return [value] if isinstance(value, str) else list(value)\n\n # If this is a project running remotely remove external library\n # folders as the remote shared libraries folder will contain all\n # of the necessary dependencies\n is_local_project = not self.is_remote_project\n folders = [\n f\n for f in listify(self.settings.fetch('library_folders', ['libs']))\n if is_local_project or not f.startswith('..')\n ]\n\n # Include the remote shared library folder as well\n folders.append('../__cauldron_shared_libs')\n\n # Include the project directory as well\n folders.append(self.source_directory)\n\n return [\n environ.paths.clean(os.path.join(self.source_directory, folder))\n for folder in folders\n ]", "response": "Returns a list of directories to all of the library locations that are needed by the project."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the URL that will open this project s results file in the browser.", "response": "def url(self) -> str:\n \"\"\"\n Returns 
the URL that will open this project results file in the browser\n :return:\n \"\"\"\n\n return 'file://{path}?id={id}'.format(\n path=os.path.join(self.results_path, 'project.html'),\n id=self.uuid\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the directory where the project results files will be written", "response": "def output_directory(self) -> str:\n \"\"\"\n Returns the directory where the project results files will be written\n \"\"\"\n\n return os.path.join(self.results_path, 'reports', self.uuid, 'latest')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef refresh(self, force: bool = False) -> bool:\n\n lm = self.last_modified\n is_newer = lm is not None and lm >= os.path.getmtime(self.source_path)\n if not force and is_newer:\n return False\n\n old_definition = self.settings.fetch(None)\n new_definition = definitions.load_project_definition(\n self.source_directory\n )\n\n if not force and old_definition == new_definition:\n return False\n\n self.settings.clear().put(**new_definition)\n\n old_step_definitions = old_definition.get('steps', [])\n new_step_definitions = new_definition.get('steps', [])\n\n if not force and old_step_definitions == new_step_definitions:\n return True\n\n old_steps = self.steps\n self.steps = []\n\n for step_data in new_step_definitions:\n matches = [s for s in old_step_definitions if s == step_data]\n if len(matches) > 0:\n index = old_step_definitions.index(matches[0])\n self.steps.append(old_steps[index])\n else:\n self.add_step(step_data)\n\n self.last_modified = time.time()\n return True", "response": "Refreshes the project s cauldron. 
json definition file for the project and populates the project with the loaded data."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef preformatted_text(source: str) -> str:\n environ.abort_thread()\n\n if not source:\n return ''\n\n source = render_utils.html_escape(source)\n\n return '<pre>{text}</pre>'.format(\n text=str(textwrap.dedent(source))\n )", "response": "Renders a preformatted text box."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef markdown(\n source: str = None,\n source_path: str = None,\n preserve_lines: bool = False,\n font_size: float = None,\n **kwargs\n) -> dict:\n \"\"\"\n Renders a markdown file with support for Jinja2 templating. Any keyword\n arguments will be passed to Jinja2 for templating prior to rendering the\n markdown to HTML for display within the notebook.\n\n :param source:\n A string of markdown text that should be rendered to HTML for \n notebook display.\n :param source_path:\n The path to a markdown file that should be rendered to HTML for\n notebook display.\n :param preserve_lines:\n If True, all line breaks will be treated as hard breaks. Use this\n for pre-formatted markdown text where newlines should be retained\n during rendering.\n :param font_size:\n Specifies a relative font size adjustment. The default value is 1.0,\n which preserves the inherited font size values.
Set it to a value\n below 1.0 for smaller font-size rendering and greater than 1.0 for\n larger font size rendering.\n :return:\n The HTML results of rendering the specified markdown string or file.\n \"\"\"\n\n environ.abort_thread()\n\n library_includes = []\n\n rendered = textwrap.dedent(\n templating.render_file(source_path, **kwargs)\n if source_path else\n templating.render(source or '', **kwargs)\n )\n\n if md is None:\n raise ImportError('Unable to import the markdown package')\n\n offset = 0\n while offset < len(rendered):\n bound_chars = '$$'\n start_index = rendered.find(bound_chars, offset)\n\n if start_index < 0:\n break\n\n inline = rendered[start_index + 2] != '$'\n bound_chars = '$$' if inline else '$$$'\n end_index = rendered.find(\n bound_chars,\n start_index + len(bound_chars)\n )\n\n if end_index < 0:\n break\n end_index += len(bound_chars)\n\n chunk = rendered[start_index: end_index] \\\n .strip('$') \\\n .strip() \\\n .replace('@', '\\\\')\n\n if inline:\n chunk = chunk.replace('\\\\', '\\\\\\\\')\n\n chunk = latex(chunk, inline)\n rendered = '{pre}{gap}{latex}{gap}{post}'.format(\n pre=rendered[:start_index],\n latex=chunk,\n post=rendered[end_index:],\n gap='' if inline else '\\n\\n'\n )\n\n if 'katex' not in library_includes:\n library_includes.append('katex')\n\n offset = end_index\n\n extensions = [\n 'markdown.extensions.extra',\n 'markdown.extensions.admonition',\n 'markdown.extensions.sane_lists',\n 'markdown.extensions.nl2br' if preserve_lines else None\n ]\n\n body = templating.render_template(\n 'markdown-block.html',\n text=md.markdown(rendered, extensions=[\n e for e in extensions if e is not None\n ]),\n font_size=font_size\n )\n\n pattern = re.compile('src=\"(?P<url>[^\"]+)\"')\n body = pattern.sub(r'data-src=\"\\g<url>\"', body)\n return dict(\n body=body,\n library_includes=library_includes,\n rendered=rendered\n )", "response": "Renders a markdown file and returns the HTML for the notebook display."} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script to\npopulate the list of extra files that are not python data files.", "response": "def populate_extra_files():\n \"\"\"\n Creates a list of non-python data files to include in package distribution\n \"\"\"\n\n out = ['cauldron/settings.json']\n\n for entry in glob.iglob('cauldron/resources/examples/**/*', recursive=True):\n out.append(entry)\n\n for entry in glob.iglob('cauldron/resources/templates/**/*', recursive=True):\n out.append(entry)\n\n for entry in glob.iglob('cauldron/resources/web/**/*', recursive=True):\n out.append(entry)\n\n return out"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_rename_entry(\n step: 'projects.ProjectStep',\n insertion_index: int = None,\n stash_path: str = None\n) -> typing.Union[None, STEP_RENAME]:\n \"\"\"\n Creates a STEP_RENAME for the given ProjectStep instance\n\n :param step:\n The ProjectStep instance for which the STEP_RENAME will be created\n :param insertion_index:\n An optional index where a step will be inserted as part of this\n renaming process. 
Allows files to be renamed prior to the insertion\n of the step to prevent conflicts.\n :param stash_path:\n :return:\n \"\"\"\n\n project = step.project\n name = step.definition.name\n name_parts = naming.explode_filename(name, project.naming_scheme)\n index = project.index_of_step(name)\n name_index = index\n\n if insertion_index is not None and insertion_index <= index:\n # Adjusts indexing when renaming is for the purpose of\n # inserting a new step\n name_index += 1\n\n name_parts['index'] = name_index\n new_name = naming.assemble_filename(\n scheme=project.naming_scheme,\n **name_parts\n )\n\n if name == new_name:\n return None\n\n if not stash_path:\n fd, stash_path = tempfile.mkstemp(\n prefix='{}-{}--{}--'.format(step.reference_id, name, new_name)\n )\n os.close(fd)\n\n return STEP_RENAME(\n id=step.reference_id,\n index=index,\n old_name=name,\n new_name=new_name,\n old_path=step.source_path,\n stash_path=stash_path,\n new_path=os.path.join(step.project.source_directory, new_name)\n )", "response": "Creates a STEP_RENAME entry for the given project step."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef synchronize_step_names(\n project: 'projects.Project',\n insert_index: int = None\n) -> Response:\n \"\"\"\n :param project:\n :param insert_index:\n \"\"\"\n\n response = Response()\n response.returned = dict()\n\n if not project.naming_scheme:\n return response\n\n create_mapper_func = functools.partial(\n create_rename_entry,\n insertion_index=insert_index\n )\n\n step_renames = list([create_mapper_func(s) for s in project.steps])\n step_renames = list(filter(lambda sr: (sr is not None), step_renames))\n\n if not step_renames:\n return response\n\n try:\n backup_path = create_backup(project)\n except Exception as err:\n return response.fail(\n code='RENAME_BACKUP_ERROR',\n message='Unable to create backup name',\n error=err\n ).response\n\n try:\n step_renames = list([stash_source(sr) for sr in 
step_renames])\n step_renames = list([unstash_source(sr) for sr in step_renames])\n except Exception as err:\n return response.fail(\n code='RENAME_FILE_ERROR',\n message='Unable to rename files',\n error=err\n ).response\n\n response.returned = update_steps(project, step_renames)\n project.save()\n\n try:\n os.remove(backup_path)\n except PermissionError:\n pass\n\n return response", "response": "Synchronizes the names of the steps in the project."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef assemble_url(\n endpoint: str,\n remote_connection: 'environ.RemoteConnection' = None\n) -> str:\n \"\"\"\n Assembles a fully-resolved remote connection URL from the given endpoint\n and remote_connection structure. If the remote_connection is omitted, the\n global remote_connection object stored in the environ module will be\n used in its place.\n\n :param endpoint:\n The endpoint for the API call\n :param remote_connection:\n The remote connection definition data structure\n :return:\n The fully-resolved URL for the given endpoint\n \"\"\"\n url_root = (\n remote_connection.url\n if remote_connection else\n environ.remote_connection.url\n )\n url_root = url_root if url_root else 'localhost:5010'\n\n parts = [\n 'http://' if not url_root.startswith('http') else '',\n url_root.rstrip('/'),\n '/',\n endpoint.lstrip('/')\n ]\n\n return ''.join(parts)", "response": "Assembles a fully - resolved remote connection URL from the given endpoint and remote_connection structure."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses the given HTTP response into a Cauldron response object.", "response": "def parse_http_response(http_response: HttpResponse) -> 'environ.Response':\n \"\"\"\n Returns a Cauldron response object parsed from the serialized JSON data\n specified in the http_response argument. 
If the response doesn't contain\n valid Cauldron response data, an error Cauldron response object is\n returned instead.\n\n :param http_response:\n The response object from an http request that contains a JSON\n serialized Cauldron response object as its body\n :return:\n The Cauldron response object for the given http response\n \"\"\"\n try:\n response = environ.Response.deserialize(http_response.json())\n except Exception as error:\n response = environ.Response().fail(\n code='INVALID_REMOTE_RESPONSE',\n error=error,\n message='Invalid HTTP response from remote connection'\n ).console(\n whitespace=1\n ).response\n\n response.http_response = http_response\n return response"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsending a request to the remote kernel and processes the response.", "response": "def send_request(\n endpoint: str,\n data: dict = None,\n remote_connection: 'environ.RemoteConnection' = None,\n method: str = None,\n timeout: int = 10,\n max_retries: int = 10,\n **kwargs\n) -> 'environ.Response':\n \"\"\"\n Sends a request to the remote kernel specified by the RemoteConnection\n object and processes the result. If the request fails or times out it\n will be retried until the max retries is reached. After that a failed\n response will be returned instead.\n\n :param endpoint:\n Remote endpoint where the request will be directed.\n :param data:\n An optional JSON-serializable dictionary containing the request\n body data.\n :param remote_connection:\n Defines the connection to the remote server where the request will\n be sent.\n :param method:\n The HTTP method type for the request, e.g. 
GET, POST.\n :param timeout:\n Number of seconds before the request aborts when trying to either\n connect to the target endpoint or receive data from the server.\n :param max_retries:\n Number of retry attempts to make before giving up if a non-HTTP\n error is encountered during communication.\n \"\"\"\n if max_retries < 0:\n return environ.Response().fail(\n code='COMMUNICATION_ERROR',\n error=None,\n message='Unable to communicate with the remote kernel.'\n ).console(whitespace=1).response\n\n url = assemble_url(endpoint, remote_connection)\n\n retriable_errors = (\n requests.ConnectionError,\n requests.HTTPError,\n requests.Timeout\n )\n default_method = 'POST' if data is not None else 'GET'\n try:\n http_response = requests.request(\n method=method or default_method,\n url=url,\n json=data,\n timeout=timeout,\n **kwargs\n )\n except retriable_errors:\n return send_request(\n endpoint=endpoint,\n data=data,\n remote_connection=remote_connection,\n method=method,\n timeout=timeout,\n max_retries=max_retries - 1,\n **kwargs\n )\n\n return parse_http_response(http_response)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve the contents of the file specified by the route if it exists.", "response": "def view(route: str):\n \"\"\"\n Retrieves the contents of the file specified by the view route if it\n exists.\n \"\"\"\n project = cauldron.project.get_internal_project()\n results_path = project.results_path if project else None\n if not project or not results_path:\n return '', 204\n\n path = os.path.join(results_path, route)\n if not os.path.exists(path):\n return '', 204\n\n return flask.send_file(\n path,\n mimetype=mimetypes.guess_type(path)[0],\n cache_timeout=-1\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsaving the current state of the project to disk.", "response": "def save(\n project: 'projects.Project',\n write_list: typing.List[tuple] = None\n) -> typing.List[tuple]:\n 
\"\"\"\n Computes the file write list for the current state of the project if no\n write_list was specified in the arguments, and then writes each entry in\n that list to disk.\n\n :param project:\n The project to be saved\n :param write_list:\n The file writes list for the project if one already exists, or None\n if a new writes list should be computed\n :return:\n The file write list that was used to save the project to disk\n \"\"\"\n\n try:\n writes = (\n to_write_list(project)\n if write_list is None\n else write_list.copy()\n )\n except Exception as err:\n raise\n\n environ.systems.remove(project.output_directory)\n os.makedirs(project.output_directory)\n\n file_io.deploy(writes)\n return writes"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a list of file writes that should be executed for the project assets to the results folder.", "response": "def list_asset_writes(\n project: 'projects.Project'\n) -> typing.List[file_io.FILE_COPY_ENTRY]:\n \"\"\"\n Returns a list containing the file/directory writes that should be executed\n to deploy the project assets to the results folder. 
If the project has no\n assets an empty list is returned.\n\n :param project:\n The project for which the assets should be copied\n :return:\n A list containing the file copy entries for deploying project assets\n \"\"\"\n\n def make_asset_copy(directory: str) -> file_io.FILE_COPY_ENTRY:\n output_directory = os.path.join(\n project.output_directory,\n directory[len(project.source_directory):].lstrip(os.sep)\n )\n\n return file_io.FILE_COPY_ENTRY(\n source=directory,\n destination=output_directory\n )\n\n copies = [\n make_asset_copy(asset_directory)\n for asset_directory in project.asset_directories\n ]\n\n return list(filter(\n lambda fc: os.path.exists(fc.source),\n copies\n ))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd the path to the Python system path if not already added.", "response": "def add_library_path(path: str) -> bool:\n \"\"\"\n Adds the path to the Python system path if not already added and the path\n exists.\n\n :param path:\n The path to add to the system paths\n :return:\n Whether or not the path was added. 
Only returns False if the path was\n not added because it doesn't exist\n \"\"\"\n\n if not os.path.exists(path):\n return False\n\n if path not in sys.path:\n sys.path.append(path)\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving the path from the system path if it is found in the system paths.", "response": "def remove_library_path(path: str) -> bool:\n \"\"\"\n Removes the path from the Python system path if it is found in the system\n paths.\n\n :param path:\n The path to remove from the system paths\n :return:\n Whether or not the path was removed.\n \"\"\"\n\n if path in sys.path:\n sys.path.remove(path)\n return True\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef close():\n os.chdir(os.path.expanduser('~'))\n project = cauldron.project.internal_project\n if not project:\n return False\n\n [remove_library_path(path) for path in project.library_directories]\n remove_library_path(project.source_directory)\n\n cauldron.project.unload()\n return True", "response": "Close the current project and remove all libraries and source directories."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef reload_libraries(library_directories: list = None):\n directories = library_directories or []\n project = cauldron.project.get_internal_project()\n if project:\n directories += project.library_directories\n\n if not directories:\n return\n\n def reload_module(path: str, library_directory: str):\n path = os.path.dirname(path) if path.endswith('__init__.py') else path\n start_index = len(library_directory) + 1\n end_index = -3 if path.endswith('.py') else None\n package_path = path[start_index:end_index]\n\n module = sys.modules.get(package_path.replace(os.sep, '.'))\n return importlib.reload(module) if module is not None else None\n\n def reload_library(directory: str) -> list:\n if not 
add_library_path(directory):\n # If the library wasn't added because it doesn't exist, remove it\n # in case the directory has recently been deleted and then return\n # an empty result\n remove_library_path(directory)\n return []\n\n glob_path = os.path.join(directory, '**', '*.py')\n return [\n reload_module(path, directory)\n for path in glob.glob(glob_path, recursive=True)\n ]\n\n return [\n reloaded_module\n for directory in directories\n for reloaded_module in reload_library(directory)\n if reload_module is not None\n ]", "response": "Reloads the libraries stored in the project s local and shared library directories."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrunning the entire project writes the results files and returns the URL to the report file", "response": "def complete(\n response: Response,\n project: typing.Union[Project, None],\n starting: ProjectStep = None,\n force: bool = False,\n limit: int = -1\n) -> list:\n \"\"\"\n Runs the entire project, writes the results files, and returns the URL to\n the report file\n\n :param response:\n :param project:\n :param starting:\n :param force:\n :param limit:\n :return:\n Local URL to the report path\n \"\"\"\n\n if project is None:\n project = cauldron.project.get_internal_project()\n\n starting_index = 0\n if starting:\n starting_index = project.steps.index(starting)\n count = 0\n\n steps_run = []\n\n for ps in project.steps:\n if 0 < limit <= count:\n break\n\n if ps.index < starting_index:\n continue\n\n if not force and not ps.is_dirty():\n if limit < 1:\n environ.log(\n '[{}]: Nothing to update'.format(ps.definition.name)\n )\n continue\n\n count += 1\n\n steps_run.append(ps)\n success = source.run_step(response, project, ps, force=True)\n if not success or project.stop_condition.halt:\n return steps_run\n\n return steps_run"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the number of seconds that has 
elapsed since the step started running or the amount of time that has elapsed during the last execution of the step.", "response": "def elapsed_time(self) -> float:\n \"\"\"\n The number of seconds that has elapsed since the step started running\n if the step is still running. Or, if the step has already finished\n running, the amount of time that elapsed during the last execution of\n the step.\n \"\"\"\n current_time = datetime.utcnow()\n start = self.start_time or current_time\n end = self.end_time or current_time\n return (end - start).total_seconds()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_elapsed_timestamp(self) -> str:\n t = self.elapsed_time\n minutes = int(t / 60)\n seconds = int(t - (60 * minutes))\n millis = int(100 * (t - int(t)))\n return '{:>02d}:{:>02d}.{:<02d}'.format(minutes, seconds, millis)", "response": "Returns a human - readable version of the elapsed time for the last execution of the project step."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the current value of the DOM for the step.", "response": "def get_dom(self) -> str:\n \"\"\" Retrieves the current value of the DOM for the step \"\"\"\n\n if self.is_running:\n return self.dumps()\n\n if self.dom is not None:\n return self.dom\n\n dom = self.dumps()\n self.dom = dom\n return dom"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nwrite the step information to an HTML - formatted string.", "response": "def dumps(self) -> str:\n \"\"\"Writes the step information to an HTML-formatted string\"\"\"\n code_file_path = os.path.join(\n self.project.source_directory,\n self.filename\n )\n code = dict(\n filename=self.filename,\n path=code_file_path,\n code=render.code_file(code_file_path)\n )\n\n if not self.is_running:\n # If no longer running, make sure to flush the stdout buffer so\n # any print statements at the end of the step 
get included in\n # the body\n self.report.flush_stdout()\n\n # Create a copy of the body for dumping\n body = self.report.body[:]\n\n if self.is_running:\n # If still running add a temporary copy of anything not flushed\n # from the stdout buffer to the copy of the body for display. Do\n # not flush the buffer though until the step is done running or\n # it gets flushed by another display call.\n body.append(self.report.read_stdout())\n\n body = ''.join(body)\n\n has_body = len(body) > 0 and (\n body.find(' 1:\n tags['tag:' + split_tag[0]] = split_tag[1]\n else:\n tags['tag:' + split_tag[0]] = ''\n return tags"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a JSON response.", "response": "def json_response(self, status=200, data={}, headers={}):\n '''\n To set flask to inject specific headers on response request,\n such as CORS_ORIGIN headers\n '''\n mimetype = 'application/json'\n\n header_dict = {}\n for k, v in headers.items():\n header_dict[k] = v\n\n return Response(\n json.dumps(data),\n status=status,\n mimetype=mimetype,\n headers=header_dict)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef template_response(self, template_name, headers={}, **values):\n response = make_response(\n self.render_template(template_name, **values))\n\n for field, value in headers.items():\n response.headers.set(field, value)\n\n return response", "response": "Constructs a response with the template_name and headers"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef describe_key_pairs():\n region_keys = {}\n for r in boto3.client('ec2', 'us-west-2').describe_regions()['Regions']:\n region = r['RegionName']\n client = boto3.client('ec2', region_name=region)\n try:\n pairs = client.describe_key_pairs()\n if pairs:\n region_keys[region] = pairs\n except Exception as e:\n app.logger.info(e)\n return 
region_keys", "response": "Returns all key pairs for all regions in the current region"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef init_app(module, BASE_DIR, **kwargs):\n global app\n\n def init_config():\n \"\"\"\n Load settings module and attach values to the application\n config dictionary\n \"\"\"\n if 'FLASK_PHILO_SETTINGS_MODULE' not in os.environ:\n raise ConfigurationError('No settings has been defined')\n\n app.config['BASE_DIR'] = BASE_DIR\n\n # default settings\n for v in dir(default_settings):\n if not v.startswith('_'):\n app.config[v] = getattr(default_settings, v)\n\n app.debug = app.config['DEBUG']\n\n # app settings\n settings = importlib.import_module(\n os.environ['FLASK_PHILO_SETTINGS_MODULE'])\n for v in dir(settings):\n if not v.startswith('_'):\n app.config[v] = getattr(settings, v)\n\n def init_urls():\n # Reads urls definition from URLs file and bind routes and views\n urls_module = importlib.import_module(app.config['URLS'])\n for route in urls_module.URLS:\n app.add_url_rule(\n route[0], view_func=route[1].as_view(route[2]))\n\n def init_logging():\n \"\"\"\n initialize logger for the app\n \"\"\"\n\n hndlr = logging.StreamHandler()\n formatter = logging.Formatter(\n '%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n hndlr.setFormatter(formatter)\n app.logger.addHandler(hndlr)\n log_level = app.config['LOG_LEVEL']\n app.logger.setLevel(getattr(logging, log_level))\n\n def init_flask_oauthlib():\n \"\"\"\n http://flask-oauthlib.readthedocs.io/en/latest/oauth2.html\n \"\"\"\n oauth.init_app(app)\n\n def init_cors(app):\n \"\"\"\n Initializes cors protection if config\n \"\"\"\n\n if 'CORS' in app.config:\n CORS(\n app,\n resources=app.config['CORS'],\n supports_credentials=app.config.get(\n \"CORS_SUPPORT_CREDENTIALS\",\n False\n ),\n allow_headers=app.config.get(\n \"CORS_ALLOW_HEADERS\",\n \"Content-Type,Authorization,accept-language,accept\"\n )\n )\n\n 
init_db(g, app)\n init_logging()\n init_urls()\n init_flask_oauthlib()\n init_jinja2(g, app)\n init_cors(app)\n app = Flask(module)\n init_config()\n return app", "response": "Initializes an application with the given module and BASE_DIR."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef execute_command(cmd, **kwargs):\n cmd_dict = {\n c: 'flask_philo.commands_flask_philo.' + c for c\n in dir(commands_flask_philo) if not c.startswith('_') and c != 'os' # noqa\n }\n\n # loading specific app commands\n try:\n import console_commands\n for cm in console_commands.__all__:\n if not cm.startswith('_'):\n cmd_dict[cm] = 'console_commands.' + cm\n except Exception:\n pass\n\n if cmd not in cmd_dict:\n raise ConfigurationError('command {} does not exists'.format(cmd))\n cmd_module = importlib.import_module(cmd_dict[cmd])\n kwargs['app'] = app\n cmd_module.run(**kwargs)", "response": "execute a console command"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef syncdb():\n from flask_philo.db.postgresql.schema import Base\n from flask_philo.db.postgresql.orm import BaseModel # noqa\n from flask_philo.db.postgresql.connection import get_pool\n\n for conn_name, conn in get_pool().connections.items():\n Base.metadata.create_all(conn.engine)", "response": "Create tables if they don t exist"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef new(cls, password, rounds):\n if isinstance(password, str):\n password = password.encode('utf8')\n value = bcrypt.hashpw(password, bcrypt.gensalt(rounds))\n return cls(value)", "response": "Creates a PasswordHash from the given password and rounds."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _convert(self, value):\n if isinstance(value, PasswordHash):\n return value\n elif isinstance(value, 
str):\n value = value.encode('utf-8')\n return PasswordHash.new(value, self.rounds)\n elif value is not None:\n raise TypeError(\n 'Cannot convert {} to a PasswordHash'.format(type(value)))", "response": "Converts the given string to a PasswordHash instance."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef init_db_conn(\n connection_name, HOST=None, PORT=None, DB=None, PASSWORD=None):\n \"\"\"\n Initialize a redis connection by each connection string\n defined in the configuration file\n \"\"\"\n rpool = redis.ConnectionPool(\n host=HOST, port=PORT, db=DB, password=PASSWORD)\n r = redis.Redis(connection_pool=rpool)\n redis_pool.connections[connection_name] = RedisClient(r)", "response": "Initialize a redis connection by each connection string\n "} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes the database connection parameters and redis connection pool for the global application.", "response": "def initialize(g, app):\n \"\"\"\n If redis connection parameters are defined in configuration params a\n session will be created\n \"\"\"\n\n if 'DATABASES' in app.config and 'REDIS' in app.config['DATABASES']:\n\n # Initialize connections for console commands\n for k, v in app.config['DATABASES']['REDIS'].items():\n init_db_conn(k, **v)\n\n @app.before_request\n def before_request():\n \"\"\"\n Assign redis connection pool to the global\n flask object at the beginning of every request\n \"\"\"\n for k, v in app.config['DATABASES']['REDIS'].items():\n init_db_conn(k, **v)\n g.redis_pool = redis_pool\n\n if 'test' not in sys.argv:\n @app.teardown_request\n def teardown_request(exception):\n pool = getattr(g, 'redis_pool', None)\n if pool is not None:\n pool.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _initialize_from_dict(self, data):\n self._json = data\n self._validate()\n\n for name, 
value in self._json.items():\n if name in self._properties:\n if '$ref' in self._properties[name]:\n if 'decimal' in self._properties[name]['$ref']:\n value = Decimal(value)\n\n # applying proper formatting when required\n if 'format' in self._properties[name]:\n format = self._properties[name]['format']\n\n if 'date-time' == format:\n value = utils.string_to_datetime(value)\n elif 'date' == format:\n value = utils.string_to_date(value)\n setattr(self, name, value)", "response": "Loads the serializer from a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ninitializing the object from a model.", "response": "def _initialize_from_model(self, model):\n \"\"\"\n Loads a model from\n \"\"\"\n for name, value in model.__dict__.items():\n if name in self._properties:\n setattr(self, name, value)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the record based on serializer values", "response": "def update(self):\n \"\"\"\n Finds record and update it based in serializer values\n \"\"\"\n obj = self.__model__.objects.get_for_update(id=self.id)\n for name, value in self.__dict__.items():\n if name in self._properties:\n setattr(obj, name, value)\n obj.update()\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a json representation of the object.", "response": "def to_json(self):\n \"\"\"\n Returns a json representation\n \"\"\"\n data = {}\n for k, v in self.__dict__.items():\n if not k.startswith('_'):\n # values not serializable, should be converted to strings\n if isinstance(v, datetime):\n v = utils.datetime_to_string(v)\n elif isinstance(v, date):\n v = utils.date_to_string(v)\n elif isinstance(v, uuid.UUID):\n v = str(v)\n elif isinstance(v, Decimal):\n v = str(v)\n data[k] = v\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef OPERATING_SYSTEM(stats, info):\n 
info.append(('architecture', platform.machine().lower()))\n info.append(('distribution',\n \"%s;%s\" % (platform.linux_distribution()[0:2])))\n info.append(('system',\n \"%s;%s\" % (platform.system(), platform.release())))", "response": "General information about the operating system."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef SESSION_TIME(stats, info):\n duration = time.time() - stats.started_time\n secs = int(duration)\n msecs = int((duration - secs) * 1000)\n info.append(('session_time', '%d.%d' % (secs, msecs)))", "response": "Total time elapsed from the construction of the Stats object to\n "} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads the configuration file and sets the status attribute of the usage statistics object.", "response": "def read_config(self):\n \"\"\"Reads the configuration.\n\n This method can be overloaded to integrate with your application's own\n configuration mechanism. 
By default, a single 'status' file is read\n from the reports' directory.\n\n This should set `self.status` to one of the state constants, and make\n sure `self.location` points to a writable directory where the reports\n will be written.\n\n The possible values for `self.status` are:\n\n - `UNSET`: nothing has been selected and the user should be prompted\n - `ENABLED`: collect and upload reports\n - `DISABLED`: don't collect or upload anything, stop prompting\n - `ERRORED`: something is broken, and we can't do anything in this\n session (for example, the configuration directory is not writable)\n \"\"\"\n if self.enabled and not os.path.isdir(self.location):\n try:\n os.makedirs(self.location, 0o700)\n except OSError:\n logger.warning(\"Couldn't create %s, usage statistics won't be \"\n \"collected\", self.location)\n self.status = Stats.ERRORED\n\n status_file = os.path.join(self.location, 'status')\n if self.enabled and os.path.exists(status_file):\n with open(status_file, 'r') as fp:\n status = fp.read().strip()\n if status == 'ENABLED':\n self.status = Stats.ENABLED\n elif status == 'DISABLED':\n self.status = Stats.DISABLED"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef write_config(self, enabled):\n status_file = os.path.join(self.location, 'status')\n with open(status_file, 'w') as fp:\n if enabled is Stats.ENABLED:\n fp.write('ENABLED')\n elif enabled is Stats.DISABLED:\n fp.write('DISABLED')\n else:\n raise ValueError(\"Unknown reporting state %r\" % enabled)", "response": "Writes the configuration.\n\n This method can be overloaded to integrate with your application's own\n configuration mechanism. 
By default, a single 'status' file is written\n in the reports' directory, containing either ``ENABLED`` or\n ``DISABLED``; if the file doesn't exist, `UNSET` is assumed.\n\n :param enabled: Either `Stats.UNSET`, `Stats.DISABLED` or\n `Stats.ENABLED`."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef enable_reporting(self):\n if self.status == Stats.ENABLED:\n return\n if not self.enableable:\n logger.critical(\"Can't enable reporting\")\n return\n self.status = Stats.ENABLED\n self.write_config(self.status)", "response": "Set the status of the current report to Stats. ENABLED."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncall this method to explicitly disable reporting. The current report will be discarded, along with the previously recorded ones that haven't been uploaded. The configuration is updated so that future runs do not record or upload reports.", "response": "def disable_reporting(self):\n \"\"\"Call this method to explicitly disable reporting.\n\n The current report will be discarded, along with the previously\n recorded ones that haven't been uploaded. The configuration is updated\n so that future runs do not record or upload reports.\n \"\"\"\n if self.status == Stats.DISABLED:\n return\n if not self.disableable:\n logger.critical(\"Can't disable reporting\")\n return\n self.status = Stats.DISABLED\n self.write_config(self.status)\n if os.path.exists(self.location):\n old_reports = [f for f in os.listdir(self.location)\n if f.startswith('report_')]\n for old_filename in old_reports:\n fullname = os.path.join(self.location, old_filename)\n os.remove(fullname)\n logger.info(\"Deleted %d pending reports\", len(old_reports))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nrecords some info to the report.", "response": "def note(self, info):\n \"\"\"Record some info to the report.\n\n :param info: Dictionary of info to record. 
Note that previous info\n recorded under the same keys will not be overwritten.\n \"\"\"\n if self.recording:\n if self.notes is None:\n raise ValueError(\"This report has already been submitted\")\n self.notes.extend(self._to_notes(info))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsubmitting a new report.", "response": "def submit(self, info, *flags):\n \"\"\"Finish recording and upload or save the report.\n\n This closes the `Stats` object, no further methods should be called.\n The report is either saved, uploaded or discarded, depending on\n configuration. If uploading is enabled, previous reports might be\n uploaded too. If uploading is not explicitly enabled or disabled, the\n prompt will be shown, to ask the user to enable or disable it.\n \"\"\"\n if not self.recording:\n return\n env_val = os.environ.get(self.env_var, '').lower()\n if env_val not in (None, '', '1', 'on', 'enabled', 'yes', 'true'):\n self.status = Stats.DISABLED_ENV\n self.notes = None\n return\n\n if self.notes is None:\n raise ValueError(\"This report has already been submitted\")\n\n all_info, self.notes = self.notes, None\n all_info.extend(self._to_notes(info))\n for flag in flags:\n flag(self, all_info)\n\n now = time.time()\n secs = int(now)\n msecs = int((now - secs) * 1000)\n all_info.insert(0, ('date', '%d.%d' % (secs, msecs)))\n\n if self.user_id:\n all_info.insert(1, ('user', self.user_id))\n\n logger.debug(\"Generated report:\\n%r\", (all_info,))\n\n # Current report\n def generator():\n for key, value in all_info:\n yield _encode(key) + b':' + _encode(value) + b'\\n'\n filename = 'report_%d_%d.txt' % (secs, msecs)\n\n # Save current report and exit, unless user has opted in\n if not self.sending:\n fullname = os.path.join(self.location, filename)\n with open(fullname, 'wb') as fp:\n for l in generator():\n fp.write(l)\n\n # Show prompt\n sys.stderr.write(self.prompt.prompt)\n return\n\n # Post previous reports\n old_reports = [f for f in 
os.listdir(self.location)\n if f.startswith('report_')]\n old_reports.sort()\n old_reports = old_reports[:4] # Only upload 5 at a time\n for old_filename in old_reports:\n fullname = os.path.join(self.location, old_filename)\n try:\n with open(fullname, 'rb') as fp:\n # `data=fp` would make requests stream, which is currently\n # not a good idea (WSGI chokes on it)\n r = requests.post(self.drop_point, data=fp.read(),\n timeout=1, verify=self.ssl_verify)\n r.raise_for_status()\n except Exception as e:\n logger.warning(\"Couldn't upload %s: %s\", old_filename, str(e))\n break\n else:\n logger.info(\"Submitted report %s\", old_filename)\n os.remove(fullname)\n\n # Post current report\n try:\n # `data=generator()` would make requests stream, which is currently\n # not a good idea (WSGI chokes on it)\n r = requests.post(self.drop_point, data=b''.join(generator()),\n timeout=1, verify=self.ssl_verify)\n except requests.RequestException as e:\n logger.warning(\"Couldn't upload report: %s\", str(e))\n fullname = os.path.join(self.location, filename)\n with open(fullname, 'wb') as fp:\n for l in generator():\n fp.write(l)\n else:\n try:\n r.raise_for_status()\n logger.info(\"Submitted report\")\n except requests.RequestException as e:\n logger.warning(\"Server rejected report: %s\", str(e))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstores the report on disk.", "response": "def store(report, address):\n \"\"\"Stores the report on disk.\n \"\"\"\n now = time.time()\n secs = int(now)\n msecs = int((now - secs) * 1000)\n submitted_date = filename = None # avoids warnings\n while True:\n submitted_date = '%d.%03d' % (secs, msecs)\n filename = 'report_%s.txt' % submitted_date\n filename = os.path.join(DESTINATION, filename)\n if not os.path.exists(filename):\n break\n msecs += 1\n\n lines = [l for l in report.split(b'\\n') if l]\n for line in lines:\n if line.startswith(b'date:'):\n date = line[5:]\n if 
date_format.match(date):\n with open(filename, 'wb') as fp:\n if not isinstance(address, bytes):\n address = address.encode('ascii')\n fp.write(b'submitted_from:' + address + b'\\n')\n fp.write(('submitted_date:%s\\n' % submitted_date)\n .encode('ascii'))\n fp.write(report)\n return None\n else:\n return \"invalid date\"\n return \"missing date field\""} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef read(self, pin):\n port, pin = self.pin_to_port(pin)\n self.i2c_write([0x12 + port])\n raw = self.i2c_read(1)\n value = struct.unpack('>B', raw)[0]\n return (value & (1 << pin)) > 0", "response": "Read the pin state of an input pin."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreading the state of a whole port.", "response": "def read_port(self, port):\n \"\"\" Read the pin state of a whole port (8 pins)\n\n :Example:\n\n >>> expander = MCP23017I2C(gw)\n >>> # Read pin A0-A7 as a int (A0 and A1 are high)\n >>> expander.read_port('A')\n 3\n >>> # Read pin B0-B7 as a int (B2 is high)\n >>> expander.read_port('B')\n 4\n\n :param port: use 'A' to read port A and 'B' for port b\n :return: An int where every bit represents the input level.\n \"\"\"\n if port == 'A':\n raw = self.i2c_read_register(0x12, 1)\n elif port == 'B':\n raw = self.i2c_read_register(0x13, 1)\n return struct.unpack('>B', raw)[0]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef write(self, pin, value):\n port, pin = self.pin_to_port(pin)\n portname = 'A'\n if port == 1:\n portname = 'B'\n self._update_register('GPIO' + portname, pin, value)\n self.sync()", "response": "Set the pin state to the specified value."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite a byte to a port in the current system.", "response": "def write_port(self, port, value):\n \"\"\" Use a whole port as a bus and write a byte to it.\n\n :param port: 
Name of the port ('A' or 'B')\n :param value: Value to write (0-255)\n \"\"\"\n if port == 'A':\n self.GPIOA = value\n elif port == 'B':\n self.GPIOB = value\n else:\n raise AttributeError('Port {} does not exist, use A or B'.format(port))\n self.sync()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nuploads the changed registers to the chip This will check which register have been changed since the last sync and send them to the chip. You need to call this method if you modify one of the register attributes (mcp23017.IODIRA for example) or if you use one of the helper attributes (mcp23017.direction_A0 for example)", "response": "def sync(self):\n \"\"\" Upload the changed registers to the chip\n\n This will check which register have been changed since the last sync and send them to the chip.\n You need to call this method if you modify one of the register attributes (mcp23017.IODIRA for example) or\n if you use one of the helper attributes (mcp23017.direction_A0 for example)\n \"\"\"\n registers = {\n 0x00: 'IODIRA',\n 0x01: 'IODIRB',\n 0x02: 'IPOLA',\n 0x03: 'IPOLB',\n 0x04: 'GPINTENA',\n 0x05: 'GPINTENB',\n 0x0C: 'GPPUA',\n 0x0D: 'GPPUB',\n 0x12: 'GPIOA',\n 0x13: 'GPIOB'\n }\n for reg in registers:\n name = registers[reg]\n if getattr(self, name) != getattr(self, '_' + name):\n self.i2c_write_register(reg, [getattr(self, name)])\n setattr(self, '_' + name, getattr(self, name))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a list containing all 16 pins of the chip.", "response": "def get_pins(self):\n \"\"\" Get a list containing references to all 16 pins of the chip.\n\n :Example:\n\n >>> expander = MCP23017I2C(gw)\n >>> pins = expander.get_pins()\n >>> pprint.pprint(pins)\n [,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ,\n ]\n\n\n \"\"\"\n result = []\n for a in range(0, 7):\n result.append(GPIOPin(self, '_action', {'pin': 'A{}'.format(a)}, name='A{}'.format(a)))\n for b in 
range(0, 7):\n result.append(GPIOPin(self, '_action', {'pin': 'B{}'.format(b)}, name='B{}'.format(b)))\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read(self):\n m = getattr(self.chip, self.method)\n return m(**self.arguments)", "response": "Get the logic input level for the pin\n\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the logic output level for the pin.", "response": "def write(self, value):\n \"\"\" Set the logic output level for the pin.\n\n :type value: bool\n :param value: True for a logic high\n \"\"\"\n if self.inverted:\n value = not value\n m = getattr(self.chip, self.method)\n m(value=value, **self.arguments)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes application object. :param app: An instance of :class:`~flask.Flask`.", "response": "def init_app(self, app, **kwargs):\n \"\"\"Initialize application object.\n\n :param app: An instance of :class:`~flask.Flask`.\n \"\"\"\n self.init_config(app)\n\n # Initialize extensions\n self.menu_ext.init_app(app)\n self.menu = app.extensions['menu']\n self.breadcrumbs.init_app(app)\n\n # Register blueprint in order to register template and static folder.\n app.register_blueprint(Blueprint(\n 'invenio_theme',\n __name__,\n template_folder='templates',\n static_folder='static',\n ))\n\n # Register frontpage blueprint if enabled.\n if app.config['THEME_FRONTPAGE']:\n app.register_blueprint(blueprint)\n\n # Initialize breadcrumbs.\n item = self.menu.submenu('breadcrumbs')\n item.register(app.config['THEME_BREADCRUMB_ROOT_ENDPOINT'], _('Home'))\n\n # Register errors handlers.\n app.register_error_handler(401, unauthorized)\n app.register_error_handler(403, insufficient_permissions)\n app.register_error_handler(404, page_not_found)\n app.register_error_handler(500, internal_error)\n\n # Save reference to self on object\n 
app.extensions['invenio-theme'] = self"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ninitializes configuration. :param app: An instance of :class:`~flask.Flask`.", "response": "def init_config(self, app):\n \"\"\"Initialize configuration.\n\n :param app: An instance of :class:`~flask.Flask`.\n \"\"\"\n _vars = ['BASE_TEMPLATE', 'COVER_TEMPLATE', 'SETTINGS_TEMPLATE']\n\n # Sets RequireJS config and SASS binary as well if not already set.\n for k in dir(config):\n if k.startswith('THEME_') or k in [\n 'REQUIREJS_CONFIG', 'SASS_BIN'] + _vars:\n app.config.setdefault(k, getattr(config, k))\n\n # Set THEME__TEMPLATE from _TEMPLATE variables if not\n # already set.\n for varname in _vars:\n theme_varname = 'THEME_{}'.format(varname)\n if app.config[theme_varname] is None:\n app.config[theme_varname] = app.config[varname]\n\n app.config.setdefault(\n 'ADMIN_BASE_TEMPLATE', config.ADMIN_BASE_TEMPLATE)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the measurement range for the accel and gyro MEMS. Higher range means less resolution.", "response": "def set_range(self, accel=1, gyro=1):\n \"\"\"Set the measurement range for the accel and gyro MEMS. Higher range means less resolution.\n\n :param accel: a RANGE_ACCEL_* constant\n :param gyro: a RANGE_GYRO_* constant\n\n :Example:\n\n .. 
code-block:: python\n\n sensor = MPU6050I2C(gateway_class_instance)\n sensor.set_range(\n accel=MPU6050I2C.RANGE_ACCEL_2G,\n gyro=MPU6050I2C.RANGE_GYRO_250DEG\n )\n\n \"\"\"\n self.i2c_write_register(0x1c, accel)\n self.i2c_write_register(0x1b, gyro)\n self.accel_range = accel\n self.gyro_range = gyro"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the aux i2c bus in bypass mode thus connecting it to the main i2c bus.", "response": "def set_slave_bus_bypass(self, enable):\n \"\"\"Put the aux i2c bus on the MPU-6050 in bypass mode, thus connecting it to the main i2c bus directly\n\n Don't forget to use wakeup() or else the slave bus is unavailable\n :param enable:\n :return:\n \"\"\"\n current = self.i2c_read_register(0x37, 1)[0]\n if enable:\n current |= 0b00000010\n else:\n current &= 0b11111101\n self.i2c_write_register(0x37, current)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreading the value for the internal temperature sensor.", "response": "def temperature(self):\n \"\"\"Read the value for the internal temperature sensor.\n\n :returns: Temperature in degrees Celsius as a float\n\n :Example:\n\n >>> sensor = MPU6050I2C(gw)\n >>> sensor.wakeup()\n >>> sensor.temperature()\n 49.38\n \"\"\"\n if not self.awake:\n raise Exception(\"MPU6050 is in sleep mode, use wakeup()\")\n\n raw = self.i2c_read_register(0x41, 2)\n raw = struct.unpack('>h', raw)[0]\n return round((raw / 340) + 36.53, 2)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the acceleration in G's as a tuple", "response": "def acceleration(self):\n \"\"\"Return the acceleration in G's\n\n :returns: Acceleration for every axis as a tuple\n\n :Example:\n\n >>> sensor = MPU6050I2C(gw)\n >>> sensor.wakeup()\n >>> sensor.acceleration()\n (0.6279296875, 0.87890625, 1.1298828125)\n \"\"\"\n if not self.awake:\n raise Exception(\"MPU6050 is in sleep mode, use wakeup()\")\n\n raw = 
self.i2c_read_register(0x3B, 6)\n x, y, z = struct.unpack('>hhh', raw)\n scales = {\n self.RANGE_ACCEL_2G: 16384,\n self.RANGE_ACCEL_4G: 8192,\n self.RANGE_ACCEL_8G: 4096,\n self.RANGE_ACCEL_16G: 2048\n }\n scale = scales[self.accel_range]\n return x / scale, y / scale, z / scale"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef angular_rate(self):\n if not self.awake:\n raise Exception(\"MPU6050 is in sleep mode, use wakeup()\")\n\n raw = self.i2c_read_register(0x43, 6)\n x, y, z = struct.unpack('>hhh', raw)\n scales = {\n self.RANGE_GYRO_250DEG: 131.0,\n self.RANGE_GYRO_500DEG: 65.5,\n self.RANGE_GYRO_1000DEG: 32.8,\n self.RANGE_GYRO_2000DEG: 16.4\n }\n scale = scales[self.gyro_range]\n return x / scale, y / scale, z / scale", "response": "Return the angular rate for every axis in degrees per second."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_contents(self):\n return [\n str(value) if is_lazy_string(value) else value\n for value in super(LazyNpmBundle, self)._get_contents()\n ]", "response": "Create strings from lazy strings."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads factory calibration data from device.", "response": "def load_calibration(self):\n \"\"\"Load factory calibration data from device.\"\"\"\n registers = self.i2c_read_register(0xAA, 22)\n (\n self.cal['AC1'],\n self.cal['AC2'],\n self.cal['AC3'],\n self.cal['AC4'],\n self.cal['AC5'],\n self.cal['AC6'],\n self.cal['B1'],\n self.cal['B2'],\n self.cal['MB'],\n self.cal['MC'],\n self.cal['MD']\n ) = struct.unpack('>hhhHHHhhhhh', registers)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef temperature(self):\n ut = self.get_raw_temp()\n x1 = ((ut - self.cal['AC6']) * self.cal['AC5']) >> 15\n x2 = (self.cal['MC'] << 11) // (x1 + self.cal['MD'])\n b5 = x1 + 
x2\n return ((b5 + 8) >> 4) / 10", "response": "Get the temperature from the sensor in degrees Celsius."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pressure(self):\n ut = self.get_raw_temp()\n up = self.get_raw_pressure()\n x1 = ((ut - self.cal['AC6']) * self.cal['AC5']) >> 15\n x2 = (self.cal['MC'] << 11) // (x1 + self.cal['MD'])\n b5 = x1 + x2\n b6 = b5 - 4000\n x1 = (self.cal['B2'] * (b6 * b6) >> 12) >> 11\n x2 = (self.cal['AC2'] * b6) >> 11\n x3 = x1 + x2\n b3 = (((self.cal['AC1'] * 4 + x3) << self.mode) + 2) // 4\n x1 = (self.cal['AC3'] * b6) >> 13\n x2 = (self.cal['B1'] * ((b6 * b6) >> 12)) >> 16\n x3 = ((x1 + x2) + 2) >> 2\n b4 = (self.cal['AC4'] * (x3 + 32768)) >> 15\n b7 = (up - b3) * (50000 >> self.mode)\n if b7 < 0x80000000:\n p = (b7 * 2) // b4\n else:\n p = (b7 // b4) * 2\n x1 = (p >> 8) * (p >> 8)\n x1 = (x1 * 3038) >> 16\n x2 = (-7357 * p) >> 16\n p += (x1 + x2 + 3791) >> 4\n return p", "response": "Get the barometric pressure in pascals (Pa) as an int"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef switch_mode(self, new_mode):\n packet = bytearray()\n packet.append(new_mode)\n self.device.write(packet)\n possible_responses = {\n self.MODE_I2C: b'I2C1',\n self.MODE_RAW: b'BBIO1',\n self.MODE_SPI: b'API1',\n self.MODE_UART: b'ART1',\n self.MODE_ONEWIRE: b'1W01'\n }\n expected = possible_responses[new_mode]\n response = self.device.read(4)\n if response != expected:\n raise Exception('Could not switch mode')\n self.mode = new_mode\n self.set_peripheral()\n if self.i2c_speed:\n self._set_i2c_speed(self.i2c_speed)", "response": "Explicitly switch the Bus Pirate mode."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_peripheral(self, power=None, pullup=None, aux=None, chip_select=None):\n if power is not None:\n self.power = power\n if pullup is not None:\n self.pullup = pullup\n if aux is not 
None:\n self.aux = aux\n if chip_select is not None:\n self.chip_select = chip_select\n # Set peripheral status\n peripheral_byte = 64\n if self.chip_select:\n peripheral_byte |= 0x01\n if self.aux:\n peripheral_byte |= 0x02\n if self.pullup:\n peripheral_byte |= 0x04\n if self.power:\n peripheral_byte |= 0x08\n\n self.device.write(bytearray([peripheral_byte]))\n response = self.device.read(1)\n if response != b\"\\x01\":\n raise Exception(\"Setting peripheral failed. Received: {}\".format(repr(response)))", "response": "Set the state of the Bus Pirate peripherals at runtime."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _set_i2c_speed(self, i2c_speed):\n lower_bits_mapping = {\n '400kHz': 3,\n '100kHz': 2,\n '50kHz': 1,\n '5kHz': 0,\n }\n if i2c_speed not in lower_bits_mapping:\n raise ValueError('Invalid i2c_speed')\n speed_byte = 0b01100000 | lower_bits_mapping[i2c_speed]\n self.device.write(bytearray([speed_byte]))\n response = self.device.read(1)\n if response != b\"\\x01\":\n raise Exception(\"Changing I2C speed failed. 
Received: {}\".format(repr(response)))", "response": "Set the I2C speed of the Bus Pirate."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef set_resolution(self, resolution=1090):\n options = {\n 1370: 0,\n 1090: 1,\n 820: 2,\n 660: 3,\n 440: 4,\n 390: 5,\n 330: 6,\n 230: 7\n }\n\n if resolution not in options.keys():\n raise Exception('Resolution of {} steps is not supported'.format(resolution))\n\n self.resolution = resolution\n\n config_b = 0\n config_b |= options[resolution] << 5\n self.i2c_write_register(0x01, config_b)", "response": "Set the resolution (gain) of the HMC5883L magnetometer."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the magnetometer values as gauss for each axis as a tuple x y z", "response": "def gauss(self):\n \"\"\"\n Get the magnetometer values as gauss for each axis as a tuple (x,y,z)\n\n :example:\n\n >>> sensor = HMC5883L(gw)\n >>> sensor.gauss()\n (16.56, 21.2888, 26.017599999999998)\n \"\"\"\n raw = self.raw()\n factors = {\n 1370: 0.73,\n 1090: 0.92,\n 820: 1.22,\n 660: 1.52,\n 440: 2.27,\n 390: 2.56,\n 330: 3.03,\n 230: 4.35\n }\n factor = factors[self.resolution] / 100\n return raw[0] * factor, raw[1] * factor, raw[2] * factor"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the temperature in degrees Celsius", "response": "def temperature(self):\n \"\"\" Get the temperature in degrees Celsius\n \"\"\"\n result = self.i2c_read(2)\n value = struct.unpack('>H', result)[0]\n\n if value < 32768:\n return value / 256.0\n else:\n return (value - 65536) / 256.0"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndisplays a single character on the display", "response": "def write(self, char):\n \"\"\" Display a single character on the display\n\n :type char: str or int\n :param char: Character to display\n \"\"\"\n char = str(char).lower()\n 
self.segments.write(self.font[char])"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_html_theme_path():\n return os.path.abspath(os.path.dirname(os.path.dirname(__file__)))", "response": "Get the absolute path of the directory containing the theme files."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a URL for feedback on a particular page in a project.", "response": "def feedback_form_url(project, page):\n \"\"\"\n Create a URL for feedback on a particular page in a project.\n \"\"\"\n return FEEDBACK_FORM_FMT.format(pageid=quote(\"{}: {}\".format(project, page)))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates the page rendering context to include feedback_form_url.", "response": "def update_context(app, pagename, templatename, context, doctree): # pylint: disable=unused-argument\n \"\"\"\n Update the page rendering context to include ``feedback_form_url``.\n \"\"\"\n context['feedback_form_url'] = feedback_form_url(app.config.project, pagename)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef objectsFromPEM(pemdata):\n certificates = []\n keys = []\n blobs = [b\"\"]\n for line in pemdata.split(b\"\\n\"):\n if line.startswith(b'-----BEGIN'):\n if b'CERTIFICATE' in line:\n blobs = certificates\n else:\n blobs = keys\n blobs.append(b'')\n blobs[-1] += line\n blobs[-1] += b'\\n'\n keys = [KeyPair.load(key, FILETYPE_PEM) for key in keys]\n certificates = [Certificate.loadPEM(certificate)\n for certificate in certificates]\n return PEMObjects(keys=keys, certificates=certificates)", "response": "Load some objects from a PEM file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef following_key(key):\n if key.timestamp is not None:\n key.timestamp -= 1\n elif key.colVisibility is not None:\n 
key.colVisibility = following_array(key.colVisibility)\n elif key.colQualifier is not None:\n key.colQualifier = following_array(key.colQualifier)\n elif key.colFamily is not None:\n key.colFamily = following_array(key.colFamily)\n elif key.row is not None:\n key.row = following_array(key.row)\n return key", "response": "Returns the key immediately following the input key - based on the Java implementation found in org.apache.accumulo.core.data.Key#followingKey"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef followingPrefix(prefix):\n prefixBytes = array('B', prefix)\n\n changeIndex = len(prefixBytes) - 1\n while changeIndex >= 0 and prefixBytes[changeIndex] == 0xff:\n changeIndex = changeIndex - 1\n if changeIndex < 0:\n return None\n newBytes = array('B', prefix[0:changeIndex + 1])\n newBytes[changeIndex] = newBytes[changeIndex] + 1\n return newBytes.tobytes()", "response": "Returns a String that sorts just after all Strings beginning with a prefix"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a Range that covers all rows beginning with a prefix", "response": "def prefix(rowPrefix):\n \"\"\"Returns a Range that covers all rows beginning with a prefix\"\"\"\n fp = Range.followingPrefix(rowPrefix)\n return Range(srow=rowPrefix, sinclude=True, erow=fp, einclude=False)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding mutations to a table and flush the batch writer.", "response": "def add_mutations_and_flush(self, table, muts):\n \"\"\"\n Add mutations to a table without the need to create and manage a batch writer.\n \"\"\"\n if not isinstance(muts, list) and not isinstance(muts, tuple):\n muts = [muts]\n cells = {}\n for mut in muts:\n cells.setdefault(mut.row, []).extend(mut.updates)\n self.client.updateAndFlush(self.login, table, cells)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 
3 script for\nremoving a constraint from the specified table", "response": "def remove_constraint(self, table, constraint):\n \"\"\"\n :param table: table name\n :param constraint: the constraint number as returned by list_constraints\n \"\"\"\n self.client.removeConstraint(self.login, table, constraint)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsending login to the server and receive the response.", "response": "def login(self, principal, loginProperties):\n \"\"\"\n Parameters:\n - principal\n - loginProperties\n \"\"\"\n self.send_login(principal, loginProperties)\n return self.recv_login()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef addConstraint(self, login, tableName, constraintClassName):\n self.send_addConstraint(login, tableName, constraintClassName)\n return self.recv_addConstraint()", "response": "Adds a constraint to the table with the specified login."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef addSplits(self, login, tableName, splits):\n self.send_addSplits(login, tableName, splits)\n self.recv_addSplits()", "response": "Adds a list of splits to the table with login login."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef clearLocatorCache(self, login, tableName):\n self.send_clearLocatorCache(login, tableName)\n self.recv_clearLocatorCache()", "response": "Clear the locator cache for a given login and tableName."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cloneTable(self, login, tableName, newTableName, flush, propertiesToSet, propertiesToExclude):\n self.send_cloneTable(login, tableName, newTableName, flush, propertiesToSet, propertiesToExclude)\n self.recv_cloneTable()", "response": "This method is used to clone a table."} {"SOURCE": 
"codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cancelCompaction(self, login, tableName):\n self.send_cancelCompaction(login, tableName)\n self.recv_cancelCompaction()", "response": "Cancels compaction of a table."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef deleteTable(self, login, tableName):\n self.send_deleteTable(login, tableName)\n self.recv_deleteTable()", "response": "Delete a table from the server."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef deleteRows(self, login, tableName, startRow, endRow):\n self.send_deleteRows(login, tableName, startRow, endRow)\n self.recv_deleteRows()", "response": "Delete rows from a table."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef flushTable(self, login, tableName, startRow, endRow, wait):\n self.send_flushTable(login, tableName, startRow, endRow, wait)\n self.recv_flushTable()", "response": "This method is used to flush a table from the server."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of all locality groups for a given login and tableName.", "response": "def getLocalityGroups(self, login, tableName):\n \"\"\"\n Parameters:\n - login\n - tableName\n \"\"\"\n self.send_getLocalityGroups(login, tableName)\n return self.recv_getLocalityGroups()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef getMaxRow(self, login, tableName, auths, startRow, startInclusive, endRow, endInclusive):\n self.send_getMaxRow(login, tableName, auths, startRow, startInclusive, endRow, endInclusive)\n return self.recv_getMaxRow()", "response": "This method returns the maximum row number in a table."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the properties of 
a table.", "response": "def getTableProperties(self, login, tableName):\n \"\"\"\n Parameters:\n - login\n - tableName\n \"\"\"\n self.send_getTableProperties(login, tableName)\n return self.recv_getTableProperties()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a list of splits for a given login and tableName.", "response": "def getSplits(self, login, tableName, maxSplits):\n \"\"\"\n Parameters:\n - login\n - tableName\n - maxSplits\n \"\"\"\n self.send_getSplits(login, tableName, maxSplits)\n return self.recv_getSplits()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef importDirectory(self, login, tableName, importDir, failureDir, setTime):\n self.send_importDirectory(login, tableName, importDir, failureDir, setTime)\n self.recv_importDirectory()", "response": "This method is used to send and receive import directory command to the server."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef importTable(self, login, tableName, importDir):\n self.send_importTable(login, tableName, importDir)\n self.recv_importTable()", "response": "This function is used to send and receive a table import request to the server."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef listIterators(self, login, tableName):\n self.send_listIterators(login, tableName)\n return self.recv_listIterators()", "response": "This method is used to list the iterators for a given login and tableName."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef listConstraints(self, login, tableName):\n self.send_listConstraints(login, tableName)\n return self.recv_listConstraints()", "response": "This method is used to list the constraints of a table."} {"SOURCE": "codesearchnet", 
"instruction": "Make a summary of the following Python 3 code\ndef offlineTable(self, login, tableName):\n self.send_offlineTable(login, tableName)\n self.recv_offlineTable()", "response": "Sends an offline table request to the server."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef onlineTable(self, login, tableName):\n self.send_onlineTable(login, tableName)\n self.recv_onlineTable()", "response": "This function is used to send an online table to the server."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef removeConstraint(self, login, tableName, constraint):\n self.send_removeConstraint(login, tableName, constraint)\n self.recv_removeConstraint()", "response": "Removes a constraint from the table with login login."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef removeIterator(self, login, tableName, iterName, scopes):\n self.send_removeIterator(login, tableName, iterName, scopes)\n self.recv_removeIterator()", "response": "Remove an iterator from the table."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nremove a property from a table", "response": "def removeTableProperty(self, login, tableName, property):\n \"\"\"\n Parameters:\n - login\n - tableName\n - property\n \"\"\"\n self.send_removeTableProperty(login, tableName, property)\n self.recv_removeTableProperty()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef renameTable(self, login, oldTableName, newTableName):\n self.send_renameTable(login, oldTableName, newTableName)\n self.recv_renameTable()", "response": "This function is used to rename a table in a loginspace."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setLocalityGroups(self, login, 
tableName, groups):\n self.send_setLocalityGroups(login, tableName, groups)\n self.recv_setLocalityGroups()", "response": "This method is used to set the locality of the table."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef setTableProperty(self, login, tableName, property, value):\n self.send_setTableProperty(login, tableName, property, value)\n self.recv_setTableProperty()", "response": "Sets the value of a table property."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nping a tablet server.", "response": "def pingTabletServer(self, login, tserver):\n \"\"\"\n Parameters:\n - login\n - tserver\n \"\"\"\n self.send_pingTabletServer(login, tserver)\n self.recv_pingTabletServer()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getActiveScans(self, login, tserver):\n self.send_getActiveScans(login, tserver)\n return self.recv_getActiveScans()", "response": "Returns the active scans for the specified login and tserver."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getActiveCompactions(self, login, tserver):\n self.send_getActiveCompactions(login, tserver)\n return self.recv_getActiveCompactions()", "response": "Returns the active compactions for the given login and tserver."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving a property from the specified login.", "response": "def removeProperty(self, login, property):\n \"\"\"\n Parameters:\n - login\n - property\n \"\"\"\n self.send_removeProperty(login, property)\n self.recv_removeProperty()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef setProperty(self, login, property, value):\n self.send_setProperty(login, property, value)\n self.recv_setProperty()", "response": "Set the value of a 
property on a node."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef authenticateUser(self, login, user, properties):\n self.send_authenticateUser(login, user, properties)\n return self.recv_authenticateUser()", "response": "Parameters:\n - login\n - user\n - properties"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nchange the authorizations for a user.", "response": "def changeUserAuthorizations(self, login, user, authorizations):\n \"\"\"\n Parameters:\n - login\n - user\n - authorizations\n \"\"\"\n self.send_changeUserAuthorizations(login, user, authorizations)\n self.recv_changeUserAuthorizations()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef changeLocalUserPassword(self, login, user, password):\n self.send_changeLocalUserPassword(login, user, password)\n self.recv_changeLocalUserPassword()", "response": "Change the password of a local user."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef dropLocalUser(self, login, user):\n self.send_dropLocalUser(login, user)\n self.recv_dropLocalUser()", "response": "Drop a local user from the specified login."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getUserAuthorizations(self, login, user):\n self.send_getUserAuthorizations(login, user)\n return self.recv_getUserAuthorizations()", "response": "Returns a list of authorizations for a user."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngrants system permission to a user", "response": "def grantSystemPermission(self, login, user, perm):\n \"\"\"\n Parameters:\n - login\n - user\n - perm\n \"\"\"\n self.send_grantSystemPermission(login, user, perm)\n self.recv_grantSystemPermission()"} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 function for\ngranting a permission to a table.", "response": "def grantTablePermission(self, login, user, table, perm):\n \"\"\"\n Parameters:\n - login\n - user\n - table\n - perm\n \"\"\"\n self.send_grantTablePermission(login, user, table, perm)\n self.recv_grantTablePermission()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn true if the user has permission for login.", "response": "def hasSystemPermission(self, login, user, perm):\n \"\"\"\n Parameters:\n - login\n - user\n - perm\n \"\"\"\n self.send_hasSystemPermission(login, user, perm)\n return self.recv_hasSystemPermission()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef hasTablePermission(self, login, user, table, perm):\n self.send_hasTablePermission(login, user, table, perm)\n return self.recv_hasTablePermission()", "response": "Returns True if the user has permission for the table perm."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef revokeSystemPermission(self, login, user, perm):\n self.send_revokeSystemPermission(login, user, perm)\n self.recv_revokeSystemPermission()", "response": "revokes a system permission for a given login user"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrevoke a permission from a table", "response": "def revokeTablePermission(self, login, user, table, perm):\n \"\"\"\n Parameters:\n - login\n - user\n - table\n - perm\n \"\"\"\n self.send_revokeTablePermission(login, user, table, perm)\n self.recv_revokeTablePermission()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef createBatchScanner(self, login, tableName, options):\n self.send_createBatchScanner(login, tableName, options)\n return self.recv_createBatchScanner()", "response": "Create a batch scanner for the specified login and tableName."} {"SOURCE": 
"codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a new scanner for the specified login and tableName.", "response": "def createScanner(self, login, tableName, options):\n \"\"\"\n Parameters:\n - login\n - tableName\n - options\n \"\"\"\n self.send_createScanner(login, tableName, options)\n return self.recv_createScanner()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef nextK(self, scanner, k):\n self.send_nextK(scanner, k)\n return self.recv_nextK()", "response": "This method sends the next k - element to the specified scanner and returns the k - element."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getFollowing(self, key, part):\n self.send_getFollowing(key, part)\n return self.recv_getFollowing()", "response": "Returns the next following element in a node."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nturn the relay on.", "response": "def set_relay_on(self):\n \"\"\"Turn the relay on.\"\"\"\n if not self.get_relay_state():\n try:\n request = requests.get(\n '{}/relay'.format(self.resource), params={'state': '1'},\n timeout=self.timeout)\n if request.status_code == 200:\n self.data['relay'] = True\n except requests.exceptions.ConnectionError:\n raise exceptions.MyStromConnectionError()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nturns the relay off.", "response": "def set_relay_off(self):\n \"\"\"Turn the relay off.\"\"\"\n if self.get_relay_state():\n try:\n request = requests.get(\n '{}/relay'.format(self.resource), params={'state': '0'},\n timeout=self.timeout)\n if request.status_code == 200:\n self.data['relay'] = False\n except requests.exceptions.ConnectionError:\n raise exceptions.MyStromConnectionError()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 
3 code\ndef get_relay_state(self):\n self.get_status()\n try:\n self.state = self.data['relay']\n except TypeError:\n self.state = False\n\n return bool(self.state)", "response": "Get the relay state."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_consumption(self):\n self.get_status()\n try:\n self.consumption = self.data['power']\n except TypeError:\n self.consumption = 0\n\n return self.consumption", "response": "Get the current power consumption in W."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting current temperature in celsius.", "response": "def get_temperature(self):\n \"\"\"Get current temperature in celsius.\"\"\"\n try:\n request = requests.get(\n '{}/temp'.format(self.resource), timeout=self.timeout, allow_redirects=False)\n self.temperature = request.json()['compensated']\n return self.temperature\n except requests.exceptions.ConnectionError:\n raise exceptions.MyStromConnectionError()\n except ValueError:\n raise exceptions.MyStromNotVersionTwoSwitch()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef eval_fitness(self, chromosome):\n score = 0\n\n number = self.translator.translate_gene(chromosome.genes[0])\n\n for factor in self.factors:\n if number % factor == 0:\n score += 1\n else:\n score -= 1\n\n # check for optimal solution\n if score == len(self.factors):\n min_product = functools.reduce(lambda a,b: a * b, self.factors)\n\n if number + min_product > self.max_encoded_val:\n print(\"Found best solution:\", number)\n self.found_best = True\n\n # scale number of factors achieved/missed by ratio of\n # the solution number to the maximum possible integer\n # represented by binary strings of the given length\n return score * number / self.max_encoded_val", "response": "Convert a 1-gene chromosome into an integer and calculate its fitness value by checking it against each required factor."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_status(self):\n try:\n request = requests.get(\n '{}/{}/'.format(self.resource, URI), timeout=self.timeout)\n raw_data = request.json()\n # Doesn't always work !!!!!\n #self._mac = next(iter(self.raw_data))\n self.data = raw_data[self._mac]\n return self.data\n except (requests.exceptions.ConnectionError, ValueError):\n raise exceptions.MyStromConnectionError()", "response": "Get the details from the bulb."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the bulb state.", "response": "def get_bulb_state(self):\n \"\"\"Get the bulb state.\"\"\"\n self.get_status()\n try:\n self.state = self.data['on']\n except TypeError:\n self.state = False\n\n return bool(self.state)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_firmware(self):\n self.get_status()\n try:\n self.firmware = self.data['fw_version']\n except TypeError:\n self.firmware = 'Unknown'\n\n return self.firmware", "response": "Get the current firmware version."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the transition time in ms.", "response": "def get_transition_time(self):\n \"\"\"Get the transition time in ms.\"\"\"\n self.get_status()\n try:\n self.transition_time = self.data['ramp']\n except TypeError:\n self.transition_time = 0\n\n return self.transition_time"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_color_hsv(self, hue, saturation, value):\n try:\n data = \"action=on&color={};{};{}\".format(hue, saturation, value)\n request = requests.post(\n '{}/{}/{}'.format(self.resource, URI, self._mac),\n data=data, timeout=self.timeout)\n if request.status_code == 200:\n self.data['on'] = True\n except requests.exceptions.ConnectionError:\n raise 
exceptions.MyStromConnectionError()", "response": "Turn the bulb on with the given values as HSV."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nturning the bulb on and create a rainbow.", "response": "def set_rainbow(self, duration):\n \"\"\"Turn the bulb on and create a rainbow.\"\"\"\n for i in range(0, 359):\n self.set_color_hsv(i, 100, 100)\n time.sleep(duration/359)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nturns the bulb on and create a sunrise.", "response": "def set_sunrise(self, duration):\n \"\"\"Turn the bulb on and create a sunrise.\"\"\"\n self.set_transition_time(duration/100)\n for i in range(0, duration):\n try:\n data = \"action=on&color=3;{}\".format(i)\n request = requests.post(\n '{}/{}/{}'.format(self.resource, URI, self._mac),\n data=data, timeout=self.timeout)\n if request.status_code == 200:\n self.data['on'] = True\n except requests.exceptions.ConnectionError:\n raise exceptions.MyStromConnectionError()\n time.sleep(duration/100)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nturning the bulb on flashing with two colors.", "response": "def set_flashing(self, duration, hsv1, hsv2):\n \"\"\"Turn the bulb on, flashing with two colors.\"\"\"\n self.set_transition_time(100)\n for step in range(0, int(duration/2)):\n self.set_color_hsv(hsv1[0], hsv1[1], hsv1[2])\n time.sleep(1)\n self.set_color_hsv(hsv2[0], hsv2[1], hsv2[2])\n time.sleep(1)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nturns the bulb off.", "response": "def set_off(self):\n \"\"\"Turn the bulb off.\"\"\"\n try:\n request = requests.post(\n '{}/{}/{}/'.format(self.resource, URI, self._mac),\n data={'action': 'off'}, timeout=self.timeout)\n if request.status_code == 200:\n pass\n except requests.exceptions.ConnectionError:\n raise exceptions.MyStromConnectionError()"} {"SOURCE": "codesearchnet", "instruction": "Given the 
following Python 3 function, write the documentation\ndef translate_chromosome(self, chromosome):\n assert isinstance(chromosome, Chromosome)\n return [self.translate_gene(g) for g in chromosome]", "response": "Translate all the genes in a chromosome into translation products."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef translate_gene(self, gene):\n if self.signed:\n sign = 1 if gene.dna[0] == '0' else -1\n base_start_idx = 1\n else:\n sign = 1\n base_start_idx = 0\n\n base = sign * int(gene.dna[base_start_idx:base_start_idx + self.significand_length], base=2)\n \n # the exponent sign bit and exponent digits follow the significand,\n # offset by the leading sign bit when the encoding is signed\n exponent_sign = 1 if gene.dna[base_start_idx + self.significand_length] == '0' else -1\n exponent = exponent_sign * int(gene.dna[base_start_idx + self.significand_length + 1:], base=2)\n \n return float(base * 10 ** exponent)", "response": "Translate a gene with binary DNA into a base-10 floating-point real number."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nloading all the client libraries for the given apis.", "response": "def load_clients(self, path=None, apis=[]):\n \"\"\"Generate client libraries for the given apis, without starting an\n api server\"\"\"\n\n if not path:\n raise Exception(\"Missing path to api swagger files\")\n\n if type(apis) is not list:\n raise Exception(\"'apis' should be a list of api names\")\n\n if len(apis) == 0:\n raise Exception(\"'apis' is an empty list - Expected at least one api name\")\n\n for api_name in apis:\n api_path = os.path.join(path, '%s.yaml' % api_name)\n if not os.path.isfile(api_path):\n raise Exception(\"Cannot find swagger specification at %s\" % api_path)\n log.info(\"Loading api %s from %s\" % (api_name, api_path))\n ApiPool.add(\n api_name,\n yaml_path=api_path,\n timeout=self.timeout,\n error_callback=self.error_callback,\n formats=self.formats,\n do_persist=False,\n local=False,\n )\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 
to\nload all apis under the given path.", "response": "def load_apis(self, path, ignore=[], include_crash_api=False):\n \"\"\"Load all swagger files found at the given path, except those whose\n names are in the 'ignore' list\"\"\"\n\n if not path:\n raise Exception(\"Missing path to api swagger files\")\n\n if type(ignore) is not list:\n raise Exception(\"'ignore' should be a list of api names\")\n\n # Always ignore pym-config.yaml\n ignore.append('pym-config')\n\n # Find all swagger apis under 'path'\n apis = {}\n\n log.debug(\"Searching path %s\" % path)\n for root, dirs, files in os.walk(path):\n for f in files:\n if f.endswith('.yaml'):\n api_name = f.replace('.yaml', '')\n\n if api_name in ignore:\n log.info(\"Ignoring api %s\" % api_name)\n continue\n\n apis[api_name] = os.path.join(path, f)\n log.debug(\"Found api %s in %s\" % (api_name, f))\n\n # And add pymacaron's default ping and crash apis\n for name in ['ping', 'crash']:\n yaml_path = pkg_resources.resource_filename(__name__, 'pymacaron/%s.yaml' % name)\n if not os.path.isfile(yaml_path):\n yaml_path = os.path.join(os.path.dirname(sys.modules[__name__].__file__), '%s.yaml' % name)\n apis[name] = yaml_path\n\n if not include_crash_api:\n del apis['crash']\n\n # Save found apis\n self.path_apis = path\n self.apis = apis\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef publish_apis(self, path='doc'):\n\n assert path\n\n if not self.apis:\n raise Exception(\"You must call .load_apis() before .publish_apis()\")\n\n # Infer the live host url from pym-config.yaml\n proto = 'http'\n if hasattr(get_config(), 'aws_cert_arn'):\n proto = 'https'\n\n live_host = \"%s://%s\" % (proto, get_config().live_host)\n\n # Allow cross-origin calls\n CORS(self.app, resources={r\"/%s/*\" % path: {\"origins\": \"*\"}})\n\n # Add routes to serve api specs and redirect to petstore ui for each one\n for api_name, api_path in self.apis.items():\n\n api_filename = 
os.path.basename(api_path)\n log.info(\"Publishing api %s at /%s/%s\" % (api_name, path, api_name))\n\n def redirect_to_petstore(live_host, api_filename):\n def f():\n url = 'http://petstore.swagger.io/?url=%s/%s/%s' % (live_host, path, api_filename)\n log.info(\"Redirecting to %s\" % url)\n return redirect(url, code=302)\n return f\n\n def serve_api_spec(api_path):\n def f():\n with open(api_path, 'r') as f:\n spec = f.read()\n log.info(\"Serving %s\" % api_path)\n return Response(spec, mimetype='text/plain')\n return f\n\n self.app.add_url_rule('/%s/%s' % (path, api_name), str(uuid4()), redirect_to_petstore(live_host, api_filename))\n self.app.add_url_rule('/%s/%s' % (path, api_filename), str(uuid4()), serve_api_spec(api_path))\n\n return self", "response": "Publish all loaded apis under the given URI path by redirecting to petstore.swagger.io"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstarts the app server.", "response": "def start(self, serve=[]):\n \"\"\"Load all apis, either as local apis served by the flask app, or as\n remote apis to be called from within the app's endpoints, then start\n the app server\"\"\"\n\n # Check arguments\n if type(serve) is str:\n serve = [serve]\n elif type(serve) is list:\n pass\n else:\n raise Exception(\"'serve' should be an api name or a list of api names\")\n\n if len(serve) == 0:\n raise Exception(\"You must specify at least one api to serve\")\n\n for api_name in serve:\n if api_name not in self.apis:\n raise Exception(\"Can't find %s.yaml (swagger file) in the api directory %s\" % (api_name, self.path_apis))\n\n app = self.app\n app.secret_key = os.urandom(24)\n\n # Initialize JWT config\n conf = get_config()\n if hasattr(conf, 'jwt_secret'):\n log.info(\"Set JWT parameters to issuer=%s audience=%s secret=%s***\" % (\n conf.jwt_issuer,\n conf.jwt_audience,\n conf.jwt_secret[0:8],\n ))\n\n # Always serve the ping api\n serve.append('ping')\n\n # Let's compress returned data when 
possible\n compress = Compress()\n compress.init_app(app)\n\n # All apis that are not served locally are not persistent\n not_persistent = []\n for api_name in self.apis.keys():\n if api_name in serve:\n pass\n else:\n not_persistent.append(api_name)\n\n # Now load those apis into the ApiPool\n for api_name, api_path in self.apis.items():\n\n host = None\n port = None\n\n if api_name in serve:\n # We are serving this api locally: override the host:port specified in the swagger spec\n host = self.host\n port = self.port\n\n do_persist = True if api_name not in not_persistent else False\n local = True if api_name in serve else False\n\n log.info(\"Loading api %s from %s (persist: %s)\" % (api_name, api_path, do_persist))\n ApiPool.add(\n api_name,\n yaml_path=api_path,\n timeout=self.timeout,\n error_callback=self.error_callback,\n formats=self.formats,\n do_persist=do_persist,\n host=host,\n port=port,\n local=local,\n )\n\n ApiPool.merge()\n\n # Now spawn flask routes for all endpoints\n for api_name in self.apis.keys():\n if api_name in serve:\n log.info(\"Spawning api %s\" % api_name)\n api = getattr(ApiPool, api_name)\n # Spawn api and wrap every endpoint in a crash handler that\n # catches replies and reports errors\n api.spawn_api(app, decorator=generate_crash_handler_decorator(self.error_decorator))\n\n log.debug(\"Argv is [%s]\" % ' '.join(sys.argv))\n if 'celery' in sys.argv[0].lower():\n # This code is loading in a celery server - Don't start the actual flask app.\n log.info(\"Running in a Celery worker - Not starting the Flask app\")\n return\n\n # Initialize monitoring, if any is defined\n monitor_init(app=app, config=conf)\n\n if os.path.basename(sys.argv[0]) == 'gunicorn':\n # Gunicorn takes care of spawning workers\n log.info(\"Running in Gunicorn - Not starting the Flask app\")\n return\n\n # Debug mode is the default when not running via gunicorn\n app.debug = self.debug\n\n app.run(host='0.0.0.0', port=self.port)"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script for\ncreating a new exception class with the given name, code and status.", "response": "def add_error(name=None, code=None, status=None):\n \"\"\"Create a new Exception class\"\"\"\n if not name or not status or not code:\n raise Exception(\"Can't create Exception class %s: you must set both name, status and code\" % name)\n myexception = type(name, (PyMacaronException, ), {\"code\": code, \"status\": status})\n globals()[name] = myexception\n if code in code_to_class:\n raise Exception(\"ERROR! Exception %s is already defined.\" % code)\n code_to_class[code] = myexception\n return myexception"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef responsify(error):\n assert str(type(error).__name__) == 'Error'\n if error.error in code_to_class:\n e = code_to_class[error.error](error.error_description)\n if error.error_id:\n e.error_id = error.error_id\n if error.user_message:\n e.user_message = error.user_message\n return e.http_reply()\n elif isinstance(error, PyMacaronException):\n return error.http_reply()\n else:\n return PyMacaronException(\"Caught un-mapped error: %s\" % error).http_reply()", "response": "Take an Error model and return it as a Flask response"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_error(o):\n if hasattr(o, 'error') and hasattr(o, 'error_description') and hasattr(o, 'status'):\n return True\n return False", "response": "True if o is an error model or a flask Response"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntakes an exception caught within pymacaron_core and turns it into a bravado-core Error instance", "response": "def format_error(e):\n \"\"\"Take an exception caught within pymacaron_core and turn it into a\n bravado-core Error instance\n \"\"\"\n\n if isinstance(e, PyMacaronException):\n return e.to_model()\n\n if 
isinstance(e, PyMacaronCoreException) and e.__class__.__name__ == 'ValidationError':\n return ValidationError(str(e)).to_model()\n\n # Turn this exception into a PyMacaron Error model\n return UnhandledServerError(str(e)).to_model()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef raise_error(e):\n code = e.error\n if code in code_to_class:\n raise code_to_class[code](e.error_description)\n else:\n raise InternalServerError(e.error_description)", "response": "Take a bravado-core Error model and raise it as an exception"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef http_reply(self):\n data = {\n 'status': self.status,\n 'error': self.code.upper(),\n 'error_description': str(self)\n }\n\n if self.error_caught:\n data['error_caught'] = pformat(self.error_caught)\n\n if self.error_id:\n data['error_id'] = self.error_id\n\n if self.user_message:\n data['user_message'] = self.user_message\n\n r = jsonify(data)\n r.status_code = self.status\n\n if str(self.status) != \"200\":\n log.warn(\"ERROR: caught error %s %s [%s]\" % (self.status, self.code, str(self)))\n\n return r", "response": "Return a Flask reply object describing this error"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef to_model(self):\n e = ApiPool().current_server_api.model.Error(\n status=self.status,\n error=self.code.upper(),\n error_description=str(self),\n )\n if self.error_id:\n e.error_id = self.error_id\n if self.user_message:\n e.user_message = self.user_message\n if self.error_caught:\n e.error_caught = pformat(self.error_caught)\n return e", "response": "Return a bravado-core Error instance"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef compute_fitness_cdf(chromosomes, ga):\n ga.sort(chromosomes)\n \n fitness = [ga.eval_fitness(c) for c in chromosomes]\n min_fit = 
min(fitness)\n fit_range = max(fitness) - min_fit\n \n if fit_range == 0:\n # all chromosomes have equal chance of being chosen\n n = len(chromosomes)\n return [i / n for i in range(1, n + 1)]\n \n return [(fit - min_fit) / fit_range for fit in fitness]", "response": "Compute the fitness-weighted cumulative probabilities for a set of chromosomes."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef weighted_choice(seq, cdf):\n assert len(seq) == len(cdf)\n rand = random.random()\n \n for i, e in enumerate(seq):\n cp = cdf[i]\n assert 0 <= cp <= 1\n \n if rand < cp:\n return e", "response": "Select a random element from a sequence given the cumulative probabilities of selection."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget a list of metrics.", "response": "def _list(self, path, dim_key=None, **kwargs):\n \"\"\"Get a list of metrics.\"\"\"\n url_str = self.base_url + path\n if dim_key and dim_key in kwargs:\n dim_str = self.get_dimensions_url_string(kwargs[dim_key])\n kwargs[dim_key] = dim_str\n\n if kwargs:\n url_str += '?%s' % parse.urlencode(kwargs, True)\n\n body = self.client.list(\n path=url_str\n )\n\n return self._parse_body(body)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert an ASCII map string with rows to a list of strings, 1 string per row.", "response": "def mapstr_to_list(mapstr):\n \"\"\" Convert an ASCII map string with rows to a list of strings, 1 string per row. 
\"\"\"\n maplist = []\n\n with StringIO(mapstr) as infile:\n for row in infile:\n maplist.append(row.strip())\n\n return maplist"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning whether a cell is within the radius of the sprinkler.", "response": "def sprinkler_reaches_cell(x, y, sx, sy, r):\n \"\"\"\n Return whether a cell is within the radius of the sprinkler.\n\n x: column index of cell\n y: row index of cell\n sx: column index of sprinkler\n sy: row index of sprinkler\n r: sprinkler radius\n \"\"\"\n dx = sx - x\n dy = sy - y\n return math.sqrt(dx ** 2 + dy ** 2) <= r"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nevaluating the fitness of a sprinkler.", "response": "def eval_fitness(self, chromosome):\n \"\"\"\n Return the number of plants reached by the sprinkler.\n\n Returns a large penalty for sprinkler locations outside the map.\n \"\"\"\n # convert DNA to represented sprinkler coordinates\n sx, sy = self.translator.translate_chromosome(chromosome)\n\n # check for invalid points\n penalty = 0\n if sx >= self.w:\n penalty += self.w * self.h\n if sy >= self.h:\n penalty += self.w * self.h\n\n if penalty > 0:\n self.fitness_cache[chromosome.dna] = -penalty\n return -penalty\n\n # calculate number of crop cells watered by sprinkler\n crops_watered = 0\n\n # check bounding box around sprinkler for crop cells\n # circle guaranteed to be within square with side length 2*r around sprinkler\n row_start_idx = max(0, sy - self.r)\n row_end_idx = min(sy + self.r + 1, self.h)\n col_start_idx = max(0, sx - self.r)\n col_end_idx = min(sx + self.r + 1, self.w)\n\n for y, row in enumerate(self.maplist[row_start_idx:row_end_idx], row_start_idx):\n for x, cell in enumerate(row[col_start_idx:col_end_idx], col_start_idx):\n if cell == 'x' and sprinkler_reaches_cell(x, y, sx, sy, self.r):\n crops_watered += 1\n\n # reduce score by 1 if sprinkler placed on a crop cell\n if self.maplist[sy][sx] == 'x':\n 
crops_watered -= 1\n\n self.fitness_cache[chromosome.dna] = crops_watered\n return crops_watered"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a string of the ASCII map showing reached crop cells.", "response": "def map_sprinkler(self, sx, sy, watered_crop='^', watered_field='_', dry_field=' ', dry_crop='x'):\n \"\"\"\n Return a version of the ASCII map showing reached crop cells.\n \"\"\"\n # convert strings (rows) to lists of characters for easier map editing\n maplist = [list(s) for s in self.maplist]\n\n for y, row in enumerate(maplist):\n for x, cell in enumerate(row):\n if sprinkler_reaches_cell(x, y, sx, sy, self.r):\n if cell == 'x':\n cell = watered_crop\n else:\n cell = watered_field\n else:\n cell = dry_crop if cell == 'x' else dry_field\n\n maplist[y][x] = cell\n\n maplist[sy][sx] = 'O' # sprinkler\n\n return '\\n'.join([''.join(row) for row in maplist])"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the fitness score for a given chromosome.", "response": "def get_fitness(self, chromosome):\n \"\"\" Get the fitness score for a chromosome, using the cached value if available. 
\"\"\"\n fitness = self.fitness_cache.get(chromosome.dna)\n\n if fitness is None:\n fitness = self.eval_fitness(chromosome)\n self.fitness_cache[chromosome.dna] = fitness\n\n return fitness"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef compete(self, chromosomes):\n # update overall fitness for this run\n self.sort(chromosomes)\n min_fit = self.get_fitness(chromosomes[0])\n max_fit = self.get_fitness(chromosomes[-1])\n \n if min_fit < self.min_fit_ever:\n self.min_fit_ever = min_fit\n if max_fit > self.max_fit_ever:\n self.max_fit_ever = max_fit\n \n overall_fit_range = self.max_fit_ever - self.min_fit_ever # \"absolute\" fitness range\n current_fit_range = max_fit - min_fit # \"relative\" fitness range\n \n # choose survivors based on relative fitness within overall fitness range\n survivors = []\n for chromosome in chromosomes:\n fit = self.get_fitness(chromosome)\n p_survival_absolute = (fit - self.min_fit_ever) / overall_fit_range if overall_fit_range != 0 else 1\n p_survival_relative = (fit - min_fit) / current_fit_range if current_fit_range != 0 else 1\n \n # compute weighted average survival probability\n # a portion accounts for absolute overall fitness for all chromosomes ever encountered (environment-driven)\n # the other portion accounts for relative fitness within the current population (competition-driven)\n p_survival = p_survival_absolute * self.abs_fit_weight + p_survival_relative * self.rel_fit_weight\n\n if random.random() < p_survival:\n survivors.append(chromosome)\n\n if not survivors:\n # rarely, nothing survives -- allow everyone to live\n return chromosomes\n \n return survivors", "response": "Simulate competition and survival of the current generation."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreproduce the population from a pool of surviving chromosomes and a set of crossover events.", "response": "def reproduce(self, survivors, 
p_crossover, two_point_crossover=False, target_size=None):\n \"\"\"\n Reproduces the population from a pool of surviving chromosomes\n until a target population size is met. Offspring are created\n by selecting a survivor. Survivors with higher fitness have a\n greater chance to be selected for reproduction.\n \n Genetic crossover events may occur for each offspring created.\n Crossover mates are randomly selected from the pool of survivors.\n Crossover points are randomly selected from the length of the crossed chromosomes.\n If crossover does not occur, an offspring is an exact copy of the selected survivor.\n Crossover only affects the DNA of the offspring, not the survivors/parents.\n \n survivors: pool of parent chromosomes to reproduce from\n p_crossover: probability in [0, 1] that a crossover event will\n occur for each offspring\n two_point_crossover (default=False): whether 2-point crossover is used; default is 1-point\n target_size (default=original population size): target population size\n \n return: list of survivors plus any offspring\n \"\"\"\n assert 0 <= p_crossover <= 1\n \n if not target_size:\n target_size = self.orig_pop_size\n \n num_survivors = len(survivors)\n \n # compute reproduction cumulative probabilities\n # weakest member gets p=0 but can be crossed-over with\n cdf = compute_fitness_cdf(survivors, self)\n \n offspring = []\n while num_survivors + len(offspring) < target_size:\n # pick a survivor to reproduce\n c1 = weighted_choice(survivors, cdf).copy()\n \n # crossover\n if random.random() < p_crossover:\n # randomly pick a crossover mate from survivors\n # same chromosome can be c1 and c2\n c2 = random.choice(survivors).copy()\n point1 = random.randrange(0, c1.length)\n point2 = random.randrange(point1 + 1, c1.length + 1) if two_point_crossover else None\n c1.crossover(c2, point1, point2)\n \n offspring.append(c1)\n \n return survivors + offspring"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncall every 
chromosome's mutate method.", "response": "def mutate(self, chromosomes, p_mutate):\n \"\"\" \n Call every chromosome's ``mutate`` method. \n \n p_mutate: probability of mutation in [0, 1]\n \"\"\"\n assert 0 <= p_mutate <= 1\n \n for chromosome in chromosomes:\n chromosome.mutate(p_mutate)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run(self, generations, p_mutate, p_crossover, elitist=True, two_point_crossover=False,\n refresh_after=None, quit_after=None):\n \"\"\"\n Run a standard genetic algorithm simulation for a set number\n of generations (iterations), each consisting of the following\n ordered steps:\n \n 1. competition/survival of the fittest (``compete`` method)\n 2. reproduction (``reproduce`` method)\n 3. mutation (``mutate`` method)\n 4. check if the new population's fittest is fitter than the overall fittest\n 4a. if not and the ``elitist`` option is active, replace the weakest solution\n with the overall fittest\n \n generations: how many generations to run\n p_mutate: probability of mutation in [0, 1]\n p_crossover: probability in [0, 1] that a crossover event will occur for each offspring\n elitist (default=True): option to replace the weakest solution with the \n strongest if a new one is not found each generation\n two_point_crossover (default=False): whether 2-point crossover is used\n refresh_after: number of generations since the last upset after which to randomly generate a new population\n quit_after: number of generations since the last upset after which to stop the run, possibly before reaching\n ``generations`` iterations\n \n return: the overall fittest solution (chromosome)\n \"\"\"\n start_time = time.time()\n \n assert 0 <= p_mutate <= 1\n assert 0 <= p_crossover <= 1\n \n # these values guaranteed to be replaced in first generation\n self.min_fit_ever = 1e999999999\n self.max_fit_ever = -1e999999999\n \n self.generation_fittest.clear()\n 
self.generation_fittest_fit.clear()\n self.overall_fittest_fit.clear()\n self.new_fittest_generations.clear()\n \n overall_fittest = self.get_fittest()\n overall_fittest_fit = self.get_fitness(overall_fittest)\n gens_since_upset = 0\n \n for gen in range(1, generations + 1):\n survivors = self.compete(self.chromosomes)\n self.chromosomes = self.reproduce(survivors, p_crossover, two_point_crossover=two_point_crossover)\n self.mutate(self.chromosomes, p_mutate)\n \n # check for new fittest\n gen_fittest = self.get_fittest().copy()\n gen_fittest_fit = self.get_fitness(gen_fittest)\n \n if gen_fittest_fit > overall_fittest_fit:\n overall_fittest = gen_fittest\n overall_fittest_fit = gen_fittest_fit\n self.new_fittest_generations.append(gen)\n gens_since_upset = 0\n else:\n gens_since_upset += 1\n \n if elitist:\n # no new fittest found, replace least fit with overall fittest\n self.sort(self.chromosomes)\n self.chromosomes[0].dna = overall_fittest.dna\n \n if quit_after and gens_since_upset >= quit_after:\n print(\"quitting on generation\", gen, \"after\", quit_after, \"generations with no upset\")\n break\n \n if refresh_after and gens_since_upset >= refresh_after:\n # been a very long time since a new best solution -- mix things up\n print(\"refreshing on generation\", gen)\n self.mutate(self.chromosomes, 0.5)\n gens_since_upset = 0\n \n self.generation_fittest[gen] = gen_fittest\n self.generation_fittest_fit[gen] = gen_fittest_fit\n self.overall_fittest_fit[gen] = overall_fittest_fit\n \n if self.should_terminate(overall_fittest):\n break\n\n self.fitness_cache.clear()\n \n self.run_time_s = time.time() - start_time\n \n return overall_fittest", "response": "Run a standard genetic algorithm simulation for a set of generations"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates 1 or more chromosomes with randomly generated DNA.", "response": "def create_random(cls, gene_length, n=1, gene_class=BinaryGene):\n 
\"\"\"\n Create 1 or more chromosomes with randomly generated DNA.\n\n gene_length: int (or sequence of ints) describing gene DNA length\n n: number of chromosomes to create (default=1); returns a list if n>1, else a single chromosome\n gene_class: subclass of ``ga.chromosomes.BaseGene`` to use for genes\n\n return: new chromosome\n \"\"\"\n assert issubclass(gene_class, BaseGene)\n chromosomes = []\n\n # when gene_length is scalar, convert to a list to keep subsequent code simple\n if not hasattr(gene_length, '__iter__'):\n gene_length = [gene_length]\n\n for _ in range(n):\n genes = [gene_class.create_random(length) for length in gene_length]\n chromosomes.append(cls(genes))\n\n if n == 1:\n return chromosomes[0]\n else:\n return chromosomes"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreplacing this chromosome s DNA with new DNA of equal length and assign the new DNA to the genes sequentially.", "response": "def dna(self, dna):\n \"\"\" \n Replace this chromosome's DNA with new DNA of equal length,\n assigning the new DNA to the chromosome's genes sequentially.\n \n For example, if a chromosome contains these genes...\n 1. 100100\n 2. 011011\n \n ...and the new DNA is 111111000000, the genes become:\n 1. 111111\n 2. 
000000\n \"\"\"\n assert self.length == len(dna)\n i = 0\n \n for gene in self.genes:\n gene.dna = dna[i:i + gene.length]\n \n i += gene.length"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef crossover(self, chromosome, point1, point2=None):\n assert self.length == chromosome.length\n\n if point2 is None:\n new_dna = self.dna[:point1] + chromosome.dna[point1:]\n other_new_dna = chromosome.dna[:point1] + self.dna[point1:]\n\n self.dna = new_dna\n chromosome.dna = other_new_dna\n else:\n assert point2 > point1\n self_substr = self.dna[point1:point2 + 1]\n other_substr = chromosome.dna[point1:point2 + 1]\n\n self.dna = self.dna[:point1] + other_substr + self.dna[point2 + 1:]\n chromosome.dna = chromosome.dna[:point1] + self_substr + chromosome.dna[point2 + 1:]", "response": "Exchange the DNA with another chromosome at one or two common points."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef mutate(self, p_mutate):\n assert 0 <= p_mutate <= 1\n \n for gene in self.genes:\n gene.mutate(p_mutate)", "response": "Mutate all genes in this chromosome for the given probability."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a new instance of this chromosome by copying its genes.", "response": "def copy(self):\n \"\"\" Return a new instance of this chromosome by copying its genes. 
\"\"\"\n genes = [g.copy() for g in self.genes]\n return type(self)(genes)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_genes(self):\n gene_dna_set = set([g.dna for g in self.genes])\n assert gene_dna_set == self.dna_choices_set", "response": "Assert that every DNA choice is represented by exactly one gene."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef report_error(title=None, data={}, caught=None, is_fatal=False):\n\n # Don't report errors if NO_ERROR_REPORTING set to 1 (set by run_acceptance_tests)\n if os.environ.get('DO_REPORT_ERROR', None):\n # Force error reporting\n pass\n elif os.environ.get('NO_ERROR_REPORTING', '') == '1':\n log.info(\"NO_ERROR_REPORTING is set: not reporting error!\")\n return\n elif 'is_ec2_instance' in data:\n if not data['is_ec2_instance']:\n # Not running on amazon: no reporting\n log.info(\"DATA[is_ec2_instance] is False: not reporting error!\")\n return\n elif not is_ec2_instance():\n log.info(\"Not running on an EC2 instance: not reporting error!\")\n return\n\n # Fill error report with tons of useful data\n if 'user' not in data:\n populate_error_report(data)\n\n # Add the message\n data['title'] = title\n data['is_fatal_error'] = is_fatal\n\n # Add the error caught, if any:\n if caught:\n data['error_caught'] = \"%s\" % caught\n\n # Add a trace - Formatting traceback may raise a UnicodeDecodeError...\n data['stack'] = []\n try:\n data['stack'] = [l for l in traceback.format_stack()]\n except Exception:\n data['stack'] = 'Skipped trace - contained non-ascii chars'\n\n # inspect may raise a UnicodeDecodeError...\n fname = ''\n try:\n fname = inspect.stack()[1][3]\n except Exception as e:\n fname = 'unknown-method'\n\n # Format the error's title\n status, code = 'unknown_status', 'unknown_error_code'\n if 'response' in data:\n status = data['response'].get('status', status)\n code = 
data['response'].get('error_code', code)\n title_details = \"%s %s %s\" % (ApiPool().current_server_name, status, code)\n else:\n title_details = \"%s %s()\" % (ApiPool().current_server_name, fname)\n\n if is_fatal:\n title_details = 'FATAL ERROR %s' % title_details\n else:\n title_details = 'NON-FATAL ERROR %s' % title_details\n\n if title:\n title = \"%s: %s\" % (title_details, title)\n else:\n title = title_details\n\n global error_reporter\n log.info(\"Reporting crash...\")\n\n try:\n error_reporter(title, json.dumps(data, sort_keys=True, indent=4))\n except Exception as e:\n # Don't block on replying to api caller\n log.error(\"Failed to send email report: %s\" % str(e))", "response": "Format a crash report and send it somewhere relevant."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef populate_error_report(data):\n\n # Did pymacaron_core set a call_id and call_path?\n call_id, call_path = '', ''\n if hasattr(stack.top, 'call_id'):\n call_id = stack.top.call_id\n if hasattr(stack.top, 'call_path'):\n call_path = stack.top.call_path\n\n # Unique ID associated to all responses associated to a given\n # call to apis, across all micro-services\n data['call_id'] = call_id\n data['call_path'] = call_path\n\n # Are we in aws?\n data['is_ec2_instance'] = is_ec2_instance()\n\n # If user is authenticated, get her id\n user_data = {\n 'id': '',\n 'is_auth': 0,\n 'ip': '',\n }\n\n if stack.top:\n # We are in a request context\n user_data['ip'] = request.remote_addr\n\n if 'X-Forwarded-For' in request.headers:\n user_data['forwarded_ip'] = request.headers.get('X-Forwarded-For', '')\n\n if 'User-Agent' in request.headers:\n user_data['user_agent'] = request.headers.get('User-Agent', '')\n\n if hasattr(stack.top, 'current_user'):\n user_data['is_auth'] = 1\n user_data['id'] = stack.top.current_user.get('sub', '')\n for k in ('name', 'email', 'is_expert', 'is_admin', 'is_support', 'is_tester', 'language'):\n v = 
stack.top.current_user.get(k, None)\n if v:\n user_data[k] = v\n\n data['user'] = user_data\n\n # Is the current code running as a server?\n if ApiPool().current_server_api:\n # Server info\n server = request.base_url\n server = server.replace('http://', '')\n server = server.replace('https://', '')\n server = server.split('/')[0]\n parts = server.split(':')\n fqdn = parts[0]\n port = parts[1] if len(parts) == 2 else ''\n\n data['server'] = {\n 'fqdn': fqdn,\n 'port': port,\n 'api_name': ApiPool().current_server_name,\n 'api_version': ApiPool().current_server_api.get_version(),\n }\n\n # Endpoint data\n data['endpoint'] = {\n 'id': \"%s %s %s\" % (ApiPool().current_server_name, request.method, request.path),\n 'url': request.url,\n 'base_url': request.base_url,\n 'path': request.path,\n 'method': request.method\n }", "response": "Populate the error report with the generic stats from the request"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a function that returns a function that will be called when a crash occurs.", "response": "def generate_crash_handler_decorator(error_decorator=None):\n \"\"\"Return the crash_handler to pass to pymacaron_core, with optional error decoration\"\"\"\n\n def crash_handler(f):\n \"\"\"Return a decorator that reports failed api calls via the error_reporter,\n for use on every server endpoint\"\"\"\n\n @wraps(f)\n def wrapper(*args, **kwargs):\n \"\"\"Generate a report of this api call, and if the call failed or was too slow,\n forward this report via the error_reporter\"\"\"\n\n data = {}\n t0 = timenow()\n exception_string = ''\n\n # Call endpoint and log execution time\n try:\n res = f(*args, **kwargs)\n except Exception as e:\n # An unhandled exception occured!\n exception_string = str(e)\n exc_type, exc_value, exc_traceback = sys.exc_info()\n trace = traceback.format_exception(exc_type, exc_value, exc_traceback, 30)\n data['trace'] = trace\n\n # If it is a PyMacaronException, just 
call its http_reply()\n if hasattr(e, 'http_reply'):\n res = e.http_reply()\n else:\n # Otherwise, forge a Response\n e = UnhandledServerError(exception_string)\n log.error(\"UNHANDLED EXCEPTION: %s\" % '\\n'.join(trace))\n res = e.http_reply()\n\n t1 = timenow()\n\n # Is the response an Error instance?\n response_type = type(res).__name__\n status_code = 200\n is_an_error = 0\n error = ''\n error_description = ''\n error_user_message = ''\n\n error_id = ''\n\n if isinstance(res, Response):\n # Got a flask.Response object\n res_data = None\n\n status_code = str(res.status_code)\n\n if str(status_code) == '200':\n\n # It could be any valid json response, but it could also be an Error model\n # that pymacaron_core handled as a status 200 because it does not know of\n # pymacaron Errors\n if res.content_type == 'application/json':\n s = str(res.data)\n if '\"error\":' in s and '\"error_description\":' in s and '\"status\":' in s:\n # This looks like an error, let's decode it\n res_data = res.get_data()\n else:\n # Assuming it is a PyMacaronException.http_reply()\n res_data = res.get_data()\n\n if res_data:\n if type(res_data) is bytes:\n res_data = res_data.decode(\"utf-8\")\n\n is_json = True\n try:\n j = json.loads(res_data)\n except ValueError as e:\n # This was a plain html response. 
Fake an error\n is_json = False\n j = {'error': res_data, 'status': status_code}\n\n # Make sure that the response gets the same status as the PyMacaron Error it contained\n status_code = j['status']\n res.status_code = int(status_code)\n\n # Patch Response to contain a unique id\n if is_json:\n if 'error_id' not in j:\n # If the error is forwarded by multiple micro-services, we\n # want the error_id to be set only on the original error\n error_id = str(uuid.uuid4())\n j['error_id'] = error_id\n res.set_data(json.dumps(j))\n\n if error_decorator:\n # Apply error_decorator, if any defined\n res.set_data(json.dumps(error_decorator(j)))\n\n # And extract data from this error\n error = j.get('error', 'NO_ERROR_IN_JSON')\n error_description = j.get('error_description', res_data)\n if error_description == '':\n error_description = res_data\n\n if not exception_string:\n exception_string = error_description\n\n error_user_message = j.get('user_message', '')\n is_an_error = 1\n\n\n request_args = []\n if len(args):\n request_args.append(args)\n if kwargs:\n request_args.append(kwargs)\n\n data.update({\n # Set only on the original error, not on forwarded ones, not on\n # success responses\n 'error_id': error_id,\n\n # Call results\n 'time': {\n 'start': t0.isoformat(),\n 'end': t1.isoformat(),\n 'microsecs': (t1.timestamp() - t0.timestamp()) * 1000000,\n },\n\n # Response details\n 'response': {\n 'type': response_type,\n 'status': str(status_code),\n 'is_error': is_an_error,\n 'error_code': error,\n 'error_description': error_description,\n 'user_message': error_user_message,\n },\n\n # Request details\n 'request': {\n 'params': pformat(request_args),\n },\n })\n\n populate_error_report(data)\n\n # inspect may raise a UnicodeDecodeError...\n fname = function_name(f)\n\n #\n # Should we report this call?\n #\n\n # If it is an internal errors, report it\n if data['response']['status'] and int(data['response']['status']) >= 500:\n report_error(\n title=\"%s(): %s\" % (fname, 
exception_string),\n                    data=data,\n                    is_fatal=True\n                )\n\n            log.info(\"\")\n            log.info(\" <= Done!\")\n            log.info(\"\")\n\n            return res\n\n        return wrapper\n\n    return crash_handler"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_json_results(self, response):\n        '''\n        Parses the request result and returns the JSON object. Handles all errors.\n        '''\n        try:\n            # return the proper JSON object, or error code if request didn't go through.\n            self.most_recent_json = response.json()\n            json_results = response.json()\n            if response.status_code in [401, 403]: #401 is invalid key, 403 is out of monthly quota.\n                raise PyMsCognitiveWebSearchException(\"CODE {code}: {message}\".format(code=response.status_code,message=json_results[\"message\"]) )\n            elif response.status_code in [429]: #429 means try again in x seconds.\n                message = json_results['message']\n                try:\n                    # extract time out seconds from response\n                    timeout = int(re.search('in (.+?) seconds', message).group(1)) + 1\n                    print(\"CODE 429, sleeping for {timeout} seconds\".format(timeout=str(timeout)))\n                    time.sleep(timeout)\n                except (AttributeError, ValueError):\n                    if not self.silent_fail:\n                        raise PyMsCognitiveWebSearchException(\"CODE 429. Failed to auto-sleep: {message}\".format(code=response.status_code,message=json_results[\"message\"]) )\n                    else:\n                        print(\"CODE 429. Failed to auto-sleep: {message}. Trying again in 5 seconds.\".format(code=response.status_code,message=json_results[\"message\"]))\n                        time.sleep(5)\n        except ValueError:\n            if not self.silent_fail:\n                raise PyMsCognitiveWebSearchException(\"Request returned with code %s, error msg: %s\" % (response.status_code, response.text))\n            else:\n                print(\"[ERROR] Request returned with code %s, error msg: %s. \\nContinuing in 5 seconds.\" % (response.status_code, response.text))\n                time.sleep(5)\n        return json_results", "response": "Parses the response and returns the JSON object. 
Handles all errors."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsearch all the available resource types for all available resources.", "response": "def search_all(self, quota=50, format='json'):\n '''\n Returns a single list containing up to 'limit' Result objects\n Will keep requesting until quota is met\n Will also truncate extra results to return exactly the given quota\n '''\n quota_left = quota\n results = []\n while quota_left > 0:\n more_results = self._search(quota_left, format)\n if not more_results:\n break\n results += more_results\n quota_left = quota_left - len(more_results)\n time.sleep(1)\n results = results[0:quota]\n return results"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef format_parameters(params):\n '''Reformat parameters into dict of format expected by the API.'''\n\n if not params:\n return {}\n\n # expect multiple invocations of --parameters but fall back\n # to ; delimited if only one --parameters is specified\n if len(params) == 1:\n if params[0].find(';') != -1: # found\n params = params[0].split(';')\n else:\n params = params[0].split(',')\n\n parameters = {}\n for p in params:\n try:\n (n, v) = p.split('=', 1)\n except ValueError:\n msg = '%s(%s). %s.' 
% ('Malformed parameter', p,\n 'Use the key=value format')\n raise exc.CommandError(msg)\n\n if n not in parameters:\n parameters[n] = v\n else:\n if not isinstance(parameters[n], list):\n parameters[n] = [parameters[n]]\n parameters[n].append(v)\n\n return parameters", "response": "Reformat parameters into dict of format expected by the API."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes a specific alarm.", "response": "def delete(self, **kwargs):\n \"\"\"Delete a specific alarm.\"\"\"\n url_str = self.base_url + '/%s' % kwargs['alarm_id']\n resp = self.client.delete(url_str)\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning version details of the running server api", "response": "def do_version():\n \"\"\"Return version details of the running server api\"\"\"\n v = ApiPool.ping.model.Version(\n name=ApiPool().current_server_name,\n version=ApiPool().current_server_api.get_version(),\n container=get_container_version(),\n )\n log.info(\"/version: \" + pprint.pformat(v))\n return v"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef do_metric_create_raw(mc, args):\n '''Create metric from raw json body.'''\n try:\n mc.metrics.create(**args.jsonbody)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n print('Successfully created metric')", "response": "Create metric from raw json body."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlist names of metrics.", "response": "def do_metric_name_list(mc, args):\n '''List names of metrics.'''\n fields = {}\n if args.dimensions:\n fields['dimensions'] = utils.format_dimensions_query(args.dimensions)\n if args.limit:\n fields['limit'] = args.limit\n if args.offset:\n fields['offset'] = args.offset\n if args.tenant_id:\n 
fields['tenant_id'] = args.tenant_id\n\n try:\n metric_names = mc.metrics.list_names(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n if args.json:\n print(utils.json_formatter(metric_names))\n return\n if isinstance(metric_names, list):\n utils.print_list(metric_names, ['Name'], formatters={'Name': lambda x: x['name']})"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlist metrics for this tenant.", "response": "def do_metric_list(mc, args):\n '''List metrics for this tenant.'''\n fields = {}\n if args.name:\n fields['name'] = args.name\n if args.dimensions:\n fields['dimensions'] = utils.format_dimensions_query(args.dimensions)\n if args.limit:\n fields['limit'] = args.limit\n if args.offset:\n fields['offset'] = args.offset\n if args.starttime:\n _translate_starttime(args)\n fields['start_time'] = args.starttime\n if args.endtime:\n fields['end_time'] = args.endtime\n if args.tenant_id:\n fields['tenant_id'] = args.tenant_id\n\n try:\n metric = mc.metrics.list(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n if args.json:\n print(utils.json_formatter(metric))\n return\n cols = ['name', 'dimensions']\n formatters = {\n 'name': lambda x: x['name'],\n 'dimensions': lambda x: utils.format_dict(x['dimensions']),\n }\n if isinstance(metric, list):\n # print the list\n utils.print_list(metric, cols, formatters=formatters)\n else:\n # add the dictionary to a list, so print_list works\n metric_list = list()\n metric_list.append(metric)\n utils.print_list(\n metric_list,\n cols,\n formatters=formatters)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlist names of metric dimensions.", "response": "def do_dimension_name_list(mc, args):\n '''List names of metric dimensions.'''\n fields = {}\n if 
args.metric_name:\n fields['metric_name'] = args.metric_name\n if args.limit:\n fields['limit'] = args.limit\n if args.offset:\n fields['offset'] = args.offset\n if args.tenant_id:\n fields['tenant_id'] = args.tenant_id\n\n try:\n dimension_names = mc.metrics.list_dimension_names(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n\n if args.json:\n print(utils.json_formatter(dimension_names))\n return\n\n if isinstance(dimension_names, list):\n utils.print_list(dimension_names, ['Dimension Names'], formatters={\n 'Dimension Names': lambda x: x['dimension_name']})"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlists names of metric dimensions.", "response": "def do_dimension_value_list(mc, args):\n '''List names of metric dimensions.'''\n fields = {}\n fields['dimension_name'] = args.dimension_name\n if args.metric_name:\n fields['metric_name'] = args.metric_name\n if args.limit:\n fields['limit'] = args.limit\n if args.offset:\n fields['offset'] = args.offset\n if args.tenant_id:\n fields['tenant_id'] = args.tenant_id\n\n try:\n dimension_values = mc.metrics.list_dimension_values(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n\n if args.json:\n print(utils.json_formatter(dimension_values))\n return\n\n if isinstance(dimension_values, list):\n utils.print_list(dimension_values, ['Dimension Values'], formatters={\n 'Dimension Values': lambda x: x['dimension_value']})"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef do_metric_statistics(mc, args):\n '''List measurement statistics for the specified metric.'''\n statistic_types = ['AVG', 'MIN', 'MAX', 'COUNT', 'SUM']\n statlist = args.statistics.split(',')\n for stat in statlist:\n if stat.upper() not in statistic_types:\n 
errmsg = ('Invalid type, not one of [' +\n ', '.join(statistic_types) + ']')\n raise osc_exc.CommandError(errmsg)\n\n fields = {}\n fields['name'] = args.name\n if args.dimensions:\n fields['dimensions'] = utils.format_dimensions_query(args.dimensions)\n _translate_starttime(args)\n fields['start_time'] = args.starttime\n if args.endtime:\n fields['end_time'] = args.endtime\n if args.period:\n fields['period'] = args.period\n fields['statistics'] = args.statistics\n if args.limit:\n fields['limit'] = args.limit\n if args.offset:\n fields['offset'] = args.offset\n if args.merge_metrics:\n fields['merge_metrics'] = args.merge_metrics\n if args.group_by:\n fields['group_by'] = args.group_by\n if args.tenant_id:\n fields['tenant_id'] = args.tenant_id\n\n try:\n metric = mc.metrics.list_statistics(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n if args.json:\n print(utils.json_formatter(metric))\n return\n cols = ['name', 'dimensions']\n # add dynamic column names\n if metric:\n column_names = metric[0]['columns']\n for name in column_names:\n cols.append(name)\n else:\n # when empty set, print_list needs a col\n cols.append('timestamp')\n\n formatters = {\n 'name': lambda x: x['name'],\n 'dimensions': lambda x: utils.format_dict(x['dimensions']),\n 'timestamp': lambda x:\n format_statistic_timestamp(x['statistics'], x['columns'],\n 'timestamp'),\n 'avg': lambda x:\n format_statistic_value(x['statistics'], x['columns'], 'avg'),\n 'min': lambda x:\n format_statistic_value(x['statistics'], x['columns'], 'min'),\n 'max': lambda x:\n format_statistic_value(x['statistics'], x['columns'], 'max'),\n 'count': lambda x:\n format_statistic_value(x['statistics'], x['columns'], 'count'),\n 'sum': lambda x:\n format_statistic_value(x['statistics'], x['columns'], 'sum'),\n }\n if isinstance(metric, list):\n # print the list\n utils.print_list(metric, cols, formatters=formatters)\n 
else:\n # add the dictionary to a list, so print_list works\n metric_list = list()\n metric_list.append(metric)\n utils.print_list(\n metric_list,\n cols,\n formatters=formatters)", "response": "List measurement statistics for the specified metric."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef do_notification_list(mc, args):\n '''List notifications for this tenant.'''\n fields = {}\n if args.limit:\n fields['limit'] = args.limit\n if args.offset:\n fields['offset'] = args.offset\n if args.sort_by:\n sort_by = args.sort_by.split(',')\n for field in sort_by:\n field_values = field.lower().split()\n if len(field_values) > 2:\n print(\"Invalid sort_by value {}\".format(field))\n if field_values[0] not in allowed_notification_sort_by:\n print(\"Sort-by field name {} is not in [{}]\".format(field_values[0],\n allowed_notification_sort_by))\n return\n if len(field_values) > 1 and field_values[1] not in ['asc', 'desc']:\n print(\"Invalid value {}, must be asc or desc\".format(field_values[1]))\n fields['sort_by'] = args.sort_by\n\n try:\n notification = mc.notifications.list(**fields)\n except osc_exc.ClientException as he:\n raise osc_exc.CommandError(\n 'ClientException code=%s message=%s' %\n (he.code, he.message))\n else:\n if args.json:\n print(utils.json_formatter(notification))\n return\n cols = ['name', 'id', 'type', 'address', 'period']\n formatters = {\n 'name': lambda x: x['name'],\n 'id': lambda x: x['id'],\n 'type': lambda x: x['type'],\n 'address': lambda x: x['address'],\n 'period': lambda x: x['period'],\n }\n if isinstance(notification, list):\n\n utils.print_list(\n notification,\n cols,\n formatters=formatters)\n else:\n notif_list = list()\n notif_list.append(notification)\n utils.print_list(notif_list, cols, formatters=formatters)", "response": "List notifications for this tenant."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating a single notification.", 
"response": "def do_notification_update(mc, args):\n '''Update notification.'''\n fields = {}\n fields['notification_id'] = args.id\n fields['name'] = args.name\n\n fields['type'] = args.type\n fields['address'] = args.address\n if not _validate_notification_period(args.period, args.type.upper()):\n return\n fields['period'] = args.period\n try:\n notification = mc.notifications.update(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n print(jsonutils.dumps(notification, indent=2))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef do_alarm_definition_create(mc, args):\n '''Create an alarm definition.'''\n fields = {}\n fields['name'] = args.name\n if args.description:\n fields['description'] = args.description\n fields['expression'] = args.expression\n if args.alarm_actions:\n fields['alarm_actions'] = args.alarm_actions\n if args.ok_actions:\n fields['ok_actions'] = args.ok_actions\n if args.undetermined_actions:\n fields['undetermined_actions'] = args.undetermined_actions\n if args.severity:\n if not _validate_severity(args.severity):\n return\n fields['severity'] = args.severity\n if args.match_by:\n fields['match_by'] = args.match_by.split(',')\n try:\n alarm = mc.alarm_definitions.create(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n print(jsonutils.dumps(alarm, indent=2))", "response": "Create an alarm definition."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlists alarm definitions for this tenant.", "response": "def do_alarm_definition_list(mc, args):\n '''List alarm definitions for this tenant.'''\n fields = {}\n if args.name:\n fields['name'] = args.name\n if args.dimensions:\n fields['dimensions'] = utils.format_dimensions_query(args.dimensions)\n if 
args.severity:\n if not _validate_severity(args.severity):\n return\n fields['severity'] = args.severity\n if args.sort_by:\n sort_by = args.sort_by.split(',')\n for field in sort_by:\n field_values = field.split()\n if len(field_values) > 2:\n print(\"Invalid sort_by value {}\".format(field))\n if field_values[0] not in allowed_definition_sort_by:\n print(\"Sort-by field name {} is not in [{}]\".format(field_values[0],\n allowed_definition_sort_by))\n return\n if len(field_values) > 1 and field_values[1] not in ['asc', 'desc']:\n print(\"Invalid value {}, must be asc or desc\".format(field_values[1]))\n fields['sort_by'] = args.sort_by\n if args.limit:\n fields['limit'] = args.limit\n if args.offset:\n fields['offset'] = args.offset\n try:\n alarm = mc.alarm_definitions.list(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n if args.json:\n print(utils.json_formatter(alarm))\n return\n cols = ['name', 'id', 'expression', 'match_by', 'actions_enabled']\n formatters = {\n 'name': lambda x: x['name'],\n 'id': lambda x: x['id'],\n 'expression': lambda x: x['expression'],\n 'match_by': lambda x: utils.format_list(x['match_by']),\n 'actions_enabled': lambda x: x['actions_enabled'],\n }\n if isinstance(alarm, list):\n # print the list\n utils.print_list(alarm, cols, formatters=formatters)\n else:\n # add the dictionary to a list, so print_list works\n alarm_list = list()\n alarm_list.append(alarm)\n utils.print_list(alarm_list, cols, formatters=formatters)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeleting the alarm definition.", "response": "def do_alarm_definition_delete(mc, args):\n '''Delete the alarm definition.'''\n fields = {}\n fields['alarm_id'] = args.id\n try:\n mc.alarm_definitions.delete(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n 
print('Successfully deleted alarm definition')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates the alarm definition.", "response": "def do_alarm_definition_update(mc, args):\n '''Update the alarm definition.'''\n fields = {}\n fields['alarm_id'] = args.id\n fields['name'] = args.name\n fields['description'] = args.description\n fields['expression'] = args.expression\n fields['alarm_actions'] = _arg_split_patch_update(args.alarm_actions)\n fields['ok_actions'] = _arg_split_patch_update(args.ok_actions)\n fields['undetermined_actions'] = _arg_split_patch_update(args.undetermined_actions)\n if args.actions_enabled not in enabled_types:\n errmsg = ('Invalid value, not one of [' +\n ', '.join(enabled_types) + ']')\n print(errmsg)\n return\n fields['actions_enabled'] = args.actions_enabled in ['true', 'True']\n fields['match_by'] = _arg_split_patch_update(args.match_by)\n if not _validate_severity(args.severity):\n return\n fields['severity'] = args.severity\n try:\n alarm = mc.alarm_definitions.update(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n print(jsonutils.dumps(alarm, indent=2))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\npatches the alarm definition.", "response": "def do_alarm_definition_patch(mc, args):\n '''Patch the alarm definition.'''\n fields = {}\n fields['alarm_id'] = args.id\n if args.name:\n fields['name'] = args.name\n if args.description:\n fields['description'] = args.description\n if args.expression:\n fields['expression'] = args.expression\n if args.alarm_actions:\n fields['alarm_actions'] = _arg_split_patch_update(args.alarm_actions, patch=True)\n if args.ok_actions:\n fields['ok_actions'] = _arg_split_patch_update(args.ok_actions, patch=True)\n if args.undetermined_actions:\n fields['undetermined_actions'] = 
_arg_split_patch_update(args.undetermined_actions,\n patch=True)\n if args.actions_enabled:\n if args.actions_enabled not in enabled_types:\n errmsg = ('Invalid value, not one of [' +\n ', '.join(enabled_types) + ']')\n print(errmsg)\n return\n fields['actions_enabled'] = args.actions_enabled in ['true', 'True']\n if args.severity:\n if not _validate_severity(args.severity):\n return\n fields['severity'] = args.severity\n try:\n alarm = mc.alarm_definitions.patch(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n print(jsonutils.dumps(alarm, indent=2))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef do_alarm_list(mc, args):\n '''List alarms for this tenant.'''\n fields = {}\n if args.alarm_definition_id:\n fields['alarm_definition_id'] = args.alarm_definition_id\n if args.metric_name:\n fields['metric_name'] = args.metric_name\n if args.metric_dimensions:\n fields['metric_dimensions'] = utils.format_dimensions_query(args.metric_dimensions)\n if args.state:\n if args.state.upper() not in state_types:\n errmsg = ('Invalid state, not one of [' +\n ', '.join(state_types) + ']')\n print(errmsg)\n return\n fields['state'] = args.state\n if args.severity:\n if not _validate_severity(args.severity):\n return\n fields['severity'] = args.severity\n if args.state_updated_start_time:\n fields['state_updated_start_time'] = args.state_updated_start_time\n if args.lifecycle_state:\n fields['lifecycle_state'] = args.lifecycle_state\n if args.link:\n fields['link'] = args.link\n if args.limit:\n fields['limit'] = args.limit\n if args.offset:\n fields['offset'] = args.offset\n if args.sort_by:\n sort_by = args.sort_by.split(',')\n for field in sort_by:\n field_values = field.lower().split()\n if len(field_values) > 2:\n print(\"Invalid sort_by value {}\".format(field))\n if field_values[0] not in allowed_alarm_sort_by:\n print(\"Sort-by 
field name {} is not in [{}]\".format(field_values[0],\n allowed_alarm_sort_by))\n return\n if len(field_values) > 1 and field_values[1] not in ['asc', 'desc']:\n print(\"Invalid value {}, must be asc or desc\".format(field_values[1]))\n fields['sort_by'] = args.sort_by\n try:\n alarm = mc.alarms.list(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n if args.json:\n print(utils.json_formatter(alarm))\n return\n cols = ['id', 'alarm_definition_id', 'alarm_definition_name', 'metric_name',\n 'metric_dimensions', 'severity', 'state', 'lifecycle_state', 'link',\n 'state_updated_timestamp', 'updated_timestamp', \"created_timestamp\"]\n formatters = {\n 'id': lambda x: x['id'],\n 'alarm_definition_id': lambda x: x['alarm_definition']['id'],\n 'alarm_definition_name': lambda x: x['alarm_definition']['name'],\n 'metric_name': lambda x: format_metric_name(x['metrics']),\n 'metric_dimensions': lambda x: format_metric_dimensions(x['metrics']),\n 'severity': lambda x: x['alarm_definition']['severity'],\n 'state': lambda x: x['state'],\n 'lifecycle_state': lambda x: x['lifecycle_state'],\n 'link': lambda x: x['link'],\n 'state_updated_timestamp': lambda x: x['state_updated_timestamp'],\n 'updated_timestamp': lambda x: x['updated_timestamp'],\n 'created_timestamp': lambda x: x['created_timestamp'],\n }\n if isinstance(alarm, list):\n # print the list\n utils.print_list(alarm, cols, formatters=formatters)\n else:\n # add the dictionary to a list, so print_list works\n alarm_list = list()\n alarm_list.append(alarm)\n utils.print_list(alarm_list, cols, formatters=formatters)", "response": "List alarms for this tenant."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef do_alarm_update(mc, args):\n '''Update the alarm state.'''\n fields = {}\n fields['alarm_id'] = args.id\n if args.state.upper() not in state_types:\n errmsg = 
('Invalid state, not one of [' +\n ', '.join(state_types) + ']')\n print(errmsg)\n return\n fields['state'] = args.state\n fields['lifecycle_state'] = args.lifecycle_state\n fields['link'] = args.link\n try:\n alarm = mc.alarms.update(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n print(jsonutils.dumps(alarm, indent=2))", "response": "Update the alarm state."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nalarming state transition history.", "response": "def do_alarm_history(mc, args):\n '''Alarm state transition history.'''\n fields = {}\n fields['alarm_id'] = args.id\n if args.limit:\n fields['limit'] = args.limit\n if args.offset:\n fields['offset'] = args.offset\n try:\n alarm = mc.alarms.history(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n output_alarm_history(args, alarm)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nlisting alarms state history.", "response": "def do_alarm_history_list(mc, args):\n '''List alarms state history.'''\n fields = {}\n if args.dimensions:\n fields['dimensions'] = utils.format_parameters(args.dimensions)\n if args.starttime:\n _translate_starttime(args)\n fields['start_time'] = args.starttime\n if args.endtime:\n fields['end_time'] = args.endtime\n if args.limit:\n fields['limit'] = args.limit\n if args.offset:\n fields['offset'] = args.offset\n try:\n alarm = mc.alarms.history_list(**fields)\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n output_alarm_history(args, alarm)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlists notification types supported by monasca.", "response": "def do_notification_type_list(mc, args):\n '''List 
notification types supported by monasca.'''\n\n try:\n notification_types = mc.notificationtypes.list()\n except (osc_exc.ClientException, k_exc.HttpError) as he:\n raise osc_exc.CommandError('%s\\n%s' % (he.message, he.details))\n else:\n if args.json:\n print(utils.json_formatter(notification_types))\n return\n else:\n formatters = {'types': lambda x: x[\"type\"]}\n # utils.print_list(notification_types['types'], [\"types\"], formatters=formatters)\n utils.print_list(notification_types, [\"types\"], formatters=formatters)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nvalidate an auth0 token and return the payload or an exception if the token is invalid.", "response": "def load_auth_token(token, load=True):\n \"\"\"Validate an auth0 token. Returns the token's payload, or an exception\n of the type:\"\"\"\n\n assert get_config().jwt_secret, \"No JWT secret configured for pymacaron\"\n assert get_config().jwt_issuer, \"No JWT issuer configured for pymacaron\"\n assert get_config().jwt_audience, \"No JWT audience configured for pymacaron\"\n\n log.info(\"Validating token, using issuer:%s, audience:%s, secret:%s***\" % (\n get_config().jwt_issuer,\n get_config().jwt_audience,\n get_config().jwt_secret[1:8],\n ))\n\n # First extract the issuer\n issuer = get_config().jwt_issuer\n try:\n headers = jwt.get_unverified_header(token)\n except jwt.DecodeError:\n raise AuthInvalidTokenError('token signature is invalid')\n\n log.debug(\"Token has headers %s\" % headers)\n\n if 'iss' in headers:\n issuer = headers['iss']\n\n # Then validate the token against this issuer\n log.info(\"Validating token in issuer %s\" % issuer)\n try:\n payload = jwt.decode(\n token,\n get_config().jwt_secret,\n audience=get_config().jwt_audience,\n # Allow for a time difference of up to 5min (300sec)\n leeway=300\n )\n except jwt.ExpiredSignature:\n raise AuthTokenExpiredError('Auth token is expired')\n except jwt.InvalidAudienceError:\n raise 
AuthInvalidTokenError('incorrect audience')\n except jwt.DecodeError:\n raise AuthInvalidTokenError('token signature is invalid')\n except jwt.InvalidIssuedAtError:\n raise AuthInvalidTokenError('Token was issued in the future')\n\n # Save payload to stack\n payload['token'] = token\n payload['iss'] = issuer\n\n if load:\n stack.top.current_user = payload\n\n return payload"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef authenticate_http_request(token=None):\n\n if token:\n auth = token\n else:\n auth = request.headers.get('Authorization', None)\n\n if not auth:\n auth = request.cookies.get('token', None)\n if auth:\n auth = unquote_plus(auth)\n\n log.debug(\"Validating Auth header [%s]\" % auth)\n\n if not auth:\n raise AuthMissingHeaderError('There is no Authorization header in the HTTP request')\n\n parts = auth.split()\n\n if parts[0].lower() != 'bearer':\n raise AuthInvalidTokenError('Authorization header must start with Bearer')\n elif len(parts) == 1:\n raise AuthInvalidTokenError('Token not found in Authorization header')\n elif len(parts) > 2:\n raise AuthInvalidTokenError('Authorization header must be Bearer + \\s + token')\n\n token = parts[1]\n\n return load_auth_token(token)", "response": "Validate auth0 tokens passed in the HTTP request."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef generate_token(user_id, expire_in=None, data={}, issuer=None, iat=None):\n assert user_id, \"No user_id passed to generate_token()\"\n assert isinstance(data, dict), \"generate_token(data=) should be a dictionary\"\n assert get_config().jwt_secret, \"No JWT secret configured in pymacaron\"\n\n if not issuer:\n issuer = get_config().jwt_issuer\n\n assert issuer, \"No JWT issuer configured for pymacaron\"\n\n if expire_in is None:\n expire_in = get_config().jwt_token_timeout\n\n if iat:\n epoch_now = iat\n else:\n epoch_now = to_epoch(timenow())\n epoch_end 
= epoch_now + expire_in\n\n data['iss'] = issuer\n data['sub'] = user_id\n data['aud'] = get_config().jwt_audience\n data['exp'] = epoch_end\n data['iat'] = epoch_now\n\n headers = {\n \"typ\": \"JWT\",\n \"alg\": \"HS256\",\n \"iss\": issuer,\n }\n\n log.debug(\"Encoding token with data %s and headers %s (secret:%s****)\" % (data, headers, get_config().jwt_secret[0:8]))\n\n t = jwt.encode(\n data,\n get_config().jwt_secret,\n headers=headers,\n )\n\n if type(t) is bytes:\n t = t.decode(\"utf-8\")\n\n return t", "response": "Generate a new JWT token for the given user_id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_user_token():\n if not hasattr(stack.top, 'current_user'):\n return ''\n current_user = stack.top.current_user\n return current_user.get('token', '')", "response": "Return the authenticated user's auth token"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the issuer in which this user's token was created", "response": "def get_token_issuer():\n \"\"\"Return the issuer in which this user's token was created\"\"\"\n try:\n current_user = stack.top.current_user\n return current_user.get('iss', get_config().jwt_issuer)\n except Exception:\n pass\n return get_config().jwt_issuer"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _search(self, limit, format):\n '''\n Returns a list of result objects, with the url for the next page MsCognitive search url.\n '''\n limit = min(limit, self.MAX_SEARCH_PER_QUERY)\n payload = {\n 'q' : self.query,\n 'count' : limit, #currently 50 is max per search.\n 'offset': self.current_offset,\n }\n payload.update(self.CUSTOM_PARAMS)\n\n headers = { 'Ocp-Apim-Subscription-Key' : self.api_key }\n if not self.silent_fail:\n QueryChecker.check_web_params(payload, headers)\n response = requests.get(self.QUERY_URL, params=payload, headers=headers)\n json_results = 
self.get_json_results(response)\n packaged_results = [NewsResult(single_result_json) for single_result_json in json_results[\"value\"]]\n self.current_offset += min(50, limit, len(packaged_results))\n return packaged_results", "response": "Returns a list of result objects with the url for the next page MsCognitive search url."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the details for a specific notification.", "response": "def get(self, **kwargs):\n \"\"\"Get the details for a specific notification.\"\"\"\n\n # NOTE(trebskit) should actually be find_one, but\n # monasca does not support expected response format\n\n url = '%s/%s' % (self.base_url, kwargs['notification_id'])\n resp = self.client.list(path=url)\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating an alarm definition.", "response": "def create(self, **kwargs):\n \"\"\"Create an alarm definition.\"\"\"\n resp = self.client.create(url=self.base_url,\n json=kwargs)\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update(self, **kwargs):\n url_str = self.base_url + '/%s' % kwargs['alarm_id']\n del kwargs['alarm_id']\n\n resp = self.client.create(url=url_str,\n method='PUT',\n json=kwargs)\n\n return resp", "response": "Update a specific alarm definition."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef compute_y(self, coefficients, num_x):\n y_vals = []\n\n for x in range(1, num_x + 1):\n y = sum([c * x ** i for i, c in enumerate(coefficients[::-1])])\n y_vals.append(y)\n\n return y_vals", "response": "Computes the y - values for the domain of x - values in [ 1 num_x )."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes the error value of the current set of values produced by the current set of values.", "response": "def 
compute_err(self, solution_y, coefficients):\n \"\"\"\n Return an error value by finding the absolute difference for each\n element in a list of solution-generated y-values versus expected values.\n\n Compounds error by 50% for each negative coefficient in the solution.\n\n solution_y: list of y-values produced by a solution\n coefficients: list of polynomial coefficients represented by the solution\n\n return: error value\n \"\"\"\n error = 0\n for modeled, expected in zip(solution_y, self.expected_values):\n error += abs(modeled - expected)\n\n if any([c < 0 for c in coefficients]):\n error *= 1.5\n\n return error"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef eval_fitness(self, chromosome):\n coefficients = self.translator.translate_chromosome(chromosome)\n solution_y = self.compute_y(coefficients, self.num_x)\n fitness = -1 * self.compute_err(solution_y, coefficients)\n\n return fitness", "response": "Evaluate the polynomial equation using coefficients represented by a\n solution and chromosome. 
Returns the error as the solution's fitness."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef read_config(ip, mac):\n click.echo(\"Read configuration from %s\" % ip)\n request = requests.get(\n 'http://{}/{}/{}/'.format(ip, URI, mac), timeout=TIMEOUT)\n print(request.json())", "response": "Read the current configuration of a myStrom device."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef write_config(ip, mac, single, double, long, touch):\n click.echo(\"Write configuration to device %s\" % ip)\n data = {\n 'single': single,\n 'double': double,\n 'long': long,\n 'touch': touch,\n }\n request = requests.post(\n 'http://{}/{}/{}/'.format(ip, URI, mac), data=data, timeout=TIMEOUT)\n\n if request.status_code == 200:\n click.echo(\"Configuration of %s set\" % mac)", "response": "Write the current configuration of a myStrom button."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nwrite the configuration for Home Assistant to a myStrom button.", "response": "def write_ha_config(ip, mac, hass, port, id):\n \"\"\"Write the configuration for Home Assistant to a myStrom button.\"\"\"\n click.echo(\"Write configuration for Home Assistant to device %s...\" % ip)\n\n action = \"get://{1}:{2}/api/mystrom?{0}={3}\"\n data = {\n 'single': action.format('single', hass, port, id),\n 'double': action.format('double', hass, port, id),\n 'long': action.format('long', hass, port, id),\n 'touch': action.format('touch', hass, port, id),\n }\n request = requests.post(\n 'http://{}/{}/{}/'.format(ip, URI, mac), data=data, timeout=TIMEOUT)\n\n if request.status_code == 200:\n click.echo(\"Configuration for %s set\" % ip)\n click.echo(\"After using the push pattern the first time then \"\n \"the myStrom WiFi Button will show up as %s\" % id)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 
function, write the documentation\ndef reset_config(ip, mac):\n click.echo(\"Reset configuration of button %s...\" % ip)\n data = {\n 'single': \"\",\n 'double': \"\",\n 'long': \"\",\n 'touch': \"\",\n }\n request = requests.post(\n 'http://{}/{}/{}/'.format(ip, URI, mac), data=data, timeout=TIMEOUT)\n\n if request.status_code == 200:\n click.echo(\"Reset configuration of %s\" % mac)", "response": "Reset the current configuration of a myStrom WiFi Button."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef color(ip, mac, hue, saturation, value):\n bulb = MyStromBulb(ip, mac)\n bulb.set_color_hsv(hue, saturation, value)", "response": "Switch the bulb on with the given color."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks if host is on an ec2 instance", "response": "def is_ec2_instance():\n \"\"\"Try fetching instance metadata at 'curl http://169.254.169.254/latest/meta-data/'\n to see if host is on an ec2 instance\"\"\"\n\n # Note: this code assumes that docker containers running on ec2 instances\n # inherit instances metadata, which they do as of 2016-08-25\n\n global IS_EC2_INSTANCE\n\n if IS_EC2_INSTANCE != -1:\n # Returned the cached value\n return IS_EC2_INSTANCE\n\n s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n s.settimeout(0.2)\n try:\n s.connect((\"169.254.169.254\", 80))\n IS_EC2_INSTANCE = 1\n return True\n except socket.timeout:\n IS_EC2_INSTANCE = 0\n return False\n except socket.error:\n IS_EC2_INSTANCE = 0\n return False"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntake a datetime either as a string or a datetime. 
datetime object and return the corresponding epoch", "response": "def to_epoch(t):\n \"\"\"Take a datetime, either as a string or a datetime.datetime object,\n and return the corresponding epoch\"\"\"\n if isinstance(t, str):\n if '+' not in t:\n t = t + '+00:00'\n t = parser.parse(t)\n elif t.tzinfo is None or t.tzinfo.utcoffset(t) is None:\n t = t.replace(tzinfo=pytz.timezone('utc'))\n\n t0 = datetime.datetime(1970, 1, 1, 0, 0, 0, 0, pytz.timezone('utc'))\n delta = t - t0\n return int(delta.total_seconds())"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the version of the docker container running the present server or an empty string if not in a container", "response": "def get_container_version():\n \"\"\"Return the version of the docker container running the present server,\n or '' if not in a container\"\"\"\n root_dir = os.path.dirname(os.path.realpath(sys.argv[0]))\n version_file = os.path.join(root_dir, 'VERSION')\n if os.path.exists(version_file):\n with open(version_file) as f:\n return f.read()\n return ''"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_app_name():\n conf = get_config()\n if is_ec2_instance():\n return conf.app_name_live if hasattr(conf, 'app_name_live') else 'PYMACARON_LIVE'\n else:\n return conf.app_name_dev if hasattr(conf, 'app_name_dev') else 'PYMACARON_DEV'", "response": "Return a generic name for this app useful when reporting to monitoring frameworks"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a new instance of this gene class with random DNA and random characters chosen from the GENETIC_MATERIAL_OPTIONS list.", "response": "def create_random(cls, length, **kwargs):\n \"\"\"\n Return a new instance of this gene class with random DNA,\n with characters chosen from ``GENETIC_MATERIAL_OPTIONS``.\n \n length: the number of characters in the randomized DNA\n 
**kwargs: forwarded to the ``cls`` constructor\n \"\"\"\n dna = ''.join([random.choice(cls.GENETIC_MATERIAL_OPTIONS) for _ in range(length)])\n return cls(dna, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsimulate mutation against a probability.", "response": "def mutate(self, p_mutate):\n \"\"\"\n Simulate mutation against a probability.\n \n p_mutate: probability for mutation to occur\n \"\"\"\n new_dna = []\n\n for bit in self.dna:\n if random.random() < p_mutate:\n new_bit = bit\n\n while new_bit == bit:\n new_bit = random.choice(self.GENETIC_MATERIAL_OPTIONS)\n bit = new_bit\n\n new_dna.append(bit)\n\n self.dna = ''.join(new_dna)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef copy(self):\n return type(self)(self.dna, suppressed=self.suppressed, name=self.name)", "response": "Return a copy of this gene with the same DNA."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks that a DNA string only contains characters in self.GENETIC_MATERIAL_OPTIONS.", "response": "def _check_dna(self, dna):\n \"\"\" Check that a DNA string only contains characters in ``GENETIC_MATERIAL_OPTIONS``. 
\"\"\"\n valid_chars = set(self.GENETIC_MATERIAL_OPTIONS)\n assert all(char in valid_chars for char in dna)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef mutate(self, p_mutate):\n new_dna = []\n \n for bit in self.dna:\n if random.random() < p_mutate:\n bit = '1' if bit == '0' else '0'\n \n new_dna.append(bit)\n \n self.dna = ''.join(new_dna)", "response": "Mutate the DNA string."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_client(api_version, session=None,\n endpoint=None, service_type='monitoring'):\n \"\"\"Returns a monitoring API client.\"\"\"\n\n client_cls = utils.get_client_class('monitoring', api_version, VERSION_MAP)\n c = client_cls(\n session=session,\n service_type=service_type,\n endpoint=endpoint,\n app_name='monascaclient',\n app_version=version.version_string,\n )\n\n return c", "response": "Returns a monitoring API client."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning or reusing a session.", "response": "def _session(kwargs):\n \"\"\"Returns or reuses session.\n\n Method takes care of providing instance of\n session object for the client.\n\n :param kwargs: all params (without api_version) client was initialized with\n :type kwargs: dict\n\n :returns: session object\n :rtype keystoneauth1.session.Session\n\n \"\"\"\n if 'session' in kwargs:\n LOG.debug('Reusing session')\n sess = kwargs.get('session')\n if not isinstance(sess, k_session.Session):\n msg = ('session should be an instance of %s' % k_session.Session)\n LOG.error(msg)\n raise RuntimeError(msg)\n else:\n LOG.debug('Initializing new session')\n auth = _get_auth_handler(kwargs)\n sess = _get_session(auth, kwargs)\n return sess"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload an instance of the class from a file.", "response": "def load(cls, path):\n \"\"\"\n Loads an instance of the class from a 
file.\n\n Parameters\n ----------\n path : str\n Path to an HDF5 file.\n\n Examples\n --------\n This is an abstract data type, but let us say that ``Foo`` inherits\n from ``Saveable``. To construct an object of this class from a file, we\n do:\n\n >>> foo = Foo.load('foo.h5') #doctest: +SKIP\n \"\"\"\n if path is None:\n return cls.load_from_dict({})\n else:\n d = io.load(path)\n return cls.load_from_dict(d)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef register(cls, name):\n def register_decorator(reg_cls):\n def name_func(self):\n return name\n reg_cls.name = property(name_func)\n assert issubclass(reg_cls, cls), \\\n \"Must be subclass matching your NamedRegistry class\"\n cls.REGISTRY[name] = reg_cls\n return reg_cls\n return register_decorator", "response": "Decorator to register a class."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconstructs an instance of the named object given its name.", "response": "def construct(cls, name, *args, **kwargs):\n \"\"\"\n Constructs an instance of an object given its name.\n \"\"\"\n return cls.REGISTRY[name](*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef config():\n conf = ConfigParser()\n # Set up defaults\n conf.add_section('io')\n conf.set('io', 'compression', 'zlib')\n\n conf.read(os.path.expanduser('~/.deepdish.conf'))\n return conf", "response": "Loads and returns a ConfigParser from ~/.deepdish.conf."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef asgray(im):\n if im.ndim == 2:\n return im\n elif im.ndim == 3 and im.shape[2] in (3, 4):\n return im[..., :3].mean(axis=-1)\n else:\n raise ValueError('Invalid image format')", "response": "Takes an image and returns its grayscale version by averaging the color\n channels."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncropping an image in the center.", "response": "def crop(im, size):\n \"\"\"\n Crops an image in the center.\n\n Parameters\n ----------\n size : tuple, (height, width)\n Final size after cropping.\n \"\"\"\n diff = [im.shape[index] - size[index] for index in (0, 1)]\n im2 = im[diff[0]//2:diff[0]//2 + size[0], diff[1]//2:diff[1]//2 + size[1]]\n return im2"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef crop_or_pad(im, size, value=0):\n diff = [im.shape[index] - size[index] for index in (0, 1)]\n im2 = im[diff[0]//2:diff[0]//2 + size[0], diff[1]//2:diff[1]//2 + size[1]]\n return im2", "response": "Crop an image or pad it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nloading an image from file.", "response": "def load(path, dtype=np.float64):\n \"\"\"\n Loads an image from file.\n\n Parameters\n ----------\n path : str\n Path to image file.\n dtype : np.dtype\n Defaults to ``np.float64``, which means the image will be returned as a\n float with values between 0 and 1. If ``np.uint8`` is specified, the\n values will be between 0 and 255 and no conversion cost will be\n incurred.\n \"\"\"\n _import_skimage()\n import skimage.io\n im = skimage.io.imread(path)\n if dtype == np.uint8:\n return im\n elif dtype in {np.float16, np.float32, np.float64}:\n return im.astype(dtype) / 255\n else:\n raise ValueError('Unsupported dtype')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload image using PIL. Image. 
open with PIL. Pillow without any processing.", "response": "def load_raw(path):\n \"\"\"\n Load image using PIL/Pillow without any processing. This is particularly\n useful for palette images, which will be loaded using their palette index\n values as opposed to `load` which will convert them to RGB.\n\n Parameters\n ----------\n path : str\n Path to image file.\n \"\"\"\n _import_pil()\n from PIL import Image\n return np.array(Image.open(path))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsaves an image to file.", "response": "def save(path, im):\n \"\"\"\n Saves an image to file.\n\n If the image is type float, it will assume to have values in [0, 1].\n\n Parameters\n ----------\n path : str\n Path to which the image will be saved.\n im : ndarray (image)\n Image.\n \"\"\"\n from PIL import Image\n if im.dtype == np.uint8:\n pil_im = Image.fromarray(im)\n else:\n pil_im = Image.fromarray((im*255).astype(np.uint8))\n pil_im.save(path)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef integrate(ii, r0, c0, r1, c1):\n # This line is modified\n S = np.zeros(ii.shape[-1])\n\n S += ii[r1, c1]\n\n if (r0 - 1 >= 0) and (c0 - 1 >= 0):\n S += ii[r0 - 1, c0 - 1]\n\n if (r0 - 1 >= 0):\n S -= ii[r0 - 1, c1]\n\n if (c0 - 1 >= 0):\n S -= ii[r1, c0 - 1]\n\n return S", "response": "Integrate the image of a given window over a given top left corner of the block and the bottom right corner of the block."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmove the contents of an image with the specified offset along the two axes.", "response": "def offset(img, offset, fill_value=0):\n \"\"\"\n Moves the contents of image without changing the image size. 
The missing\n values are given a specified fill value.\n\n Parameters\n ----------\n img : array\n Image.\n offset : (vertical_offset, horizontal_offset)\n Tuple of length 2, specifying the offset along the two axes.\n fill_value : dtype of img\n Fill value. Defaults to 0.\n \"\"\"\n\n sh = img.shape\n if sh == (0, 0):\n return img\n else:\n x = np.empty(sh)\n x[:] = fill_value\n x[max(offset[0], 0):min(sh[0]+offset[0], sh[0]),\n max(offset[1], 0):min(sh[1]+offset[1], sh[1])] = \\\n img[max(-offset[0], 0):min(sh[0]-offset[0], sh[0]),\n max(-offset[1], 0):min(sh[1]-offset[1], sh[1])]\n return x"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a bounding box describing the smallest rectangle containing the foreground object.", "response": "def bounding_box(alpha, threshold=0.1):\n \"\"\"\n Returns a bounding box of the support.\n\n Parameters\n ----------\n alpha : ndarray, ndim=2\n Any one-channel image where the background has zero or low intensity.\n threshold : float\n The threshold that divides background from foreground.\n\n Returns\n -------\n bounding_box : (top, left, bottom, right)\n The bounding box describing the smallest rectangle containing the\n foreground object, as defined by the threshold.\n \"\"\"\n assert alpha.ndim == 2\n\n # Take the bounding box of the support, with a certain threshold.\n supp_axs = [alpha.max(axis=1-i) for i in range(2)]\n\n # Check first and last value of that threshold\n bb = [np.where(supp_axs[i] > threshold)[0][[0, -1]] for i in range(2)]\n\n return (bb[0][0], bb[1][0], bb[0][1], bb[1][1])"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef bounding_box_as_binary_map(alpha, threshold=0.1):\n\n bb = bounding_box(alpha)\n x = np.zeros(alpha.shape, dtype=np.bool_)\n x[bb[0]:bb[2], bb[1]:bb[3]] = 1\n return x", "response": "Returns the bounding box as a binary map."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the 
following Python 3 function doing\ndef extract_patches(images, patch_shape, samples_per_image=40, seed=0,\n cycle=True):\n \"\"\"\n Takes a set of images and yields randomly chosen patches of specified size.\n\n Parameters\n ----------\n images : iterable\n The images have to be iterable, and each element must be a Numpy array\n with at least two spatial 2 dimensions as the first and second axis.\n patch_shape : tuple, length 2\n The spatial shape of the patches that should be extracted. If the\n images have further dimensions beyond the spatial, the patches will\n copy these too.\n samples_per_image : int\n Samples to extract before moving on to the next image.\n seed : int\n Seed with which to select the patches.\n cycle : bool\n If True, then the function will produce patches indefinitely, by going\n back to the first image when all are done. If False, the iteration will\n stop when there are no more images.\n\n Returns\n -------\n patch_generator\n This function returns a generator that will produce patches.\n\n Examples\n --------\n >>> import deepdish as dd\n >>> import matplotlib.pylab as plt\n >>> import itertools\n >>> images = ag.io.load_example('mnist')\n\n Now, let us say we want to exact patches from the these, where each patch\n has at least some activity.\n\n >>> gen = dd.image.extract_patches(images, (5, 5))\n >>> gen = (x for x in gen if x.mean() > 0.1)\n >>> patches = np.array(list(itertools.islice(gen, 25)))\n >>> patches.shape\n (25, 5, 5)\n >>> dd.plot.images(patches)\n >>> plt.show()\n\n \"\"\"\n rs = np.random.RandomState(seed)\n for Xi in itr.cycle(images):\n # How many patches could we extract?\n w, h = [Xi.shape[i]-patch_shape[i] for i in range(2)]\n\n assert w > 0 and h > 0\n\n # Maybe shuffle an iterator of the indices?\n indices = np.asarray(list(itr.product(range(w), range(h))))\n rs.shuffle(indices)\n for x, y in indices[:samples_per_image]:\n yield Xi[x:x+patch_shape[0], y:y+patch_shape[1]]", "response": "Takes a set of images and 
yields randomly chosen patches of specified size."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the memory byte size of a Numpy array as an integer.", "response": "def bytesize(arr):\n \"\"\"\n Returns the memory byte size of a Numpy array as an integer.\n \"\"\"\n byte_size = np.prod(arr.shape) * np.dtype(arr.dtype).itemsize\n return byte_size"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef tupled_argmax(a):\n return np.unravel_index(np.argmax(a), np.shape(a))", "response": "Argmax that returns an index tuple. Note that numpy. argmax will return a\n scalar index as if you had flattened the array."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npads an array with a specific value.", "response": "def pad(data, padwidth, value=0.0):\n \"\"\"\n Pad an array with a specific value.\n\n Parameters\n ----------\n data : ndarray\n Numpy array of any dimension and type.\n padwidth : int or tuple\n If int, it will pad using this amount at the beginning and end of all\n dimensions. If it is a tuple (of same length as `ndim`), then the\n padding amount will be specified per axis.\n value : data.dtype\n The value with which to pad. 
Default is ``0.0``.\n\n See also\n --------\n pad_to_size, pad_repeat_border, pad_repeat_border_corner\n\n Examples\n --------\n >>> import deepdish as dd\n >>> import numpy as np\n\n Pad an array with zeros.\n\n >>> x = np.ones((3, 3))\n >>> dd.util.pad(x, (1, 2), value=0.0)\n array([[ 0., 0., 0., 0., 0., 0., 0.],\n [ 0., 0., 1., 1., 1., 0., 0.],\n [ 0., 0., 1., 1., 1., 0., 0.],\n [ 0., 0., 1., 1., 1., 0., 0.],\n [ 0., 0., 0., 0., 0., 0., 0.]])\n\n \"\"\"\n data = np.asarray(data)\n shape = data.shape\n if isinstance(padwidth, int):\n padwidth = (padwidth,)*len(shape)\n\n padded_shape = tuple(map(lambda ix: ix[1]+padwidth[ix[0]]*2,\n enumerate(shape)))\n new_data = np.empty(padded_shape, dtype=data.dtype)\n new_data[..., :] = value\n new_data[[slice(w, -w) if w > 0 else slice(None) for w in padwidth]] = data\n return new_data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pad_repeat_border_corner(data, shape):\n new_data = np.empty(shape)\n new_data[[slice(upper) for upper in data.shape]] = data\n for i in range(len(shape)):\n selection = [slice(None)]*i + [slice(data.shape[i], None)]\n selection2 = [slice(None)]*i + [slice(data.shape[i]-1, data.shape[i])]\n new_data[selection] = new_data[selection2]\n return new_data", "response": "Returns a new array with padded data by repeating the borders of each axis."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks if a dictionary can be saved natively as HDF5 groups.", "response": "def _dict_native_ok(d):\n \"\"\"\n This checks if a dictionary can be saved natively as HDF5 groups.\n\n If it can't, it will be pickled.\n \"\"\"\n if len(d) >= 256:\n return False\n\n # All keys must be strings\n for k in d:\n if not isinstance(k, six.string_types):\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
_load_nonlink_level(handler, level, pathtable, pathname):\n if isinstance(level, tables.Group):\n if _sns and (level._v_title.startswith('SimpleNamespace:') or\n DEEPDISH_IO_ROOT_IS_SNS in level._v_attrs):\n val = SimpleNamespace()\n dct = val.__dict__\n elif level._v_title.startswith('list:'):\n dct = {}\n val = []\n else:\n dct = {}\n val = dct\n # in case of recursion, object needs to be put in pathtable\n # before trying to fully load it\n pathtable[pathname] = val\n\n # Load sub-groups\n for grp in level:\n lev = _load_level(handler, grp, pathtable)\n n = grp._v_name\n # Check if it's a complicated pair or a string-value pair\n if n.startswith('__pair'):\n dct[lev['key']] = lev['value']\n else:\n dct[n] = lev\n\n # Load attributes\n for name in level._v_attrs._f_list():\n if name.startswith(DEEPDISH_IO_PREFIX):\n continue\n v = level._v_attrs[name]\n dct[name] = v\n\n if level._v_title.startswith('list:'):\n N = int(level._v_title[len('list:'):])\n for i in range(N):\n val.append(dct['i{}'.format(i)])\n return val\n elif level._v_title.startswith('tuple:'):\n N = int(level._v_title[len('tuple:'):])\n lst = []\n for i in range(N):\n lst.append(dct['i{}'.format(i)])\n return tuple(lst)\n elif level._v_title.startswith('nonetype:'):\n return None\n elif is_pandas_dataframe(level):\n assert _pandas, \"pandas is required to read this file\"\n store = _HDFStoreWithHandle(handler)\n return store.get(level._v_pathname)\n elif level._v_title.startswith('sparse:'):\n frm = level._v_attrs.format\n if frm in ('csr', 'csc', 'bsr'):\n shape = tuple(level.shape[:])\n cls = {'csr': sparse.csr_matrix,\n 'csc': sparse.csc_matrix,\n 'bsr': sparse.bsr_matrix}\n matrix = cls[frm](shape)\n matrix.data = level.data[:]\n matrix.indices = level.indices[:]\n matrix.indptr = level.indptr[:]\n matrix.maxprint = level._v_attrs.maxprint\n return matrix\n elif frm == 'dia':\n shape = tuple(level.shape[:])\n matrix = sparse.dia_matrix(shape)\n matrix.data = level.data[:]\n matrix.offsets = 
level.offsets[:]\n matrix.maxprint = level._v_attrs.maxprint\n return matrix\n elif frm == 'coo':\n shape = tuple(level.shape[:])\n matrix = sparse.coo_matrix(shape)\n matrix.data = level.data[:]\n matrix.col = level.col[:]\n matrix.row = level.row[:]\n matrix.maxprint = level._v_attrs.maxprint\n return matrix\n else:\n raise ValueError('Unknown sparse matrix type: {}'.format(frm))\n else:\n return val\n\n elif isinstance(level, tables.VLArray):\n if level.shape == (1,):\n return _load_pickled(level)\n else:\n return level[:]\n\n elif isinstance(level, tables.Array):\n if 'zeroarray_dtype' in level._v_attrs:\n # Unpack zero-size arrays (shape is stored in an HDF5 array and\n # type is stored in the attribute 'zeroarray_dtype')\n dtype = level._v_attrs.zeroarray_dtype\n sh = level[:]\n return np.zeros(tuple(sh), dtype=dtype)\n\n if 'strtype' in level._v_attrs:\n strtype = level._v_attrs.strtype\n itemsize = level._v_attrs.itemsize\n if strtype == b'unicode':\n return level[:].view(dtype=(np.unicode_, itemsize))\n elif strtype == b'ascii':\n return level[:].view(dtype=(np.string_, itemsize))\n # This serves two purposes:\n # (1) unpack big integers: the only time we save arrays like this\n # (2) unpack non-deepdish \"scalars\"\n if level.shape == ():\n return level[()]\n\n return level[:]", "response": "Loads the non-link level and builds the appropriate type, without handling softlinks"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _load_level(handler, level, pathtable):\n if isinstance(level, tables.link.SoftLink):\n # this is a link, so see if target is already loaded, return it\n pathname = level.target\n node = level()\n else:\n # not a link, but it might be a target that's already been\n # loaded ... 
if so, return it\n        pathname = level._v_pathname\n        node = level\n    try:\n        return pathtable[pathname]\n    except KeyError:\n        pathtable[pathname] = _load_nonlink_level(handler, node, pathtable,\n                                                  pathname)\n        return pathtable[pathname]", "response": "Loads the level and builds appropriate type if necessary"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsaves a Python object to HDF5.", "response": "def save(path, data, compression='default'):\n    \"\"\"\n    Save any Python structure to an HDF5 file. It is particularly suited for\n    Numpy arrays. This function works similarly to ``numpy.save``, except if you\n    save a Python object at the top level, you do not need to issue\n    ``data.flat[0]`` to retrieve it from inside a Numpy array of type\n    ``object``.\n\n    Some types of objects get saved natively in HDF5. The rest get serialized\n    automatically. For most needs, you should be able to stick to the natively\n    supported types, which are:\n\n    * Dictionaries\n    * Short lists and tuples (<256 in length)\n    * Basic data types (including strings and None)\n    * Numpy arrays\n    * Scipy sparse matrices\n    * Pandas ``DataFrame``, ``Series``, and ``Panel``\n    * SimpleNamespaces (for Python >= 3.3, but see note below)\n\n    A recommendation is to always convert your data to using only these types.\n    That way your data will be portable and can be opened through any HDF5\n    reader. A class that helps you with this is\n    :class:`deepdish.util.Saveable`.\n\n    Lists and tuples are supported and can contain heterogeneous types. This is\n    mostly useful and plays well with HDF5 for short lists and tuples. If you\n    have a long list (>256) it will be serialized automatically.
However,\n in such cases it is common for the elements to have the same type, in which\n case we strongly recommend converting to a Numpy array first.\n\n Note that the SimpleNamespace type will be read in as dictionaries for\n earlier versions of Python.\n\n This function requires the `PyTables `_ module to\n be installed.\n\n You can change the default compression method to ``blosc`` (much faster,\n but less portable) by creating a ``~/.deepdish.conf`` with::\n\n [io]\n compression: blosc\n\n This is the recommended compression method if you plan to use your HDF5\n files exclusively through deepdish (or PyTables).\n\n Parameters\n ----------\n path : string\n Filename to which the data is saved.\n data : anything\n Data to be saved. This can be anything from a Numpy array, a string, an\n object, or a dictionary containing all of them including more\n dictionaries.\n compression : string or tuple\n Set compression method, choosing from `blosc`, `zlib`, `lzo`, `bzip2`\n and more (see PyTables documentation). It can also be specified as a\n tuple (e.g. ``('blosc', 5)``), with the latter value specifying the\n level of compression, choosing from 0 (no compression) to 9 (maximum\n compression). Set to `None` to turn off compression. 
The default is\n `zlib`, since it is highly portable; for much greater speed, try for\n instance `blosc`.\n\n See also\n --------\n load\n \"\"\"\n filters = _get_compression_filters(compression)\n\n with tables.open_file(path, mode='w') as h5file:\n # If the data is a dictionary, put it flatly in the root\n group = h5file.root\n group._v_attrs[DEEPDISH_IO_VERSION_STR] = IO_VERSION\n idtable = {} # dict to keep track of objects already saved\n # Sparse matrices match isinstance(data, dict), so we'll have to be\n # more strict with the type checking\n if type(data) == type({}) and _dict_native_ok(data):\n idtable[id(data)] = '/'\n for key, value in data.items():\n _save_level(h5file, group, value, name=key,\n filters=filters, idtable=idtable)\n\n elif (_sns and isinstance(data, SimpleNamespace) and\n _dict_native_ok(data.__dict__)):\n idtable[id(data)] = '/'\n group._v_attrs[DEEPDISH_IO_ROOT_IS_SNS] = True\n for key, value in data.__dict__.items():\n _save_level(h5file, group, value, name=key,\n filters=filters, idtable=idtable)\n\n else:\n _save_level(h5file, group, data, name='data',\n filters=filters, idtable=idtable)\n # Mark this to automatically unpack when loaded\n group._v_attrs[DEEPDISH_IO_UNPACK] = True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load(path, group=None, sel=None, unpack=False):\n with tables.open_file(path, mode='r') as h5file:\n pathtable = {} # dict to keep track of objects already loaded\n if group is not None:\n if isinstance(group, str):\n data = _load_specific_level(h5file, h5file, group, sel=sel,\n pathtable=pathtable)\n else: # Assume group is a list or tuple\n data = []\n for g in group:\n data_i = _load_specific_level(h5file, h5file, g, sel=sel,\n pathtable=pathtable)\n data.append(data_i)\n data = tuple(data)\n else:\n grp = h5file.root\n auto_unpack = (DEEPDISH_IO_UNPACK in grp._v_attrs and\n grp._v_attrs[DEEPDISH_IO_UNPACK])\n do_unpack = unpack or 
auto_unpack\n if do_unpack and len(grp._v_children) == 1:\n name = next(iter(grp._v_children))\n data = _load_specific_level(h5file, grp, name, sel=sel,\n pathtable=pathtable)\n do_unpack = False\n elif sel is not None:\n raise ValueError(\"Must specify group with `sel` unless it \"\n \"automatically unpacks\")\n else:\n data = _load_level(h5file, grp, pathtable)\n\n if DEEPDISH_IO_VERSION_STR in grp._v_attrs:\n v = grp._v_attrs[DEEPDISH_IO_VERSION_STR]\n else:\n v = 0\n\n if v > IO_VERSION:\n warnings.warn('This file was saved with a newer version of '\n 'deepdish. Please upgrade to make sure it loads '\n 'correctly.')\n\n # Attributes can't be unpacked with the method above, so fall back\n # to this\n if do_unpack and isinstance(data, dict) and len(data) == 1:\n data = next(iter(data.values()))\n\n return data", "response": "Loads an HDF5 file and returns a dict of the data that was saved with save."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsort x with numeric semantics.", "response": "def sorted_maybe_numeric(x):\n \"\"\"\n Sorts x with numeric semantics if all keys are nonnegative integers.\n Otherwise uses standard string sorting.\n \"\"\"\n all_numeric = all(map(str.isdigit, x))\n if all_numeric:\n return sorted(x, key=int)\n else:\n return sorted(x)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncolor - aware abbreviator", "response": "def abbreviate(s, maxlength=25):\n \"\"\"Color-aware abbreviator\"\"\"\n assert maxlength >= 4\n skip = False\n abbrv = None\n i = 0\n for j, c in enumerate(s):\n if c == '\\033':\n skip = True\n elif skip:\n if c == 'm':\n skip = False\n else:\n i += 1\n\n if i == maxlength - 1:\n abbrv = s[:j] + '\\033[0m...'\n elif i > maxlength:\n break\n\n if i <= maxlength:\n return s\n else:\n return abbrv"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef extend_settings(self, data_id, files, 
secrets):\n data = Data.objects.select_related('process').get(pk=data_id)\n\n files[ExecutorFiles.DJANGO_SETTINGS].update({\n 'USE_TZ': settings.USE_TZ,\n 'FLOW_EXECUTOR_TOOLS_PATHS': self.get_tools_paths(),\n })\n files[ExecutorFiles.DATA] = model_to_dict(data)\n files[ExecutorFiles.DATA_LOCATION] = model_to_dict(data.location)\n files[ExecutorFiles.PROCESS] = model_to_dict(data.process)\n files[ExecutorFiles.PROCESS]['resource_limits'] = data.process.get_resource_limits()\n\n # Add secrets if the process has permission to read them.\n secrets.update(data.resolve_secrets())", "response": "Extend the settings dictionary with the contents of the object."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nresolve the data path for use with the executor.", "response": "def resolve_data_path(self, data=None, filename=None):\n \"\"\"Resolve data path for use with the executor.\n\n :param data: Data object instance\n :param filename: Filename to resolve\n :return: Resolved filename, which can be used to access the\n given data file in programs executed using this executor\n \"\"\"\n if data is None:\n return settings.FLOW_EXECUTOR['DATA_DIR']\n\n return data.location.get_path(filename=filename)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nresolves the upload path for use with the executor.", "response": "def resolve_upload_path(self, filename=None):\n \"\"\"Resolve upload path for use with the executor.\n\n :param filename: Filename to resolve\n :return: Resolved filename, which can be used to access the\n given uploaded file in programs executed using this\n executor\n \"\"\"\n if filename is None:\n return settings.FLOW_EXECUTOR['UPLOAD_DIR']\n\n return os.path.join(settings.FLOW_EXECUTOR['UPLOAD_DIR'], filename)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef wrap(access_pyxb, read_only=False):\n w = 
AccessPolicyWrapper(access_pyxb)\n yield w\n if not read_only:\n w.get_normalized_pyxb()", "response": "A context manager that wraps an AccessPolicy PyXB object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nworking with the AccessPolicy in a SystemMetadata PyXB object. Args: sysmeta_pyxb : SystemMetadata PyXB object SystemMetadata containing the AccessPolicy to modify. read_only: bool Do not update the wrapped AccessPolicy. When only a single AccessPolicy operation is needed, there's no need to use this context manager. Instead, use the generated context manager wrappers. There is no clean way in Python to make a context manager that allows client code to replace the object that is passed out of the manager. The AccessPolicy schema does not allow the AccessPolicy element to be empty. However, the SystemMetadata schema specifies the AccessPolicy as optional. By wrapping the SystemMetadata instead of the AccessPolicy when working with AccessPolicy that is within SystemMetadata, the wrapper can handle the situation of empty AccessPolicy by instead dropping the AccessPolicy from the SystemMetadata.", "response": "def wrap_sysmeta_pyxb(sysmeta_pyxb, read_only=False):\n \"\"\"Work with the AccessPolicy in a SystemMetadata PyXB object.\n\n Args:\n sysmeta_pyxb : SystemMetadata PyXB object\n SystemMetadata containing the AccessPolicy to modify.\n\n read_only: bool\n Do not update the wrapped AccessPolicy.\n\n When only a single AccessPolicy operation is needed, there's no need to use\n this context manager. Instead, use the generated context manager wrappers.\n\n There is no clean way in Python to make a context manager that allows client code to\n replace the object that is passed out of the manager. The AccessPolicy schema does not\n allow the AccessPolicy element to be empty. However, the SystemMetadata schema\n specifies the AccessPolicy as optional. 
By wrapping the SystemMetadata instead of the\n AccessPolicy when working with AccessPolicy that is within SystemMetadata, the wrapper\n can handle the situation of empty AccessPolicy by instead dropping the AccessPolicy\n from the SystemMetadata.\n\n \"\"\"\n w = AccessPolicyWrapper(sysmeta_pyxb.accessPolicy)\n yield w\n if not read_only:\n sysmeta_pyxb.accessPolicy = w.get_normalized_pyxb()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the highest permission for a subject.", "response": "def get_highest_perm_str(self, subj_str):\n \"\"\"\n Args:\n subj_str : str\n Subject for which to retrieve the highest permission.\n\n Return:\n The highest permission for subject or None if subject does not have any permissions.\n \"\"\"\n pres_perm_set = self._present_perm_set_for_subj(self._perm_dict, subj_str)\n return (\n None if not pres_perm_set else self._highest_perm_from_iter(pres_perm_set)\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the list of effective permissions for the subject.", "response": "def get_effective_perm_list(self, subj_str):\n \"\"\"\n Args:\n subj_str : str\n Subject for which to retrieve the effective permissions.\n\n Returns:\n list of str: List of permissions up to and including the highest permission for\n subject, ordered lower to higher, or empty list if subject does not have any\n permissions.\n\n E.g.: If 'write' is highest permission for subject, return ['read', 'write'].\n \"\"\"\n highest_perm_str = self.get_highest_perm_str(subj_str)\n if highest_perm_str is None:\n return []\n return self._equal_or_lower_perm_list(highest_perm_str)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a set of subjects with equal or higher permission.", "response": "def get_subjects_with_equal_or_higher_perm(self, perm_str):\n \"\"\"\n Args:\n perm_str : str\n Permission, ``read``, ``write`` or ``changePermission``.\n\n 
Returns:\n set of str : Subj that have perm equal or higher than ``perm_str``.\n\n Since the lowest permission a subject can have is ``read``, passing ``read``\n will return all subjects.\n \"\"\"\n self._assert_valid_permission(perm_str)\n return {\n s\n for p in self._equal_or_higher_perm(perm_str)\n for s in self._perm_dict.get(p, set())\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dump(self):\n logging.debug('AccessPolicy:')\n map(\n logging.debug,\n [\n ' {}'.format(s)\n for s in pprint.pformat(self.get_normalized_perm_list()).splitlines()\n ],\n )", "response": "Dump the current state to debug level log."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn True if subj_str has perm equal to or higher than perm_str.", "response": "def subj_has_perm(self, subj_str, perm_str):\n \"\"\"Returns:\n\n bool: ``True`` if ``subj_str`` has perm equal to or higher than ``perm_str``.\n\n \"\"\"\n self._assert_valid_permission(perm_str)\n return perm_str in self.get_effective_perm_list(subj_str)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_authenticated_read(self):\n self.remove_perm(d1_common.const.SUBJECT_PUBLIC, 'read')\n self.add_perm(d1_common.const.SUBJECT_AUTHENTICATED, 'read')", "response": "Add read perm for all authenticated subj."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding verified read perm for all public subj.", "response": "def add_verified_read(self):\n \"\"\"Add ``read`` perm for all verified subj.\n\n Public ``read`` is removed if present.\n\n \"\"\"\n self.remove_perm(d1_common.const.SUBJECT_PUBLIC, 'read')\n self.add_perm(d1_common.const.SUBJECT_VERIFIED, 'read')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a permission for a subject.", "response": "def add_perm(self, subj_str, perm_str):\n 
\"\"\"Add a permission for a subject.\n\n Args:\n subj_str : str\n Subject for which to add permission(s)\n\n perm_str : str\n Permission to add. Implicitly adds all lower permissions. E.g., ``write``\n will also add ``read``.\n\n \"\"\"\n self._assert_valid_permission(perm_str)\n self._perm_dict.setdefault(perm_str, set()).add(subj_str)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nremoves a permission from a subject.", "response": "def remove_perm(self, subj_str, perm_str):\n \"\"\"Remove permission from a subject.\n\n Args:\n subj_str : str\n Subject for which to remove permission(s)\n\n perm_str : str\n Permission to remove. Implicitly removes all higher permissions. E.g., ``write``\n will also remove ``changePermission`` if previously granted.\n\n \"\"\"\n self._assert_valid_permission(perm_str)\n for perm_str in self._equal_or_higher_perm(perm_str):\n self._perm_dict.setdefault(perm_str, set()).discard(subj_str)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef remove_subj(self, subj_str):\n for subj_set in list(self._perm_dict.values()):\n subj_set -= {subj_str}", "response": "Removes all permissions for a subject."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _perm_dict_from_pyxb(self, access_pyxb):\n subj_dict = self._subj_dict_from_pyxb(access_pyxb)\n return self._perm_dict_from_subj_dict(subj_dict)", "response": "Return dict representation of AccessPolicy PyXB obj."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _perm_dict_from_subj_dict(self, subj_dict):\n perm_dict = {}\n for subj_str, perm_set in list(subj_dict.items()):\n for perm_str in perm_set:\n perm_dict.setdefault(perm_str, set()).add(subj_str)\n return perm_dict", "response": "Return dict where keys and values of subj_dict have been flipped\n 
around."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn an AccessPolicy PyXB representation of perm_dict.", "response": "def _pyxb_from_perm_dict(self, perm_dict):\n \"\"\"Return an AccessPolicy PyXB representation of ``perm_dict``\n\n - If ``norm_perm_list`` is empty, None is returned. The schema does not allow\n AccessPolicy to be empty, but in SystemMetadata, it can be left out\n altogether. So returning None instead of an empty AccessPolicy allows the\n result to be inserted directly into a SystemMetadata PyXB object.\n\n \"\"\"\n norm_perm_list = self._norm_perm_list_from_perm_dict(perm_dict)\n return self._pyxb_from_norm_perm_list(norm_perm_list)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _pyxb_from_norm_perm_list(self, norm_perm_list):\n # Using accessPolicy() instead of AccessPolicy() and accessRule() instead of\n # AccessRule() gives PyXB the type information required for using this as a\n # root element.\n access_pyxb = d1_common.types.dataoneTypes.accessPolicy()\n for perm_str, subj_list in norm_perm_list:\n rule_pyxb = d1_common.types.dataoneTypes.accessRule()\n rule_pyxb.permission.append(perm_str)\n for subj_str in subj_list:\n rule_pyxb.subject.append(subj_str)\n access_pyxb.allow.append(rule_pyxb)\n if len(access_pyxb.allow):\n return access_pyxb", "response": "Return an AccessPolicy PyXB representation of the norm_perm_list."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a dict representation of an AccessPolicyPyXB object.", "response": "def _subj_dict_from_pyxb(self, access_pyxb):\n \"\"\"Return a dict representation of ``access_pyxb``, which is an AccessPolicy\n PyXB object.\n\n This also remove any duplicate subjects and permissions in the PyXB object.\n\n \"\"\"\n subj_dict = {}\n for allow_pyxb in access_pyxb.allow:\n perm_set = set()\n for perm_pyxb in allow_pyxb.permission:\n 
perm_set.add(perm_pyxb)\n for subj_pyxb in allow_pyxb.subject:\n subj_dict.setdefault(subj_pyxb.value(), set()).update(perm_set)\n return subj_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _highest_perm_dict_from_perm_dict(self, perm_dict):\n highest_perm_dict = copy.copy(perm_dict)\n for ordered_str in reversed(ORDERED_PERM_LIST):\n for lower_perm in self._lower_perm_list(ordered_str):\n highest_perm_dict.setdefault(lower_perm, set())\n highest_perm_dict[lower_perm] -= perm_dict.get(ordered_str, set())\n return highest_perm_dict", "response": "Return a perm_dict where only the highest permission for each subject is\n included."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _norm_perm_list_from_perm_dict(self, perm_dict):\n high_perm_dict = self._highest_perm_dict_from_perm_dict(perm_dict)\n return [\n [k, list(sorted(high_perm_dict[k]))]\n for k in ORDERED_PERM_LIST\n if high_perm_dict.get(k, False)\n ]", "response": "Return a minimal ordered hashable list of subjects and permissions."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _effective_perm_list_from_iter(self, perm_iter):\n highest_perm_str = self._highest_perm_from_iter(perm_iter)\n return (\n self._equal_or_lower_perm_list(highest_perm_str)\n if highest_perm_str is not None\n else None\n )", "response": "Return list of effective permissions for highest permission in\n perm_iter ordered lower to higher or None."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _present_perm_set_for_subj(self, perm_dict, subj_str):\n return {p for p, s in list(perm_dict.items()) if subj_str in s}", "response": "Return a set containing only the permissions that are present in the the\n perm_dict for subj_str"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 
script for\nreturning the highest perm present in perm_iter or None.", "response": "def _highest_perm_from_iter(self, perm_iter):\n \"\"\"Return the highest perm present in ``perm_iter`` or None if ``perm_iter`` is\n empty.\"\"\"\n perm_set = set(perm_iter)\n for perm_str in reversed(ORDERED_PERM_LIST):\n if perm_str in perm_set:\n return perm_str"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the ordered index of perm_str.", "response": "def _ordered_idx_from_perm(self, perm_str):\n \"\"\"Return the ordered index of ``perm_str`` or None if ``perm_str`` is not a\n valid permission.\"\"\"\n for i, ordered_str in enumerate(ORDERED_PERM_LIST):\n if perm_str == ordered_str:\n return i"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _assert_valid_permission(self, perm_str):\n if perm_str not in ORDERED_PERM_LIST:\n raise d1_common.types.exceptions.InvalidRequest(\n 0,\n 'Permission must be one of {}. perm_str=\"{}\"'.format(\n ', '.join(ORDERED_PERM_LIST), perm_str\n ),\n )", "response": "Raise D1 exception if perm_str is not a valid permission."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nhandling an unexpected exception.", "response": "def handle_unexpected_exception(max_traceback_levels=100):\n \"\"\"Suppress stack traces for common errors and provide hints for how to resolve\n them.\"\"\"\n exc_type, exc_msgs = sys.exc_info()[:2]\n if exc_type.__name__ == \"SSLError\":\n d1_cli.impl.util.print_error(\n \"\"\"HTTPS / TLS / SSL / X.509v3 Certificate Error:\n An HTTPS connection could not be established. Verify that a DataONE node\n responds at the URL provided in the cn-url or mn-url session variable. If the\n URL is valid and if you intended to connect without authentication, make sure\n that the session variable, \"anonymous\", is set to True. 
If you intended to\n connect with authentication, make sure that the parameter, \"cert-file\", points\n to a valid certificate from CILogon. If the certificate has the private\n key in a separate file, also set \"key-file\" to the private key file.\n Otherwise, set \"key-file\" to None. Note that CILogon certificates must be\n renewed after 18 hours.\n \"\"\"\n )\n elif exc_type.__name__ == \"timeout\":\n d1_cli.impl.util.print_error(\n \"\"\"Timeout error:\n A connection to a DataONE node timed out. Verify that a DataONE node responds\n at the URL provided in the cn-url or mn-url session variable.\n \"\"\"\n )\n else:\n _print_unexpected_exception(max_traceback_levels)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_sciobj(request, sysmeta_pyxb):\n pid = d1_common.xml.get_req_val(sysmeta_pyxb.identifier)\n\n set_mn_controlled_values(request, sysmeta_pyxb, is_modification=False)\n d1_gmn.app.views.assert_db.is_valid_pid_for_create(pid)\n d1_gmn.app.views.assert_sysmeta.sanity(request, sysmeta_pyxb)\n\n if _is_proxy_sciobj(request):\n sciobj_url = _get_sciobj_proxy_url(request)\n _sanity_check_proxy_url(sciobj_url)\n else:\n sciobj_url = d1_gmn.app.sciobj_store.get_rel_sciobj_file_url_by_pid(pid)\n\n if not _is_proxy_sciobj(request):\n if d1_gmn.app.resource_map.is_resource_map_sysmeta_pyxb(sysmeta_pyxb):\n _create_resource_map(pid, request, sysmeta_pyxb, sciobj_url)\n else:\n _save_sciobj_bytes_from_request(request, pid)\n d1_gmn.app.scimeta.assert_valid(sysmeta_pyxb, pid)\n\n d1_gmn.app.sysmeta.create_or_update(sysmeta_pyxb, sciobj_url)\n\n d1_gmn.app.event_log.create(\n d1_common.xml.get_req_val(sysmeta_pyxb.identifier),\n request,\n timestamp=d1_common.date_time.normalize_datetime_to_utc(\n sysmeta_pyxb.dateUploaded\n ),\n )", "response": "Create a new native locally stored science object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef 
_save_sciobj_bytes_from_request(request, pid):\n    sciobj_path = d1_gmn.app.sciobj_store.get_abs_sciobj_file_path_by_pid(pid)\n    if hasattr(request.FILES['object'], 'temporary_file_path'):\n        d1_common.utils.filesystem.create_missing_directories_for_file(sciobj_path)\n        django.core.files.move.file_move_safe(\n            request.FILES['object'].temporary_file_path(), sciobj_path\n        )\n    else:\n        with d1_gmn.app.sciobj_store.open_sciobj_file_by_path_ctx(\n            sciobj_path, write=True\n        ) as sciobj_stream:\n            for chunk in request.FILES['object'].chunks():\n                sciobj_stream.write(chunk)", "response": "Save bytes from request.FILES."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_mn_controlled_values(request, sysmeta_pyxb, is_modification):\n    now_datetime = d1_common.date_time.utc_now()\n\n    default_value_list = [\n        ('originMemberNode', django.conf.settings.NODE_IDENTIFIER, True),\n        ('authoritativeMemberNode', django.conf.settings.NODE_IDENTIFIER, True),\n        ('serialVersion', 1, False),\n        ('dateUploaded', now_datetime, False),\n    ]\n\n    if not is_modification:\n        # submitter cannot be updated as the CN does not allow it.\n        default_value_list.append(('submitter', request.primary_subject_str, True))\n        # dateSysMetadataModified cannot be updated as it is used for optimistic\n        # locking.
If changed, it is assumed that optimistic locking failed, and the\n # update is rejected in order to prevent a concurrent update from being lost.\n default_value_list.append(('dateSysMetadataModified', now_datetime, False))\n else:\n sysmeta_pyxb.submitter = None\n sysmeta_pyxb.dateSysMetadataModified = now_datetime\n\n for attr_str, default_value, is_simple_content in default_value_list:\n is_trusted_from_client = getattr(\n django.conf.settings, 'TRUST_CLIENT_{}'.format(attr_str.upper()), False\n )\n override_value = None\n if is_trusted_from_client:\n override_value = (\n d1_common.xml.get_opt_val(sysmeta_pyxb, attr_str)\n if is_simple_content\n else getattr(sysmeta_pyxb, attr_str, None)\n )\n setattr(sysmeta_pyxb, attr_str, override_value or default_value)", "response": "Set the sysmeta_pyxb attributes to the appropriate values for the MN_CONTROLLED_ATTRIBUTE_SET_ALL attributes."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef duplicate(self, contributor=None, inherit_collections=False):\n return [\n entity.duplicate(contributor, inherit_collections)\n for entity in self\n ]", "response": "Duplicate ( make a copy ) Entity objects."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef move_to_collection(self, source_collection, destination_collection):\n for entity in self:\n entity.move_to_collection(source_collection, destination_collection)", "response": "Move entities from source to destination collection."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nduplicate (make a copy).", "response": "def duplicate(self, contributor=None, inherit_collections=False):\n \"\"\"Duplicate (make a copy).\"\"\"\n duplicate = Entity.objects.get(id=self.id)\n duplicate.pk = None\n duplicate.slug = None\n duplicate.name = 'Copy of {}'.format(self.name)\n duplicate.duplicated = now()\n if contributor:\n 
duplicate.contributor = contributor\n\n duplicate.save(force_insert=True)\n\n assign_contributor_permissions(duplicate)\n\n # Override fields that are automatically set on create.\n duplicate.created = self.created\n duplicate.save()\n\n # Duplicate entity's data objects.\n data = get_objects_for_user(contributor, 'view_data', self.data.all()) # pylint: disable=no-member\n duplicated_data = data.duplicate(contributor)\n duplicate.data.add(*duplicated_data)\n\n if inherit_collections:\n collections = get_objects_for_user(\n contributor,\n 'add_collection',\n self.collections.all() # pylint: disable=no-member\n )\n for collection in collections:\n collection.entity_set.add(duplicate)\n copy_permissions(collection, duplicate)\n collection.data.add(*duplicated_data)\n for datum in duplicated_data:\n copy_permissions(collection, datum)\n\n return duplicate"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef move_to_collection(self, source_collection, destination_collection):\n # Remove from collection.\n self.collections.remove(source_collection) # pylint: disable=no-member\n source_collection.data.remove(*self.data.all()) # pylint: disable=no-member\n\n # Add to collection.\n self.collections.add(destination_collection) # pylint: disable=no-member\n destination_collection.data.add(*self.data.all())", "response": "Move entity from source to destination collection."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef quote(s, unsafe='/'):\n res = s.replace('%', '%25')\n for c in unsafe:\n res = res.replace(c, '%' + (hex(ord(c)).upper())[2:])\n return res", "response": "Pass in a dictionary that has unsafe characters as the keys and percent\n encoded value as the value."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning dependencies which should trigger updates of this model.", "response": "def 
get_dependencies(self):\n        \"\"\"Return dependencies, which should trigger updates of this model.\"\"\"\n        # pylint: disable=no-member\n        return super().get_dependencies() + [\n            Data.collection_set,\n            Data.entity_set,\n            Data.parents,\n        ]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef alpha_s(scale, f, alphasMZ=0.1185, loop=3):\n    if scale == MZ and f == 5:\n        return alphasMZ # nothing to do\n    _sane(scale, f)\n    crd = rundec.CRunDec()\n    if f == 5:\n        return_value = crd.AlphasExact(alphasMZ, MZ, scale, f, loop)\n    elif f == 6:\n        crd.nfMmu.Mth = 170\n        crd.nfMmu.muth = 170\n        crd.nfMmu.nf = 6\n        return_value = crd.AlL2AlH(alphasMZ, MZ, crd.nfMmu, scale, loop)\n    elif f == 4:\n        crd.nfMmu.Mth = 4.8\n        crd.nfMmu.muth = 4.8\n        crd.nfMmu.nf = 5\n        return_value = crd.AlH2AlL(alphasMZ, MZ, crd.nfMmu, scale, loop)\n    elif f == 3:\n        crd.nfMmu.Mth = 4.8\n        crd.nfMmu.muth = 4.8\n        crd.nfMmu.nf = 5\n        mc = 1.3\n        asmc = crd.AlH2AlL(alphasMZ, MZ, crd.nfMmu, mc, loop)\n        crd.nfMmu.Mth = mc\n        crd.nfMmu.muth = mc\n        crd.nfMmu.nf = 4\n        return_value = crd.AlH2AlL(asmc, mc, crd.nfMmu, scale, loop)\n    else:\n        raise ValueError(\"Invalid input: f={}, scale={}\".format(f, scale))\n    if return_value == 0:\n        raise ValueError(\"Return value is 0, probably `scale={}` is too small.\".format(scale))\n    else:\n        return return_value", "response": "3-loop computation of alpha_s for f flavours"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef m_b(mbmb, scale, f, alphasMZ=0.1185, loop=3):\n    if scale == mbmb and f == 5:\n        return mbmb # nothing to do\n    _sane(scale, f)\n    alphas_mb = alpha_s(mbmb, 5, alphasMZ=alphasMZ, loop=loop)\n    crd = rundec.CRunDec()\n    if f == 5:\n        alphas_scale = alpha_s(scale, f, alphasMZ=alphasMZ, loop=loop)\n        return crd.mMS2mMS(mbmb, alphas_mb, alphas_scale, f, loop)\n    elif f == 4:\n        crd.nfMmu.Mth = 4.8\n        crd.nfMmu.muth = 4.8\n        crd.nfMmu.nf = 5\n        return crd.mH2mL(mbmb,
alphas_mb, mbmb, crd.nfMmu, scale, loop)\n elif f == 3:\n mc = 1.3\n crd.nfMmu.Mth = 4.8\n crd.nfMmu.muth = 4.8\n crd.nfMmu.nf = 5\n mbmc = crd.mH2mL(mbmb, alphas_mb, mbmb, crd.nfMmu, mc, loop)\n crd.nfMmu.Mth = mc\n crd.nfMmu.muth = mc\n crd.nfMmu.nf = 4\n alphas_mc = alpha_s(mc, 4, alphasMZ=alphasMZ, loop=loop)\n return crd.mH2mL(mbmc, alphas_mc, mc, crd.nfMmu, scale, loop)\n elif f == 6:\n crd.nfMmu.Mth = 170\n crd.nfMmu.muth = 170\n crd.nfMmu.nf = 6\n return crd.mL2mH(mbmb, alphas_mb, mbmb, crd.nfMmu, scale, loop)\n else:\n raise ValueError(\"Invalid input: f={}, scale={}\".format(f, scale))", "response": "Return the running b quark mass in the MSbar scheme at the given scale."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef m_c(mcmc, scale, f, alphasMZ=0.1185, loop=3):\n if scale == mcmc:\n return mcmc # nothing to do\n _sane(scale, f)\n crd = rundec.CRunDec()\n alphas_mc = alpha_s(mcmc, 4, alphasMZ=alphasMZ, loop=loop)\n if f == 4:\n alphas_scale = alpha_s(scale, f, alphasMZ=alphasMZ, loop=loop)\n return crd.mMS2mMS(mcmc, alphas_mc, alphas_scale, f, loop)\n elif f == 3:\n crd.nfMmu.Mth = 1.3\n crd.nfMmu.muth = 1.3\n crd.nfMmu.nf = 4\n return crd.mH2mL(mcmc, alphas_mc, mcmc, crd.nfMmu, scale, loop)\n elif f == 5:\n crd.nfMmu.Mth = 4.8\n crd.nfMmu.muth = 4.8\n crd.nfMmu.nf = 5\n return crd.mL2mH(mcmc, alphas_mc, mcmc, crd.nfMmu, scale, loop)\n else:\n raise ValueError(\"Invalid input: f={}, scale={}\".format(f, scale))", "response": "Return the running c quark mass in the MSbar scheme at the given scale."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef m_s(ms2, scale, f, alphasMZ=0.1185, loop=3):\n if scale == 2 and f == 3:\n return ms2 # nothing to do\n _sane(scale, f)\n crd = rundec.CRunDec()\n alphas_2 = alpha_s(2, 3, alphasMZ=alphasMZ, loop=loop)\n if f == 3:\n alphas_scale = alpha_s(scale, f, alphasMZ=alphasMZ, 
loop=loop)\n return crd.mMS2mMS(ms2, alphas_2, alphas_scale, f, loop)\n elif f == 4:\n crd.nfMmu.Mth = 1.3\n crd.nfMmu.muth = 1.3\n crd.nfMmu.nf = 4\n return crd.mL2mH(ms2, alphas_2, 2, crd.nfMmu, scale, loop)\n elif f == 5:\n mc = 1.3\n crd.nfMmu.Mth = mc\n crd.nfMmu.muth = mc\n crd.nfMmu.nf = 4\n msmc = crd.mL2mH(ms2, alphas_2, 2, crd.nfMmu, mc, loop)\n crd.nfMmu.Mth = 4.8\n crd.nfMmu.muth = 4.8\n crd.nfMmu.nf = 5\n alphas_mc = alpha_s(mc, 4, alphasMZ=alphasMZ, loop=loop)\n return crd.mL2mH(msmc, alphas_mc, mc, crd.nfMmu, scale, loop)\n else:\n raise ValueError(\"Invalid input: f={}, scale={}\".format(f, scale))", "response": "Return the running s quark mass in the MSbar scheme at the given scale."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_confirmation(self):\n if self.clear:\n action = 'This will DELETE ALL FILES in this location!'\n else:\n action = 'This will overwrite existing files!'\n\n message = (\n \"\\n\"\n \"You have requested to collect static files at the destination\\n\"\n \"location as specified in your settings\\n\"\n \"\\n\"\n \" {destination}\\n\"\n \"\\n\"\n \"{action}\\n\"\n \"Are you sure you want to do this?\\n\"\n \"\\n\"\n \"Type 'yes' to continue, or 'no' to cancel: \".format(\n destination=self.destination_path,\n action=action,\n )\n )\n\n if input(message) != 'yes':\n raise CommandError(\"Collecting tools cancelled.\")", "response": "Get user confirmation to proceed."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndelete contents of the directory on the given path.", "response": "def clear_dir(self):\n \"\"\"Delete contents of the directory on the given path.\"\"\"\n self.stdout.write(\"Deleting contents of '{}'.\".format(self.destination_path))\n\n for entry in os.listdir(self.destination_path):\n # Join with the destination path; the bare names returned by\n # os.listdir would otherwise be resolved against the current\n # working directory.\n filename = os.path.join(self.destination_path, entry)\n if os.path.isfile(filename) or os.path.islink(filename):\n os.remove(filename)\n elif 
os.path.isdir(filename):\n shutil.rmtree(filename)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nchange path prefix and include app name.", "response": "def change_path_prefix(self, path, old_prefix, new_prefix, app_name):\n \"\"\"Change path prefix and include app name.\"\"\"\n relative_path = os.path.relpath(path, old_prefix)\n return os.path.join(new_prefix, app_name, relative_path)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef collect(self):\n for app_name, tools_path in get_apps_tools().items():\n self.stdout.write(\"Copying files from '{}'.\".format(tools_path))\n\n app_name = app_name.replace('.', '_')\n\n app_destination_path = os.path.join(self.destination_path, app_name)\n if not os.path.isdir(app_destination_path):\n os.mkdir(app_destination_path)\n\n for root, dirs, files in os.walk(tools_path):\n for dir_name in dirs:\n dir_source_path = os.path.join(root, dir_name)\n dir_destination_path = self.change_path_prefix(\n dir_source_path, tools_path, self.destination_path, app_name\n )\n\n if not os.path.isdir(dir_destination_path):\n os.mkdir(dir_destination_path)\n\n for file_name in files:\n file_source_path = os.path.join(root, file_name)\n file_destination_path = self.change_path_prefix(\n file_source_path, tools_path, self.destination_path, app_name\n )\n\n shutil.copy2(file_source_path, file_destination_path)", "response": "Get tools locations and copy them to a single location."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncompute checksum of processor inputs name and version.", "response": "def get_data_checksum(proc_input, proc_slug, proc_version):\n \"\"\"Compute checksum of processor inputs, name and version.\"\"\"\n checksum = hashlib.sha256()\n checksum.update(json.dumps(proc_input, sort_keys=True).encode('utf-8'))\n checksum.update(proc_slug.encode('utf-8'))\n 
checksum.update(str(proc_version).encode('utf-8'))\n return checksum.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget or set value using a dot - notation key in a multilevel dict.", "response": "def dict_dot(d, k, val=None, default=None):\n \"\"\"Get or set value using a dot-notation key in a multilevel dict.\"\"\"\n if val is None and k == '':\n return d\n\n def set_default(dict_or_model, key, default_value):\n \"\"\"Set default field value.\"\"\"\n if isinstance(dict_or_model, models.Model):\n if not hasattr(dict_or_model, key):\n setattr(dict_or_model, key, default_value)\n\n return getattr(dict_or_model, key)\n else:\n return dict_or_model.setdefault(key, default_value)\n\n def get_item(dict_or_model, key):\n \"\"\"Get field value.\"\"\"\n if isinstance(dict_or_model, models.Model):\n return getattr(dict_or_model, key)\n else:\n return dict_or_model[key]\n\n def set_item(dict_or_model, key, value):\n \"\"\"Set field value.\"\"\"\n if isinstance(dict_or_model, models.Model):\n setattr(dict_or_model, key, value)\n else:\n dict_or_model[key] = value\n\n if val is None and callable(default):\n # Get value, default for missing\n return functools.reduce(lambda a, b: set_default(a, b, default()), k.split('.'), d)\n\n elif val is None:\n # Get value, error on missing\n return functools.reduce(get_item, k.split('.'), d)\n\n else:\n # Set value\n try:\n k, k_last = k.rsplit('.', 1)\n set_item(dict_dot(d, k, default=dict), k_last, val)\n except ValueError:\n set_item(d, k, val)\n return val"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets applications tools and their paths.", "response": "def get_apps_tools():\n \"\"\"Get applications' tools and their paths.\n\n Return a dict with application names as keys and paths to tools'\n directories as values. 
Applications without tools are omitted.\n \"\"\"\n tools_paths = {}\n\n for app_config in apps.get_app_configs():\n proc_path = os.path.join(app_config.path, 'tools')\n if os.path.isdir(proc_path):\n tools_paths[app_config.name] = proc_path\n\n custom_tools_paths = getattr(settings, 'RESOLWE_CUSTOM_TOOLS_PATHS', [])\n if not isinstance(custom_tools_paths, list):\n raise KeyError(\"`RESOLWE_CUSTOM_TOOLS_PATHS` setting must be a list.\")\n\n for seq, custom_path in enumerate(custom_tools_paths):\n custom_key = '_custom_{}'.format(seq)\n tools_paths[custom_key] = custom_path\n\n return tools_paths"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef rewire_inputs(data_list):\n if len(data_list) < 2:\n return data_list\n\n mapped_ids = {bundle['original'].id: bundle['copy'].id for bundle in data_list}\n\n for bundle in data_list:\n updated = False\n copy = bundle['copy']\n\n for field_schema, fields in iterate_fields(copy.input, copy.process.input_schema):\n name = field_schema['name']\n value = fields[name]\n\n if field_schema['type'].startswith('data:') and value in mapped_ids:\n fields[name] = mapped_ids[value]\n updated = True\n\n elif field_schema['type'].startswith('list:data:') and any([id_ in mapped_ids for id_ in value]):\n fields[name] = [mapped_ids[id_] if id_ in mapped_ids else id_ for id_ in value]\n updated = True\n\n if updated:\n copy.save()\n\n return data_list", "response": "Rewire inputs of provided data objects."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses the given XML and use the XML element to create a Python object.", "response": "def CreateFromDocument(xml_text, default_namespace=None, location_base=None):\n \"\"\"Parse the given XML and use the document element to create a Python instance.\n\n @param xml_text An XML document. 
This should be data (Python 2\n str or Python 3 bytes), or a text (Python 2 unicode or Python 3\n str) in the L{pyxb._InputEncoding} encoding.\n\n @keyword default_namespace The L{pyxb.Namespace} instance to use as the\n default namespace where there is no default namespace in scope.\n If unspecified or C{None}, the namespace of the module containing\n this function will be used.\n\n @keyword location_base: An object to be recorded as the base of all\n L{pyxb.utils.utility.Location} instances associated with events and\n objects handled by the parser. You might pass the URI from which\n the document was obtained.\n\n \"\"\"\n\n if pyxb.XMLStyle_saxer != pyxb._XMLStyle:\n dom = pyxb.utils.domutils.StringToDOM(xml_text)\n return CreateFromDOM(dom.documentElement, default_namespace=default_namespace)\n if default_namespace is None:\n default_namespace = Namespace.fallbackNamespace()\n saxer = pyxb.binding.saxer.make_parser(\n fallback_namespace=default_namespace, location_base=location_base\n )\n handler = saxer.getContentHandler()\n xmld = xml_text\n if isinstance(xmld, pyxb.utils.six.text_type):\n xmld = xmld.encode(pyxb._InputEncoding)\n saxer.parse(io.BytesIO(xmld))\n instance = handler.rootObject()\n return instance"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a Python instance from the given DOM node.", "response": "def CreateFromDOM(node, default_namespace=None):\n \"\"\"Create a Python instance from the given DOM node. 
The node tag must correspond to\n an element declaration in this module.\n\n @deprecated: Forcing use of DOM interface is unnecessary; use L{CreateFromDocument}.\n\n \"\"\"\n if default_namespace is None:\n default_namespace = Namespace.fallbackNamespace()\n return pyxb.binding.basis.element.AnyCreateFromDOM(node, default_namespace)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef postloop(self):\n cmd.Cmd.postloop(self) # Clean up command completion\n d1_cli.impl.util.print_info(\"Exiting...\")", "response": "Take care of any unfinished business."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef default(self, line):\n args = self._split_args(line, 0, 99)\n d1_cli.impl.util.print_error(\"Unknown command: {}\".format(args[0]))", "response": "Called on an input line when the command prefix is not recognized."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef do_help(self, line):\n command, = self._split_args(line, 0, 1)\n if command is None:\n return self._print_help()\n cmd.Cmd.do_help(self, line)", "response": "Get help on commands. \"help\" or \"?\" with no arguments displays a list of commands for which help is available."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef do_history(self, line):\n self._split_args(line, 0, 0)\n for idx, item in enumerate(self._history):\n d1_cli.impl.util.print_info(\"{0: 3d} {1}\".format(idx, item))", "response": "history Display a list of commands that have been entered."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nexiting from the CLI.", "response": "def do_exit(self, line):\n \"\"\"exit Exit from the CLI.\"\"\"\n n_remaining_operations = len(self._command_processor.get_operation_queue())\n if n_remaining_operations:\n d1_cli.impl.util.print_warn(\n \"\"\"There are {} 
unperformed operations in the write operation queue. These will\nbe lost if you exit.\"\"\".format(\n n_remaining_operations\n )\n )\n if not d1_cli.impl.util.confirm(\"Exit?\", default=\"yes\"):\n return\n sys.exit()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nexiting on system EOF character.", "response": "def do_eof(self, line):\n \"\"\"Exit on system EOF character.\"\"\"\n d1_cli.impl.util.print_info(\"\")\n self.do_exit(line)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef do_set(self, line):\n session_parameter, value = self._split_args(line, 0, 2)\n if value is None:\n self._command_processor.get_session().print_variable(session_parameter)\n else:\n self._command_processor.get_session().set_with_conversion(\n session_parameter, value\n )\n self._print_info_if_verbose(\n 'Set session variable {} to \"{}\"'.format(session_parameter, value)\n )", "response": "set [session-parameter [value]] Set the value of a session parameter, or display its current value when no value is given."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef do_load(self, line):\n config_file = self._split_args(line, 0, 1)[0]\n self._command_processor.get_session().load(config_file)\n if config_file is None:\n config_file = (\n self._command_processor.get_session().get_default_pickle_file_path()\n )\n self._print_info_if_verbose(\"Loaded session from file: {}\".format(config_file))", "response": "Load a session from file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsaving session variables to a file.", "response": "def do_save(self, line):\n \"\"\"save [config_file] Save session variables to file.\n\n save (without parameters): Save session to the default file ~/.dataone_cli.conf.\n save [config_file]: Save session to the specified file.\n\n \"\"\"\n config_file = self._split_args(line, 0, 1)[0]\n 
self._command_processor.get_session().save(config_file)\n if config_file is None:\n config_file = (\n self._command_processor.get_session().get_default_pickle_file_path()\n )\n self._print_info_if_verbose(\"Saved session to file: {}\".format(config_file))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nresets all session variables to their default values.", "response": "def do_reset(self, line):\n \"\"\"reset Set all session variables to their default values.\"\"\"\n self._split_args(line, 0, 0)\n self._command_processor.get_session().reset()\n self._print_info_if_verbose(\"Successfully reset session variables\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef do_allowaccess(self, line):\n subject, permission = self._split_args(line, 1, 1)\n self._command_processor.get_session().get_access_control().add_allowed_subject(\n subject, permission\n )\n self._print_info_if_verbose(\n 'Set {} access for subject \"{}\"'.format(permission, subject)\n )", "response": "allowaccess <subject> [access-level] Set the access level for a subject. Access level is read, write or changePermission."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef do_allowrep(self, line):\n self._split_args(line, 0, 0)\n self._command_processor.get_session().get_replication_policy().set_replication_allowed(\n True\n )\n self._print_info_if_verbose(\"Set replication policy to allow replication\")", "response": "allowrep Allow new objects to be replicated."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef do_denyrep(self, line):\n self._split_args(line, 0, 0)\n self._command_processor.get_session().get_replication_policy().set_replication_allowed(\n False\n )\n self._print_info_if_verbose(\"Set replication policy to deny replication\")", "response": "denyrep Prevent new objects from 
being replicated."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef do_blockrep(self, line):\n mns = self._split_args(line, 1, -1)\n self._command_processor.get_session().get_replication_policy().add_blocked(mns)\n self._print_info_if_verbose(\n \"Set {} as blocked replication target(s)\".format(\", \".join(mns))\n )", "response": "blockrep <node> [node ...] Add one or more Member Nodes to the list of blocked replication targets."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef do_removerep(self, line):\n mns = self._split_args(line, 1, -1)\n self._command_processor.get_session().get_replication_policy().repremove(mns)\n self._print_info_if_verbose(\n \"Removed {} from replication policy\".format(\", \".join(mns))\n )", "response": "removerep <node> [node ...] Remove one or more Member Nodes from the replication policy."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef do_numberrep(self, line):\n n_replicas = self._split_args(line, 1, 0)[0]\n self._command_processor.get_session().get_replication_policy().set_number_of_replicas(\n n_replicas\n )\n self._print_info_if_verbose(\"Set number of replicas to {}\".format(n_replicas))", "response": "numberrep <number> Set the preferred number of replicas for new objects."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef do_get(self, line):\n pid, output_file = self._split_args(line, 2, 0)\n self._command_processor.science_object_get(pid, output_file)\n self._print_info_if_verbose(\n 'Downloaded \"{}\" to file: {}'.format(pid, output_file)\n )", "response": "get - Get an object from a Member Node."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef do_meta(self, line):\n pid, output_file = self._split_args(line, 1, 1)\n self._command_processor.system_metadata_get(pid, output_file)\n if output_file is not None:\n 
self._print_info_if_verbose(\n 'Downloaded system metadata for \"{}\" to file: {}'.format(\n pid, output_file\n )\n )", "response": "meta <pid> [output-file] Get the System Metadata that is associated with the Science Object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nlisting available Science Data Objects from a Member Node.", "response": "def do_list(self, line):\n \"\"\"list [path] Retrieve a list of available Science Data Objects from Member Node.\n\n The response is filtered by the from-date, to-date, search, start and count\n session variables.\n\n See also: search\n\n \"\"\"\n path = self._split_args(line, 0, 1, pad=False)\n if len(path):\n path = path[0]\n self._command_processor.list_objects(path)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef do_log(self, line):\n path = self._split_args(line, 0, 1, pad=False)\n if len(path):\n path = path[0]\n self._command_processor.log(path)", "response": "log [path] Retrieve the event log from a Member Node. The response is filtered by the from-date, to-date, start and count session parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef do_resolve(self, line):\n pid, = self._split_args(line, 1, 0)\n self._command_processor.resolve(pid)", "response": "resolve Find all locations from which the given Science Object can be downloaded."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef do_create(self, line):\n pid, sciobj_path = self._split_args(line, 2, 0)\n self._command_processor.science_object_create(pid, sciobj_path)\n self._print_info_if_verbose(\n 'Added create operation for identifier \"{}\" to write queue'.format(pid)\n )", "response": "create - Create a new Science Object on a Member Node."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef do_update(self, line):\n 
curr_pid, pid_new, input_file = self._split_args(line, 3, 0)\n self._command_processor.science_object_update(curr_pid, input_file, pid_new)\n self._print_info_if_verbose(\n 'Added update operation for identifier \"{}\" to write queue'.format(curr_pid)\n )", "response": "update - Replace an existing Science Object on a Member Node with a new version under a new identifier."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef do_package(self, line):\n pids = self._split_args(line, 3, -1, pad=False)\n self._command_processor.create_package(pids)\n self._print_info_if_verbose(\n 'Added package create operation for identifier \"{}\" to write queue'.format(\n pids[0]\n )\n )", "response": "package - Create a simple OAI-ORE Resource Map on a Member Node."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmark one or more existing Science Objects as archived.", "response": "def do_archive(self, line):\n \"\"\"archive [identifier ...] Mark one or more existing Science\n Objects as archived.\"\"\"\n pids = self._split_args(line, 1, -1)\n self._command_processor.science_object_archive(pids)\n self._print_info_if_verbose(\n \"Added archive operation for identifier(s) {} to write queue\".format(\n \", \".join(pids)\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef do_updateaccess(self, line):\n pids = self._split_args(line, 1, -1)\n self._command_processor.update_access_policy(pids)\n self._print_info_if_verbose(\n \"Added access policy update operation for identifiers {} to write queue\".format(\n \", \".join(pids)\n )\n )", "response": "updateaccess [identifier ...] 
Update the Access Policy on one or\n more existing Science Data Objects."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsearch [ query ] Comprehensive search for Science Data Objects across all available MNs.", "response": "def do_search(self, line):\n \"\"\"search [query] Comprehensive search for Science Data Objects across all\n available MNs.\n\n See https://releases.dataone.org/online/api-\n documentation-v2.0.1/design/SearchMetadata.html for the available search terms.\n\n \"\"\"\n args = self._split_args(line, 0, -1)\n query = \" \".join([_f for _f in args if _f])\n self._command_processor.search(query)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef do_ping(self, line):\n hosts = self._split_args(line, 0, 99, pad=False)\n self._command_processor.ping(hosts)", "response": "ping - Ping the CN and MN of the current session."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nqueues Print the queue of write operations.", "response": "def do_queue(self, line):\n \"\"\"queue Print the queue of write operations.\"\"\"\n self._split_args(line, 0, 0)\n self._command_processor.get_operation_queue().display()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef do_run(self, line):\n self._split_args(line, 0, 0)\n self._command_processor.get_operation_queue().execute()\n self._print_info_if_verbose(\n \"All operations in the write queue were successfully executed\"\n )", "response": "run Perform each operation in the queue of write operations."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef do_edit(self, line):\n self._split_args(line, 0, 0)\n self._command_processor.get_operation_queue().edit()\n self._print_info_if_verbose(\"The write operation queue was successfully edited\")", "response": "edit Edit the queue of write operations."} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _print_help(self):\n msg = \"\"\"Commands (type help for details)\n\nCLI: help history exit quit\nSession, General: set load save reset\nSession, Access Control: allowaccess denyaccess clearaccess\nSession, Replication: allowrep denyrep preferrep blockrep\n removerep numberrep clearrep\nRead Operations: get meta list log resolve\nWrite Operations: update create package archive\n updateaccess updatereplication\nUtilities: listformats listnodes search ping\nWrite Operation Queue: queue run edit clearqueue\n\nCommand History: Arrow Up, Arrow Down\nCommand Editing: Arrow Left, Arrow Right, Delete\n \"\"\"\n if platform.system() != \"Windows\":\n msg += \"\"\"Command Completion: Single Tab: Complete unique command\n Double Tab: Display possible commands\n \"\"\"\n d1_cli.impl.util.print_info(msg)", "response": "Print help message to group commands by functionality."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_groups_with_perms(obj, attach_perms=False):\n ctype = get_content_type(obj)\n group_model = get_group_obj_perms_model(obj)\n\n if not attach_perms:\n # It's much easier without attached perms so we do it first if that is the case\n group_rel_name = group_model.group.field.related_query_name()\n if group_model.objects.is_generic():\n group_filters = {\n '%s__content_type' % group_rel_name: ctype,\n '%s__object_pk' % group_rel_name: obj.pk,\n }\n else:\n group_filters = {'%s__content_object' % group_rel_name: obj}\n return Group.objects.filter(**group_filters).distinct()\n else:\n group_perms_mapping = defaultdict(list)\n groups_with_perms = get_groups_with_perms(obj)\n queryset = group_model.objects.filter(group__in=groups_with_perms).prefetch_related('group', 'permission')\n if group_model is GroupObjectPermission:\n queryset = queryset.filter(object_pk=obj.pk, content_type=ctype)\n else:\n queryset = 
queryset.filter(content_object_id=obj.pk)\n\n for group_perm in queryset:\n group_perms_mapping[group_perm.group].append(group_perm.permission.codename)\n return dict(group_perms_mapping)", "response": "Return queryset of all Groups with any object permissions for the given object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngroups permissions by group.", "response": "def _group_groups(perm_list):\n \"\"\"Group permissions by group.\n\n Input is a list of tuples of length 3, where each tuple is in the\n following format::\n\n (<group_id>, <group_name>, <permission>)\n\n Permissions are regrouped and returned in such a way that there is\n only one tuple for each group::\n\n (<group_id>, <group_name>, [<permission>, <permission>, ...])\n\n :param list perm_list: list of tuples of length 3\n :return: list of tuples with grouped permissions\n :rtype: list\n\n \"\"\"\n perm_list = sorted(perm_list, key=lambda tup: tup[0])\n\n grouped_perms = []\n for key, group in groupby(perm_list, lambda tup: (tup[0], tup[1])):\n grouped_perms.append((key[0], key[1], [g[2] for g in group]))\n\n return grouped_perms"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget permissions for user groups.", "response": "def get_user_group_perms(user_or_group, obj):\n \"\"\"Get permissions for user groups.\n\n Based on guardian.core.ObjectPermissionChecker.\n\n \"\"\"\n user, group = get_identity(user_or_group)\n\n if user and not user.is_active:\n return [], []\n user_model = get_user_model()\n ctype = ContentType.objects.get_for_model(obj)\n\n group_model = get_group_obj_perms_model(obj)\n group_rel_name = group_model.permission.field.related_query_name()\n if user:\n user_rel_name = user_model.groups.field.related_query_name()\n group_filters = {user_rel_name: user}\n else:\n group_filters = {'pk': group.pk}\n if group_model.objects.is_generic():\n group_filters.update({\n '{}__content_type'.format(group_rel_name): ctype,\n '{}__object_pk'.format(group_rel_name): obj.pk,\n })\n else:\n 
group_filters['{}__content_object'.format(group_rel_name)] = obj\n\n user_perms, group_perms = [], []\n\n if user:\n perms_qs = Permission.objects.filter(content_type=ctype)\n if user.is_superuser:\n user_perms = list(chain(perms_qs.values_list(\"codename\", flat=True)))\n else:\n model = get_user_obj_perms_model(obj)\n related_name = model.permission.field.related_query_name()\n user_filters = {'{}__user'.format(related_name): user}\n if model.objects.is_generic():\n user_filters.update({\n '{}__content_type'.format(related_name): ctype,\n '{}__object_pk'.format(related_name): obj.pk,\n })\n else:\n user_filters['{}__content_object'.format(related_name)] = obj\n\n user_perms_qs = perms_qs.filter(**user_filters)\n user_perms = list(chain(user_perms_qs.values_list(\"codename\", flat=True)))\n\n group_perms_qs = Group.objects.filter(**group_filters)\n group_perms = list(chain(group_perms_qs.order_by(\"pk\").values_list(\n \"pk\", \"name\", \"{}__permission__codename\".format(group_rel_name))))\n\n group_perms = _group_groups(group_perms)\n\n return user_perms, group_perms"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_object_perms(obj, user=None):\n def format_permissions(perms):\n \"\"\"Remove model name from permission.\"\"\"\n ctype = ContentType.objects.get_for_model(obj)\n return [perm.replace('_{}'.format(ctype.name), '') for perm in perms]\n\n perms_list = []\n\n if user:\n if user.is_authenticated:\n user_perms, group_perms = get_user_group_perms(user, obj)\n else:\n user_perms, group_perms = [], []\n\n if user_perms != []:\n perms_list.append({\n 'type': 'user',\n 'id': user.pk,\n 'name': user.get_full_name() or user.username,\n 'permissions': format_permissions(user_perms),\n })\n\n if group_perms != []:\n for group_id, group_name, perms in group_perms:\n perms_list.append({\n 'type': 'group',\n 'id': group_id,\n 'name': group_name,\n 'permissions': format_permissions(perms),\n })\n else:\n 
user_options = {\n 'attach_perms': True,\n 'with_group_users': False\n }\n for user, perms in get_users_with_perms(obj, **user_options).items():\n if user.username == settings.ANONYMOUS_USER_NAME:\n # public user is treated separately\n continue\n perms_list.append({\n 'type': 'user',\n 'id': user.pk,\n 'name': user.get_full_name() or user.username,\n 'permissions': format_permissions(perms),\n })\n\n group_options = {\n 'attach_perms': True,\n }\n for group, perms in get_groups_with_perms(obj, **group_options).items():\n perms_list.append({\n 'type': 'group',\n 'id': group.pk,\n 'name': group.name,\n 'permissions': format_permissions(perms),\n })\n\n public_perms = get_perms(AnonymousUser(), obj)\n if public_perms != []:\n perms_list.append({\n 'type': 'public',\n 'permissions': format_permissions(public_perms),\n })\n\n return perms_list", "response": "Returns list of permissions for given object in Resolwe specific format."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a queryset with required permissions.", "response": "def get_objects_for_user(user, perms, klass=None, use_groups=True, any_perm=False,\n with_superuser=True, accept_global_perms=True, perms_filter='pk__in'):\n \"\"\"Return queryset with required permissions.\"\"\"\n if isinstance(perms, str):\n perms = [perms]\n\n ctype = None\n app_label = None\n codenames = set()\n\n # Compute codenames set and ctype if possible\n for perm in perms:\n if '.' 
in perm:\n new_app_label, codename = perm.split('.', 1)\n if app_label is not None and app_label != new_app_label:\n raise MixedContentTypeError(\n \"Given perms must have same app label \"\n \"({} != {})\".format(app_label, new_app_label))\n else:\n app_label = new_app_label\n else:\n codename = perm\n codenames.add(codename)\n\n if app_label is not None:\n new_ctype = ContentType.objects.get(app_label=app_label,\n permission__codename=codename)\n if ctype is not None and ctype != new_ctype:\n raise MixedContentTypeError(\n \"ContentType was once computed to be {} and another \"\n \"one {}\".format(ctype, new_ctype))\n else:\n ctype = new_ctype\n\n # Compute queryset and ctype if still missing\n if ctype is None and klass is not None:\n queryset = _get_queryset(klass)\n ctype = ContentType.objects.get_for_model(queryset.model)\n elif ctype is not None and klass is None:\n queryset = _get_queryset(ctype.model_class())\n elif klass is None:\n raise WrongAppError(\"Cannot determine content type\")\n else:\n queryset = _get_queryset(klass)\n if ctype.model_class() != queryset.model and perms_filter == 'pk__in':\n raise MixedContentTypeError(\"Content type for given perms and \"\n \"klass differs\")\n\n # At this point, we should have both ctype and queryset and they should\n # match which means: ctype.model_class() == queryset.model\n # we should also have `codenames` list\n\n # First check if user is superuser and if so, return queryset immediately\n if with_superuser and user.is_superuser:\n return queryset\n\n # Check if the user is anonymous. The\n # django.contrib.auth.models.AnonymousUser object doesn't work for queries\n # and it's nice to be able to pass in request.user blindly.\n if user.is_anonymous:\n user = get_anonymous_user()\n\n global_perms = set()\n has_global_perms = False\n # a superuser has by default assigned global perms for any\n if accept_global_perms and with_superuser:\n for code in codenames:\n if user.has_perm(ctype.app_label + '.' 
+ code):\n global_perms.add(code)\n for code in global_perms:\n codenames.remove(code)\n # prerequisite: there must be elements in global_perms otherwise just\n # follow the procedure for object based permissions only AND\n # 1. codenames is empty, which means that permissions are ONLY set\n # globally, therefore return the full queryset.\n # OR\n # 2. any_perm is True, then the global permission beats the object\n # based permission anyway, therefore return full queryset\n if global_perms and (not codenames or any_perm):\n return queryset\n # if we have global perms and still some object based perms differing\n # from global perms and any_perm is set to false, then we have to flag\n # that global perms exist in order to merge object based permissions by\n # user and by group correctly. Scenario: global perm change_xx and\n # object based perm delete_xx on object A for user, and object based\n # permission delete_xx on object B for group, to which user is\n # assigned.\n # get_objects_for_user(user, [change_xx, delete_xx], use_groups=True,\n # any_perm=False, accept_global_perms=True) must retrieve object A and\n # B.\n elif global_perms and codenames:\n has_global_perms = True\n\n # Now we should extract list of pk values for which we would filter\n # queryset\n user_model = get_user_obj_perms_model(queryset.model)\n user_obj_perms_queryset = (user_model.objects\n .filter(Q(user=user) | Q(user=get_anonymous_user()))\n .filter(permission__content_type=ctype))\n\n if codenames:\n user_obj_perms_queryset = user_obj_perms_queryset.filter(\n permission__codename__in=codenames)\n direct_fields = ['content_object__pk', 'permission__codename']\n generic_fields = ['object_pk', 'permission__codename']\n if user_model.objects.is_generic():\n user_fields = generic_fields\n else:\n user_fields = direct_fields\n\n if use_groups:\n group_model = get_group_obj_perms_model(queryset.model)\n group_filters = {\n 'permission__content_type': ctype,\n 
'group__{}'.format(get_user_model().groups.field.related_query_name()): user,\n }\n if codenames:\n group_filters.update({\n 'permission__codename__in': codenames,\n })\n groups_obj_perms_queryset = group_model.objects.filter(**group_filters)\n if group_model.objects.is_generic():\n group_fields = generic_fields\n else:\n group_fields = direct_fields\n if not any_perm and codenames and not has_global_perms:\n user_obj_perms = user_obj_perms_queryset.values_list(*user_fields)\n groups_obj_perms = groups_obj_perms_queryset.values_list(*group_fields)\n data = list(user_obj_perms) + list(groups_obj_perms)\n # sorting/grouping by pk (first in result tuple)\n data = sorted(data, key=lambda t: t[0])\n pk_list = []\n for pk, group in groupby(data, lambda t: t[0]):\n obj_codenames = set((e[1] for e in group))\n if codenames.issubset(obj_codenames):\n pk_list.append(pk)\n objects = queryset.filter(**{perms_filter: pk_list})\n return objects\n\n if not any_perm and len(codenames) > 1:\n counts = user_obj_perms_queryset.values(\n user_fields[0]).annotate(object_pk_count=Count(user_fields[0]))\n user_obj_perms_queryset = counts.filter(\n object_pk_count__gte=len(codenames))\n\n values = user_obj_perms_queryset.values_list(user_fields[0], flat=True)\n if user_model.objects.is_generic():\n values = list(values)\n query = Q(**{perms_filter: values})\n if use_groups:\n values = groups_obj_perms_queryset.values_list(group_fields[0], flat=True)\n if group_model.objects.is_generic():\n values = list(values)\n query |= Q(**{perms_filter: values})\n\n return queryset.filter(query)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_users_with_permission(obj, permission):\n user_model = get_user_model()\n return user_model.objects.filter(\n userobjectpermission__object_pk=obj.pk,\n userobjectpermission__permission__codename=permission,\n ).distinct()", "response": "Return users with specific permission on object."} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 script to\nrun purge for the object with location_id specified in event argument.", "response": "def purge_run(self, event):\n \"\"\"Run purge for the object with ``location_id`` specified in ``event`` argument.\"\"\"\n location_id = event['location_id']\n verbosity = event['verbosity']\n\n try:\n logger.info(__(\"Running purge for location id {}.\", location_id))\n location_purge(location_id=location_id, delete=True, verbosity=verbosity)\n except Exception: # pylint: disable=broad-except\n logger.exception(\"Error while purging location.\", extra={'location_id': location_id})"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the core count and memory usage limits for this process.", "response": "def get_resource_limits(self):\n \"\"\"Get the core count and memory usage limits for this process.\n\n :return: A dictionary with the resource limits, containing the\n following keys:\n\n - ``memory``: Memory usage limit, in MB. Defaults to 4096 if\n not otherwise specified in the resource requirements.\n - ``cores``: Core count limit. 
Defaults to 1.\n\n :rtype: dict\n \"\"\"\n # Get limit defaults and overrides.\n limit_defaults = getattr(settings, 'FLOW_PROCESS_RESOURCE_DEFAULTS', {})\n limit_overrides = getattr(settings, 'FLOW_PROCESS_RESOURCE_OVERRIDES', {})\n\n limits = {}\n\n resources = self.requirements.get('resources', {}) # pylint: disable=no-member\n\n limits['cores'] = int(resources.get('cores', 1))\n\n max_cores = getattr(settings, 'FLOW_PROCESS_MAX_CORES', None)\n if max_cores:\n limits['cores'] = min(limits['cores'], max_cores)\n\n memory = limit_overrides.get('memory', {}).get(self.slug, None)\n if memory is None:\n memory = int(resources.get(\n 'memory',\n # If no memory resource is configured, check settings.\n limit_defaults.get('memory', 4096)\n ))\n limits['memory'] = memory\n\n return limits"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validate_query_params(self):\n allowed_params = set(self.get_filters().keys())\n allowed_params.update(self.get_always_allowed_arguments())\n\n unallowed = set(self.request.query_params.keys()) - allowed_params\n\n if unallowed:\n msg = 'Unsupported parameter(s): {}. Please use a combination of: {}.'.format(\n ', '.join(unallowed),\n ', '.join(allowed_params),\n )\n self.form.add_error(field=None, error=ParseError(msg))", "response": "Ensure no unsupported query params were used."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nclasses method. Set the default value of the option key to value for all future instances of the class.", "response": "def set_default_option(cls, key, value):\n \"\"\"Class method. 
Set the default value of the option `key` (string)\n to `value` for all future instances of the class.\n\n Note that this does not affect existing instances or the instance\n called from.\"\"\"\n cls._default_options.update(cls._option_schema({key: value}))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_option(self, key, value):\n self._options.update(self._option_schema({key: value}))\n self.clear_cache()", "response": "Set the option key to value."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the current value of the option key ( string. Instance method only refers to current instance.", "response": "def get_option(self, key):\n \"\"\"Return the current value of the option `key` (string).\n\n Instance method, only refers to current instance.\"\"\"\n return self._options.get(key, self._default_options[key])"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a Wilson instance initialized by a wcxf. 
WC instance", "response": "def from_wc(cls, wc):\n \"\"\"Return a `Wilson` instance initialized by a `wcxf.WC` instance\"\"\"\n return cls(wcdict=wc.dict, scale=wc.scale, eft=wc.eft, basis=wc.basis)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a Wilson instance initialized by a WCxf file - like object", "response": "def load_wc(cls, stream):\n \"\"\"Return a `Wilson` instance initialized by a WCxf file-like object\"\"\"\n wc = wcxf.WC.load(stream)\n return cls.from_wc(wc)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef match_run(self, scale, eft, basis, sectors='all'):\n cached = self._get_from_cache(sector=sectors, scale=scale, eft=eft, basis=basis)\n if cached is not None:\n return cached\n if sectors == 'all':\n # the default value for sectors is \"None\" for translators\n translate_sectors = None\n else:\n translate_sectors = sectors\n scale_ew = self.get_option('smeft_matchingscale')\n mb = self.get_option('mb_matchingscale')\n mc = self.get_option('mc_matchingscale')\n if self.wc.basis == basis and self.wc.eft == eft and scale == self.wc.scale:\n return self.wc # nothing to do\n if self.wc.eft == 'SMEFT':\n smeft_accuracy = self.get_option('smeft_accuracy')\n if eft == 'SMEFT':\n smeft = SMEFT(self.wc.translate('Warsaw', sectors=translate_sectors, parameters=self.parameters))\n # if input and output EFT ist SMEFT, just run.\n wc_out = smeft.run(scale, accuracy=smeft_accuracy)\n self._set_cache('all', scale, 'SMEFT', wc_out.basis, wc_out)\n return wc_out\n else:\n # if SMEFT -> WET-x: match to WET at the EW scale\n wc_ew = self._get_from_cache(sector='all', scale=scale_ew, eft='WET', basis='JMS')\n if wc_ew is None:\n if self.wc.scale == scale_ew:\n wc_ew = self.wc.match('WET', 'JMS', parameters=self.parameters) # no need to run\n else:\n smeft = SMEFT(self.wc.translate('Warsaw', parameters=self.parameters))\n wc_ew = smeft.run(scale_ew, 
accuracy=smeft_accuracy).match('WET', 'JMS', parameters=self.parameters)\n self._set_cache('all', scale_ew, wc_ew.eft, wc_ew.basis, wc_ew)\n wet = WETrunner(wc_ew, **self._wetrun_opt())\n elif self.wc.eft in ['WET', 'WET-4', 'WET-3']:\n wet = WETrunner(self.wc.translate('JMS', parameters=self.parameters, sectors=translate_sectors), **self._wetrun_opt())\n else:\n raise ValueError(\"Input EFT {} unknown or not supported\".format(self.wc.eft))\n if eft == wet.eft: # just run\n wc_out = wet.run(scale, sectors=sectors).translate(basis, sectors=translate_sectors, parameters=self.parameters)\n self._set_cache(sectors, scale, eft, basis, wc_out)\n return wc_out\n elif eft == 'WET-4' and wet.eft == 'WET': # match at mb\n wc_mb = wet.run(mb, sectors=sectors).match('WET-4', 'JMS', parameters=self.parameters)\n wet4 = WETrunner(wc_mb, **self._wetrun_opt())\n wc_out = wet4.run(scale, sectors=sectors).translate(basis, sectors=translate_sectors, parameters=self.parameters)\n self._set_cache(sectors, scale, 'WET-4', basis, wc_out)\n return wc_out\n elif eft == 'WET-3' and wet.eft == 'WET-4': # match at mc\n wc_mc = wet.run(mc, sectors=sectors).match('WET-3', 'JMS', parameters=self.parameters)\n wet3 = WETrunner(wc_mc, **self._wetrun_opt())\n wc_out = wet3.run(scale, sectors=sectors).translate(basis, sectors=translate_sectors, parameters=self.parameters)\n self._set_cache(sectors, scale, 'WET-3', basis, wc_out)\n return wc_out\n elif eft == 'WET-3' and wet.eft == 'WET': # match at mb and mc\n wc_mb = wet.run(mb, sectors=sectors).match('WET-4', 'JMS', parameters=self.parameters)\n wet4 = WETrunner(wc_mb, **self._wetrun_opt())\n wc_mc = wet4.run(mc, sectors=sectors).match('WET-3', 'JMS', parameters=self.parameters)\n wet3 = WETrunner(wc_mc, **self._wetrun_opt())\n wc_out = wet3.run(scale, sectors=sectors).translate(basis, sectors=translate_sectors, parameters=self.parameters)\n self._set_cache(sectors, scale, 'WET-3', basis, wc_out)\n return wc_out\n else:\n raise 
ValueError(\"Running from {} to {} not implemented\".format(wet.eft, eft))", "response": "Run and match the Wilson coefficients to a different scale, EFT, and basis."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_from_cache(self, sector, scale, eft, basis):\n try:\n return self._cache[eft][scale][basis][sector]\n except KeyError:\n return None", "response": "Try to load a set of Wilson coefficients from the cache, else return None."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef plotdata(self, key, part='re', scale='log', steps=50):\n if scale == 'log':\n x = np.logspace(log(self.scale_min),\n log(self.scale_max),\n steps,\n base=e)\n elif scale == 'linear':\n x = np.linspace(self.scale_min,\n self.scale_max,\n steps)\n y = self.fun(x)\n y = np.array([d[key] for d in y])\n if part == 're':\n return x, y.real\n elif part == 'im':\n return x, y.imag", "response": "Return a tuple of arrays x, y that can be fed to plt. 
plot"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef plot(self, key, part='re', scale='log', steps=50, legend=True, plotargs={}):\n try:\n import matplotlib.pyplot as plt\n except ImportError:\n raise ImportError(\"Please install matplotlib if you want to use the plot method\")\n pdat = self.plotdata(key, part=part, scale=scale, steps=steps)\n plt.plot(*pdat, label=key, **plotargs)\n if scale == 'log':\n plt.xscale('log')\n if legend:\n plt.legend()", "response": "Plot the RG evolution of parameter key."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\nasync def run_consumer(timeout=None, dry_run=False):\n channel = state.MANAGER_CONTROL_CHANNEL\n scope = {\n 'type': 'control_event',\n 'channel': channel,\n }\n\n app = ApplicationCommunicator(ManagerConsumer, scope)\n\n channel_layer = get_channel_layer()\n\n async def _consume_loop():\n \"\"\"Run a loop to consume messages off the channels layer.\"\"\"\n while True:\n message = await channel_layer.receive(channel)\n if dry_run:\n continue\n if message.get('type', {}) == '_resolwe_manager_quit':\n break\n message.update(scope)\n await app.send_input(message)\n\n if timeout is None:\n await _consume_loop()\n try:\n # A further grace period to catch late messages.\n async with async_timeout.timeout(timeout or 1):\n await _consume_loop()\n except asyncio.TimeoutError:\n pass\n\n await app.wait()", "response": "Run the consumer until it finishes processing messages."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_class_path(self, klass_or_instance):\n if inspect.isclass(klass_or_instance):\n klass = '{}.{}'.format(klass_or_instance.__module__, klass_or_instance.__name__)\n elif not isinstance(klass_or_instance, str):\n klass = klass_or_instance.__class__\n klass = '{}.{}'.format(klass.__module__, klass.__name__)\n else:\n 
klass = klass_or_instance\n\n return klass", "response": "Return the class path for a given class."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nregistering an extension for a class.", "response": "def add_extension(self, klass, extension):\n \"\"\"Register an extension for a class.\n\n :param klass: Class to register an extension for\n :param extension: Extension (arbitrary type)\n \"\"\"\n klass = self._get_class_path(klass)\n\n # TODO: Take order into account.\n self._extensions.setdefault(klass, []).append(extension)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_extensions(self, klass):\n self.discover_extensions()\n\n return self._extensions.get(self._get_class_path(klass), [])", "response": "Return all registered extensions of a given class."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the running parameters for the given scale.", "response": "def _get_running_parameters(self, scale, f, loop=3):\n \"\"\"Get the running parameters (e.g. 
quark masses and the strong\n coupling at a given scale.\"\"\"\n p = {}\n p['alpha_s'] = qcd.alpha_s(scale, self.f, self.parameters['alpha_s'], loop=loop)\n p['m_b'] = qcd.m_b(self.parameters['m_b'], scale, self.f, self.parameters['alpha_s'], loop=loop)\n p['m_c'] = qcd.m_c(self.parameters['m_c'], scale, self.f, self.parameters['alpha_s'], loop=loop)\n p['m_s'] = qcd.m_s(self.parameters['m_s'], scale, self.f, self.parameters['alpha_s'], loop=loop)\n p['m_u'] = qcd.m_s(self.parameters['m_u'], scale, self.f, self.parameters['alpha_s'], loop=loop)\n p['m_d'] = qcd.m_s(self.parameters['m_d'], scale, self.f, self.parameters['alpha_s'], loop=loop)\n # running ignored for alpha_e and lepton mass\n p['alpha_e'] = self.parameters['alpha_e']\n p['m_e'] = self.parameters['m_e']\n p['m_mu'] = self.parameters['m_mu']\n p['m_tau'] = self.parameters['m_tau']\n return p"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nevolves the Wilson coefficients to the scale scale_out.", "response": "def run(self, scale_out, sectors='all'):\n \"\"\"Evolve the Wilson coefficients to the scale `scale_out`.\n\n Parameters:\n\n - scale_out: output scale\n - sectors: optional. If provided, must be a tuple of strings\n corresponding to WCxf sector names. 
Only Wilson coefficients\n belonging to these sectors will be present in the output.\n\n Returns an instance of `wcxf.WC`.\n \"\"\"\n C_out = self._run_dict(scale_out, sectors=sectors)\n all_wcs = set(wcxf.Basis[self.eft, 'JMS'].all_wcs) # to speed up lookup\n C_out = {k: v for k, v in C_out.items()\n if v != 0 and k in all_wcs}\n return wcxf.WC(eft=self.eft, basis='JMS',\n scale=scale_out,\n values=wcxf.WC.dict2values(C_out))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the total number of bytes that will be returned by the iterator.", "response": "def size(self):\n \"\"\"Returns:\n int : The total number of bytes that will be returned by the iterator.\n \"\"\"\n if hasattr(self._stream, 'len'):\n return len(self._stream)\n elif hasattr(self._stream, 'fileno'):\n return os.fstat(self._stream.fileno()).st_size\n else:\n cur_pos = self._stream.tell()\n size = self._stream.seek(0, os.SEEK_END)\n self._stream.seek(cur_pos)\n return size"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_cn_cert():\n try:\n cert_obj = django.core.cache.cache.cn_cert_obj\n d1_common.cert.x509.log_cert_info(\n logging.debug, 'Using cached CN cert for JWT validation', cert_obj\n )\n return cert_obj\n except AttributeError:\n cn_cert_obj = _download_and_decode_cn_cert()\n django.core.cache.cache.cn_cert_obj = cn_cert_obj\n return cn_cert_obj", "response": "Get the public TLS X.509 certificate from the root CN of the DataONE\n environment."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef key(self, key=\"\"):\n params = join_params(self.parameters, {\"key\": key})\n\n return self.__class__(**params)", "response": "Takes a user s API key string which applies content settings."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef query(self, *q):\n params = join_params(self.parameters, 
{\"q\": q})\n\n return self.__class__(**params)", "response": "Returns a new object with the given q"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a new object with the sort parameters set to sf.", "response": "def sort_by(self, sf):\n \"\"\"\n Determines how to sort search results. Available sorting methods are\n sort.SCORE, sort.COMMENTS, sort.HEIGHT, sort.RELEVANCE, sort.CREATED_AT,\n and sort.RANDOM; default is sort.CREATED_AT.\n \"\"\"\n params = join_params(self.parameters, {\"sf\": sf})\n\n return self.__class__(**params)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef limit(self, limit):\n params = join_params(self.parameters, {\"limit\": limit})\n\n return self.__class__(**params)", "response": "Returns a new object with the specified limit."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef filter(self, filter_id=\"\"):\n params = join_params(self.parameters, {\"filter_id\": filter_id})\n\n return self.__class__(**params)", "response": "Returns a new object with the current filter ID."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef faves(self, option):\n params = join_params(self.parameters, {\"faves\": option})\n\n return self.__class__(**params)", "response": "Returns a new object with the specified option applied to all users."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef uploads(self, option):\n params = join_params(self.parameters, {\"uploads\": option})\n\n return self.__class__(**params)", "response": "Sets whether to filter by a user s uploads list. Options available are user. ONLY user. NOT user. ONLY user. NOT user. ONLY user. NOT user. ONLY user. NOT user. ONLY user. NOT user. ONLY user. NOT user. ONLY user. NOT user. ONLY user. NOT user. ONLY user. NOT user. ONLY user. 
NOT user. None is used."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef define_contributor(self, request):\n request.data['contributor'] = self.resolve_user(request.user).pk", "response": "Define contributor by adding it to request. data."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking if given url slug exists. Return True if slug exists and False otherwise.", "response": "def slug_exists(self, request):\n \"\"\"Check if given url slug exists.\n\n Check if slug given in query parameter ``name`` exists. Return\n ``True`` if slug already exists and ``False`` otherwise.\n\n \"\"\"\n if not request.user.is_authenticated:\n return Response(status=status.HTTP_401_UNAUTHORIZED)\n\n if 'name' not in request.query_params:\n return Response({'error': 'Query parameter `name` must be given.'},\n status=status.HTTP_400_BAD_REQUEST)\n\n queryset = self.get_queryset()\n slug_name = request.query_params['name']\n return Response(queryset.filter(slug__iexact=slug_name).exists())"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_ids(self, request_data, parameter_name='ids'):\n if parameter_name not in request_data:\n raise ParseError(\"`{}` parameter is required\".format(parameter_name))\n\n ids = request_data.get(parameter_name)\n if not isinstance(ids, list):\n raise ParseError(\"`{}` parameter not a list\".format(parameter_name))\n\n if not ids:\n raise ParseError(\"`{}` parameter is empty\".format(parameter_name))\n\n if any(map(lambda id: not isinstance(id, int), ids)):\n raise ParseError(\"`{}` parameter contains non-integers\".format(parameter_name))\n\n return ids", "response": "Extract a list of integers from request data."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_id(self, request_data, parameter_name='id'):\n if parameter_name not in 
request_data:\n raise ParseError(\"`{}` parameter is required\".format(parameter_name))\n\n id_parameter = request_data.get(parameter_name, None)\n if not isinstance(id_parameter, int):\n raise ParseError(\"`{}` parameter not an integer\".format(parameter_name))\n\n return id_parameter", "response": "Extract an integer from request data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef deserialize_from_headers(headers):\n return create_exception_by_name(\n _get_header(headers, 'DataONE-Exception-Name'),\n _get_header(headers, 'DataONE-Exception-DetailCode'),\n _get_header(headers, 'DataONE-Exception-Description'),\n _get_header(headers, 'DataONE-Exception-TraceInformation'),\n _get_header(headers, 'DataONE-Exception-Identifier'),\n _get_header(headers, 'DataONE-Exception-NodeId'),\n )", "response": "Deserialize a DataONE Exception from a dictionary of HTTP headers."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_exception_by_name(\n name,\n detailCode='0',\n description='',\n traceInformation=None,\n identifier=None,\n nodeId=None,\n):\n \"\"\"Create a DataONEException based object by name.\n\n Args:\n name: str\n The type name of a DataONE Exception. E.g. NotFound.\n\n If an unknown type name is used, it is automatically set to ServiceFailure. 
As\n the XML Schema for DataONE Exceptions does not restrict the type names, this\n may occur when deserializing an exception not defined by DataONE.\n\n detailCode: int\n Optional index into a table of predefined error conditions.\n\n See Also:\n For remaining args, see: ``DataONEException()``\n\n \"\"\"\n try:\n dataone_exception = globals()[name]\n except LookupError:\n dataone_exception = ServiceFailure\n return dataone_exception(\n detailCode, description, traceInformation, identifier, nodeId\n )", "response": "Create a DataONE Exception based object by name."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a DataONE Exception object by errorCode.", "response": "def create_exception_by_error_code(\n errorCode,\n detailCode='0',\n description='',\n traceInformation=None,\n identifier=None,\n nodeId=None,\n):\n \"\"\"Create a DataONE Exception object by errorCode.\n\n See Also: For args, see: ``DataONEException()``\n\n \"\"\"\n try:\n dataone_exception = ERROR_CODE_TO_EXCEPTION_DICT[errorCode]\n except LookupError:\n dataone_exception = ServiceFailure\n return dataone_exception(\n detailCode, description, traceInformation, identifier, nodeId\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nformat a string for inclusion in the exception s string representation.", "response": "def _fmt(self, tag, msg):\n \"\"\"Format a string for inclusion in the exception's string representation.\n\n If msg is None, format to empty string. 
If msg has a single line, format to:\n tag: msg If msg has multiple lines, format to: tag: line 1 line 2 Msg is\n truncated to 1024 chars.\n\n \"\"\"\n msg = msg or ''\n msg = str(msg)\n msg = msg.strip()\n if not msg:\n return\n if len(msg) > 2048:\n msg = msg[:1024] + '...'\n if msg.count('\\n') <= 1:\n return '{}: {}\\n'.format(tag, msg.strip())\n else:\n return '{}:\\n {}\\n'.format(tag, msg.replace('\\n', '\\n ').strip())"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nserialize to a format more suitable for displaying to end users.", "response": "def friendly_format(self):\n \"\"\"Serialize to a format more suitable for displaying to end users.\"\"\"\n if self.description is not None:\n msg = self.description\n else:\n msg = 'errorCode: {} / detailCode: {}'.format(\n self.errorCode, self.detailCode\n )\n return self._fmt(self.name, msg)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef serialize_to_transport(self, encoding='utf-8', xslt_url=None):\n assert encoding in ('utf-8', 'UTF-8')\n dataone_exception_pyxb = self.get_pyxb()\n return d1_common.xml.serialize_for_transport(\n dataone_exception_pyxb, xslt_url=xslt_url\n )", "response": "Serialize to XML bytes with prolog."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef serialize_to_display(self, xslt_url=None):\n return d1_common.xml.serialize_to_xml_str(\n self.get_pyxb(), pretty=True, xslt_url=xslt_url\n )", "response": "Serialize to a pretty printed Unicode str suitable for display."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef serialize_to_headers(self):\n return {\n 'DataONE-Exception-Name': self.__class__.__name__,\n 'DataONE-Exception-ErrorCode': self._format_header(self.errorCode),\n 'DataONE-Exception-DetailCode': self._format_header(self.detailCode),\n 
'DataONE-Exception-Description': self._format_header(self.description),\n 'DataONE-Exception-TraceInformation': self._format_header(\n self.traceInformation\n ),\n 'DataONE-Exception-Identifier': self._format_header(self.identifier),\n 'DataONE-Exception-NodeID': self._format_header(self.nodeId),\n }", "response": "Serialize to a dict of HTTP headers."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates a DataONE Exception PyXB object.", "response": "def get_pyxb(self):\n \"\"\"Generate a DataONE Exception PyXB object.\n\n The PyXB object supports directly reading and writing the individual values that\n may be included in a DataONE Exception.\n\n \"\"\"\n dataone_exception_pyxb = dataoneErrors.error()\n dataone_exception_pyxb.name = self.__class__.__name__\n dataone_exception_pyxb.errorCode = self.errorCode\n dataone_exception_pyxb.detailCode = self.detailCode\n if self.description is not None:\n dataone_exception_pyxb.description = self.description\n dataone_exception_pyxb.traceInformation = self.traceInformation\n if self.identifier is not None:\n dataone_exception_pyxb.identifier = self.identifier\n if self.nodeId is not None:\n dataone_exception_pyxb.nodeId = self.nodeId\n return dataone_exception_pyxb"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef recreate_parent_dependencies(apps, schema_editor):\n Data = apps.get_model('flow', 'Data')\n DataDependency = apps.get_model('flow', 'DataDependency')\n\n def process_dependency(data, parent):\n if not Data.objects.filter(pk=parent).exists():\n DataDependency.objects.create(\n child=data, parent=None, kind='io'\n )\n\n for data in Data.objects.all():\n for field_schema, fields in iterate_fields(data.input, data.process.input_schema):\n name = field_schema['name']\n value = fields[name]\n\n if field_schema.get('type', '').startswith('data:'):\n process_dependency(data, value)\n\n elif 
field_schema.get('type', '').startswith('list:data:'):\n for parent in value:\n process_dependency(data, parent)", "response": "Recreate empty dependency relation if parent has been deleted."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a filter to the query that only has read or better access for one or more active subjects.", "response": "def add_access_policy_filter(request, query, column_name):\n \"\"\"Filter records that do not have ``read`` or better access for one or more of the\n active subjects.\n\n Since ``read`` is the lowest access level that a subject can have, this method only\n has to filter on the presence of the subject.\n\n \"\"\"\n q = d1_gmn.app.models.Subject.objects.filter(\n subject__in=request.all_subjects_set\n ).values('permission__sciobj')\n filter_arg = '{}__in'.format(column_name)\n return query.filter(**{filter_arg: q})"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_redact_annotation(request, query):\n return query.annotate(\n redact=django.db.models.Exists(\n d1_gmn.app.models.Permission.objects.filter(\n sciobj=django.db.models.OuterRef('sciobj'),\n subject__subject__in=request.all_subjects_set,\n level__gte=d1_gmn.app.auth.WRITE_LEVEL,\n ),\n negated=True,\n )\n )", "response": "Add redact annotation to LogEntry records that require unredacted IP and subject fields to be returned to the client."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds total_size field to all file - type outputs.", "response": "def calculate_total_size(apps, schema_editor):\n \"\"\"Add ``total_size`` field to all file/dir-type outputs.\"\"\"\n Data = apps.get_model('flow', 'Data')\n for data in Data.objects.all():\n hydrate_size(data, force=True)\n data.save()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nremoves total_size field from all file - type 
outputs.", "response": "def remove_total_size(apps, schema_editor):\n \"\"\"Remove ``total_size`` field from all file/dir-type outputs.\"\"\"\n Data = apps.get_model('flow', 'Data')\n for data in Data.objects.all():\n for field_schema, fields in iterate_fields(data.output, data.process.output_schema):\n name = field_schema['name']\n value = fields[name]\n if 'type' in field_schema:\n if field_schema['type'].startswith('basic:file:'):\n del value['total_size']\n elif field_schema['type'].startswith('list:basic:file:'):\n for obj in value:\n del obj['total_size']\n elif field_schema['type'].startswith('basic:dir:'):\n del value['total_size']\n elif field_schema['type'].startswith('list:basic:dir:'):\n for obj in value:\n del obj['total_size']\n data.save()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nassert that the given DID can be used as a PID for creating a new object with ivgs.", "response": "def is_valid_pid_for_create(did):\n \"\"\"Assert that ``did`` can be used as a PID for creating a new object with\n MNStorage.create() or MNStorage.update().\"\"\"\n if not d1_gmn.app.did.is_valid_pid_for_create(did):\n raise d1_common.types.exceptions.IdentifierNotUnique(\n 0,\n 'Identifier is already in use as {}. did=\"{}\"'.format(\n d1_gmn.app.did.classify_identifier(did), did\n ),\n identifier=did,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef is_valid_pid_to_be_updated(did):\n if not d1_gmn.app.did.is_valid_pid_to_be_updated(did):\n raise d1_common.types.exceptions.InvalidRequest(\n 0,\n 'Object cannot be updated because the identifier for the object to be '\n 'updated is {}. 
did=\"{}\"'.format(\n d1_gmn.app.did.classify_identifier(did), did\n ),\n identifier=did,\n )", "response": "Assert that the PID refers to an object that can be updated."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nvalidating that a MMP POST contains all required parts.", "response": "def post_has_mime_parts(request, parts):\n \"\"\"Validate that a MMP POST contains all required sections.\n\n :param request: Django Request\n :param parts: [(part_type, part_name), ...]\n :return: None or raises exception.\n\n Where information is stored in the request:\n part_type header: request.META['HTTP_']\n part_type file: request.FILES['']\n part_type field: request.POST['']\n\n \"\"\"\n missing = []\n\n for part_type, part_name in parts:\n if part_type == 'header':\n if 'HTTP_' + part_name.upper() not in request.META:\n missing.append('{}: {}'.format(part_type, part_name))\n elif part_type == 'file':\n if part_name not in list(request.FILES.keys()):\n missing.append('{}: {}'.format(part_type, part_name))\n elif part_type == 'field':\n if part_name not in list(request.POST.keys()):\n missing.append('{}: {}'.format(part_type, part_name))\n else:\n raise d1_common.types.exceptions.ServiceFailure(\n 0, 'Invalid part_type. part_type=\"{}\"'.format(part_type)\n )\n\n if len(missing) > 0:\n raise d1_common.types.exceptions.InvalidRequest(\n 0,\n 'Missing part(s) in MIME Multipart document. 
missing=\"{}\"'.format(\n ', '.join(missing)\n ),\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _decade_ranges_in_date_range(self, begin_date, end_date):\n begin_dated = begin_date.year // 10\n end_dated = end_date.year // 10\n decades = []\n for d in range(begin_dated, end_dated + 1):\n decades.append('{}-{}'.format(d * 10, d * 10 + 9))\n return decades", "response": "Return a list of decades covered by the date range."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a list of years in one decade which is covered by date range.", "response": "def _years_in_date_range_within_decade(self, decade, begin_date, end_date):\n \"\"\"Return a list of years in one decade which is covered by date range.\"\"\"\n begin_year = begin_date.year\n end_year = end_date.year\n if begin_year < decade:\n begin_year = decade\n if end_year > decade + 9:\n end_year = decade + 9\n return list(range(begin_year, end_year + 1))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsolves the SMEFT RGEs in the leading log approximation. 
Input C_in and output C_out are dictionaries of arrays.", "response": "def smeft_evolve_leadinglog(C_in, scale_in, scale_out, newphys=True):\n \"\"\"Solve the SMEFT RGEs in the leading log approximation.\n\n Input C_in and output C_out are dictionaries of arrays.\"\"\"\n C_out = deepcopy(C_in)\n b = beta.beta(C_out, newphys=newphys)\n for k, C in C_out.items():\n C_out[k] = C + b[k] / (16 * pi**2) * log(scale_out / scale_in)\n return C_out"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _smeft_evolve(C_in, scale_in, scale_out, newphys=True, **kwargs):\n def fun(t0, y):\n return beta.beta_array(C=C_array2dict(y.view(complex)),\n newphys=newphys).view(float) / (16 * pi**2)\n y0 = C_dict2array(C_in).view(float)\n sol = solve_ivp(fun=fun,\n t_span=(log(scale_in), log(scale_out)),\n y0=y0, **kwargs)\n return sol", "response": "Auxiliary function used in smeft_evolve and smeft_evolve_continuous"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsolves the SMEFT RGEs by numeric integration. 
Input C_in and output C_out are dictionaries of arrays.", "response": "def smeft_evolve(C_in, scale_in, scale_out, newphys=True, **kwargs):\n \"\"\"Solve the SMEFT RGEs by numeric integration.\n\n Input C_in and output C_out are dictionaries of arrays.\"\"\"\n sol = _smeft_evolve(C_in, scale_in, scale_out, newphys=newphys, **kwargs)\n return C_array2dict(sol.y[:, -1].view(complex))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsolves the SMEFT RGEs by numeric integration returning a function that allows to compute an interpolated solution at arbitrary intermediate scales.", "response": "def smeft_evolve_continuous(C_in, scale_in, scale_out, newphys=True, **kwargs):\n \"\"\"Solve the SMEFT RGEs by numeric integration, returning a function that\n allows to compute an interpolated solution at arbitrary intermediate\n scales.\"\"\"\n sol = _smeft_evolve(C_in, scale_in, scale_out, newphys=newphys,\n dense_output=True, **kwargs)\n @np.vectorize\n def _rge_solution(scale):\n t = log(scale)\n y = sol.sol(t).view(complex)\n yd = C_array2dict(y)\n yw = arrays2wcxf_nonred(yd)\n return yw\n def rge_solution(scale):\n # this is to return a scalar if the input is scalar\n return _rge_solution(scale)[()]\n return rge_solution"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nshow an invalid index error message.", "response": "def invalid_index(self, name):\n \"\"\"Show an invalid index error message.\"\"\"\n self.stderr.write(\"Unknown index: {}\".format(name))\n self.stderr.write(\"Supported indices are:\")\n for index in index_builder.indexes:\n self.stderr.write(\" * {}\".format(index.__class__.__name__))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef filter_indices(self, options, verbosity, *args, **kwargs):\n index_name_map = {\n index.__class__.__name__: index\n for index in index_builder.indexes\n }\n\n # Process includes.\n if 
options['index']:\n indices = set(options['index'])\n else:\n indices = set(index_name_map.keys())\n\n # Process excludes.\n for index_name in options['exclude']:\n if index_name not in index_name_map:\n self.invalid_index(index_name)\n return\n\n indices.discard(index_name)\n\n # Execute action for each remaining index.\n for index_name in indices:\n try:\n index = index_name_map[index_name]\n except KeyError:\n self.invalid_index(index_name)\n return\n\n if verbosity > 0:\n self.stdout.write(\"Processing index '{}'...\".format(index_name))\n self.handle_index(index, *args, **kwargs)", "response": "Filter indices and execute an action for each index."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_contributor_sort_value(self, obj):\n user = obj.contributor\n\n if user.first_name or user.last_name:\n contributor = user.get_full_name()\n else:\n contributor = user.username\n\n return contributor.strip().lower()", "response": "Generate display name for contributor."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngenerating user filtering tokens.", "response": "def _get_user(self, user):\n \"\"\"Generate user filtering tokens.\"\"\"\n return ' '.join([user.username, user.first_name, user.last_name])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning param_str converted to an int. Raise InvalidRequest.", "response": "def _get_and_assert_slice_param(url_dict, param_name, default_int):\n \"\"\"Return ``param_str`` converted to an int.\n\n If str cannot be converted to int or int is not zero or positive, raise\n InvalidRequest.\n\n \"\"\"\n param_str = url_dict['query'].get(param_name, default_int)\n try:\n n = int(param_str)\n except ValueError:\n raise d1_common.types.exceptions.InvalidRequest(\n 0,\n 'Slice parameter is not a valid integer. 
{}=\"{}\"'.format(\n param_name, param_str\n ),\n )\n if n < 0:\n raise d1_common.types.exceptions.InvalidRequest(\n 0,\n 'Slice parameter cannot be a negative number. {}=\"{}\"'.format(\n param_name, param_str\n ),\n )\n return n"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _assert_valid_start(start_int, count_int, total_int):\n if total_int and start_int >= total_int:\n raise d1_common.types.exceptions.InvalidRequest(\n 0,\n 'Requested a non-existing slice. start={} count={} total={}'.format(\n start_int, count_int, total_int\n ),\n )", "response": "Assert that the requested start position is lower than the total number of records."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _adjust_count_if_required(start_int, count_int, total_int):\n if start_int + count_int > total_int:\n count_int = total_int - start_int\n count_int = min(count_int, django.conf.settings.MAX_SLICE_ITEMS)\n return count_int", "response": "Adjust requested object count down if needed."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _add_fallback_slice_filter(query, start_int, count_int, total_int):\n logging.debug(\n 'Adding fallback slice filter. 
start={} count={} total={} '.format(\n start_int, count_int, total_int\n )\n )\n if not count_int:\n return query.none()\n else:\n return query[start_int : start_int + count_int]", "response": "Add a fallback slice filter that slices the query directly, returning an empty queryset when the count is zero."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _cache_get_last_in_slice(url_dict, start_int, total_int, authn_subj_list):\n key_str = _gen_cache_key_for_slice(url_dict, start_int, total_int, authn_subj_list)\n # TODO: Django docs state that cache.get() should return None on unknown key.\n try:\n last_ts_tup = django.core.cache.cache.get(key_str)\n except KeyError:\n last_ts_tup = None\n logging.debug('Cache get. key=\"{}\" -> last_ts_tup={}'.format(key_str, last_ts_tup))\n return last_ts_tup", "response": "Return the last cached version of a given slice."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngenerate a key for the REST URL for a given slice.", "response": "def _gen_cache_key_for_slice(url_dict, start_int, total_int, authn_subj_list):\n \"\"\"Generate cache key for the REST URL the client is currently accessing or is\n expected to access in order to get the slice starting at the given ``start_int`` of\n a multi-slice result set.\n\n When used for finding the key to check in the current call, ``start_int`` is\n 0, or the start that was passed in the current call.\n\n When used for finding the key to set for the anticipated call, ``start_int`` is\n current ``start_int`` + ``count_int``, the number of objects the current call will\n return.\n\n The URL for the slice is the same as for the current slice, except that the\n `start` query parameter has been increased by the number of items returned in\n the current slice.\n\n Except for advancing the start value and potentially adjusting the desired\n slice size, it doesn't make sense for the client to change the REST URL during\n slicing, but such 
queries are supported. They will, however, trigger\n potentially expensive database queries to find the current slice position.\n\n To support adjustments in desired slice size during slicing, the count is not\n used when generating the key.\n\n The active subjects are used in the key in order to prevent potential security\n issues if authenticated subjects change during slicing.\n\n The url_dict is normalized by encoding it to a JSON string with sorted keys. A\n hash of the JSON is used for better distribution in a hash map and to avoid\n the 256 bytes limit on keys in some caches.\n\n \"\"\"\n # logging.debug('Gen key. result_record_count={}'.format(result_record_count))\n key_url_dict = copy.deepcopy(url_dict)\n key_url_dict['query'].pop('start', None)\n key_url_dict['query'].pop('count', None)\n key_json = d1_common.util.serialize_to_normalized_compact_json(\n {\n 'url_dict': key_url_dict,\n 'start': start_int,\n 'total': total_int,\n 'subject': authn_subj_list,\n }\n )\n logging.debug('key_json={}'.format(key_json))\n return hashlib.sha256(key_json.encode('utf-8')).hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconstruct a numpy array with Wilson coefficient values from a dictionary of label - value pairs corresponding to the non - redundant elements.", "response": "def smeft_toarray(wc_name, wc_dict):\n \"\"\"Construct a numpy array with Wilson coefficient values from a\n dictionary of label-value pairs corresponding to the non-redundant\n elements.\"\"\"\n shape = smeftutil.C_keys_shape[wc_name]\n C = np.zeros(shape, dtype=complex)\n for k, v in wc_dict.items():\n if k.split('_')[0] != wc_name:\n continue\n indices = k.split('_')[-1] # e.g. '1213'\n indices = tuple(int(s) - 1 for s in indices) # e.g. 
(0, 1, 0, 2)\n C[indices] = v\n C = smeftutil.symmetrize({wc_name: C})[wc_name]\n return C"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntranslate from the Warsaw basis to the Warsaw mass basis.", "response": "def warsaw_to_warsawmass(C, parameters=None, sectors=None):\n \"\"\"Translate from the Warsaw basis to the 'Warsaw mass' basis.\n\n Parameters used:\n - `Vus`, `Vub`, `Vcb`, `gamma`: elements of the unitary CKM matrix (defined\n as the mismatch between left-handed quark mass matrix diagonalization\n matrices).\n \"\"\"\n p = default_parameters.copy()\n if parameters is not None:\n # if parameters are passed in, overwrite the default values\n p.update(parameters)\n # start out with a 1:1 copy\n C_out = C.copy()\n # rotate left-handed up-type quark fields in uL-uR operator WCs\n C_rotate_u = ['uphi', 'uG', 'uW', 'uB']\n for name in C_rotate_u:\n _array = smeft_toarray(name, C)\n V = ckmutil.ckm.ckm_tree(p[\"Vus\"], p[\"Vub\"], p[\"Vcb\"], p[\"delta\"])\n UuL = V.conj().T\n _array = UuL.conj().T @ _array\n _dict = smeft_fromarray(name, _array)\n C_out.update(_dict)\n # diagonalize dimension-5 Weinberg operator\n _array = smeft_toarray('llphiphi', C)\n _array = np.diag(ckmutil.diag.msvd(_array)[1])\n _dict = smeft_fromarray('llphiphi', _array)\n C_out.update(_dict)\n return C_out"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntranslating from the Warsaw up basis to the Warsaw basis.", "response": "def warsaw_up_to_warsaw(C, parameters=None, sectors=None):\n \"\"\"Translate from the 'Warsaw up' basis to the Warsaw basis.\n\n Parameters used:\n - `Vus`, `Vub`, `Vcb`, `gamma`: elements of the unitary CKM matrix (defined\n as the mismatch between left-handed quark mass matrix diagonalization\n matrices).\n \"\"\"\n C_in = smeftutil.wcxf2arrays_symmetrized(C)\n p = default_parameters.copy()\n if parameters is not None:\n # if parameters are passed in, overwrite the default values\n p.update(parameters)\n 
Uu = Ud = Ul = Ue = np.eye(3)\n V = ckmutil.ckm.ckm_tree(p[\"Vus\"], p[\"Vub\"], p[\"Vcb\"], p[\"delta\"])\n Uq = V\n C_out = smeftutil.flavor_rotation(C_in, Uq, Uu, Ud, Ul, Ue)\n C_out = smeftutil.arrays2wcxf_nonred(C_out)\n warsaw = wcxf.Basis['SMEFT', 'Warsaw']\n all_wcs = set(warsaw.all_wcs) # to speed up lookup\n return {k: v for k, v in C_out.items() if k in all_wcs}"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef sysmeta_add_preferred(sysmeta_pyxb, node_urn):\n if not has_replication_policy(sysmeta_pyxb):\n sysmeta_set_default_rp(sysmeta_pyxb)\n rp_pyxb = sysmeta_pyxb.replicationPolicy\n _add_node(rp_pyxb, 'pref', node_urn)\n _remove_node(rp_pyxb, 'block', node_urn)", "response": "Add a preferred replication target and remove it from the list of blocked Member Nodes."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nnormalize a ReplicationPolicy PyXB type in place.", "response": "def normalize(rp_pyxb):\n \"\"\"Normalize a ReplicationPolicy PyXB type in place.\n\n The preferred and blocked lists are sorted alphabetically. 
As blocked nodes\n override preferred nodes, any node present in both lists is removed from the\n preferred list.\n\n Args:\n rp_pyxb : ReplicationPolicy PyXB object\n The object will be normalized in place.\n\n \"\"\"\n\n # noinspection PyMissingOrEmptyDocstring\n def sort(r, a):\n d1_common.xml.sort_value_list_pyxb(_get_attr_or_list(r, a))\n\n rp_pyxb.preferredMemberNode = set(_get_attr_or_list(rp_pyxb, 'pref')) - set(\n _get_attr_or_list(rp_pyxb, 'block')\n )\n sort(rp_pyxb, 'block')\n sort(rp_pyxb, 'pref')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef are_equivalent_xml(a_xml, b_xml):\n return are_equivalent_pyxb(\n d1_common.xml.deserialize(a_xml), d1_common.xml.deserialize(b_xml)\n )", "response": "Checks if two ReplicationPolicy XML docs are semantically equivalent."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts ReplicationPolicy PyXB object to a normalized dict.", "response": "def pyxb_to_dict(rp_pyxb):\n \"\"\"Convert ReplicationPolicy PyXB object to a normalized dict.\n\n Args:\n rp_pyxb: ReplicationPolicy to convert.\n\n Returns:\n dict : Replication Policy as normalized dict.\n\n Example::\n\n {\n 'allowed': True,\n 'num': 3,\n 'blockedMemberNode': {'urn:node:NODE1', 'urn:node:NODE2', 'urn:node:NODE3'},\n 'preferredMemberNode': {'urn:node:NODE4', 'urn:node:NODE5'},\n }\n\n \"\"\"\n return {\n 'allowed': bool(_get_attr_or_list(rp_pyxb, 'allowed')),\n 'num': _get_as_int(rp_pyxb),\n 'block': _get_as_set(rp_pyxb, 'block'),\n 'pref': _get_as_set(rp_pyxb, 'pref'),\n }"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert dict to ReplicationPolicy PyXB object.", "response": "def dict_to_pyxb(rp_dict):\n \"\"\"Convert dict to ReplicationPolicy PyXB object.\n\n Args:\n rp_dict: Native Python structure representing a Replication Policy.\n\n Example::\n\n {\n 'allowed': True,\n 'num': 3,\n 
'blockedMemberNode': {'urn:node:NODE1', 'urn:node:NODE2', 'urn:node:NODE3'},\n 'preferredMemberNode': {'urn:node:NODE4', 'urn:node:NODE5'},\n }\n\n Returns:\n ReplicationPolicy PyXB object.\n\n \"\"\"\n rp_pyxb = d1_common.types.dataoneTypes.replicationPolicy()\n rp_pyxb.replicationAllowed = rp_dict['allowed']\n rp_pyxb.numberReplicas = rp_dict['num']\n rp_pyxb.blockedMemberNode = rp_dict['block']\n rp_pyxb.preferredMemberNode = rp_dict['pref']\n normalize(rp_pyxb)\n return rp_pyxb"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nensures that RP allows replication.", "response": "def _ensure_allow_rp(rp_pyxb):\n \"\"\"Ensure that RP allows replication.\"\"\"\n if not rp_pyxb.replicationAllowed:\n rp_pyxb.replicationAllowed = True\n if not rp_pyxb.numberReplicas:\n rp_pyxb.numberReplicas = 3"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _display_interval(i):\n sigils = [\"d\", \"h\", \"m\", \"s\"]\n factors = [24 * 60 * 60, 60 * 60, 60, 1]\n remain = int(i)\n result = \"\"\n for fac, sig in zip(factors, sigils):\n if remain < fac:\n continue\n result += \"{}{}\".format(remain // fac, sig)\n remain = remain % fac\n return result", "response": "Convert a time interval into a human - readable string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update(self, num):\n num = float(num)\n self.count += 1\n self.low = min(self.low, num)\n self.high = max(self.high, num)\n\n # Welford's online mean and variance algorithm.\n delta = num - self.mean\n self.mean = self.mean + delta / self.count\n delta2 = num - self.mean\n self._rolling_variance = self._rolling_variance + delta * delta2\n\n if self.count > 1:\n self.deviation = math.sqrt(self._rolling_variance / (self.count - 1))\n else:\n self.deviation = 0.0", "response": "Update the internal statistics with the new number."} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_dict(self):\n return {\n 'high': self.high,\n 'low': self.low,\n 'mean': self.mean,\n 'count': self.count,\n 'deviation': self.deviation,\n }", "response": "Pack the stats computed into a dictionary."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a value at the specified time to the series.", "response": "def add(self, count, timestamp=None):\n \"\"\"Add a value at the specified time to the series.\n\n :param count: The number of work items ready at the specified\n time.\n :param timestamp: The timestamp to add. Defaults to None,\n meaning current time. It should be strictly greater (newer)\n than the last added timestamp.\n \"\"\"\n if timestamp is None:\n timestamp = time.time()\n if self.last_data >= timestamp:\n raise ValueError(\"Time {} >= {} in load average calculation\".format(self.last_data, timestamp))\n self.last_data = timestamp\n\n for meta in self.intervals.values():\n meta.push(count, timestamp)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\npack the load averages into a nicely - keyed dictionary.", "response": "def to_dict(self):\n \"\"\"Pack the load averages into a nicely-keyed dictionary.\"\"\"\n result = {}\n for meta in self.intervals.values():\n result[meta.display] = meta.value\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef listFormats(self, vendorSpecific=None):\n response = self.listFormatsResponse(vendorSpecific)\n return self._read_dataone_type_response(response, 'ObjectFormatList')", "response": "See Also : listFormatsResponse"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsees Also : getFormatResponse", "response": "def getFormat(self, formatId, vendorSpecific=None):\n \"\"\"See Also: getFormatResponse()\n\n Args:\n formatId:\n 
vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.getFormatResponse(formatId, vendorSpecific)\n return self._read_dataone_type_response(response, 'ObjectFormat')"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreserves the identifier of a specific object.", "response": "def reserveIdentifierResponse(self, pid, vendorSpecific=None):\n \"\"\"CNCore.getLogRecords(session[, fromDate][, toDate][, event][, start][,\n count]) \u2192 Log https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_APIs.html#CNCore.getLogRecords Implemented in\n d1_client.baseclient.py.\n\n CNCore.reserveIdentifier(session, pid) \u2192 Identifier\n https://releases.dataone.org/online/api-documentation-v2.0.1/apis/CN_APIs.html#CNCore.reserveIdentifier\n\n Args:\n pid:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n mmp_dict = {'pid': pid}\n return self.POST(['reserve', pid], fields=mmp_dict, headers=vendorSpecific)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsees Also : reserveIdentifierResponse", "response": "def reserveIdentifier(self, pid, vendorSpecific=None):\n \"\"\"See Also: reserveIdentifierResponse()\n\n Args:\n pid:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.reserveIdentifierResponse(pid, vendorSpecific)\n return self._read_dataone_type_response(response, 'Identifier', vendorSpecific)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef listChecksumAlgorithms(self, vendorSpecific=None):\n response = self.listChecksumAlgorithmsResponse(vendorSpecific)\n return self._read_dataone_type_response(response, 'ChecksumAlgorithmList')", "response": "See Also : listChecksumAlgorithmsResponse"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef setObsoletedByResponse(\n self, pid, obsoletedByPid, serialVersion, vendorSpecific=None\n 
):\n \"\"\"CNCore.setObsoletedBy(session, pid, obsoletedByPid, serialVersion) \u2192 boolean\n https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_APIs.html#CNCore.setObsoletedBy.\n\n Args:\n pid:\n obsoletedByPid:\n serialVersion:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n mmp_dict = {\n 'obsoletedByPid': obsoletedByPid,\n 'serialVersion': str(serialVersion),\n }\n return self.PUT(['obsoletedBy', pid], fields=mmp_dict, headers=vendorSpecific)", "response": "Marks an object on the DataONE Coordinating Node as obsoleted by another object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets the flag that the object is obsoleted by another object.", "response": "def setObsoletedBy(self, pid, obsoletedByPid, serialVersion, vendorSpecific=None):\n \"\"\"See Also: setObsoletedByResponse()\n\n Args:\n pid:\n obsoletedByPid:\n serialVersion:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.setObsoletedByResponse(\n pid, obsoletedByPid, serialVersion, vendorSpecific\n )\n return self._read_boolean_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef listNodes(self, vendorSpecific=None):\n response = self.listNodesResponse(vendorSpecific)\n return self._read_dataone_type_response(response, 'NodeList')", "response": "See Also : listNodesResponse"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a boolean version of the CN has reservation response.", "response": "def hasReservationResponse(self, pid, subject, vendorSpecific=None):\n \"\"\"CNCore.registerSystemMetadata(session, pid, sysmeta) \u2192 Identifier CN\n INTERNAL.\n\n CNCore.hasReservation(session, pid) \u2192 boolean\n https://releases.dataone.org/online/api-documentation-v2.0.1/apis/CN_APIs.html#CNCore.hasReservation\n\n Args:\n pid:\n subject:\n vendorSpecific:\n\n Returns:\n\n 
\"\"\"\n return self.GET(['reserve', pid, subject], headers=vendorSpecific)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsees Also : hasReservationResponse", "response": "def hasReservation(self, pid, subject, vendorSpecific=None):\n \"\"\"See Also: hasReservationResponse()\n\n Args:\n pid:\n subject:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.hasReservationResponse(pid, subject, vendorSpecific)\n return self._read_boolean_404_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsee Also : resolveResponse", "response": "def resolve(self, pid, vendorSpecific=None):\n \"\"\"See Also: resolveResponse()\n\n Args:\n pid:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.resolveResponse(pid, vendorSpecific)\n return self._read_dataone_type_response(\n response, 'ObjectLocationList', response_is_303_redirect=True\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsees Also : searchResponse", "response": "def search(self, queryType, query=None, vendorSpecific=None, **kwargs):\n \"\"\"See Also: searchResponse()\n\n Args:\n queryType:\n query:\n vendorSpecific:\n **kwargs:\n\n Returns:\n\n \"\"\"\n response = self.searchResponse(queryType, query, vendorSpecific, **kwargs)\n return self._read_dataone_type_response(response, 'ObjectList')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef queryResponse(self, queryEngine, query=None, vendorSpecific=None, **kwargs):\n return self.GET(\n ['query', queryEngine, query], headers=vendorSpecific, query=kwargs\n )", "response": "Query the set of available data for a given query."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsee Also : queryResponse", "response": "def query(self, queryEngine, query=None, vendorSpecific=None, **kwargs):\n \"\"\"See Also: queryResponse()\n\n Args:\n 
queryEngine:\n query:\n vendorSpecific:\n **kwargs:\n\n Returns:\n\n \"\"\"\n response = self.queryResponse(queryEngine, query, vendorSpecific, **kwargs)\n return self._read_stream_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the rights holder for a user.", "response": "def setRightsHolderResponse(self, pid, userId, serialVersion, vendorSpecific=None):\n \"\"\"CNAuthorization.setRightsHolder(session, pid, userId, serialVersion)\n\n \u2192 Identifier https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_APIs.html#CNAuthorization.setRightsHolder.\n\n Args:\n pid:\n userId:\n serialVersion:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n mmp_dict = {'userId': userId, 'serialVersion': str(serialVersion)}\n return self.PUT(['owner', pid], headers=vendorSpecific, fields=mmp_dict)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef setRightsHolder(self, pid, userId, serialVersion, vendorSpecific=None):\n response = self.setRightsHolderResponse(\n pid, userId, serialVersion, vendorSpecific\n )\n return self._read_boolean_response(response)", "response": "See Also : setRightsHolderResponse"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the access policy for a specific resource.", "response": "def setAccessPolicyResponse(\n self, pid, accessPolicy, serialVersion, vendorSpecific=None\n ):\n \"\"\"CNAuthorization.setAccessPolicy(session, pid, accessPolicy, serialVersion) \u2192\n boolean https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_APIs.html#CNAuthorization.setAccessPolicy.\n\n Args:\n pid:\n accessPolicy:\n serialVersion:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n mmp_dict = {\n 'serialVersion': str(serialVersion),\n 'accessPolicy': ('accessPolicy.xml', accessPolicy.toxml('utf-8')),\n }\n return self.PUT(['accessRules', pid], fields=mmp_dict, 
headers=vendorSpecific)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef setAccessPolicy(self, pid, accessPolicy, serialVersion, vendorSpecific=None):\n response = self.setAccessPolicyResponse(\n pid, accessPolicy, serialVersion, vendorSpecific\n )\n return self._read_boolean_response(response)", "response": "Sets the access policy of a specific object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef registerAccountResponse(self, person, vendorSpecific=None):\n mmp_dict = {'person': ('person.xml', person.toxml('utf-8'))}\n return self.POST('accounts', fields=mmp_dict, headers=vendorSpecific)", "response": "CNIdentity. registerAccount - Creates a new CN identity and returns the response."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsees Also : registerAccountResponse", "response": "def registerAccount(self, person, vendorSpecific=None):\n \"\"\"See Also: registerAccountResponse()\n\n Args:\n person:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.registerAccountResponse(person, vendorSpecific)\n return self._read_boolean_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef updateAccountResponse(self, subject, person, vendorSpecific=None):\n mmp_dict = {'person': ('person.xml', person.toxml('utf-8'))}\n return self.PUT(['accounts', subject], fields=mmp_dict, headers=vendorSpecific)", "response": "CNIdentity. 
updateAccount - Updates the account for the given subject and person."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsee Also : updateAccountResponse", "response": "def updateAccount(self, subject, person, vendorSpecific=None):\n \"\"\"See Also: updateAccountResponse()\n\n Args:\n subject:\n person:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.updateAccountResponse(subject, person, vendorSpecific)\n return self._read_boolean_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef verifyAccount(self, subject, vendorSpecific=None):\n response = self.verifyAccountResponse(subject, vendorSpecific)\n return self._read_boolean_response(response)", "response": "See Also : verifyAccountResponse"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getSubjectInfo(self, subject, vendorSpecific=None):\n response = self.getSubjectInfoResponse(subject, vendorSpecific)\n return self._read_dataone_type_response(response, 'SubjectInfo')", "response": "See Also : getSubjectInfoResponse"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef listSubjectsResponse(\n self, query, status=None, start=None, count=None, vendorSpecific=None\n ):\n \"\"\"CNIdentity.listSubjects(session, query, status, start, count) \u2192 SubjectList\n https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_APIs.html#CNIdentity.listSubjects.\n\n Args:\n query:\n status:\n start:\n count:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n url_query = {'status': status, 'start': start, 'count': count, 'query': query}\n return self.GET('accounts', query=url_query, headers=vendorSpecific)", "response": "CNIdentity. 
listSubjects - Returns a list of subjects matching the given query"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsees Also : listSubjectsResponse", "response": "def listSubjects(\n self, query, status=None, start=None, count=None, vendorSpecific=None\n ):\n \"\"\"See Also: listSubjectsResponse()\n\n Args:\n query:\n status:\n start:\n count:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.listSubjectsResponse(\n query, status, start, count, vendorSpecific\n )\n return self._read_dataone_type_response(response, 'SubjectInfo')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mapIdentityResponse(\n self, primarySubject, secondarySubject, vendorSpecific=None\n ):\n \"\"\"CNIdentity.mapIdentity(session, subject) \u2192 boolean\n https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_APIs.html#CNIdentity.mapIdentity.\n\n Args:\n primarySubject:\n secondarySubject:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n mmp_dict = {\n 'primarySubject': primarySubject.toxml('utf-8'),\n 'secondarySubject': secondarySubject.toxml('utf-8'),\n }\n return self.POST(['accounts', 'map'], fields=mmp_dict, headers=vendorSpecific)", "response": "CNIdentity.mapIdentity - Maps the primary and secondary subjects as equivalent identities and returns the response."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef mapIdentity(self, primarySubject, secondarySubject, vendorSpecific=None):\n response = self.mapIdentityResponse(\n primarySubject, secondarySubject, vendorSpecific\n )\n return self._read_boolean_response(response)", "response": "See Also : mapIdentityResponse"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef removeMapIdentity(self, subject, vendorSpecific=None):\n response = self.removeMapIdentityResponse(subject, vendorSpecific)\n return self._read_boolean_response(response)", 
"response": "See Also : removeMapIdentityResponse"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsee Also : denyMapIdentityResponse", "response": "def denyMapIdentity(self, subject, vendorSpecific=None):\n \"\"\"See Also: denyMapIdentityResponse()\n\n Args:\n subject:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.denyMapIdentityResponse(subject, vendorSpecific)\n return self._read_boolean_response(response)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nrequests Map Identity Response", "response": "def requestMapIdentityResponse(self, subject, vendorSpecific=None):\n \"\"\"CNIdentity.requestMapIdentity(session, subject) \u2192 boolean\n https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_APIs.html#CNIdentity.requestMapIdentity.\n\n Args:\n subject:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n mmp_dict = {'subject': subject.toxml('utf-8')}\n return self.POST('accounts', fields=mmp_dict, headers=vendorSpecific)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef requestMapIdentity(self, subject, vendorSpecific=None):\n response = self.requestMapIdentityResponse(subject, vendorSpecific)\n return self._read_boolean_response(response)", "response": "See Also : requestMapIdentityResponse"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsee Also : confirmMapIdentityResponse", "response": "def confirmMapIdentity(self, subject, vendorSpecific=None):\n \"\"\"See Also: confirmMapIdentityResponse()\n\n Args:\n subject:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.confirmMapIdentityResponse(subject, vendorSpecific)\n return self._read_boolean_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef createGroupResponse(self, group, vendorSpecific=None):\n mmp_dict = {'group': 
('group.xml', group.toxml('utf-8'))}\n return self.POST('groups', fields=mmp_dict, headers=vendorSpecific)", "response": "CNIdentity. createGroup - Creates a new group"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef createGroup(self, group, vendorSpecific=None):\n response = self.createGroupResponse(group, vendorSpecific)\n return self._read_boolean_response(response)", "response": "See Also : createGroupResponse"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating the group response.", "response": "def updateGroupResponse(self, group, vendorSpecific=None):\n \"\"\"CNIdentity.addGroupMembers(session, groupName, members) \u2192 boolean\n https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_APIs.html#CNIdentity.addGroupMembers.\n\n Args:\n group:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n mmp_dict = {'group': ('group.xml', group.toxml('utf-8'))}\n return self.PUT('groups', fields=mmp_dict, headers=vendorSpecific)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsees Also : updateGroupResponse", "response": "def updateGroup(self, group, vendorSpecific=None):\n \"\"\"See Also: updateGroupResponse()\n\n Args:\n group:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.updateGroupResponse(group, vendorSpecific)\n return self._read_boolean_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef setReplicationStatusResponse(\n self, pid, nodeRef, status, dataoneError=None, vendorSpecific=None\n ):\n \"\"\"CNReplication.setReplicationStatus(session, pid, nodeRef, status, failure) \u2192\n boolean https://releases.dataone.org/online/api-documentatio\n n-v2.0.1/apis/CN_APIs.html#CNReplication.setReplicationStatus.\n\n Args:\n pid:\n nodeRef:\n status:\n dataoneError:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n mmp_dict = 
{'nodeRef': nodeRef, 'status': status} # .toxml('utf-8'),\n if dataoneError is not None:\n mmp_dict['failure'] = ('failure.xml', dataoneError.serialize_to_transport())\n return self.PUT(\n ['replicaNotifications', pid], fields=mmp_dict, headers=vendorSpecific\n )", "response": "Set replication status for a specific resource."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the replication status of a specific resource.", "response": "def setReplicationStatus(\n self, pid, nodeRef, status, dataoneError=None, vendorSpecific=None\n ):\n \"\"\"See Also: setReplicationStatusResponse()\n\n Args:\n pid:\n nodeRef:\n status:\n dataoneError:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.setReplicationStatusResponse(\n pid, nodeRef, status, dataoneError, vendorSpecific\n )\n return self._read_boolean_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef updateReplicationMetadataResponse(\n self, pid, replicaMetadata, serialVersion, vendorSpecific=None\n ):\n \"\"\"CNReplication.updateReplicationMetadata(session, pid, replicaMetadata,\n serialVersion) \u2192 boolean https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_AP Is.html#CNReplication.updateReplicationMetadata\n Not implemented.\n\n Args:\n pid:\n replicaMetadata:\n serialVersion:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n mmp_dict = {\n 'replicaMetadata': ('replicaMetadata.xml', replicaMetadata.toxml('utf-8')),\n 'serialVersion': str(serialVersion),\n }\n return self.PUT(\n ['replicaMetadata', pid], fields=mmp_dict, headers=vendorSpecific\n )", "response": "Updates the replica metadata for a specific resource."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef updateReplicationMetadata(\n self, pid, replicaMetadata, serialVersion, vendorSpecific=None\n ):\n \"\"\"See Also: updateReplicationMetadataResponse()\n\n Args:\n pid:\n 
replicaMetadata:\n serialVersion:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.updateReplicationMetadataResponse(\n pid, replicaMetadata, serialVersion, vendorSpecific\n )\n return self._read_boolean_response(response)", "response": "Update the replica metadata for a specific object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setReplicationPolicy(self, pid, policy, serialVersion, vendorSpecific=None):\n response = self.setReplicationPolicyResponse(\n pid, policy, serialVersion, vendorSpecific\n )\n return self._read_boolean_response(response)", "response": "Sets the replication policy for a specific object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef isNodeAuthorizedResponse(self, targetNodeSubject, pid, vendorSpecific=None):\n query_dict = {'targetNodeSubject': targetNodeSubject}\n return self.GET(\n ['replicaAuthorizations', pid], query=query_dict, headers=vendorSpecific\n )", "response": "CNReplication. 
isNodeAuthorized - Returns whether the target node is authorized to create a replica of the given object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nseeing Also : isNodeAuthorizedResponse", "response": "def isNodeAuthorized(self, targetNodeSubject, pid, vendorSpecific=None):\n \"\"\"See Also: isNodeAuthorizedResponse()\n\n Args:\n targetNodeSubject:\n pid:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.isNodeAuthorizedResponse(targetNodeSubject, pid, vendorSpecific)\n return self._read_boolean_401_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef deleteReplicationMetadataResponse(\n self, pid, nodeId, serialVersion, vendorSpecific=None\n ):\n \"\"\"CNReplication.deleteReplicationMetadata(session, pid, policy, serialVersion)\n\n \u2192 boolean https://releases.dataone.org/online/api-docume\n ntation-v2.0.1/apis/CN_APIs.html#CNReplication.deleteReplicationMetadat a.\n\n Args:\n pid:\n nodeId:\n serialVersion:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n mmp_dict = {'nodeId': nodeId, 'serialVersion': str(serialVersion)}\n return self.PUT(\n ['removeReplicaMetadata', pid], fields=mmp_dict, headers=vendorSpecific\n )", "response": "Delete replication metadata response."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef deleteReplicationMetadata(\n self, pid, nodeId, serialVersion, vendorSpecific=None\n ):\n \"\"\"See Also: deleteReplicationMetadataResponse()\n\n Args:\n pid:\n nodeId:\n serialVersion:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.deleteReplicationMetadataResponse(\n pid, nodeId, serialVersion, vendorSpecific\n )\n return self._read_boolean_response(response)", "response": "Delete the replication metadata for a node."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef updateNodeCapabilitiesResponse(self, nodeId, node, 
vendorSpecific=None):\n mmp_dict = {'node': ('node.xml', node.toxml('utf-8'))}\n return self.PUT(['node', nodeId], fields=mmp_dict, headers=vendorSpecific)", "response": "CNRegister. updateNodeCapabilities - Updates node capabilities"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsees Also : updateNodeCapabilitiesResponse", "response": "def updateNodeCapabilities(self, nodeId, node, vendorSpecific=None):\n \"\"\"See Also: updateNodeCapabilitiesResponse()\n\n Args:\n nodeId:\n node:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.updateNodeCapabilitiesResponse(nodeId, node, vendorSpecific)\n return self._read_boolean_response(response)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsees Also : registerResponse", "response": "def register(self, node, vendorSpecific=None):\n \"\"\"See Also: registerResponse()\n\n Args:\n node:\n vendorSpecific:\n\n Returns:\n\n \"\"\"\n response = self.registerResponse(node, vendorSpecific)\n return self._read_boolean_response(response)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef valid(self, instance, schema):\n try:\n jsonschema.validate(instance, schema)\n except jsonschema.exceptions.ValidationError as ex:\n self.stderr.write(\" VALIDATION ERROR: {}\".format(instance['name'] if 'name' in instance else ''))\n self.stderr.write(\" path: {}\".format(ex.path))\n self.stderr.write(\" message: {}\".format(ex.message))\n self.stderr.write(\" validator: {}\".format(ex.validator))\n self.stderr.write(\" val. 
value: {}\".format(ex.validator_value))\n return False\n\n try:\n # Check that default values fit field schema.\n for field in ['input', 'output', 'schema']:\n for schema, _, path in iterate_schema({}, instance.get(field, {})):\n if 'default' in schema:\n validate_schema({schema['name']: schema['default']}, [schema])\n except ValidationError:\n self.stderr.write(\" VALIDATION ERROR: {}\".format(instance['name']))\n self.stderr.write(\" Default value of field '{}' is not valid.\". format(path))\n return False\n\n return True", "response": "Validate the given instance."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfind descriptor schemas in given path.", "response": "def find_descriptor_schemas(self, schema_file):\n \"\"\"Find descriptor schemas in given path.\"\"\"\n if not schema_file.lower().endswith(('.yml', '.yaml')):\n return []\n\n with open(schema_file) as fn:\n schemas = yaml.load(fn, Loader=yaml.FullLoader)\n if not schemas:\n self.stderr.write(\"Could not read YAML file {}\".format(schema_file))\n return []\n\n descriptor_schemas = []\n for schema in schemas:\n if 'schema' not in schema:\n continue\n\n descriptor_schemas.append(schema)\n\n return descriptor_schemas"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinding schemas in a directory that match filters.", "response": "def find_schemas(self, schema_path, schema_type=SCHEMA_TYPE_PROCESS, verbosity=1):\n \"\"\"Find schemas in packages that match filters.\"\"\"\n schema_matches = []\n\n if not os.path.isdir(schema_path):\n if verbosity > 0:\n self.stdout.write(\"Invalid path {}\".format(schema_path))\n return\n\n if schema_type not in [SCHEMA_TYPE_PROCESS, SCHEMA_TYPE_DESCRIPTOR]:\n raise ValueError('Invalid schema type')\n\n for root, _, files in os.walk(schema_path):\n for schema_file in [os.path.join(root, fn) for fn in files]:\n schemas = None\n if schema_type == SCHEMA_TYPE_DESCRIPTOR:\n # Discover descriptors.\n schemas = 
self.find_descriptor_schemas(schema_file)\n elif schema_type == SCHEMA_TYPE_PROCESS:\n # Perform process discovery for all supported execution engines.\n schemas = []\n for execution_engine in manager.execution_engines.values():\n schemas.extend(execution_engine.discover_process(schema_file))\n\n for schema in schemas:\n schema_matches.append(schema)\n\n return schema_matches"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread and register processes.", "response": "def register_processes(self, process_schemas, user, force=False, verbosity=1):\n \"\"\"Read and register processors.\"\"\"\n log_processors = []\n log_templates = []\n\n for p in process_schemas:\n # TODO: Remove this when all processes are migrated to the\n # new syntax.\n if 'flow_collection' in p:\n if 'entity' in p:\n self.stderr.write(\n \"Skip processor {}: only one of 'flow_collection' and 'entity' fields \"\n \"allowed\".format(p['slug'])\n )\n continue\n\n p['entity'] = {'type': p.pop('flow_collection')}\n\n if p['type'][-1] != ':':\n p['type'] += ':'\n\n if 'category' in p and not p['category'].endswith(':'):\n p['category'] += ':'\n\n for field in ['input', 'output']:\n for schema, _, _ in iterate_schema({}, p[field] if field in p else {}):\n if not schema['type'][-1].endswith(':'):\n schema['type'] += ':'\n # TODO: Check if schemas validate with our JSON meta schema and Processor model docs.\n\n if not self.valid(p, PROCESSOR_SCHEMA):\n continue\n\n if 'entity' in p:\n if 'type' not in p['entity']:\n self.stderr.write(\n \"Skip process {}: 'entity.type' required if 'entity' defined\".format(p['slug'])\n )\n continue\n\n p['entity_type'] = p['entity']['type']\n p['entity_descriptor_schema'] = p['entity'].get('descriptor_schema', p['entity_type'])\n p['entity_input'] = p['entity'].get('input', None)\n p.pop('entity')\n\n if not DescriptorSchema.objects.filter(slug=p['entity_descriptor_schema']).exists():\n self.stderr.write(\n \"Skip processor {}: Unknown 
descriptor schema '{}' used in 'entity' \"\n \"field.\".format(p['slug'], p['entity_descriptor_schema'])\n )\n continue\n\n if 'persistence' in p:\n persistence_mapping = {\n 'RAW': Process.PERSISTENCE_RAW,\n 'CACHED': Process.PERSISTENCE_CACHED,\n 'TEMP': Process.PERSISTENCE_TEMP,\n }\n\n p['persistence'] = persistence_mapping[p['persistence']]\n\n if 'scheduling_class' in p:\n scheduling_class_mapping = {\n 'interactive': Process.SCHEDULING_CLASS_INTERACTIVE,\n 'batch': Process.SCHEDULING_CLASS_BATCH\n }\n\n p['scheduling_class'] = scheduling_class_mapping[p['scheduling_class']]\n\n if 'input' in p:\n p['input_schema'] = p.pop('input')\n\n if 'output' in p:\n p['output_schema'] = p.pop('output')\n\n slug = p['slug']\n\n if 'run' in p:\n # Set default language to 'bash' if not set.\n p['run'].setdefault('language', 'bash')\n\n # Transform output schema using the execution engine.\n try:\n execution_engine = manager.get_execution_engine(p['run']['language'])\n extra_output_schema = execution_engine.get_output_schema(p)\n if extra_output_schema:\n p.setdefault('output_schema', []).extend(extra_output_schema)\n except InvalidEngineError:\n self.stderr.write(\"Skip processor {}: execution engine '{}' not supported\".format(\n slug, p['run']['language']\n ))\n continue\n\n # Validate if container image is allowed based on the configured pattern.\n # NOTE: This validation happens here and is not deferred to executors because the idea\n # is that this will be moved to a \"container\" requirement independent of the\n # executor.\n if hasattr(settings, 'FLOW_CONTAINER_VALIDATE_IMAGE'):\n try:\n container_image = dict_dot(p, 'requirements.executor.docker.image')\n if not re.match(settings.FLOW_CONTAINER_VALIDATE_IMAGE, container_image):\n self.stderr.write(\"Skip processor {}: container image does not match '{}'\".format(\n slug, settings.FLOW_CONTAINER_VALIDATE_IMAGE,\n ))\n continue\n except KeyError:\n pass\n\n version = p['version']\n int_version = 
convert_version_string_to_int(version, VERSION_NUMBER_BITS)\n\n # `latest version` is returned as `int` so it has to be compared to `int_version`\n latest_version = Process.objects.filter(slug=slug).aggregate(Max('version'))['version__max']\n if latest_version is not None and latest_version > int_version:\n self.stderr.write(\"Skip processor {}: newer version installed\".format(slug))\n continue\n\n previous_process_qs = Process.objects.filter(slug=slug)\n if previous_process_qs.exists():\n previous_process = previous_process_qs.latest()\n else:\n previous_process = None\n\n process_query = Process.objects.filter(slug=slug, version=version)\n if process_query.exists():\n if not force:\n if verbosity > 0:\n self.stdout.write(\"Skip processor {}: same version installed\".format(slug))\n continue\n\n process_query.update(**p)\n log_processors.append(\"Updated {}\".format(slug))\n else:\n process = Process.objects.create(contributor=user, **p)\n assign_contributor_permissions(process)\n if previous_process:\n copy_permissions(previous_process, process)\n log_processors.append(\"Inserted {}\".format(slug))\n\n if verbosity > 0:\n if log_processors:\n self.stdout.write(\"Processor Updates:\")\n for log in log_processors:\n self.stdout.write(\" {}\".format(log))\n\n if log_templates:\n self.stdout.write(\"Default Template Updates:\")\n for log in log_templates:\n self.stdout.write(\" {}\".format(log))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread and register descriptors.", "response": "def register_descriptors(self, descriptor_schemas, user, force=False, verbosity=1):\n \"\"\"Read and register descriptors.\"\"\"\n log_descriptors = []\n\n for descriptor_schema in descriptor_schemas:\n for schema, _, _ in iterate_schema({}, descriptor_schema.get('schema', {})):\n if not schema['type'][-1].endswith(':'):\n schema['type'] += ':'\n\n if 'schema' not in descriptor_schema:\n descriptor_schema['schema'] = []\n\n if not 
self.valid(descriptor_schema, DESCRIPTOR_SCHEMA):\n continue\n\n slug = descriptor_schema['slug']\n version = descriptor_schema.get('version', '0.0.0')\n int_version = convert_version_string_to_int(version, VERSION_NUMBER_BITS)\n\n # `latest version` is returned as `int` so it has to be compared to `int_version`\n latest_version = DescriptorSchema.objects.filter(slug=slug).aggregate(Max('version'))['version__max']\n if latest_version is not None and latest_version > int_version:\n self.stderr.write(\"Skip descriptor schema {}: newer version installed\".format(slug))\n continue\n\n previous_descriptor_qs = DescriptorSchema.objects.filter(slug=slug)\n if previous_descriptor_qs.exists():\n previous_descriptor = previous_descriptor_qs.latest()\n else:\n previous_descriptor = None\n\n descriptor_query = DescriptorSchema.objects.filter(slug=slug, version=version)\n if descriptor_query.exists():\n if not force:\n if verbosity > 0:\n self.stdout.write(\"Skip descriptor schema {}: same version installed\".format(slug))\n continue\n\n descriptor_query.update(**descriptor_schema)\n log_descriptors.append(\"Updated {}\".format(slug))\n else:\n descriptor = DescriptorSchema.objects.create(contributor=user, **descriptor_schema)\n assign_contributor_permissions(descriptor)\n if previous_descriptor:\n copy_permissions(previous_descriptor, descriptor)\n log_descriptors.append(\"Inserted {}\".format(slug))\n\n if log_descriptors and verbosity > 0:\n self.stdout.write(\"Descriptor schemas Updates:\")\n for log in log_descriptors:\n self.stdout.write(\" {}\".format(log))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef retire(self, process_schemas):\n process_slugs = set(ps['slug'] for ps in process_schemas)\n\n # Processes that are in DB but not in the code\n retired_processes = Process.objects.filter(~Q(slug__in=process_slugs))\n\n # Remove retired processes which do not have data\n 
retired_processes.filter(data__exact=None).delete()\n\n # Remove non-latest processes which do not have data\n latest_version_processes = Process.objects.order_by('slug', '-version').distinct('slug')\n Process.objects.filter(data__exact=None).difference(latest_version_processes).delete()\n\n # Deactivate retired processes which have data\n retired_processes.update(is_active=False)", "response": "Retire obsolete processes.\n\n Remove old process versions without data. Find processes that have been\n registered but do not exist in the code anymore, then:\n\n - If they do not have data: remove them\n - If they have data: flag them not active (``is_active=False``)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef handle(self, *args, **options):\n force = options.get('force')\n retire = options.get('retire')\n verbosity = int(options.get('verbosity'))\n\n users = get_user_model().objects.filter(is_superuser=True).order_by('date_joined')\n\n if not users.exists():\n self.stderr.write(\"Admin does not exist: create a superuser\")\n exit(1)\n\n process_paths, descriptor_paths = [], []\n process_schemas, descriptor_schemas = [], []\n\n for finder in get_finders():\n process_paths.extend(finder.find_processes())\n descriptor_paths.extend(finder.find_descriptors())\n\n for proc_path in process_paths:\n process_schemas.extend(\n self.find_schemas(proc_path, schema_type=SCHEMA_TYPE_PROCESS, verbosity=verbosity))\n\n for desc_path in descriptor_paths:\n descriptor_schemas.extend(\n self.find_schemas(desc_path, schema_type=SCHEMA_TYPE_DESCRIPTOR, verbosity=verbosity))\n\n user_admin = users.first()\n self.register_descriptors(descriptor_schemas, user_admin, force, verbosity=verbosity)\n # NOTE: Descriptor schemas must be registered first, so\n # processes can validate 'entity_descriptor_schema' field.\n self.register_processes(process_schemas, user_admin, force, verbosity=verbosity)\n\n if retire:\n 
self.retire(process_schemas)\n\n if verbosity > 0:\n self.stdout.write(\"Running executor post-registration hook...\")\n manager.get_executor().post_register_hook(verbosity=verbosity)", "response": "Register processes and descriptors."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns True if the DID is valid for create and update.", "response": "def is_valid_pid_for_create(did):\n \"\"\"Return True if ``did`` is the PID of an object that can be created with\n MNStorage.create() or MNStorage.update().\n\n To be valid for create() and update(), the DID:\n\n - Must not be the PID of an object that exists on this MN\n - Must not be a known SID known to this MN\n - Must not have been accepted for replication by this MN.\n - Must not be referenced as obsoletes or obsoletedBy in an object that exists on\n this MN\n\n In addition, if the DID exists in a resource map:\n\n - If RESOURCE_MAP_CREATE = 'reserve':\n\n - The DataONE subject that is making the call must have write or changePermission\n on the resource map.\n\n \"\"\"\n # logger.debug('existing: {}'.format(is_existing_object(did)))\n # logger.debug('sid: {}'.format(is_sid(did)))\n # logger.debug('local_replica: {}'.format(is_local_replica(did)))\n # logger.debug('revision: {}'.format(d1_gmn.app.revision.is_revision(did)))\n return (\n not is_existing_object(did)\n and not is_sid(did)\n and not is_local_replica(did)\n and not d1_gmn.app.revision.is_revision(did)\n and d1_gmn.app.resource_map.is_sciobj_valid_for_create()\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn True if the given object is a valid PID to be updated.", "response": "def is_valid_pid_to_be_updated(did):\n \"\"\"Return True if ``did`` is the PID of an object that can be updated (obsoleted)\n with MNStorage.update()\"\"\"\n return (\n is_existing_object(did)\n and not is_local_replica(did)\n and not is_archived(did)\n and not is_obsoleted(did)\n )"} 
{"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning True if sid can be assigned to the single object or to the chain to which pid belongs.", "response": "def is_valid_sid_for_chain(pid, sid):\n \"\"\"Return True if ``sid`` can be assigned to the single object ``pid`` or to the\n chain to which ``pid`` belongs.\n\n - If the chain does not have a SID, the new SID must be previously unused.\n - If the chain already has a SID, the new SID must match the existing SID.\n\n All known PIDs are associated with a chain.\n\n Preconditions:\n - ``pid`` is verified to exist. E.g., with\n d1_gmn.app.views.asserts.is_existing_object().\n - ``sid`` is None or verified to be a SID\n\n \"\"\"\n if _is_unused_did(sid):\n return True\n existing_sid = d1_gmn.app.revision.get_sid_by_pid(pid)\n if existing_sid is None:\n return False\n return existing_sid == sid"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if PID is for an object for which science bytes are stored locally.", "response": "def is_existing_object(did):\n \"\"\"Return True if PID is for an object for which science bytes are stored locally.\n\n This excludes SIDs and PIDs for unprocessed replica requests, remote or non-existing\n revisions of local replicas and objects aggregated in Resource Maps.\n\n \"\"\"\n return d1_gmn.app.models.ScienceObject.objects.filter(pid__did=did).exists()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a text fragment classifying the DID.", "response": "def classify_identifier(did):\n \"\"\"Return a text fragment classifying the ``did``\n\n Return if the DID could not be classified. 
This should not normally happen\n and may indicate that the DID was orphaned in the database.\n\n \"\"\"\n if _is_unused_did(did):\n return 'unused on this Member Node'\n elif is_sid(did):\n return 'a Series ID (SID) of a revision chain'\n elif is_local_replica(did):\n return 'a Persistent ID (PID) of a local replica'\n elif is_unprocessed_local_replica(did):\n return (\n 'a Persistent ID (PID) of an accepted but not yet processed local replica'\n )\n elif is_archived(did):\n return 'a Persistent ID (PID) of a previously archived local object'\n elif is_obsoleted(did):\n return 'a Persistent ID (PID) of a previously updated (obsoleted) local object'\n elif is_resource_map_db(did):\n return 'a Persistent ID (PID) of a local resource map'\n elif is_existing_object(did):\n return 'a Persistent ID (PID) of an existing local object'\n elif is_revision_chain_placeholder(did):\n return (\n 'a Persistent ID (PID) of a remote or non-existing revision of a local '\n 'replica'\n )\n elif is_resource_map_member(did):\n return (\n 'a Persistent ID (PID) of a remote or non-existing object aggregated in '\n 'a local Resource Map'\n )\n logger.warning('Unable to classify known identifier. 
did=\"{}\"'.format(did))\n return ''"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns whether the given PID is a local replica.", "response": "def is_local_replica(pid):\n \"\"\"Includes unprocessed replication requests.\"\"\"\n return d1_gmn.app.models.LocalReplica.objects.filter(pid__did=pid).exists()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck whether a local replica has status \"queued\".", "response": "def is_unprocessed_local_replica(pid):\n \"\"\"Is local replica with status \"queued\".\"\"\"\n return d1_gmn.app.models.LocalReplica.objects.filter(\n pid__did=pid, info__status__status='queued'\n ).exists()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef is_revision_chain_placeholder(pid):\n return d1_gmn.app.models.ReplicaRevisionChainReference.objects.filter(\n pid__did=pid\n ).exists()", "response": "Return True if the given PID is a placeholder for a remote or non-existing revision of a local replica."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _is_did(did):\n return d1_gmn.app.models.IdNamespace.objects.filter(did=did).exists()", "response": "Return True if the given DID is recorded in a local context."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef prepare_connection():\n elasticsearch_host = getattr(settings, 'ELASTICSEARCH_HOST', 'localhost')\n elasticsearch_port = getattr(settings, 'ELASTICSEARCH_PORT', 9200)\n connections.create_connection(hosts=['{}:{}'.format(elasticsearch_host, elasticsearch_port)])", "response": "Prepare connection for ElasticSearch."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _log(pid, request, event, timestamp=None):\n # Support logging events that are not associated with an object.\n sciobj_model = None\n if pid is not None:\n try:\n 
sciobj_model = d1_gmn.app.models.ScienceObject.objects.filter(pid__did=pid)[\n 0\n ]\n except IndexError:\n raise d1_common.types.exceptions.ServiceFailure(\n 0,\n 'Attempted to create event log for non-existing object. pid=\"{}\"'.format(\n pid\n ),\n )\n\n event_log_model = create_log_entry(\n sciobj_model,\n event,\n request.META['REMOTE_ADDR'],\n request.META.get('HTTP_USER_AGENT', ''),\n request.primary_subject_str,\n )\n\n # The datetime is an optional parameter. If it is not provided, an\n # \"auto_now_add=True\" value in the model defaults it to Now. The\n # disadvantage to this approach is that we have to update the timestamp in a\n # separate step if we want to set it to anything other than Now.\n if timestamp is not None:\n event_log_model.timestamp = timestamp\n event_log_model.save()", "response": "Log an operation that was performed on a sciobj."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _is_ignored_read_event(request):\n if (\n django.conf.settings.LOG_IGNORE_TRUSTED_SUBJECT\n and d1_gmn.app.auth.is_trusted_subject(request)\n ):\n return True\n if (\n django.conf.settings.LOG_IGNORE_NODE_SUBJECT\n and d1_gmn.app.auth.is_client_side_cert_subject(request)\n ):\n return True\n if _has_regex_match(\n request.META['REMOTE_ADDR'], django.conf.settings.LOG_IGNORE_IP_ADDRESS\n ):\n return True\n if _has_regex_match(\n request.META.get('HTTP_USER_AGENT', ''),\n django.conf.settings.LOG_IGNORE_USER_AGENT,\n ):\n return True\n if _has_regex_match(\n request.primary_subject_str, django.conf.settings.LOG_IGNORE_SUBJECT\n ):\n return True\n return False", "response": "Return True if this read event was generated by an automated process and if the user has specified the LOG_IGNORE settings."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef action_to_level(action):\n try:\n return ACTION_LEVEL_MAP[action]\n except 
LookupError:\n raise d1_common.types.exceptions.InvalidRequest(\n 0, 'Unknown action. action=\"{}\"'.format(action)\n )", "response": "Map action name to action level."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef level_to_action(level):\n try:\n return LEVEL_ACTION_MAP[level]\n except LookupError:\n raise d1_common.types.exceptions.InvalidRequest(\n 0, 'Unknown action level. level=\"{}\"'.format(level)\n )", "response": "Map action level to action name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting set of subjects that have unlimited access to all SciObj and APIs on this node.", "response": "def get_trusted_subjects():\n \"\"\"Get set of subjects that have unlimited access to all SciObj and APIs on this\n node.\"\"\"\n cert_subj = _get_client_side_certificate_subject()\n # Parenthesize the conditional so that a missing client side cert only drops\n # the cert subject instead of discarding the whole union.\n return (\n d1_gmn.app.node_registry.get_cn_subjects()\n | django.conf.settings.DATAONE_TRUSTED_SUBJECTS\n | ({cert_subj} if cert_subj is not None else set())\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndetermines if calling subject is fully trusted.", "response": "def is_trusted_subject(request):\n \"\"\"Determine if calling subject is fully trusted.\"\"\"\n logging.debug('Active subjects: {}'.format(', '.join(request.all_subjects_set)))\n logging.debug('Trusted subjects: {}'.format(', '.join(get_trusted_subjects())))\n return not request.all_subjects_set.isdisjoint(get_trusted_subjects())"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_client_side_certificate_subject():\n subject = django.core.cache.cache.get('client_side_certificate_subject')\n if subject is not None:\n return subject\n cert_pem = _get_client_side_certificate_pem()\n if cert_pem is None:\n return None\n subject = _extract_subject_from_pem(cert_pem)\n 
django.core.cache.cache.set('client_side_certificate_subject', subject)\n return subject", "response": "Return the DN from the client side certificate as a D1 subject if a client side\n cert has been configured. Otherwise return None."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef is_allowed(request, level, pid):\n if is_trusted_subject(request):\n return True\n return d1_gmn.app.models.Permission.objects.filter(\n sciobj__pid__did=pid,\n subject__subject__in=request.all_subjects_set,\n level__gte=level,\n ).exists()", "response": "Check if one or more subjects are allowed to perform action level on object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef assert_create_update_delete_permission(request):\n if not has_create_update_delete_permission(request):\n raise d1_common.types.exceptions.NotAuthorized(\n 0,\n 'Access allowed only for subjects with Create/Update/Delete '\n 'permission. active_subjects=\"{}\"'.format(format_active_subjects(request)),\n )", "response": "Access only by subjects with Create/Update/Delete permission and by trusted\n infrastructure (CNs)."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef assert_allowed(request, level, pid):\n if not d1_gmn.app.models.ScienceObject.objects.filter(pid__did=pid).exists():\n raise d1_common.types.exceptions.NotFound(\n 0,\n 'Attempted to perform operation on non-existing object. pid=\"{}\"'.format(\n pid\n ),\n )\n if not is_allowed(request, level, pid):\n raise d1_common.types.exceptions.NotAuthorized(\n 0,\n 'Operation is denied. 
level=\"{}\", pid=\"{}\", active_subjects=\"{}\"'.format(\n level_to_action(level), pid, format_active_subjects(request)\n ),\n )", "response": "Assert that one or more subjects are allowed to perform action on object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef format_active_subjects(request):\n decorated_subject_list = [request.primary_subject_str + ' (primary)']\n for subject in request.all_subjects_set:\n if subject != request.primary_subject_str:\n decorated_subject_list.append(subject)\n return ', '.join(decorated_subject_list)", "response": "Create a string listing active subjects for this connection suitable for\n appending to authentication error messages."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves unused imports Unsafe! Only tested on our codebase, which uses simple absolute imports on the form, \"import a.b.c\".", "response": "def main():\n \"\"\"Remove unused imports Unsafe!\n\n Only tested on our codebase, which uses simple absolute imports on the form, \"import\n a.b.c\".\n\n \"\"\"\n parser = argparse.ArgumentParser(\n description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter\n )\n parser.add_argument(\"path\", nargs=\"+\", help=\"File or directory path\")\n parser.add_argument(\"--exclude\", nargs=\"+\", help=\"Exclude glob patterns\")\n parser.add_argument(\n \"--no-recursive\",\n dest=\"recursive\",\n action=\"store_false\",\n help=\"Search directories recursively\",\n )\n parser.add_argument(\n \"--ignore-invalid\", action=\"store_true\", help=\"Ignore invalid paths\"\n )\n parser.add_argument(\n \"--pycharm\", action=\"store_true\", help=\"Enable PyCharm integration\"\n )\n parser.add_argument(\n \"--diff\",\n dest=\"show_diff\",\n action=\"store_true\",\n help=\"Show diff and do not modify any files\",\n )\n parser.add_argument(\n \"--dry-run\", action=\"store_true\", help=\"Process files but do not write results\"\n )\n 
parser.add_argument(\"--debug\", action=\"store_true\", help=\"Debug level logging\")\n\n args = parser.parse_args()\n d1_common.util.log_setup(args.debug)\n\n repo_path = d1_dev.util.find_repo_root_by_path(__file__)\n repo = git.Repo(repo_path)\n\n specified_file_path_list = get_specified_file_path_list(args)\n # tracked_path_list = list(d1_dev.util.get_tracked_files(repo))\n # format_path_list = sorted(\n # set(specified_file_path_list).intersection(tracked_path_list)\n # )\n format_path_list = specified_file_path_list\n for format_path in format_path_list:\n comment_unused_imports(args, format_path)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncommenting out import for unused_dot_list.", "response": "def comment_import(r, unused_dot_list):\n \"\"\"Comment out import for {dot_str}.\"\"\"\n unused_dot_str = \".\".join(unused_dot_list)\n for n in r(\"ImportNode\"):\n if n.names()[0] == unused_dot_str:\n # The \"!\" is inserted so that this line doesn't show up when searching for\n # the comment pattern in code.\n n.replace(\"#{}# {}\".format(\"!\", str(n)))\n break"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncapturing only the leading dotted name list.", "response": "def get_atomtrailer_list(r):\n \"\"\"Capture only the leading dotted name list.\n\n A full sequence typically includes function calls and parameters.\n pkga.pkgb.pkgc.one_call(arg1, arg2, arg3=4)\n\n \"\"\"\n dot_set = set()\n for n in r.find_all((\"atomtrailers\",)):\n name_list = []\n for x in n.value:\n if x.type != \"name\":\n break\n name_list.append(x.value)\n if name_list:\n dot_set.add(tuple(name_list))\n return sorted(dot_set)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef custom_filter_text(self, value, search):\n if isinstance(value, list):\n value = ' '.join(value)\n\n should = [\n Q('match', slug={'query': value, 'operator': 'and', 'boost': 10.0}),\n Q('match', 
**{'slug.ngrams': {'query': value, 'operator': 'and', 'boost': 5.0}}),\n Q('match', name={'query': value, 'operator': 'and', 'boost': 10.0}),\n Q('match', **{'name.ngrams': {'query': value, 'operator': 'and', 'boost': 5.0}}),\n Q('match', contributor_name={'query': value, 'operator': 'and', 'boost': 5.0}),\n Q('match', **{'contributor_name.ngrams': {'query': value, 'operator': 'and', 'boost': 2.0}}),\n Q('match', owner_names={'query': value, 'operator': 'and', 'boost': 5.0}),\n Q('match', **{'owner_names.ngrams': {'query': value, 'operator': 'and', 'boost': 2.0}}),\n Q('match', descriptor_data={'query': value, 'operator': 'and'}),\n ]\n\n # Add registered text extensions.\n for extension in composer.get_extensions(self):\n if hasattr(extension, 'text_filter'):\n should += extension.text_filter(value)\n\n search = search.query('bool', should=should)\n\n return search", "response": "Support general query using the text attribute."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_content_permissions(self, user, obj, payload):\n for entity in obj.entity_set.all():\n if user.has_perm('share_entity', entity):\n update_permission(entity, payload)\n\n # Data doesn't have \"ADD\" permission, so it has to be removed\n payload = remove_permission(payload, 'add')\n\n for data in obj.data.all():\n if user.has_perm('share_data', data):\n update_permission(data, payload)", "response": "Apply permissions to data objects and entities in Collection."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create(self, request, *args, **kwargs):\n if not request.user.is_authenticated:\n raise exceptions.NotFound\n\n return super().create(request, *args, **kwargs)", "response": "Only authenticated users can create new collections."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndestroying a model instance.", "response": "def destroy(self, 
request, *args, **kwargs):\n \"\"\"Destroy a model instance.\n\n If ``delete_content`` flag is set in query parameters, also all\n Data objects and Entities, on which user has ``EDIT``\n permission, contained in collection will be deleted.\n \"\"\"\n obj = self.get_object()\n user = request.user\n\n if strtobool(request.query_params.get('delete_content', 'false')):\n for entity in obj.entity_set.all():\n if user.has_perm('edit_entity', entity):\n entity.delete()\n\n for data in obj.data.all():\n if user.has_perm('edit_data', data):\n data.delete()\n\n return super().destroy(request, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd data to collection.", "response": "def add_data(self, request, pk=None):\n \"\"\"Add data to collection.\"\"\"\n collection = self.get_object()\n\n if 'ids' not in request.data:\n return Response({\"error\": \"`ids` parameter is required\"}, status=status.HTTP_400_BAD_REQUEST)\n\n missing = []\n for data_id in request.data['ids']:\n if not Data.objects.filter(pk=data_id).exists():\n missing.append(data_id)\n\n if missing:\n return Response(\n {\"error\": \"Data objects with following ids are missing: {}\".format(', '.join(map(str, missing)))},\n status=status.HTTP_400_BAD_REQUEST)\n\n for data_id in request.data['ids']:\n collection.data.add(data_id)\n\n return Response()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving data from collection.", "response": "def remove_data(self, request, pk=None):\n \"\"\"Remove data from collection.\"\"\"\n collection = self.get_object()\n\n if 'ids' not in request.data:\n return Response({\"error\": \"`ids` parameter is required\"}, status=status.HTTP_400_BAD_REQUEST)\n\n for data_id in request.data['ids']:\n collection.data.remove(data_id)\n\n return Response()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nduplicating (make copy of) Collection models.", "response": "def 
duplicate(self, request, *args, **kwargs):\n \"\"\"Duplicate (make copy of) ``Collection`` models.\"\"\"\n if not request.user.is_authenticated:\n raise exceptions.NotFound\n\n ids = self.get_ids(request.data)\n queryset = get_objects_for_user(request.user, 'view_collection', Collection.objects.filter(id__in=ids))\n actual_ids = queryset.values_list('id', flat=True)\n missing_ids = list(set(ids) - set(actual_ids))\n if missing_ids:\n raise exceptions.ParseError(\n \"Collections with the following ids not found: {}\".format(', '.join(map(str, missing_ids)))\n )\n\n duplicated = queryset.duplicate(contributor=request.user)\n\n serializer = self.get_serializer(duplicated, many=True)\n return Response(serializer.data)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks if an object with pid has successfully synced to CN.", "response": "async def is_object_synced_to_cn(self, client, pid):\n \"\"\"Check if object with {pid} has successfully synced to the CN.\n\n CNRead.describe() is used as it's a light-weight HTTP HEAD request.\n\n This assumes that the call is being made over a connection that has been\n authenticated and has read or better access on the given object if it exists.\n\n \"\"\"\n try:\n await client.describe(pid)\n except d1_common.types.exceptions.DataONEException:\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef apply(self, search, field, value):\n return search.query(\n self.query_type( # pylint: disable=not-callable\n **{\n field: {\n self.operator: self.get_value_query(value)\n }\n }\n )\n )", "response": "Apply lookup expression to search query."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef apply(self, search, field, value):\n if not isinstance(value, list):\n value = [x for x in value.strip().split(',') if x]\n\n filters = [Q('match', **{field: 
item}) for item in value]\n return search.query('bool', should=filters)", "response": "Apply lookup expression to search query."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef apply(self, search, field, value):\n # We assume that the field in question has a \"raw\" counterpart.\n return search.query('match', **{'{}.raw'.format(field): value})", "response": "Apply lookup expression to search query."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nlooking up a lookup.", "response": "def get_lookup(self, operator):\n \"\"\"Look up a lookup.\n\n :param operator: Name of the lookup operator\n \"\"\"\n try:\n return self._lookups[operator]\n except KeyError:\n raise NotImplementedError(\"Lookup operator '{}' is not supported\".format(operator))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef build(self, search, raw_query):\n unmatched_items = {}\n for expression, value in raw_query.items():\n # Parse query expression into tokens.\n tokens = expression.split(TOKEN_SEPARATOR)\n\n field = tokens[0]\n tail = tokens[1:]\n\n if field not in self.fields:\n unmatched_items[expression] = value\n continue\n\n # Map field alias to final field.\n field = self.fields_map.get(field, field)\n\n # Parse lookup expression. Currently only no token or a single token is allowed.\n if tail:\n if len(tail) > 1:\n raise NotImplementedError(\"Nested lookup expressions are not supported\")\n\n lookup = self.get_lookup(tail[0])\n search = lookup.apply(search, field, value)\n else:\n # Default lookup.\n custom_filter = getattr(self.custom_filter_object, 'custom_filter_{}'.format(field), None)\n if custom_filter is not None:\n search = custom_filter(value, search)\n elif isinstance(value, list):\n # Default is 'should' between matches. 
If you need anything else,\n # a custom filter for this field should be implemented.\n filters = [Q('match', **{field: item}) for item in value]\n search = search.query('bool', should=filters)\n else:\n search = search.query('match', **{field: {'query': value, 'operator': 'and'}})\n\n return (search, unmatched_items)", "response": "Build query.\n\n :param search: Search query instance\n :param raw_query: Raw query arguments dictionary"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nview handler decorator that adds SID resolve and PID validation. - For v1 calls, assume that ``did`` is a pid and raise NotFound exception if it's not valid. - For v2 calls, if DID is a valid PID, return it. If not, try to resolve it as a SID and, if successful, return the new PID. Else, raise NotFound exception.", "response": "def resolve_sid(f):\n \"\"\"View handler decorator that adds SID resolve and PID validation.\n\n - For v1 calls, assume that ``did`` is a pid and raise NotFound exception if it's\n not valid.\n - For v2 calls, if DID is a valid PID, return it. If not, try to resolve it as a\n SID and, if successful, return the new PID. 
Else, raise NotFound exception.\n\n \"\"\"\n\n @functools.wraps(f)\n def wrapper(request, did, *args, **kwargs):\n pid = resolve_sid_func(request, did)\n return f(request, pid, *args, **kwargs)\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef decode_did(f):\n\n @functools.wraps(f)\n def wrapper(request, did, *args, **kwargs):\n return f(request, decode_path_segment(did), *args, **kwargs)\n\n return wrapper", "response": "Decorator that decodes a DID from URL\n path segment by Django."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef trusted_permission(f):\n\n @functools.wraps(f)\n def wrapper(request, *args, **kwargs):\n trusted(request)\n return f(request, *args, **kwargs)\n\n return wrapper", "response": "Access only by D1 infrastructure."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_objects_access(f):\n\n @functools.wraps(f)\n def wrapper(request, *args, **kwargs):\n if not django.conf.settings.PUBLIC_OBJECT_LIST:\n trusted(request)\n return f(request, *args, **kwargs)\n\n return wrapper", "response": "Decorator to allow access to listObjects() controlled by settings. 
PUBLIC_OBJECT_LIST."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_log_records_access(f):\n\n @functools.wraps(f)\n def wrapper(request, *args, **kwargs):\n if not django.conf.settings.PUBLIC_LOG_RECORDS:\n trusted(request)\n return f(request, *args, **kwargs)\n\n return wrapper", "response": "Decorator to allow access to the log records of the current page."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\naccess only by subjects with Create/Update/Delete permission and by trusted infrastructure (CNs).", "response": "def assert_create_update_delete_permission(f):\n \"\"\"Access only by subjects with Create/Update/Delete permission and by trusted\n infrastructure (CNs).\"\"\"\n\n @functools.wraps(f)\n def wrapper(request, *args, **kwargs):\n d1_gmn.app.auth.assert_create_update_delete_permission(request)\n return f(request, *args, **kwargs)\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\naccesses only with a valid session.", "response": "def authenticated(f):\n \"\"\"Access only with a valid session.\"\"\"\n\n @functools.wraps(f)\n def wrapper(request, *args, **kwargs):\n if d1_common.const.SUBJECT_AUTHENTICATED not in request.all_subjects_set:\n raise d1_common.types.exceptions.NotAuthorized(\n 0,\n 'Access allowed only for authenticated subjects. Please reconnect with '\n 'a valid DataONE session certificate. 
active_subjects=\"{}\"'.format(\n d1_gmn.app.auth.format_active_subjects(request)\n ),\n )\n return f(request, *args, **kwargs)\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef required_permission(f, level):\n\n @functools.wraps(f)\n def wrapper(request, pid, *args, **kwargs):\n d1_gmn.app.auth.assert_allowed(request, level, pid)\n return f(request, pid, *args, **kwargs)\n\n return wrapper", "response": "Decorator that asserts that subject has access at given level or higher for object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _read_and_deserialize_dataone_type(self, response):\n try:\n return d1_common.xml.deserialize(response.content)\n except ValueError as e:\n self._raise_service_failure_invalid_dataone_type(response, e)", "response": "Given a response body try to create an instance of a DataONE type."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninitiate a MNRead. get. Return a Requests Response object from which the object bytes can be retrieved.", "response": "def get(self, pid, stream=False, vendorSpecific=None):\n \"\"\"Initiate a MNRead.get(). Return a Requests Response object from which the\n object bytes can be retrieved.\n\n When ``stream`` is False, Requests buffers the entire object in memory before\n returning the Response. This can exhaust available memory on the local machine\n when retrieving large science objects. The solution is to set ``stream`` to\n True, which causes the returned Response object to contain a a stream. However,\n see note below.\n\n When ``stream`` = True, the Response object will contain a stream which can be\n processed without buffering the entire science object in memory. 
However,\n failure to read all data from the stream can cause connections to be blocked.\n Due to this, the ``stream`` parameter is False by default.\n\n Also see:\n\n - http://docs.python-requests.org/en/master/user/advanced/body-content-workflow\n - get_and_save() in this module.\n\n \"\"\"\n response = self.getResponse(pid, stream, vendorSpecific)\n return self._read_stream_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nlikes MNRead. get but also retrieve the object bytes and store them in aSciObjStream.", "response": "def get_and_save(\n self, pid, sciobj_stream, create_missing_dirs=False, vendorSpecific=None\n ):\n \"\"\"Like MNRead.get(), but also retrieve the object bytes and store them in a\n stream. This method does not have the potential issue with excessive memory\n usage that get() with ``stream``=False has.\n\n Also see MNRead.get().\n\n \"\"\"\n response = self.get(pid, stream=True, vendorSpecific=vendorSpecific)\n try:\n if create_missing_dirs:\n d1_common.utils.filesystem.create_missing_directories_for_file(\n sciobj_stream\n )\n for chunk_str in response.iter_content(\n chunk_size=d1_common.const.DEFAULT_CHUNK_SIZE\n ):\n if chunk_str:\n sciobj_stream.write(chunk_str)\n finally:\n response.close()\n return response"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef describe(self, pid, vendorSpecific=None):\n response = self.describeResponse(pid, vendorSpecific=vendorSpecific)\n return self._read_header_response(response)", "response": "Returns a dictionary of the metadata for the specified resource ID."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn True if user is allowed to perform action on pid else False.", "response": "def isAuthorized(self, pid, action, vendorSpecific=None):\n \"\"\"Return True if user is allowed to perform ``action`` on ``pid``, else\n False.\"\"\"\n 
response = self.isAuthorizedResponse(pid, action, vendorSpecific)\n return self._read_boolean_401_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fields(self):\n fields = super().fields\n return apply_subfield_projection(self, copy.copy(fields))", "response": "Filter fields based on request query parameters."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\niterate over all field values sub - fields.", "response": "def iterate_fields(fields, schema, path_prefix=None):\n \"\"\"Iterate over all field values sub-fields.\n\n This will iterate over all field values. Some fields defined in the schema\n might not be visited.\n\n :param fields: field values to iterate over\n :type fields: dict\n :param schema: schema to iterate over\n :type schema: dict\n :return: (field schema, field value)\n :rtype: tuple\n\n \"\"\"\n if path_prefix is not None and path_prefix != '' and path_prefix[-1] != '.':\n path_prefix += '.'\n\n schema_dict = {val['name']: val for val in schema}\n for field_id, properties in fields.items():\n path = '{}{}'.format(path_prefix, field_id) if path_prefix is not None else None\n if field_id not in schema_dict:\n raise KeyError(\"Field definition ({}) missing in schema\".format(field_id))\n if 'group' in schema_dict[field_id]:\n for rvals in iterate_fields(properties, schema_dict[field_id]['group'], path):\n yield rvals if path_prefix is not None else rvals[:2]\n else:\n rvals = (schema_dict[field_id], fields, path)\n yield rvals if path_prefix is not None else rvals[:2]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\niterate over all schema sub - fields.", "response": "def iterate_schema(fields, schema, path_prefix=''):\n \"\"\"Iterate over all schema sub-fields.\n\n This will iterate over all field definitions in the schema. 
Some field values might be None.\n\n :param fields: field values to iterate over\n :type fields: dict\n :param schema: schema to iterate over\n :type schema: dict\n :param path_prefix: dot separated path prefix\n :type path_prefix: str\n :return: (field schema, field value, field path)\n :rtype: tuple\n\n \"\"\"\n if path_prefix and path_prefix[-1] != '.':\n path_prefix += '.'\n\n for field_schema in schema:\n name = field_schema['name']\n if 'group' in field_schema:\n for rvals in iterate_schema(fields[name] if name in fields else {},\n field_schema['group'], '{}{}'.format(path_prefix, name)):\n yield rvals\n else:\n yield (field_schema, fields, '{}{}'.format(path_prefix, name))", "response": "Iterate over all schema sub-fields."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef iterate_dict(container, exclude=None, path=None):\n if path is None:\n path = []\n\n for key, value in container.items():\n if callable(exclude) and exclude(key, value):\n continue\n\n if isinstance(value, collections.Mapping):\n for inner_path, inner_key, inner_value in iterate_dict(value, exclude=exclude, path=path + [key]):\n yield inner_path, inner_key, inner_value\n\n yield path, key, value", "response": "Iterate over a nested dictionary."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_subjects(request):\n if _is_certificate_provided(request):\n try:\n return get_authenticated_subjects(request.META['SSL_CLIENT_CERT'])\n except Exception as e:\n raise d1_common.types.exceptions.InvalidToken(\n 0,\n 'Error extracting session from certificate. 
error=\"{}\"'.format(str(e)),\n )\n else:\n return d1_common.const.SUBJECT_PUBLIC, set()", "response": "Get all subjects in the certificate."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_authenticated_subjects(cert_pem):\n if isinstance(cert_pem, str):\n cert_pem = cert_pem.encode('utf-8')\n return d1_common.cert.subjects.extract_subjects(cert_pem)", "response": "Return primary subject and set of equivalents authenticated by certificate."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_serializer_class(self):\n base_class = super().get_serializer_class()\n\n class SerializerWithPermissions(base_class):\n \"\"\"Augment serializer class.\"\"\"\n\n def get_fields(serializer_self): # pylint: disable=no-self-argument\n \"\"\"Return serializer's fields.\"\"\"\n fields = super().get_fields()\n fields['current_user_permissions'] = CurrentUserPermissionsSerializer(read_only=True)\n return fields\n\n def to_representation(serializer_self, instance): # pylint: disable=no-self-argument\n \"\"\"Object serializer.\"\"\"\n data = super().to_representation(instance)\n\n if ('fields' not in self.request.query_params\n or 'current_user_permissions' in self.request.query_params['fields']):\n data['current_user_permissions'] = get_object_perms(instance, self.request.user)\n\n return data\n\n return SerializerWithPermissions", "response": "Augment base serializer class."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef detail_permissions(self, request, pk=None):\n obj = self.get_object()\n\n if request.method == 'POST':\n content_type = ContentType.objects.get_for_model(obj)\n payload = request.data\n share_content = strtobool(payload.pop('share_content', 'false'))\n user = request.user\n is_owner = user.has_perm('owner_{}'.format(content_type), obj=obj)\n\n allow_owner = is_owner or user.is_superuser\n 
check_owner_permission(payload, allow_owner)\n check_public_permissions(payload)\n check_user_permissions(payload, request.user.pk)\n\n with transaction.atomic():\n update_permission(obj, payload)\n\n owner_count = UserObjectPermission.objects.filter(\n object_pk=obj.id,\n content_type=content_type,\n permission__codename__startswith='owner_'\n ).count()\n\n if not owner_count:\n raise exceptions.ParseError('Object must have at least one owner.')\n\n if share_content:\n self.set_content_permissions(user, obj, payload)\n\n return Response(get_object_perms(obj))", "response": "Get or set permissions API endpoint."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nperforming descriptor validation and save object.", "response": "def save(self, *args, **kwargs):\n \"\"\"Perform descriptor validation and save object.\"\"\"\n if self.descriptor_schema:\n try:\n validate_schema(self.descriptor, self.descriptor_schema.schema) # pylint: disable=no-member\n self.descriptor_dirty = False\n except DirtyError:\n self.descriptor_dirty = True\n elif self.descriptor and self.descriptor != {}:\n raise ValueError(\"`descriptor_schema` must be defined if `descriptor` is given\")\n\n super().save()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef duplicate(self, contributor=None):\n duplicate = Collection.objects.get(id=self.id)\n duplicate.pk = None\n duplicate.slug = None\n duplicate.name = 'Copy of {}'.format(self.name)\n duplicate.duplicated = now()\n if contributor:\n duplicate.contributor = contributor\n\n duplicate.save(force_insert=True)\n\n assign_contributor_permissions(duplicate)\n\n # Fields to inherit from original data object.\n duplicate.created = self.created\n duplicate.save()\n\n # Duplicate collection's entities.\n entities = get_objects_for_user(contributor, 'view_entity', self.entity_set.all()) # pylint: disable=no-member\n duplicated_entities = 
entities.duplicate(contributor=contributor)\n        duplicate.entity_set.add(*duplicated_entities)\n\n        # Add duplicated data objects to collection.\n        for duplicated_entity in duplicate.entity_set.all():\n            duplicate.data.add(*duplicated_entity.data.all())\n\n        return duplicate", "response": "Duplicate the current instance."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _scalar2array(d):\n    da = {}\n    for k, v in d.items():\n        if '_' not in k:\n            da[k] = v\n        else:\n            name = ''.join(k.split('_')[:-1])\n            ind = k.split('_')[-1]\n            dim = len(ind)\n            if name not in da:\n                shape = tuple(3 for i in range(dim))\n                da[name] = np.empty(shape, dtype=complex)\n                da[name][:] = np.nan\n            da[name][tuple(int(i) - 1 for i in ind)] = v\n    return da", "response": "Convert a dictionary with scalar elements and string indices _1234 to a dictionary of arrays. Unspecified entries are np.nan."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _symm_current(C):\n    nans = np.isnan(C)\n    C[nans] = np.einsum('klij', C)[nans]\n    return C", "response": "To get rid of NaNs produced by _scalar2array symmetrize operators\n where C_ijkl = C_klij"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _antisymm_12(C):\n    nans = np.isnan(C)\n    C[nans] = -np.einsum('jikl', C)[nans]\n    return C", "response": "Antisymmetrize the first two indices of operators where C_ijkl = - C_jikl"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nundoing the scaling applied in scale_dict_wet.", "response": "def unscale_dict_wet(C):\n    \"\"\"Undo the scaling applied in `scale_dict_wet`.\"\"\"\n    return {k: _scale_dict[k] * v for k, v in C.items()}"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_reqs(req_path, req_list):\n    with open(req_path, 'w') as f:\n        
f.write('\\n'.join(req_list) + \"\\n\")", "response": "Writes a list of requirements to a file"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsubmitting a new process locally.", "response": "def submit(self, data, runtime_dir, argv):\n        \"\"\"Run process locally.\n\n        For details, see\n        :meth:`~resolwe.flow.managers.workload_connectors.base.BaseConnector.submit`.\n        \"\"\"\n        logger.debug(__(\n            \"Connector '{}' running for Data with id {} ({}).\",\n            self.__class__.__module__,\n            data.id,\n            repr(argv)\n        ))\n        subprocess.Popen(\n            argv,\n            cwd=runtime_dir,\n            stdin=subprocess.DEVNULL\n        ).wait()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_checksum_object_from_stream(\n    f, algorithm=d1_common.const.DEFAULT_CHECKSUM_ALGORITHM\n):\n    \"\"\"Calculate the checksum of a stream.\n\n    Args:\n      f: file-like object\n        Only requirement is a ``read()`` method that returns ``bytes``.\n\n      algorithm: str\n        Checksum algorithm, ``MD5`` or ``SHA1`` / ``SHA-1``.\n\n    Returns:\n      Populated Checksum PyXB object.\n\n    \"\"\"\n    checksum_str = calculate_checksum_on_stream(f, algorithm)\n    checksum_pyxb = d1_common.types.dataoneTypes.checksum(checksum_str)\n    checksum_pyxb.algorithm = algorithm\n    return checksum_pyxb", "response": "Calculate the checksum of a file-like object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncalculating the checksum of an iterator.", "response": "def create_checksum_object_from_iterator(\n    itr, algorithm=d1_common.const.DEFAULT_CHECKSUM_ALGORITHM\n):\n    \"\"\"Calculate the checksum of an iterator.\n\n    Args:\n      itr: iterable\n        Object which supports the iterator protocol.\n\n      algorithm: str\n        Checksum algorithm, ``MD5`` or ``SHA1`` / ``SHA-1``.\n\n    Returns:\n      Populated Checksum PyXB object.\n\n    \"\"\"\n    checksum_str = calculate_checksum_on_iterator(itr, algorithm)\n    checksum_pyxb = d1_common.types.dataoneTypes.checksum(checksum_str)\n    
checksum_pyxb.algorithm = algorithm\n    return checksum_pyxb"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncalculate the checksum of the bytes b and return a PyXB Checksum object.", "response": "def create_checksum_object_from_bytes(\n    b, algorithm=d1_common.const.DEFAULT_CHECKSUM_ALGORITHM\n):\n    \"\"\"Calculate the checksum of ``bytes``.\n\n    Warning:\n        This method requires the entire object to be buffered in (virtual) memory, which\n        should normally be avoided in production code.\n\n    Args:\n      b: bytes\n        Raw bytes\n\n      algorithm: str\n        Checksum algorithm, ``MD5`` or ``SHA1`` / ``SHA-1``.\n\n    Returns:\n      Populated PyXB Checksum object.\n\n    \"\"\"\n    checksum_str = calculate_checksum_on_bytes(b, algorithm)\n    checksum_pyxb = d1_common.types.dataoneTypes.checksum(checksum_str)\n    checksum_pyxb.algorithm = algorithm\n    return checksum_pyxb"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef calculate_checksum_on_stream(\n    f,\n    algorithm=d1_common.const.DEFAULT_CHECKSUM_ALGORITHM,\n    chunk_size=DEFAULT_CHUNK_SIZE,\n):\n    \"\"\"Calculate the checksum of a stream.\n\n    Args:\n      f: file-like object\n        Only requirement is a ``read()`` method that returns ``bytes``.\n\n      algorithm: str\n        Checksum algorithm, ``MD5`` or ``SHA1`` / ``SHA-1``.\n\n      chunk_size : int\n        Number of bytes to read from the file and add to the checksum at a time.\n\n    Returns:\n      str : Checksum as a hexadecimal string, with length decided by the algorithm.\n\n    \"\"\"\n    checksum_calc = get_checksum_calculator_by_dataone_designator(algorithm)\n    while True:\n        chunk = f.read(chunk_size)\n        if not chunk:\n            break\n        checksum_calc.update(chunk)\n    return checksum_calc.hexdigest()", "response": "Calculate the checksum of a file-like object."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncalculating the checksum of an iterator.", "response": "def calculate_checksum_on_iterator(\n    itr, 
algorithm=d1_common.const.DEFAULT_CHECKSUM_ALGORITHM\n):\n \"\"\"Calculate the checksum of an iterator.\n\n Args:\n itr: iterable\n Object which supports the iterator protocol.\n\n algorithm: str\n Checksum algorithm, ``MD5`` or ``SHA1`` / ``SHA-1``.\n\n Returns:\n str : Checksum as a hexadecimal string, with length decided by the algorithm.\n\n \"\"\"\n checksum_calc = get_checksum_calculator_by_dataone_designator(algorithm)\n for chunk in itr:\n checksum_calc.update(chunk)\n return checksum_calc.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef calculate_checksum_on_bytes(\n b, algorithm=d1_common.const.DEFAULT_CHECKSUM_ALGORITHM\n):\n \"\"\"Calculate the checksum of ``bytes``.\n\n Warning: This method requires the entire object to be buffered in (virtual) memory,\n which should normally be avoided in production code.\n\n Args:\n b: bytes\n Raw bytes\n\n algorithm: str\n Checksum algorithm, ``MD5`` or ``SHA1`` / ``SHA-1``.\n\n Returns:\n str : Checksum as a hexadecimal string, with length decided by the algorithm.\n\n \"\"\"\n checksum_calc = get_checksum_calculator_by_dataone_designator(algorithm)\n checksum_calc.update(b)\n return checksum_calc.hexdigest()", "response": "Calculate the checksum of the given bytes."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning True if checksums are equal.", "response": "def are_checksums_equal(checksum_a_pyxb, checksum_b_pyxb):\n \"\"\"Determine if checksums are equal.\n\n Args:\n checksum_a_pyxb, checksum_b_pyxb: PyXB Checksum objects to compare.\n\n Returns:\n bool\n - **True**: The checksums contain the same hexadecimal values calculated with\n the same algorithm. 
Identical checksums guarantee (for all practical\n purposes) that the checksums were calculated from the same sequence of bytes.\n - **False**: The checksums were calculated with the same algorithm but the\n hexadecimal values are different.\n\n Raises:\n ValueError\n The checksums were calculated with different algorithms, hence cannot be\n compared.\n\n \"\"\"\n if checksum_a_pyxb.algorithm != checksum_b_pyxb.algorithm:\n raise ValueError(\n 'Cannot compare checksums calculated with different algorithms. '\n 'a=\"{}\" b=\"{}\"'.format(checksum_a_pyxb.algorithm, checksum_b_pyxb.algorithm)\n )\n return checksum_a_pyxb.value().lower() == checksum_b_pyxb.value().lower()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a string representation of a PyXB Checksum object.", "response": "def format_checksum(checksum_pyxb):\n \"\"\"Create string representation of a PyXB Checksum object.\n\n Args:\n PyXB Checksum object\n\n Returns:\n str : Combined hexadecimal value and algorithm name.\n\n \"\"\"\n return '{}/{}'.format(\n checksum_pyxb.algorithm.upper().replace('-', ''), checksum_pyxb.value().lower()\n )"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef handle(self, *args, **kwargs):\n listener = ExecutorListener(redis_params=getattr(settings, 'FLOW_MANAGER', {}).get('REDIS_CONNECTION', {}))\n\n def _killer(signum, frame):\n \"\"\"Kill the listener on receipt of a signal.\"\"\"\n listener.terminate()\n signal(SIGINT, _killer)\n signal(SIGTERM, _killer)\n\n async def _runner():\n \"\"\"Run the listener instance.\"\"\"\n if kwargs['clear_queue']:\n await listener.clear_queue()\n async with listener:\n pass\n\n loop = asyncio.new_event_loop()\n loop.run_until_complete(_runner())\n loop.close()", "response": "Run the executor listener. 
This method never returns."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\napply a projection to a single resource.", "response": "def apply_subfield_projection(field, value, deep=False):\n \"\"\"Apply projection from request context.\n\n The passed dictionary may be mutated.\n\n :param field: An instance of `Field` or `Serializer`\n :type field: `Field` or `Serializer`\n :param value: Dictionary to apply the projection to\n :type value: dict\n :param deep: Also process all deep projections\n :type deep: bool\n \"\"\"\n # Discover the root manually. We cannot use either `self.root` or `self.context`\n # due to a bug with incorrect caching (see DRF issue #5087).\n prefix = []\n root = field\n while root.parent is not None:\n # Skip anonymous serializers (e.g., intermediate ListSerializers).\n if root.field_name:\n prefix.append(root.field_name)\n root = root.parent\n prefix = prefix[::-1]\n\n context = getattr(root, '_context', {})\n\n # If there is no request, we cannot perform filtering.\n request = context.get('request')\n if request is None:\n return value\n\n filtered = set(request.query_params.get('fields', '').split(FIELD_SEPARATOR))\n filtered.discard('')\n if not filtered:\n # If there are no fields specified in the filter, return all fields.\n return value\n\n # Extract projection for current and deeper levels.\n current_level = len(prefix)\n current_projection = []\n for item in filtered:\n item = item.split(FIELD_DEREFERENCE)\n if len(item) <= current_level:\n continue\n\n if item[:current_level] == prefix:\n if deep:\n current_projection.append(item[current_level:])\n else:\n current_projection.append([item[current_level]])\n\n if deep and not current_projection:\n # For deep projections, an empty projection means that all fields should\n # be returned without any projection.\n return value\n\n # Apply projection.\n return apply_projection(current_projection, value)"} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 script for\napplying a projection to the value.", "response": "def apply_projection(projection, value):\n \"\"\"Apply projection.\"\"\"\n if isinstance(value, Sequence):\n # Apply projection to each item in the list.\n return [\n apply_projection(projection, item)\n for item in value\n ]\n elif not isinstance(value, Mapping):\n # Non-dictionary values are simply ignored.\n return value\n\n # Extract projection for current level.\n try:\n current_projection = [p[0] for p in projection]\n except IndexError:\n return value\n\n # Apply projection.\n for name in list(value.keys()):\n if name not in current_projection:\n value.pop(name)\n elif isinstance(value[name], dict):\n # Apply projection recursively.\n value[name] = apply_projection(\n [p[1:] for p in projection if p[0] == name],\n value[name]\n )\n\n return value"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate Relation object and add partitions of Entities.", "response": "def create(self, validated_data):\n \"\"\"Create ``Relation`` object and add partitions of ``Entities``.\"\"\"\n # `partitions` field is renamed to `relationpartition_set` based on source of nested serializer\n partitions = validated_data.pop('relationpartition_set')\n\n with transaction.atomic():\n instance = Relation.objects.create(**validated_data)\n self._create_partitions(instance, partitions)\n\n return instance"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the relation with the given data.", "response": "def update(self, instance, validated_data):\n \"\"\"Update ``Relation``.\"\"\"\n # `partitions` field is renamed to `relationpartition_set` based on source of nested serializer\n partitions = validated_data.pop('relationpartition_set', None)\n\n with transaction.atomic():\n instance = super().update(instance, validated_data)\n\n if partitions is not None:\n # TODO: Apply the diff instead of recreating all objects.\n 
instance.relationpartition_set.all().delete()\n self._create_partitions(instance, partitions)\n\n return instance"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef output(file_like_object, path, verbose=False):\n if not path:\n for line in file_like_object:\n if verbose:\n print_info(line.rstrip())\n else:\n print(line.rstrip())\n else:\n try:\n object_file = open(os.path.expanduser(path), \"w\", encoding=\"utf-8\")\n shutil.copyfileobj(file_like_object, object_file)\n object_file.close()\n except EnvironmentError as xxx_todo_changeme:\n (errno, strerror) = xxx_todo_changeme.args\n error_line_list = [\n \"Could not write to object_file: {}\".format(path),\n \"I/O error({}): {}\".format(errno, strerror),\n ]\n error_message = \"\\n\".join(error_line_list)\n raise d1_cli.impl.exceptions.CLIError(error_message)", "response": "Display or save file like object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _print_level(level, msg):\n for l in str(msg.rstrip()).split(\"\\n\"):\n print(\"{0:<9s}{1}\".format(level, str(l)))", "response": "Print the information in Unicode safe manner."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding the first line of process definition.", "response": "def get_process_definition_start(fname, slug):\n \"\"\"Find the first line of process definition.\n\n The first line of process definition is the line with a slug.\n\n :param str fname: Path to filename with processes\n :param string slug: process slug\n :return: line where the process definiton starts\n :rtype: int\n\n \"\"\"\n with open(fname) as file_:\n for i, line in enumerate(file_):\n if re.search(r'slug:\\s*{}'.format(slug), line):\n return i + 1\n # In case starting line is not found just return first line\n return 1"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_processes(process_dir, 
base_source_uri):\n global PROCESS_CACHE # pylint: disable=global-statement\n if PROCESS_CACHE is not None:\n return PROCESS_CACHE\n\n all_process_files = []\n process_file_extensions = ['*.yaml', '*.yml']\n for root, _, filenames in os.walk(process_dir):\n for extension in process_file_extensions:\n for filename in fnmatch.filter(filenames, extension):\n all_process_files.append(os.path.join(root, filename))\n\n def read_yaml_file(fname):\n \"\"\"Read the yaml file.\"\"\"\n with open(fname) as f:\n return yaml.load(f, Loader=yaml.FullLoader)\n\n processes = []\n for process_file in all_process_files:\n processes_in_file = read_yaml_file(process_file)\n for process in processes_in_file:\n # This section finds the line in file where the\n # defintion of the process starts. (there are\n # multiple process definition in some files).\n startline = get_process_definition_start(process_file, process['slug'])\n\n # Put together URL to starting line of process definition.\n process['source_uri'] = base_source_uri + process_file[len(process_dir) + 1:] + '#L' + str(startline)\n\n if 'category' not in process:\n process['category'] = 'uncategorized'\n\n processes.append(process)\n\n PROCESS_CACHE = processes\n return processes", "response": "Find processes in a directory and return a dictionary of processes where keys are URLs pointing to processes source code and values are processes definitions parsed from YAML files."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setup(app):\n app.add_config_value('autoprocess_process_dir', '', 'env')\n app.add_config_value('autoprocess_source_base_url', '', 'env')\n app.add_config_value('autoprocess_definitions_uri', '', 'env')\n\n app.add_directive('autoprocess', AutoProcessDirective)\n app.add_directive('autoprocesscategory', AutoProcessCategoryDirective)\n app.add_directive('autoprocesstype', AutoProcessTypesDirective)\n\n # The setup() function can return a dictionary. 
This is treated by\n # Sphinx as metadata of the extension:\n return {'version': '0.2'}", "response": "Register directives.\n\n When sphinx loads the extension (= imports the extension module) it\n also executes the setup() function. Setup is the way extension\n informs Sphinx about everything that the extension enables: which\n config_values are introduced, which custom nodes/directives/roles\n and which events are defined in extension.\n\n In this case, only one new directive is created. All used nodes are\n constructed from already existing nodes in docutils.nodes package."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_field(self, field_name, field_body):\n name = nodes.field_name()\n name += nodes.Text(field_name)\n\n paragraph = nodes.paragraph()\n if isinstance(field_body, str):\n # This is the case when field_body is just a string:\n paragraph += nodes.Text(field_body)\n else:\n # This is the case when field_body is a complex node:\n # useful when constructing nested field lists\n paragraph += field_body\n\n body = nodes.field_body()\n body += paragraph\n\n field = nodes.field()\n field.extend([name, body])\n return field", "response": "Fill content into nodes. 
field instance with given name and body."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_properties_list(self, field):\n properties_list = nodes.field_list()\n\n # changing the order of elements in this list affects\n # the order in which they are displayed\n property_names = ['label', 'type', 'description', 'required',\n 'disabled', 'hidden', 'default', 'placeholder',\n 'validate_regex', 'choices', 'collapse', 'group']\n\n for name in property_names:\n if name not in field:\n continue\n\n value = field[name]\n\n # Value should be formatted in code-style (=literal) mode\n if name in ['type', 'default', 'placeholder', 'validate_regex']:\n literal_node = nodes.literal(str(value), str(value))\n properties_list += self.make_field(name, literal_node)\n\n # Special formating of ``value`` is needed if name == 'choices'\n elif name == 'choices':\n bullet_list = nodes.bullet_list()\n for choice in value:\n label = nodes.Text(choice['label'] + ': ')\n val = nodes.literal(choice['value'], choice['value'])\n\n paragraph = nodes.paragraph()\n paragraph += label\n paragraph += val\n list_item = nodes.list_item()\n list_item += paragraph\n bullet_list += list_item\n\n properties_list += self.make_field(name, bullet_list)\n\n else:\n properties_list += self.make_field(name, str(value))\n\n return properties_list", "response": "Fill the given field into a properties list and return it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate a process definition header.", "response": "def make_process_header(self, slug, typ, version, source_uri, description, inputs):\n \"\"\"Generate a process definition header.\n\n :param str slug: process' slug\n :param str typ: process' type\n :param str version: process' version\n :param str source_uri: url to the process definition\n :param str description: process' description\n :param dict inputs: process' inputs\n\n \"\"\"\n node = 
addnodes.desc()\n signode = addnodes.desc_signature(slug, '')\n node.append(signode)\n\n node['objtype'] = node['desctype'] = typ\n\n signode += addnodes.desc_annotation(typ, typ, classes=['process-type'])\n signode += addnodes.desc_addname('', '')\n signode += addnodes.desc_name(slug + ' ', slug + ' ')\n\n paramlist = addnodes.desc_parameterlist()\n\n for field_schema, _, _ in iterate_schema({}, inputs, ''):\n field_type = field_schema['type']\n field_name = field_schema['name']\n\n field_default = field_schema.get('default', None)\n field_default = '' if field_default is None else '={}'.format(field_default)\n\n param = addnodes.desc_parameter('', '', noemph=True)\n param += nodes.emphasis(field_type, field_type, classes=['process-type'])\n # separate by non-breaking space in the output\n param += nodes.strong(text='\\xa0\\xa0' + field_name)\n\n paramlist += param\n\n signode += paramlist\n signode += nodes.reference('', nodes.Text('[Source: v{}]'.format(version)),\n refuri=source_uri, classes=['viewcode-link'])\n\n desc = nodes.paragraph()\n desc += nodes.Text(description, description)\n\n return [node, desc]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfill the content of the process definiton node.", "response": "def make_process_node(self, process):\n \"\"\"Fill the content of process definiton node.\n\n :param dict process: process data as given from yaml.load function\n :return: process node\n\n \"\"\"\n name = process['name']\n slug = process['slug']\n typ = process['type']\n version = process['version']\n description = process.get('description', '')\n source_uri = process['source_uri']\n inputs = process.get('input', [])\n outputs = process.get('output', [])\n\n # Make process name a section title:\n section = nodes.section(ids=['process-' + slug])\n section += nodes.title(name, name)\n\n # Make process header:\n section += self.make_process_header(slug, typ, version, source_uri, description, inputs)\n\n # Make inputs 
section:\n container_node = nodes.container(classes=['toggle'])\n container_header = nodes.paragraph(classes=['header'])\n container_header += nodes.strong(text='Input arguments')\n container_node += container_header\n\n container_body = nodes.container()\n for field_schema, _, path in iterate_schema({}, inputs, ''):\n container_body += nodes.strong(text=path)\n container_body += self.make_properties_list(field_schema)\n\n container_node += container_body\n section += container_node\n\n # Make outputs section:\n container_node = nodes.container(classes=['toggle'])\n container_header = nodes.paragraph(classes=['header'])\n container_header += nodes.strong(text='Output results')\n container_node += container_header\n\n container_body = nodes.container()\n for field_schema, _, path in iterate_schema({}, outputs, ''):\n container_body += nodes.strong(text=path)\n container_body += self.make_properties_list(field_schema)\n\n container_node += container_body\n section += container_node\n\n return [section, addnodes.index(entries=[('single', name, 'process-' + slug, '', None)])]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run(self):\n config = self.state.document.settings.env.config\n\n # Get all processes:\n processes = get_processes(config.autoprocess_process_dir, config.autoprocess_source_base_url)\n process_nodes = []\n\n for process in sorted(processes, key=itemgetter('name')):\n process_nodes.extend(self.make_process_node(process))\n\n return process_nodes", "response": "Create a list of process definitions."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a category tree.", "response": "def run(self):\n \"\"\"Create a category tree.\"\"\"\n config = self.state.document.settings.env.config\n\n # Group processes by category\n processes = get_processes(config.autoprocess_process_dir, config.autoprocess_source_base_url)\n 
processes.sort(key=itemgetter('category'))\n categorized_processes = {k: list(g) for k, g in groupby(processes, itemgetter('category'))}\n\n # Build category tree\n category_sections = {'': nodes.container(ids=['categories'])}\n top_categories = []\n\n for category in sorted(categorized_processes.keys()):\n category_path = ''\n\n for category_node in category.split(':'):\n parent_category_path = category_path\n category_path += '{}:'.format(category_node)\n\n if category_path in category_sections:\n continue\n\n category_name = category_node.capitalize()\n\n section = nodes.section(ids=['category-' + category_node])\n section += nodes.title(category_name, category_name)\n\n # Add process list\n category_key = category_path[:-1]\n if category_key in categorized_processes:\n listnode = nodes.bullet_list()\n section += listnode\n\n for process in categorized_processes[category_key]:\n par = nodes.paragraph()\n\n node = nodes.reference('', process['name'], internal=True)\n node['refuri'] = config.autoprocess_definitions_uri + '#process-' + process['slug']\n node['reftitle'] = process['name']\n\n par += node\n listnode += nodes.list_item('', par)\n\n category_sections[parent_category_path] += section\n category_sections[category_path] = section\n\n if parent_category_path == '':\n top_categories.append(section)\n\n # Return top sections only\n return top_categories"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run(self):\n config = self.state.document.settings.env.config\n\n # Group processes by category\n processes = get_processes(config.autoprocess_process_dir, config.autoprocess_source_base_url)\n processes.sort(key=itemgetter('type'))\n processes_by_types = {k: list(g) for k, g in groupby(processes, itemgetter('type'))}\n\n listnode = nodes.bullet_list()\n\n for typ in sorted(processes_by_types.keys()):\n par = nodes.paragraph()\n par += nodes.literal(typ, typ)\n par += nodes.Text(' - ')\n\n processes = 
sorted(processes_by_types[typ], key=itemgetter('name'))\n last_process = processes[-1]\n for process in processes:\n node = nodes.reference('', process['name'], internal=True)\n node['refuri'] = config.autoprocess_definitions_uri + '#process-' + process['slug']\n node['reftitle'] = process['name']\n par += node\n if process != last_process:\n par += nodes.Text(', ')\n\n listnode += nodes.list_item('', par)\n\n return [listnode]", "response": "Create a type list."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\nasync def get(self, file_stream, pid, vendor_specific=None):\n async with await self._retry_request(\n \"get\", [\"object\", pid], vendor_specific=vendor_specific\n ) as response:\n self._assert_valid_response(response)\n async for chunk_str, _ in response.content.iter_chunks():\n file_stream.write(chunk_str)", "response": "MNRead. get - Retrieve the SciObj bytes and write them to a stream."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsends an object synchronization request to the CN.", "response": "async def synchronize(self, pid, vendor_specific=None):\n \"\"\"Send an object synchronization request to the CN.\"\"\"\n return await self._request_pyxb(\n \"post\",\n [\"synchronize\", pid],\n {},\n mmp_dict={\"pid\": pid},\n vendor_specific=vendor_specific,\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nencode any datetime query parameters to ISO8601.", "response": "def _datetime_to_iso8601(self, query_dict):\n \"\"\"Encode any datetime query parameters to ISO8601.\"\"\"\n return {\n k: v if not isinstance(v, datetime.datetime) else v.isoformat()\n for k, v in list(query_dict.items())\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_representation(self, value):\n value = apply_subfield_projection(self, value, deep=True)\n return 
super().to_representation(value)", "response": "Project outgoing native value."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef synchronizeResponse(self, pid, vendorSpecific=None):\n        mmp_dict = {'pid': pid}\n        return self.POST(['synchronize'], fields=mmp_dict, headers=vendorSpecific)", "response": "CNRead.synchronize - POST /synchronize. pid - PID of the current session. Returns True if successful, False otherwise."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsee also: synchronizeResponse", "response": "def synchronize(self, pid, vendorSpecific=None):\n        \"\"\"See Also: synchronizeResponse() Args: pid: vendorSpecific:\n\n        Returns:\n\n        \"\"\"\n        response = self.synchronizeResponse(pid, vendorSpecific)\n        return self._read_boolean_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validate_bagit_file(bagit_path):\n    _assert_zip_file(bagit_path)\n    bagit_zip = zipfile.ZipFile(bagit_path)\n    manifest_info_list = _get_manifest_info_list(bagit_zip)\n    _validate_checksums(bagit_zip, manifest_info_list)\n    return True", "response": "Check if a BagIt file is valid."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a stream containing a BagIt zip archive.", "response": "def create_bagit_stream(dir_name, payload_info_list):\n    \"\"\"Create a stream containing a BagIt zip archive.\n\n    Args:\n      dir_name : str\n        The name of the root directory in the zip file, under which all the files\n        are placed (avoids \"zip bombs\").\n\n      payload_info_list: list\n        List of payload_info_dict, each dict describing a file.\n\n        - keys: pid, filename, iter, checksum, checksum_algorithm\n        - If the filename is None, the pid is used for the filename.\n\n    \"\"\"\n    zip_file = zipstream.ZipFile(mode='w', compression=zipstream.ZIP_DEFLATED)\n    _add_path(dir_name, payload_info_list)\n    
payload_byte_count, payload_file_count = _add_payload_files(\n zip_file, payload_info_list\n )\n tag_info_list = _add_tag_files(\n zip_file, dir_name, payload_info_list, payload_byte_count, payload_file_count\n )\n _add_manifest_files(zip_file, dir_name, payload_info_list, tag_info_list)\n _add_tag_manifest_file(zip_file, dir_name, tag_info_list)\n return zip_file"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _add_path(dir_name, payload_info_list):\n for payload_info_dict in payload_info_list:\n file_name = payload_info_dict['filename'] or payload_info_dict['pid']\n payload_info_dict['path'] = d1_common.utils.filesystem.gen_safe_path(\n dir_name, 'data', file_name\n )", "response": "Add a key with the path to each payload_info_dict."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd the payload files to the zip.", "response": "def _add_payload_files(zip_file, payload_info_list):\n \"\"\"Add the payload files to the zip.\"\"\"\n payload_byte_count = 0\n payload_file_count = 0\n for payload_info_dict in payload_info_list:\n zip_file.write_iter(payload_info_dict['path'], payload_info_dict['iter'])\n payload_byte_count += payload_info_dict['iter'].size\n payload_file_count += 1\n return payload_byte_count, payload_file_count"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _add_tag_files(\n zip_file, dir_name, payload_info_list, payload_byte_count, payload_file_count\n):\n \"\"\"Generate the tag files and add them to the zip.\"\"\"\n tag_info_list = []\n _add_tag_file(zip_file, dir_name, tag_info_list, _gen_bagit_text_file_tup())\n _add_tag_file(\n zip_file,\n dir_name,\n tag_info_list,\n _gen_bag_info_file_tup(payload_byte_count, payload_file_count),\n )\n _add_tag_file(\n zip_file, dir_name, tag_info_list, _gen_pid_mapping_file_tup(payload_info_list)\n )\n return tag_info_list", "response": "Generate the tag files 
and add them to the zip."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngenerating the manifest files and add them to the zip.", "response": "def _add_manifest_files(zip_file, dir_name, payload_info_list, tag_info_list):\n \"\"\"Generate the manifest files and add them to the zip.\"\"\"\n for checksum_algorithm in _get_checksum_algorithm_set(payload_info_list):\n _add_tag_file(\n zip_file,\n dir_name,\n tag_info_list,\n _gen_manifest_file_tup(payload_info_list, checksum_algorithm),\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _add_tag_manifest_file(zip_file, dir_name, tag_info_list):\n _add_tag_file(\n zip_file, dir_name, tag_info_list, _gen_tag_manifest_file_tup(tag_info_list)\n )", "response": "Generate the tag manifest file and add it to the zip."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _add_tag_file(zip_file, dir_name, tag_info_list, tag_tup):\n tag_name, tag_str = tag_tup\n tag_path = d1_common.utils.filesystem.gen_safe_path(dir_name, tag_name)\n tag_iter = _create_and_add_tag_iter(zip_file, tag_path, tag_str)\n tag_info_list.append(\n {\n 'path': tag_path,\n 'checksum': d1_common.checksum.calculate_checksum_on_iterator(\n tag_iter, TAG_CHECKSUM_ALGO\n ),\n }\n )", "response": "Add a tag file to zip_file and record info for the tag manifest file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pyxb_to_dict(node_list_pyxb):\n f_dict = {}\n for f_pyxb in sorted(node_list_pyxb.node, key=lambda x: x.identifier.value()):\n f_dict[f_pyxb.identifier.value()] = {\n 'name': f_pyxb.name,\n 'description': f_pyxb.description,\n 'base_url': f_pyxb.baseURL,\n 'ping': f_pyxb.ping,\n 'replicate': f_pyxb.replicate,\n 'synchronize': f_pyxb.synchronize,\n 'type': f_pyxb.type,\n 'state': f_pyxb.state,\n }\n # TODO:\n # f_pyxb.services\n # 
f_pyxb.synchronization\n # f_pyxb.subject\n # f_pyxb.contactSubject\n # f_pyxb.nodeReplicationPolicy,\n\n return f_dict", "response": "Convert a node_list_pyxb to a dict."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if the user has permission to edit the object.", "response": "def has_object_permission(self, request, view, obj):\n \"\"\"Check object permissions.\"\"\"\n # admins can do anything\n if request.user.is_superuser:\n return True\n\n # `share` permission is required for editing permissions\n if 'permissions' in view.action:\n self.perms_map['POST'] = ['%(app_label)s.share_%(model_name)s']\n\n if view.action in ['add_data', 'remove_data']:\n self.perms_map['POST'] = ['%(app_label)s.add_%(model_name)s']\n\n if hasattr(view, 'get_queryset'):\n queryset = view.get_queryset()\n else:\n queryset = getattr(view, 'queryset', None)\n\n assert queryset is not None, (\n 'Cannot apply DjangoObjectPermissions on a view that '\n 'does not set `.queryset` or have a `.get_queryset()` method.'\n )\n\n model_cls = queryset.model\n user = request.user\n\n perms = self.get_required_object_permissions(request.method, model_cls)\n\n if not user.has_perms(perms, obj) and not AnonymousUser().has_perms(perms, obj):\n # If the user does not have permissions we need to determine if\n # they have read permissions to see 403, or not, and simply see\n # a 404 response.\n\n if request.method in permissions.SAFE_METHODS:\n # Read permissions already checked and failed, no need\n # to make another lookup.\n raise Http404\n\n read_perms = self.get_required_object_permissions('GET', model_cls)\n if not user.has_perms(read_perms, obj):\n raise Http404\n\n # Has read permissions.\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_active_subject_set(self, request):\n # Handle complete certificate in vendor specific extension.\n if 
django.conf.settings.DEBUG_GMN:\n if 'HTTP_VENDOR_INCLUDE_CERTIFICATE' in request.META:\n request.META[\n 'SSL_CLIENT_CERT'\n ] = self.pem_in_http_header_to_pem_in_string(\n request.META['HTTP_VENDOR_INCLUDE_CERTIFICATE']\n )\n\n # Add subjects from any provided certificate and JWT and store them in\n # the Django request obj.\n cert_primary_str, cert_equivalent_set = d1_gmn.app.middleware.session_cert.get_subjects(\n request\n )\n jwt_subject_list = d1_gmn.app.middleware.session_jwt.validate_jwt_and_get_subject_list(\n request\n )\n primary_subject_str = cert_primary_str\n all_subjects_set = (\n cert_equivalent_set | {cert_primary_str} | set(jwt_subject_list)\n )\n if len(jwt_subject_list) == 1:\n jwt_primary_str = jwt_subject_list[0]\n if jwt_primary_str != cert_primary_str:\n if cert_primary_str == d1_common.const.SUBJECT_PUBLIC:\n primary_subject_str = jwt_primary_str\n else:\n logging.warning(\n 'Both a certificate and a JWT were provided and the primary '\n 'subjects differ. Using the certificate for primary subject and '\n 'the JWT as equivalent.'\n )\n\n logging.info('Primary active subject: {}'.format(primary_subject_str))\n logging.info(\n 'All active subjects: {}'.format(', '.join(sorted(all_subjects_set)))\n )\n\n # Handle list of subjects in vendor specific extension:\n if django.conf.settings.DEBUG_GMN:\n # This is added to any subjects obtained from cert and/or JWT.\n if 'HTTP_VENDOR_INCLUDE_SUBJECTS' in request.META:\n all_subjects_set.update(\n request.META['HTTP_VENDOR_INCLUDE_SUBJECTS'].split('\\t')\n )\n\n return primary_subject_str, all_subjects_set", "response": "Get a set containing all subjects for which the current connection has been successfully authenticated."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef log_setup(debug_bool):\n\n level = logging.DEBUG if debug_bool else logging.INFO\n\n logging.config.dictConfig(\n {\n \"version\": 1,\n \"disable_existing_loggers\":
False,\n \"formatters\": {\n \"verbose\": {\n \"format\": \"%(asctime)s %(levelname)-8s %(name)s %(module)s \"\n \"%(process)d %(thread)d %(message)s\",\n \"datefmt\": \"%Y-%m-%d %H:%M:%S\",\n }\n },\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n \"level\": level,\n \"stream\": \"ext://sys.stdout\",\n }\n },\n \"loggers\": {\n \"\": {\n \"handlers\": [\"console\"],\n \"level\": level,\n \"class\": \"logging.StreamHandler\",\n }\n },\n }\n )", "response": "Set up logging.\n\n We output only to stdout. Instead of also writing to a log file, redirect stdout to\n a log file when the script is executed from cron."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef connect(self, dsn):\n self.con = psycopg2.connect(dsn)\n self.cur = self.con.cursor(cursor_factory=psycopg2.extras.DictCursor)\n # autocommit: Disable automatic transactions\n self.con.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)", "response": "Connect to the database."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a query parameter uniformly for GET and POST requests.", "response": "def get_query_param(self, key, default=None):\n \"\"\"Get query parameter uniformly for GET and POST requests.\"\"\"\n value = self.request.query_params.get(key, None)\n if value is None:\n value = self.request.data.get(key, None)\n if value is None:\n value = default\n return value"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget combined query parameters.", "response": "def get_query_params(self):\n \"\"\"Get combined query parameters (GET and POST).\"\"\"\n params = self.request.query_params.copy()\n params.update(self.request.data)\n return params"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nordering given search by the ordering parameter given in request.", "response": "def 
order_search(self, search):\n \"\"\"Order given search by the ordering parameter given in request.\n\n :param search: ElasticSearch query object\n\n \"\"\"\n ordering = self.get_query_param('ordering', self.ordering)\n if not ordering:\n return search\n\n sort_fields = []\n for raw_ordering in ordering.split(','):\n ordering_field = raw_ordering.lstrip('-')\n if ordering_field not in self.ordering_fields:\n raise ParseError('Ordering by `{}` is not supported.'.format(ordering_field))\n\n ordering_field = self.ordering_map.get(ordering_field, ordering_field)\n direction = '-' if raw_ordering[0] == '-' else ''\n sort_fields.append('{}{}'.format(direction, ordering_field))\n\n return search.sort(*sort_fields)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfiltering given search by the filter parameter given in request.", "response": "def filter_search(self, search):\n \"\"\"Filter given search by the filter parameter given in request.\n\n :param search: ElasticSearch query object\n\n \"\"\"\n builder = QueryBuilder(\n self.filtering_fields,\n self.filtering_map,\n self\n )\n search, unmatched = builder.build(search, self.get_query_params())\n\n # Ensure that no unsupported arguments were used.\n for argument in self.get_always_allowed_arguments():\n unmatched.pop(argument, None)\n\n if unmatched:\n msg = 'Unsupported parameter(s): {}. 
Please use a combination of: {}.'.format(\n ', '.join(unmatched),\n ', '.join(self.filtering_fields),\n )\n raise ParseError(msg)\n\n return search"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfilters given query based on permissions of the user in the request.", "response": "def filter_permissions(self, search):\n \"\"\"Filter given query based on permissions of the user in the request.\n\n :param search: ElasticSearch query object\n\n \"\"\"\n user = self.request.user\n if user.is_superuser:\n return search\n if user.is_anonymous:\n user = get_anonymous_user()\n\n filters = [Q('match', users_with_permissions=user.pk)]\n filters.extend([\n Q('match', groups_with_permissions=group.pk) for group in user.groups.all()\n ])\n filters.append(Q('match', public_permission=True))\n\n # `minimum_should_match` is set to 1 by default\n return search.query('bool', should=filters)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef paginate_response(self, queryset, serializers_kwargs={}):\n page = self.paginate_queryset(queryset)\n if page is not None:\n serializer = self.get_serializer(page, many=True, **serializers_kwargs)\n return self.get_paginated_response(serializer.data)\n\n serializer = self.get_serializer(queryset, many=True, **serializers_kwargs)\n return Response(serializer.data)", "response": "Optionally return paginated response."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef search(self):\n search = self.document_class().search() # pylint: disable=not-callable\n\n search = self.custom_filter(search)\n\n search = self.filter_search(search)\n search = self.order_search(search)\n search = self.filter_permissions(search)\n\n if search.count() > ELASTICSEARCH_SIZE:\n limit = self.paginator.get_limit(self.request)\n\n if not limit or limit > ELASTICSEARCH_SIZE:\n raise TooManyResults()\n\n search = 
search.extra(size=ELASTICSEARCH_SIZE)\n return search", "response": "Handle the search request."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncalling by FUSE when the attributes for a file or directory are required.", "response": "def getattr(self, path, fh):\n \"\"\"Called by FUSE when the attributes for a file or directory are required.\n\n Returns a dictionary with keys identical to the stat C structure of stat(2).\n st_atime, st_mtime and st_ctime should be floats. On OSX, st_nlink should count\n all files inside the directory. On Linux, only the subdirectories are counted.\n The 'st_dev' and 'st_blksize' fields are ignored. The 'st_ino' field is ignored\n except if the 'use_ino' mount option is given.\n\n This method gets very heavy traffic.\n\n \"\"\"\n self._raise_error_if_os_special_file(path)\n # log.debug(u'getattr(): {0}'.format(path))\n attribute = self._get_attributes_through_cache(path)\n # log.debug('getattr() returned attribute: {0}'.format(attribute))\n return self._stat_from_attributes(attribute)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncall by FUSE when a directory is opened. Returns a list of file and directory names for the directory.", "response": "def readdir(self, path, fh):\n \"\"\"Called by FUSE when a directory is opened.\n\n Returns a list of file and directory names for the directory.\n\n \"\"\"\n log.debug('readdir(): {}'.format(path))\n try:\n dir = self._directory_cache[path]\n except KeyError:\n dir = self._get_directory(path)\n self._directory_cache[path] = dir\n return dir"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncall by FUSE when a file is opened.", "response": "def open(self, path, flags):\n \"\"\"Called by FUSE when a file is opened.\n\n Determines if the provided path and open flags are valid.\n\n \"\"\"\n log.debug('open(): {}'.format(path))\n # ONEDrive is currently read only. 
Anything but read access is denied.\n if (flags & self._READ_ONLY_ACCESS_MODE) != os.O_RDONLY:\n self._raise_error_permission_denied(path)\n # Any file in the filesystem can be opened.\n attribute = self._get_attributes_through_cache(path)\n return attribute.is_dir()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nextracting an inline expression from the given text.", "response": "def get_inline_expression(self, text):\n \"\"\"Extract an inline expression from the given text.\"\"\"\n text = text.strip()\n if not text.startswith(self.inline_tags[0]) or not text.endswith(self.inline_tags[1]):\n return\n\n return text[2:-2]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfiltering queryset by entity label and position.", "response": "def _filter_queryset(self, queryset):\n \"\"\"Filter queryset by entity, label and position.\n\n Due to a bug in django-filter these filters have to be applied\n manually:\n https://github.com/carltongibson/django-filter/issues/883\n \"\"\"\n entities = self.request.query_params.getlist('entity')\n labels = self.request.query_params.getlist('label')\n positions = self.request.query_params.getlist('position')\n\n if labels and len(labels) != len(entities):\n raise exceptions.ParseError(\n 'If `labels` query parameter is given, also `entities` '\n 'must be given and they must be of the same length.'\n )\n\n if positions and len(positions) != len(entities):\n raise exceptions.ParseError(\n 'If `positions` query parameter is given, also `entities` '\n 'must be given and they must be of the same length.'\n )\n\n if entities:\n for entity, label, position in zip_longest(entities, labels, positions):\n filter_params = {'entities__pk': entity}\n if label:\n filter_params['relationpartition__label'] = label\n if position:\n filter_params['relationpartition__position'] = position\n\n queryset = queryset.filter(**filter_params)\n\n return queryset"} {"SOURCE": "codesearchnet", 
"instruction": "Make a summary of the following Python 3 code\ndef update(self, request, *args, **kwargs):\n instance = self.get_object()\n if (not request.user.has_perm('edit_collection', instance.collection)\n and not request.user.is_superuser):\n return Response(status=status.HTTP_401_UNAUTHORIZED)\n\n return super().update(request, *args, **kwargs)", "response": "Update the relation object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrebuilding indexes affected by the given permission.", "response": "def _process_permission(perm):\n \"\"\"Rebuild indexes affected by the given permission.\"\"\"\n # XXX: Optimize: rebuild only permissions, not whole document\n codename = perm.permission.codename\n if not codename.startswith('view') and not codename.startswith('owner'):\n return\n\n index_builder.build(perm.content_object)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse_response(response, encoding='utf-8'):\n return requests_toolbelt.multipart.decoder.MultipartDecoder.from_response(\n response, encoding\n ).parts", "response": "Parse a multipart Requests. Response into a tuple of BodyPart objects."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing a string representation of a multipart document into a tuple of BodyPart objects.", "response": "def parse_str(mmp_bytes, content_type, encoding='utf-8'):\n \"\"\"Parse multipart document bytes into a tuple of BodyPart objects.\n\n Args:\n mmp_bytes: bytes\n Multipart document.\n\n content_type : str\n Must be on the form, ``multipart/form-data; boundary=``, where\n ```` is the string that separates the parts of the multipart document\n in ``mmp_bytes``. 
In HTTP requests and responses, it is passed in the\n Content-Type header.\n\n encoding : str\n The coding used for the text in the HTML body.\n\n Returns:\n tuple of BodyPart\n Members: headers (CaseInsensitiveDict), content (bytes), text (Unicode),\n encoding (str).\n\n \"\"\"\n return requests_toolbelt.multipart.decoder.MultipartDecoder(\n mmp_bytes, content_type, encoding\n ).parts"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nnormalizes a tuple of BodyPart objects to a string.", "response": "def normalize(body_part_tup,):\n \"\"\"Normalize a tuple of BodyPart objects to a string.\n\n Normalization is done by sorting the body_parts by the Content- Disposition headers,\n which is typically on the form, ``form-data; name=\"name_of_part``.\n\n \"\"\"\n return '\\n\\n'.join(\n [\n '{}\\n\\n{}'.format(\n str(p.headers[b'Content-Disposition'], p.encoding), p.text\n )\n for p in sorted(\n body_part_tup, key=lambda p: p.headers[b'Content-Disposition']\n )\n ]\n )"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef is_multipart(header_dict):\n return (\n {k.lower(): v for k, v in header_dict.items()}\n .get('content-type', '')\n .startswith('multipart')\n )", "response": "Checks if the given dict contains a Content - Type key that starts with multipart."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _split_path_by_reserved_name(self, path):\n for i, e in enumerate(path):\n if e in self._resolvers or e == self._get_readme_filename():\n return path[:i], path[i], path[i + 1 :]\n raise d1_onedrive.impl.onedrive_exceptions.PathException(\n 'Invalid folder: %s' % str(path)\n )", "response": "Split the path by reserved name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate a human readable description of the folder in text format.", "response": "def 
_generate_readme_text(self, object_tree_path):\n \"\"\"Generate a human readable description of the folder in text format.\"\"\"\n\n wdef_folder = self._object_tree.get_source_tree_folder(object_tree_path)\n res = StringIO()\n if len(object_tree_path):\n folder_name = object_tree_path[-1]\n else:\n folder_name = 'root'\n header = 'ObjectTree Folder \"{}\"'.format(folder_name)\n res.write(header + '\\n')\n res.write('{}\\n\\n'.format('=' * len(header)))\n res.write(\n 'The content present in object_tree folders is determined by a list\\n'\n )\n res.write(\n 'of specific identifiers and by queries applied against the DataONE\\n'\n )\n res.write('search index.\\n\\n')\n res.write('Queries:\\n\\n')\n if len(wdef_folder['queries']):\n for query in wdef_folder['queries']:\n res.write('- {}\\n'.format(query))\n else:\n res.write('No queries specified at this level.\\n')\n res.write('\\n\\n')\n res.write('Identifiers:\\n\\n')\n if len(wdef_folder['identifiers']):\n for pid in wdef_folder['identifiers']:\n res.write('- {}\\n'.format(pid))\n else:\n res.write('No individual identifiers selected at this level.\\n')\n res.write('\\n\\n')\n res.write('Sub-folders:\\n\\n')\n if len(wdef_folder['collections']):\n for f in wdef_folder['collections']:\n res.write('- {}\\n'.format(f))\n else:\n res.write('No object_tree sub-folders are specified at this level.\\n')\n return res.getvalue().encode('utf-8')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning serialized data or list of ids depending on hydrate_data query param.", "response": "def _serialize_data(self, data):\n \"\"\"Return serialized data or list of ids, depending on `hydrate_data` query param.\"\"\"\n if self.request and self.request.query_params.get('hydrate_data', False):\n serializer = DataSerializer(data, many=True, read_only=True)\n serializer.bind('data', self)\n return serializer.data\n else:\n return [d.id for d in data]"} {"SOURCE": "codesearchnet", "instruction": "Create a 
Python 3 function to\nfilter object objects by permissions of user in request.", "response": "def _filter_queryset(self, perms, queryset):\n \"\"\"Filter object objects by permissions of user in request.\"\"\"\n user = self.request.user if self.request else AnonymousUser()\n return get_objects_for_user(user, perms, queryset)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_data(self, collection):\n data = self._filter_queryset('view_data', collection.data.all())\n\n return self._serialize_data(data)", "response": "Return serialized list of data objects on collection that user has view permission on."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_fields(self):\n fields = super(CollectionSerializer, self).get_fields()\n\n if self.request.method == \"GET\":\n fields['data'] = serializers.SerializerMethodField()\n else:\n fields['data'] = serializers.PrimaryKeyRelatedField(many=True, read_only=True)\n\n return fields", "response": "Dynamically adapt fields based on the current request."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns True if sysmeta_pyxb is a SystemMetadata PyXB object.", "response": "def is_sysmeta_pyxb(sysmeta_pyxb):\n \"\"\"Args: sysmeta_pyxb: Object that may or may not be a SystemMetadata PyXB object.\n\n Returns:\n bool:\n - ``True`` if ``sysmeta_pyxb`` is a SystemMetadata PyXB object.\n - ``False`` if ``sysmeta_pyxb`` is not a PyXB object or is a PyXB object of a\n type other than SystemMetadata.\n\n \"\"\"\n return (\n d1_common.type_conversions.is_pyxb_d1_type(sysmeta_pyxb)\n and d1_common.type_conversions.pyxb_get_type_name(sysmeta_pyxb)\n == 'SystemMetadata'\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef normalize_in_place(sysmeta_pyxb, reset_timestamps=False):\n if 
sysmeta_pyxb.accessPolicy is not None:\n sysmeta_pyxb.accessPolicy = d1_common.wrap.access_policy.get_normalized_pyxb(\n sysmeta_pyxb.accessPolicy\n )\n if getattr(sysmeta_pyxb, 'mediaType', False):\n d1_common.xml.sort_value_list_pyxb(sysmeta_pyxb.mediaType.property_)\n if getattr(sysmeta_pyxb, 'replicationPolicy', False):\n d1_common.xml.sort_value_list_pyxb(\n sysmeta_pyxb.replicationPolicy.preferredMemberNode\n )\n d1_common.xml.sort_value_list_pyxb(\n sysmeta_pyxb.replicationPolicy.blockedMemberNode\n )\n d1_common.xml.sort_elements_by_child_values(\n sysmeta_pyxb.replica,\n ['replicaVerified', 'replicaMemberNode', 'replicationStatus'],\n )\n sysmeta_pyxb.archived = bool(sysmeta_pyxb.archived)\n if reset_timestamps:\n epoch_dt = datetime.datetime(1970, 1, 1, tzinfo=d1_common.date_time.UTC())\n sysmeta_pyxb.dateUploaded = epoch_dt\n sysmeta_pyxb.dateSysMetadataModified = epoch_dt\n for replica_pyxb in getattr(sysmeta_pyxb, 'replica', []):\n replica_pyxb.replicaVerified = epoch_dt\n else:\n sysmeta_pyxb.dateUploaded = d1_common.date_time.round_to_nearest(\n sysmeta_pyxb.dateUploaded\n )\n sysmeta_pyxb.dateSysMetadataModified = d1_common.date_time.round_to_nearest(\n sysmeta_pyxb.dateSysMetadataModified\n )\n for replica_pyxb in getattr(sysmeta_pyxb, 'replica', []):\n replica_pyxb.replicaVerified = d1_common.date_time.round_to_nearest(\n replica_pyxb.replicaVerified\n )", "response": "Normalizes SystemMetadata PyXB object in - place."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef are_equivalent_pyxb(a_pyxb, b_pyxb, ignore_timestamps=False):\n normalize_in_place(a_pyxb, ignore_timestamps)\n normalize_in_place(b_pyxb, ignore_timestamps)\n a_xml = d1_common.xml.serialize_to_xml_str(a_pyxb)\n b_xml = d1_common.xml.serialize_to_xml_str(b_pyxb)\n are_equivalent = d1_common.xml.are_equivalent(a_xml, b_xml)\n if not are_equivalent:\n logger.debug('XML documents not equivalent:')\n 
logger.debug(d1_common.xml.format_diff_xml(a_xml, b_xml))\n return are_equivalent", "response": "Determine if two SystemMetadata PyXB objects are semantically equivalent."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndetermines if two SystemMetadata XML docs are semantically equivalent.", "response": "def are_equivalent_xml(a_xml, b_xml, ignore_timestamps=False):\n \"\"\"Determine if two SystemMetadata XML docs are semantically equivalent.\n\n Normalize then compare SystemMetadata XML docs for equivalency.\n\n Args:\n a_xml, b_xml: bytes\n UTF-8 encoded SystemMetadata XML docs to compare\n\n ignore_timestamps: bool\n ``True``: Timestamps in the SystemMetadata are ignored so that objects that are\n compared register as equivalent if only their timestamps differ.\n\n Returns:\n bool: **True** if SystemMetadata XML docs are semantically equivalent.\n\n Notes:\n The SystemMetadata is normalized by removing any redundant information and\n ordering all sections where there are no semantics associated with the order. 
The\n normalized SystemMetadata is intended to be semantically equivalent to the\n un-normalized one.\n\n \"\"\"\n return are_equivalent_pyxb(\n d1_common.xml.deserialize(a_xml),\n d1_common.xml.deserialize(b_xml),\n ignore_timestamps,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nclears the sysmeta_pyxb's elements.", "response": "def clear_elements(sysmeta_pyxb, clear_replica=True, clear_serial_version=True):\n \"\"\"{clear_replica} causes any replica information to be removed from the object.\n\n {clear_replica} ignores any differences in replica information, as this information\n is often different between MN and CN.\n\n \"\"\"\n if clear_replica:\n sysmeta_pyxb.replica = None\n if clear_serial_version:\n sysmeta_pyxb.serialVersion = None\n\n sysmeta_pyxb.replicationPolicy = None"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncopies elements specified in el_list from src_pyxb to dst_pyxb.", "response": "def update_elements(dst_pyxb, src_pyxb, el_list):\n \"\"\"Copy elements specified in ``el_list`` from ``src_pyxb`` to ``dst_pyxb``\n\n Only elements that are children of root are supported. See\n SYSMETA_ROOT_CHILD_LIST.\n\n If an element in ``el_list`` does not exist in ``src_pyxb``, it is removed from\n ``dst_pyxb``.\n\n \"\"\"\n invalid_element_set = set(el_list) - set(SYSMETA_ROOT_CHILD_LIST)\n if invalid_element_set:\n raise ValueError(\n 'Passed one or more invalid elements. 
invalid=\"{}\"'.format(\n ', '.join(sorted(list(invalid_element_set)))\n )\n )\n for el_str in el_list:\n setattr(dst_pyxb, el_str, getattr(src_pyxb, el_str, None))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\njoining action with the content type of obj.", "response": "def get_full_perm(perm, obj):\n \"\"\"Join action with the content type of ``obj``.\n\n Permission is returned in the format of ``_``.\n \"\"\"\n ctype = ContentType.objects.get_for_model(obj)\n # Camel case class names are converted into a space-separated\n # content types, so spaces have to be removed.\n ctype = str(ctype).replace(' ', '')\n\n return '{}_{}'.format(perm.lower(), ctype)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef copy_permissions(src_obj, dest_obj):\n def _process_permission(codename, user_or_group, dest_obj, relabel):\n \"\"\"Process single permission.\"\"\"\n if relabel:\n codename = change_perm_ctype(codename, dest_obj)\n if codename not in dest_all_perms:\n return # dest object doesn't have matching permission\n\n assign_perm(codename, user_or_group, dest_obj)\n\n src_obj_ctype = ContentType.objects.get_for_model(src_obj)\n dest_obj_ctype = ContentType.objects.get_for_model(dest_obj)\n dest_all_perms = get_all_perms(dest_obj)\n\n relabel = (src_obj_ctype != dest_obj_ctype)\n\n for perm in UserObjectPermission.objects.filter(object_pk=src_obj.pk, content_type=src_obj_ctype):\n _process_permission(perm.permission.codename, perm.user, dest_obj, relabel)\n\n for perm in GroupObjectPermission.objects.filter(object_pk=src_obj.pk, content_type=src_obj_ctype):\n _process_permission(perm.permission.codename, perm.group, dest_obj, relabel)", "response": "Copy permissions from src_obj to dest_obj."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget user by pk or username. 
Raise error if it doesn't exist.", "response": "def fetch_user(query):\n \"\"\"Get user by ``pk`` or ``username``. Raise error if it doesn't exist.\"\"\"\n user_filter = {'pk': query} if query.isdigit() else {'username': query}\n user_model = get_user_model()\n\n try:\n return user_model.objects.get(**user_filter)\n except user_model.DoesNotExist:\n raise exceptions.ParseError(\"Unknown user: {}\".format(query))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting group by pk or name. Raise error if it doesn't exist.", "response": "def fetch_group(query):\n \"\"\"Get group by ``pk`` or ``name``. Raise error if it doesn't exist.\"\"\"\n group_filter = {'pk': query} if query.isdigit() else {'name': query}\n\n try:\n return Group.objects.get(**group_filter)\n except Group.DoesNotExist:\n raise exceptions.ParseError(\"Unknown group: {}\".format(query))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_owner_permission(payload, allow_user_owner):\n for entity_type in ['users', 'groups']:\n for perm_type in ['add', 'remove']:\n for perms in payload.get(entity_type, {}).get(perm_type, {}).values():\n if 'owner' in perms:\n if entity_type == 'users' and allow_user_owner:\n continue\n\n if entity_type == 'groups':\n raise exceptions.ParseError(\"Owner permission cannot be assigned to a group\")\n\n raise exceptions.PermissionDenied(\"Only owners can grant/revoke owner permission\")", "response": "Raise PermissionDenied if owner is not allowed."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_public_permissions(payload):\n allowed_public_permissions = ['view', 'add', 'download']\n for perm_type in ['add', 'remove']:\n for perm in payload.get('public', {}).get(perm_type, []):\n if perm not in allowed_public_permissions:\n raise exceptions.PermissionDenied(\"Permissions for public users are too open\")", 
"response": "Raise PermissionDenied if public permissions are too open."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nraising PermissionDenied if user_pk is in the payload.", "response": "def check_user_permissions(payload, user_pk):\n \"\"\"Raise ``PermissionDenied`` if ``payload`` includes ``user_pk``.\"\"\"\n for perm_type in ['add', 'remove']:\n user_pks = payload.get('users', {}).get(perm_type, {}).keys()\n if user_pk in user_pks:\n raise exceptions.PermissionDenied(\"You cannot change your own permissions\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nremove all occurrences of permission from payload.", "response": "def remove_permission(payload, permission):\n \"\"\"Remove all occurrences of ``permission`` from ``payload``.\"\"\"\n payload = copy.deepcopy(payload)\n\n for entity_type in ['users', 'groups']:\n for perm_type in ['add', 'remove']:\n for perms in payload.get(entity_type, {}).get(perm_type, {}).values():\n if permission in perms:\n perms.remove(permission)\n\n for perm_type in ['add', 'remove']:\n perms = payload.get('public', {}).get(perm_type, [])\n if permission in perms:\n perms.remove(permission)\n\n return payload"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_permission(obj, data):\n full_permissions = get_all_perms(obj)\n\n def apply_perm(perm_func, perms, entity):\n \"\"\"Apply permissions using given ``perm_func``.\n\n ``perm_func`` is intended to be the ``assign_perm`` or\n ``remove_perm`` shortcut function from ``django-guardian``, but\n can be any function that accepts permission codename,\n user/group and object parameters (in this order).\n\n If given permission does not exist, ``exceptions.ParseError`` is\n raised.\n\n Passing \"ALL\" as the ``perms`` parameter calls ``perm_func``\n with the ``full_permissions`` list.\n\n :param func perm_func: Permissions function to be applied\n :param 
list perms: list of permissions to be applied\n :param entity: user or group to be passed to ``perm_func``\n :type entity: `~django.contrib.auth.models.User` or\n `~django.contrib.auth.models.Group`\n\n \"\"\"\n if perms == 'ALL':\n perms = full_permissions\n for perm in perms:\n perm_codename = get_full_perm(perm, obj)\n if perm_codename not in full_permissions:\n raise exceptions.ParseError(\"Unknown permission: {}\".format(perm))\n perm_func(perm_codename, entity, obj)\n\n def set_permissions(entity_type, perm_type):\n \"\"\"Set object permissions.\"\"\"\n perm_func = assign_perm if perm_type == 'add' else remove_perm\n fetch_fn = fetch_user if entity_type == 'users' else fetch_group\n\n for entity_id in data.get(entity_type, {}).get(perm_type, []):\n entity = fetch_fn(entity_id)\n if entity:\n perms = data[entity_type][perm_type][entity_id]\n apply_perm(perm_func, perms, entity)\n\n def set_public_permissions(perm_type):\n \"\"\"Set public permissions.\"\"\"\n perm_func = assign_perm if perm_type == 'add' else remove_perm\n user = AnonymousUser()\n perms = data.get('public', {}).get(perm_type, [])\n apply_perm(perm_func, perms, user)\n\n with transaction.atomic():\n set_permissions('users', 'add')\n set_permissions('users', 'remove')\n set_permissions('groups', 'add')\n set_permissions('groups', 'remove')\n set_public_permissions('add')\n set_public_permissions('remove')", "response": "Update object's permissions using given data."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef assign_contributor_permissions(obj, contributor=None):\n for permission in get_all_perms(obj):\n assign_perm(permission, contributor if contributor else obj.contributor, obj)", "response": "Assign all permissions to the object's contributor."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nexecutes the script and saves the results.", "response": "async def run_script(self, script):\n \"\"\"Execute the script 
and save results.\"\"\"\n script = os.linesep.join(['set -x', 'set +B', script, 'exit']) + os.linesep\n self.proc.stdin.write(script.encode('utf-8'))\n await self.proc.stdin.drain()\n self.proc.stdin.close()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\nasync def terminate(self):\n self.proc.terminate()\n\n await asyncio.wait_for(self.proc.wait(), self.kill_delay)\n if self.proc.returncode is None:\n self.proc.kill()\n await self.proc.wait()\n\n await super().terminate()", "response": "Terminate a running script."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconstructs a connection to Redis.", "response": "async def _make_connection(self):\n \"\"\"Construct a connection to Redis.\"\"\"\n return await aioredis.create_redis(\n 'redis://{}:{}'.format(\n self._redis_params.get('host', 'localhost'),\n self._redis_params.get('port', 6379)\n ),\n db=int(self._redis_params.get('db', 1))\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nperforms a Redis call and handle connection dropping.", "response": "async def _call_redis(self, meth, *args, **kwargs):\n \"\"\"Perform a Redis call and handle connection dropping.\"\"\"\n while True:\n try:\n if not self._redis:\n self._redis = await self._make_connection()\n return await meth(self._redis, *args, **kwargs)\n except aioredis.RedisError:\n logger.exception(\"Redis connection error\")\n if self._redis:\n self._redis.close()\n await self._redis.wait_closed()\n self._redis = None\n await asyncio.sleep(3)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nclears the executor queue channel.", "response": "async def clear_queue(self):\n \"\"\"Reset the executor queue channel to an empty state.\"\"\"\n conn = await self._make_connection()\n try:\n script = \"\"\"\n local keys = redis.call('KEYS', ARGV[1])\n redis.call('DEL', 
unpack(keys))\n \"\"\"\n await conn.eval(\n script,\n keys=[],\n args=['*{}*'.format(settings.FLOW_MANAGER['REDIS_PREFIX'])],\n )\n finally:\n conn.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef terminate(self):\n logger.info(__(\n \"Terminating Resolwe listener on channel '{}'.\",\n state.MANAGER_EXECUTOR_CHANNELS.queue\n ))\n self._should_stop = True", "response": "Stop the standalone manager."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngenerate the feedback channel name from the object s id.", "response": "def _queue_response_channel(self, obj):\n \"\"\"Generate the feedback channel name from the object's id.\n\n :param obj: The Channels message object.\n \"\"\"\n return '{}.{}'.format(state.MANAGER_EXECUTOR_CHANNELS.queue_response, obj[ExecutorProtocol.DATA_ID])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsend a reply to the executor.", "response": "async def _send_reply(self, obj, reply):\n \"\"\"Send a reply with added standard fields back to executor.\n\n :param obj: The original Channels message object to which we're\n replying.\n :param reply: The message contents dictionary. 
The data id is\n added automatically (``reply`` is modified in place).\n \"\"\"\n reply.update({\n ExecutorProtocol.DATA_ID: obj[ExecutorProtocol.DATA_ID],\n })\n await self._call_redis(aioredis.Redis.rpush, self._queue_response_channel(obj), json.dumps(reply))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\npop the given file s map from the exported files mapping.", "response": "def hydrate_spawned_files(self, exported_files_mapper, filename, data_id):\n \"\"\"Pop the given file's map from the exported files mapping.\n\n :param exported_files_mapper: The dict of file mappings this\n process produced.\n :param filename: The filename to format and remove from the\n mapping.\n :param data_id: The id of the :meth:`~resolwe.flow.models.Data`\n object owning the mapping.\n :return: The formatted mapping between the filename and\n temporary file path.\n :rtype: dict\n \"\"\"\n # JSON only has string dictionary keys, so the Data object id\n # needs to be stringified first.\n data_id = str(data_id)\n\n if filename not in exported_files_mapper[data_id]:\n raise KeyError(\"Use 're-export' to prepare the file for spawned process: {}\".format(filename))\n\n export_fn = exported_files_mapper[data_id].pop(filename)\n\n if exported_files_mapper[data_id] == {}:\n exported_files_mapper.pop(data_id)\n\n return {'file_temp': export_fn, 'file': filename}"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef handle_update(self, obj, internal_call=False):\n data_id = obj[ExecutorProtocol.DATA_ID]\n changeset = obj[ExecutorProtocol.UPDATE_CHANGESET]\n if not internal_call:\n logger.debug(\n __(\"Handling update for Data with id {} (handle_update).\", data_id),\n extra={\n 'data_id': data_id,\n 'packet': obj\n }\n )\n try:\n d = Data.objects.get(pk=data_id)\n except Data.DoesNotExist:\n logger.warning(\n \"Data object does not exist (handle_update).\",\n extra={\n 'data_id': data_id,\n }\n )\n\n if not 
internal_call:\n async_to_sync(self._send_reply)(obj, {ExecutorProtocol.RESULT: ExecutorProtocol.RESULT_ERROR})\n\n async_to_sync(consumer.send_event)({\n WorkerProtocol.COMMAND: WorkerProtocol.ABORT,\n WorkerProtocol.DATA_ID: obj[ExecutorProtocol.DATA_ID],\n WorkerProtocol.FINISH_COMMUNICATE_EXTRA: {\n 'executor': getattr(settings, 'FLOW_EXECUTOR', {}).get('NAME', 'resolwe.flow.executors.local'),\n },\n })\n\n return\n\n if changeset.get('status', None) == Data.STATUS_ERROR:\n logger.error(\n __(\"Error occurred while running process '{}' (handle_update).\", d.process.slug),\n extra={\n 'data_id': data_id,\n 'api_url': '{}{}'.format(\n getattr(settings, 'RESOLWE_HOST_URL', ''),\n reverse('resolwe-api:data-detail', kwargs={'pk': data_id})\n ),\n }\n )\n\n if d.status == Data.STATUS_ERROR:\n changeset['status'] = Data.STATUS_ERROR\n\n if not d.started:\n changeset['started'] = now()\n changeset['modified'] = now()\n\n for key, val in changeset.items():\n if key in ['process_error', 'process_warning', 'process_info']:\n # Trim process_* fields to not exceed max length of the database field.\n for i, entry in enumerate(val):\n max_length = Data._meta.get_field(key).base_field.max_length # pylint: disable=protected-access\n if len(entry) > max_length:\n val[i] = entry[:max_length - 3] + '...'\n\n getattr(d, key).extend(val)\n\n elif key != 'output':\n setattr(d, key, val)\n\n if 'output' in changeset:\n if not isinstance(d.output, dict):\n d.output = {}\n for key, val in changeset['output'].items():\n dict_dot(d.output, key, val)\n\n try:\n d.save(update_fields=list(changeset.keys()))\n except ValidationError as exc:\n logger.error(\n __(\n \"Validation error when saving Data object of process '{}' (handle_update):\\n\\n{}\",\n d.process.slug,\n traceback.format_exc()\n ),\n extra={\n 'data_id': data_id\n }\n )\n\n d.refresh_from_db()\n\n d.process_error.append(exc.message)\n d.status = Data.STATUS_ERROR\n\n try:\n d.save(update_fields=['process_error', 'status'])\n 
except Exception: # pylint: disable=broad-except\n pass\n except Exception: # pylint: disable=broad-except\n logger.error(\n __(\n \"Error when saving Data object of process '{}' (handle_update):\\n\\n{}\",\n d.process.slug,\n traceback.format_exc()\n ),\n extra={\n 'data_id': data_id\n }\n )\n\n if not internal_call:\n async_to_sync(self._send_reply)(obj, {ExecutorProtocol.RESULT: ExecutorProtocol.RESULT_OK})", "response": "Handles an incoming update request."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nhandles an incoming Data finished processing request.", "response": "def handle_finish(self, obj):\n \"\"\"Handle an incoming ``Data`` finished processing request.\n\n :param obj: The Channels message object. Command object format:\n\n .. code-block:: none\n\n {\n 'command': 'finish',\n 'data_id': [id of the :class:`~resolwe.flow.models.Data` object\n this command changes],\n 'process_rc': [exit status of the processing]\n 'spawn_processes': [optional; list of spawn dictionaries],\n 'exported_files_mapper': [if spawn_processes present]\n }\n \"\"\"\n data_id = obj[ExecutorProtocol.DATA_ID]\n logger.debug(\n __(\"Finishing Data with id {} (handle_finish).\", data_id),\n extra={\n 'data_id': data_id,\n 'packet': obj\n }\n )\n spawning_failed = False\n with transaction.atomic():\n # Spawn any new jobs in the request.\n spawned = False\n if ExecutorProtocol.FINISH_SPAWN_PROCESSES in obj:\n if is_testing():\n # NOTE: This is a work-around for Django issue #10827\n # (https://code.djangoproject.com/ticket/10827), same as in\n # TestCaseHelpers._pre_setup(). 
Because the listener is running\n # independently, it must clear the cache on its own.\n ContentType.objects.clear_cache()\n\n spawned = True\n exported_files_mapper = obj[ExecutorProtocol.FINISH_EXPORTED_FILES]\n logger.debug(\n __(\"Spawning new Data objects for Data with id {} (handle_finish).\", data_id),\n extra={\n 'data_id': data_id\n }\n )\n\n try:\n # This transaction is needed because we're running\n # asynchronously with respect to the main Django code\n # here; the manager can get nudged from elsewhere.\n with transaction.atomic():\n parent_data = Data.objects.get(pk=data_id)\n\n # Spawn processes.\n for d in obj[ExecutorProtocol.FINISH_SPAWN_PROCESSES]:\n d['contributor'] = parent_data.contributor\n d['process'] = Process.objects.filter(slug=d['process']).latest()\n d['tags'] = parent_data.tags\n\n for field_schema, fields in iterate_fields(d.get('input', {}), d['process'].input_schema):\n type_ = field_schema['type']\n name = field_schema['name']\n value = fields[name]\n\n if type_ == 'basic:file:':\n fields[name] = self.hydrate_spawned_files(\n exported_files_mapper, value, data_id\n )\n elif type_ == 'list:basic:file:':\n fields[name] = [self.hydrate_spawned_files(exported_files_mapper, fn, data_id)\n for fn in value]\n\n with transaction.atomic():\n d = Data.objects.create(**d)\n DataDependency.objects.create(\n parent=parent_data,\n child=d,\n kind=DataDependency.KIND_SUBPROCESS,\n )\n\n # Copy permissions.\n copy_permissions(parent_data, d)\n\n # Entity is added to the collection only when it is\n # created - when it only contains 1 Data object.\n entities = Entity.objects.filter(data=d).annotate(num_data=Count('data')).filter(\n num_data=1)\n\n # Copy collections.\n for collection in parent_data.collection_set.all():\n collection.data.add(d)\n\n # Add entities to which data belongs to the collection.\n for entity in entities:\n entity.collections.add(collection)\n\n except Exception: # pylint: disable=broad-except\n logger.error(\n __(\n \"Error 
while preparing spawned Data objects of process '{}' (handle_finish):\\n\\n{}\",\n parent_data.process.slug,\n traceback.format_exc()\n ),\n extra={\n 'data_id': data_id\n }\n )\n spawning_failed = True\n\n # Data wrap up happens last, so that any triggered signals\n # already see the spawned children. What the children themselves\n # see is guaranteed by the transaction we're in.\n if ExecutorProtocol.FINISH_PROCESS_RC in obj:\n process_rc = obj[ExecutorProtocol.FINISH_PROCESS_RC]\n\n try:\n d = Data.objects.get(pk=data_id)\n except Data.DoesNotExist:\n logger.warning(\n \"Data object does not exist (handle_finish).\",\n extra={\n 'data_id': data_id,\n }\n )\n async_to_sync(self._send_reply)(obj, {ExecutorProtocol.RESULT: ExecutorProtocol.RESULT_ERROR})\n return\n\n changeset = {\n 'process_progress': 100,\n 'finished': now(),\n }\n\n if spawning_failed:\n changeset['status'] = Data.STATUS_ERROR\n changeset['process_error'] = [\"Error while preparing spawned Data objects\"]\n\n elif process_rc == 0 and not d.status == Data.STATUS_ERROR:\n changeset['status'] = Data.STATUS_DONE\n\n else:\n changeset['status'] = Data.STATUS_ERROR\n changeset['process_rc'] = process_rc\n\n obj[ExecutorProtocol.UPDATE_CHANGESET] = changeset\n self.handle_update(obj, internal_call=True)\n\n if not getattr(settings, 'FLOW_MANAGER_KEEP_DATA', False):\n # Purge worker is not running in test runner, so we should skip triggering it.\n if not is_testing():\n channel_layer = get_channel_layer()\n try:\n async_to_sync(channel_layer.send)(\n CHANNEL_PURGE_WORKER,\n {\n 'type': TYPE_PURGE_RUN,\n 'location_id': d.location.id,\n 'verbosity': self._verbosity,\n }\n )\n except ChannelFull:\n logger.warning(\n \"Cannot trigger purge because channel is full.\",\n extra={'data_id': data_id}\n )\n\n # Notify the executor that we're done.\n async_to_sync(self._send_reply)(obj, {ExecutorProtocol.RESULT: ExecutorProtocol.RESULT_OK})\n\n # Now nudge the main manager to perform final cleanup. 
This is\n # needed even if there was no spawn baggage, since the manager\n # may need to know when executors have finished, to keep count\n # of them and manage synchronization.\n async_to_sync(consumer.send_event)({\n WorkerProtocol.COMMAND: WorkerProtocol.FINISH,\n WorkerProtocol.DATA_ID: data_id,\n WorkerProtocol.FINISH_SPAWNED: spawned,\n WorkerProtocol.FINISH_COMMUNICATE_EXTRA: {\n 'executor': getattr(settings, 'FLOW_EXECUTOR', {}).get('NAME', 'resolwe.flow.executors.local'),\n },\n })"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef handle_abort(self, obj):\n async_to_sync(consumer.send_event)({\n WorkerProtocol.COMMAND: WorkerProtocol.ABORT,\n WorkerProtocol.DATA_ID: obj[ExecutorProtocol.DATA_ID],\n WorkerProtocol.FINISH_COMMUNICATE_EXTRA: {\n 'executor': getattr(settings, 'FLOW_EXECUTOR', {}).get('NAME', 'resolwe.flow.executors.local'),\n },\n })", "response": "Handle an incoming Abort processing request."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nhandles an incoming log processing request.", "response": "def handle_log(self, obj):\n \"\"\"Handle an incoming log processing request.\n\n :param obj: The Channels message object. Command object format:\n\n .. 
code-block:: none\n\n {\n 'command': 'log',\n 'message': [log message]\n }\n \"\"\"\n record_dict = json.loads(obj[ExecutorProtocol.LOG_MESSAGE])\n record_dict['msg'] = record_dict['msg']\n\n executors_dir = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'executors')\n record_dict['pathname'] = os.path.join(executors_dir, record_dict['pathname'])\n\n logger.handle(logging.makeLogRecord(record_dict))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npushing current stats to Redis.", "response": "async def push_stats(self):\n \"\"\"Push current stats to Redis.\"\"\"\n snapshot = self._make_stats()\n try:\n serialized = json.dumps(snapshot)\n await self._call_redis(aioredis.Redis.set, state.MANAGER_LISTENER_STATS, serialized)\n await self._call_redis(aioredis.Redis.expire, state.MANAGER_LISTENER_STATS, 3600)\n except TypeError:\n logger.error(__(\n \"Listener can't serialize statistics:\\n\\n{}\",\n traceback.format_exc()\n ))\n except aioredis.RedisError:\n logger.error(__(\n \"Listener can't store updated statistics:\\n\\n{}\",\n traceback.format_exc()\n ))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks for critical load and log an error if necessary.", "response": "def check_critical_load(self):\n \"\"\"Check for critical load and log an error if necessary.\"\"\"\n if self.load_avg.intervals['1m'].value > 1:\n if self.last_load_level == 1 and time.time() - self.last_load_log < 30:\n return\n self.last_load_log = time.time()\n self.last_load_level = 1\n logger.error(\n \"Listener load limit exceeded, the system can't handle this!\",\n extra=self._make_stats()\n )\n\n elif self.load_avg.intervals['1m'].value > 0.8:\n if self.last_load_level == 0.8 and time.time() - self.last_load_log < 30:\n return\n self.last_load_log = time.time()\n self.last_load_level = 0.8\n logger.warning(\n \"Listener load approaching critical!\",\n extra=self._make_stats()\n )\n\n else:\n 
self.last_load_log = -math.inf\n self.last_load_level = 0"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrun the main listener run loop. Doesn't return until :meth:`terminate` is called.", "response": "async def run(self):\n \"\"\"Run the main listener run loop.\n\n Doesn't return until :meth:`terminate` is called.\n \"\"\"\n logger.info(__(\n \"Starting Resolwe listener on channel '{}'.\",\n state.MANAGER_EXECUTOR_CHANNELS.queue\n ))\n while not self._should_stop:\n await self.push_stats()\n ret = await self._call_redis(aioredis.Redis.blpop, state.MANAGER_EXECUTOR_CHANNELS.queue, timeout=1)\n if ret is None:\n self.load_avg.add(0)\n continue\n remaining = await self._call_redis(aioredis.Redis.llen, state.MANAGER_EXECUTOR_CHANNELS.queue)\n self.load_avg.add(remaining + 1)\n self.check_critical_load()\n _, item = ret\n try:\n item = item.decode('utf-8')\n logger.debug(__(\"Got command from executor: {}\", item))\n obj = json.loads(item)\n except json.JSONDecodeError:\n logger.error(\n __(\"Undecodable command packet:\\n\\n{}\", traceback.format_exc())\n )\n continue\n\n command = obj.get(ExecutorProtocol.COMMAND, None)\n if command is None:\n continue\n\n service_start = time.perf_counter()\n\n handler = getattr(self, 'handle_' + command, None)\n if handler:\n try:\n with PrioritizedBatcher.global_instance():\n await database_sync_to_async(handler)(obj)\n except Exception: # pylint: disable=broad-except\n logger.error(__(\n \"Executor command handling error:\\n\\n{}\",\n traceback.format_exc()\n ))\n else:\n logger.error(\n __(\"Unknown executor command '{}'.\", command),\n extra={'decoded_packet': obj}\n )\n\n # We do want to measure wall-clock time elapsed, because\n # system load will impact event handling performance. 
On\n # a lagging system, good internal performance is meaningless.\n service_end = time.perf_counter()\n self.service_time.update(service_end - service_start)\n logger.info(__(\n \"Stopping Resolwe listener on channel '{}'.\",\n state.MANAGER_EXECUTOR_CHANNELS.queue\n ))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dokanMain(self, dokanOptions, dokanOperations):\n return int(\n self.dokanDLL.DokanMain(\n PDOKAN_OPTIONS(dokanOptions), PDOKAN_OPERATIONS(dokanOperations)\n )\n )", "response": "Issue callback to start dokan drive."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef createFile(\n self,\n fileName,\n desiredAccess,\n shareMode,\n creationDisposition,\n flagsAndAttributes,\n dokanFileInfo,\n ):\n \"\"\"Creates a file.\n\n :param fileName: name of file to create\n :type fileName: ctypes.c_wchar_p\n :param desiredAccess: desired access flags\n :type desiredAccess: ctypes.c_ulong\n :param shareMode: share mode flags\n :type shareMode: ctypes.c_ulong\n :param creationDisposition: creation disposition flags\n :type creationDisposition: ctypes.c_ulong\n :param flagsAndAttributes: creation flags and attributes\n :type flagsAndAttributes: ctypes.c_ulong\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int\n\n \"\"\"\n return self.operations('createFile', fileName)", "response": "Creates a file in the cache."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreading a file from the Dokan and returns the size of the file.", "response": "def readFile(\n self,\n fileName,\n buffer,\n numberOfBytesToRead,\n numberOfBytesRead,\n offset,\n dokanFileInfo,\n ):\n \"\"\"Read a file.\n\n :param fileName: name of file to read\n :type fileName: ctypes.c_wchar_p\n :param buffer: buffer for content read\n :type buffer: ctypes.c_void_p\n :param 
numberOfBytesToRead: number of bytes to read\n :type numberOfBytesToRead: ctypes.c_ulong\n :param numberOfBytesRead: number of bytes read\n :type numberOfBytesRead: ctypes.POINTER(ctypes.c_ulong)\n :param offset: byte offset\n :type offset: ctypes.c_longlong\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int\n\n \"\"\"\n try:\n ret = self.operations('readFile', fileName, numberOfBytesToRead, offset)\n data = ctypes.create_string_buffer(\n ret[:numberOfBytesToRead], numberOfBytesToRead\n )\n ctypes.memmove(buffer, data, numberOfBytesToRead)\n sizeRead = ctypes.c_ulong(len(ret))\n ctypes.memmove(\n numberOfBytesRead, ctypes.byref(sizeRead), ctypes.sizeof(ctypes.c_ulong)\n )\n return d1_onedrive.impl.drivers.dokan.const.DOKAN_SUCCESS\n except Exception:\n # logging.error('%s', e)\n return d1_onedrive.impl.drivers.dokan.const.DOKAN_ERROR"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwrites a file from the cache.", "response": "def writeFile(\n self,\n fileName,\n buffer,\n numberOfBytesToWrite,\n numberOfBytesWritten,\n offset,\n dokanFileInfo,\n ):\n \"\"\"Write a file.\n\n :param fileName: name of file to write\n :type fileName: ctypes.c_wchar_p\n :param buffer: buffer to write\n :type buffer: ctypes.c_void_p\n :param numberOfBytesToWrite: number of bytes to write\n :type numberOfBytesToWrite: ctypes.c_ulong\n :param numberOfBytesWritten: number of bytes written\n :type numberOfBytesWritten: ctypes.POINTER(ctypes.c_ulong)\n :param offset: byte offset\n :type offset: ctypes.c_longlong\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int\n\n \"\"\"\n return self.operations(\n 'writeFile', fileName, buffer, numberOfBytesToWrite, offset\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding files in a certain path that match the search pattern. 
:param fileName: path to search :type fileName: ctypes.c_wchar_p :param searchPattern: pattern to search for :type searchPattern: ctypes.c_wchar_p :param fillFindData: function pointer for populating search results :type fillFindData: PFillFindData :param dokanFileInfo: used by Dokan :type dokanFileInfo: PDOKAN_FILE_INFO :return: error code :rtype: ctypes.c_int", "response": "def findFilesWithPattern(\n self, fileName, searchPattern, fillFindData, dokanFileInfo\n ):\n \"\"\"Find files in a certain path that match the search pattern.\n\n :param fileName: path to search\n :type fileName: ctypes.c_wchar_p\n :param searchPattern: pattern to search for\n :type searchPattern: ctypes.c_wchar_p\n :param fillFindData: function pointer for populating search results\n :type fillFindData: PFillFindData\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int\n\n \"\"\"\n try:\n ret = self.operations('findFilesWithPattern', fileName, searchPattern)\n if ret is None:\n return d1_onedrive.impl.drivers.dokan.const.DOKAN_ERROR\n for r in ret:\n create_ft = self.python_timestamp_to_win32_filetime(r['ctime'])\n last_access_ft = self.python_timestamp_to_win32_filetime(r['atime'])\n last_write_ft = self.python_timestamp_to_win32_filetime(r['wtime'])\n cft = ctypes.wintypes.FILETIME(create_ft[0], create_ft[1])\n laft = ctypes.wintypes.FILETIME(last_access_ft[0], last_access_ft[1])\n lwft = ctypes.wintypes.FILETIME(last_write_ft[0], last_write_ft[1])\n size = self.pyint_to_double_dwords(r['size'])\n File = ctypes.wintypes.WIN32_FIND_DATAW(\n ctypes.c_ulong(r['attr']), # attributes\n cft, # creation time\n laft, # last access time\n lwft, # last write time\n size[1], # upper bits of size\n size[0], # lower bits of size\n ctypes.c_ulong(0), # reserved for FS\n ctypes.c_ulong(0), # reserved for FS\n r['name'], # file name\n '',\n ) # alternate name\n pFile = ctypes.wintypes.PWIN32_FIND_DATAW(File)\n fillFindData(pFile, 
dokanFileInfo)\n return d1_onedrive.impl.drivers.dokan.const.DOKAN_SUCCESS\n except Exception as e:\n logging.error('%s', e)\n return d1_onedrive.impl.drivers.dokan.const.DOKAN_ERROR"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setFileTime(\n self, fileName, creationTime, lastAccessTime, lastWriteTime, dokanFileInfo\n ):\n \"\"\"Set time values for a file.\n\n :param fileName: name of file to set time values for\n :type fileName: ctypes.c_wchar_p\n :param creationTime: creation time of file\n :type creationTime: ctypes.POINTER(ctypes.wintypes.FILETIME)\n :param lastAccessTime: last access time of file\n :type lastAccessTime: ctypes.POINTER(ctypes.wintypes.FILETIME)\n :param lastWriteTime: last write time of file\n :type lastWriteTime: ctypes.POINTER(ctypes.wintypes.FILETIME)\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int\n\n \"\"\"\n return self.operations('setFileTime', fileName)", "response": "Set the file time values for a specific file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef moveFile(self, existingFileName, newFileName, replaceExisting, dokanFileInfo):\n return self.operations('moveFile', existingFileName, newFileName)", "response": "Move a file.\n\n :param existingFileName: name of file to move\n :type existingFileName: ctypes.c_wchar_p\n :param newFileName: new name of file\n :type newFileName: ctypes.c_wchar_p\n :param replaceExisting: flag to indicate replacement of existing file\n :type replaceExisting: ctypes.c_bool\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef lockFile(self, fileName, byteOffset, length, dokanFileInfo):\n return self.operations('lockFile', fileName, byteOffset, length)", 
"response": "Lock a file.\n\n :param fileName: name of file to lock\n :type fileName: ctypes.c_wchar_p\n :param byteOffset: location to start lock\n :type byteOffset: ctypes.c_longlong\n :param length: number of bytes to lock\n :type length: ctypes.c_longlong\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nunlocks a file. :param fileName: name of file to unlock :type fileName: ctypes.c_wchar_p :param byteOffset: location to start unlock :type byteOffset: ctypes.c_longlong :param length: number of bytes to unlock :type length: ctypes.c_longlong :param dokanFileInfo: used by Dokan :type dokanFileInfo: PDOKAN_FILE_INFO :return: error code :rtype: ctypes.c_int", "response": "def unlockFile(self, fileName, byteOffset, length, dokanFileInfo):\n \"\"\"Unlock a file.\n\n :param fileName: name of file to unlock\n :type fileName: ctypes.c_wchar_p\n :param byteOffset: location to start unlock\n :type byteOffset: ctypes.c_longlong\n :param length: number of bytes to unlock\n :type length: ctypes.c_longlong\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int\n\n \"\"\"\n return self.operations('unlockFile', fileName, byteOffset, length)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef getDiskFreeSpace(\n self,\n freeBytesAvailable,\n totalNumberOfBytes,\n totalNumberOfFreeBytes,\n dokanFileInfo,\n ):\n \"\"\"Get the amount of free space on this volume.\n\n :param freeBytesAvailable: pointer for free bytes available\n :type freeBytesAvailable: ctypes.c_void_p\n :param totalNumberOfBytes: pointer for total number of bytes\n :type totalNumberOfBytes: ctypes.c_void_p\n :param totalNumberOfFreeBytes: pointer for total number of free bytes\n :type totalNumberOfFreeBytes: ctypes.c_void_p\n :param 
dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int\n\n \"\"\"\n ret = self.operations('getDiskFreeSpace')\n ctypes.memmove(\n freeBytesAvailable,\n ctypes.byref(ctypes.c_longlong(ret['freeBytesAvailable'])),\n ctypes.sizeof(ctypes.c_longlong),\n )\n ctypes.memmove(\n totalNumberOfBytes,\n ctypes.byref(ctypes.c_longlong(ret['totalNumberOfBytes'])),\n ctypes.sizeof(ctypes.c_longlong),\n )\n ctypes.memmove(\n totalNumberOfFreeBytes,\n ctypes.byref(ctypes.c_longlong(ret['totalNumberOfFreeBytes'])),\n ctypes.sizeof(ctypes.c_longlong),\n )\n return d1_onedrive.impl.drivers.dokan.const.DOKAN_SUCCESS", "response": "This method returns the amount of free space on this volume."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget information about the volume. :param volumeNameBuffer: buffer for volume name :type volumeNameBuffer: ctypes.c_void_p :param volumeNameSize: volume name buffer size :type volumeNameSize: ctypes.c_ulong :param volumeSerialNumber: buffer for volume serial number :type volumeSerialNumber: ctypes.c_void_p :param maximumComponentLength: buffer for maximum component length :type maximumComponentLength: ctypes.c_void_p :param fileSystemFlags: buffer for file system flags :type fileSystemFlags: ctypes.c_void_p :param fileSystemNameBuffer: buffer for file system name :type fileSystemNameBuffer: ctypes.c_void_p :param fileSystemNameSize: file system name buffer size :type fileSystemNameSize: ctypes.c_ulong :param dokanFileInfo: used by Dokan :type dokanFileInfo: PDOKAN_FILE_INFO :return: error code :rtype: ctypes.c_int", "response": "def getVolumeInformation(\n self,\n volumeNameBuffer,\n volumeNameSize,\n volumeSerialNumber,\n maximumComponentLength,\n fileSystemFlags,\n fileSystemNameBuffer,\n fileSystemNameSize,\n dokanFileInfo,\n ):\n \"\"\"Get information about the volume.\n\n :param volumeNameBuffer: buffer for volume name\n :type volumeNameBuffer: 
ctypes.c_void_p\n :param volumeNameSize: volume name buffer size\n :type volumeNameSize: ctypes.c_ulong\n :param volumeSerialNumber: buffer for volume serial number\n :type volumeSerialNumber: ctypes.c_void_p\n :param maximumComponentLength: buffer for maximum component length\n :type maximumComponentLength: ctypes.c_void_p\n :param fileSystemFlags: buffer for file system flags\n :type fileSystemFlags: ctypes.c_void_p\n :param fileSystemNameBuffer: buffer for file system name\n :type fileSystemNameBuffer: ctypes.c_void_p\n :param fileSystemNameSize: file system name buffer size\n :type fileSystemNameSize: ctypes.c_ulong\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int\n\n \"\"\"\n ret = self.operations('getVolumeInformation')\n # populate volume name buffer\n ctypes.memmove(\n volumeNameBuffer,\n ret['volumeNameBuffer'],\n min(\n ctypes.sizeof(ctypes.c_wchar) * len(ret['volumeNameBuffer']),\n volumeNameSize,\n ),\n )\n # populate serial number buffer\n serialNum = ctypes.c_ulong(self.serialNumber)\n ctypes.memmove(\n volumeSerialNumber, ctypes.byref(serialNum), ctypes.sizeof(ctypes.c_ulong)\n )\n # populate max component length\n maxCompLen = ctypes.c_ulong(ret['maximumComponentLength'])\n ctypes.memmove(\n maximumComponentLength,\n ctypes.byref(maxCompLen),\n ctypes.sizeof(ctypes.c_ulong),\n )\n # populate filesystem flags buffer\n fsFlags = ctypes.c_ulong(ret['fileSystemFlags'])\n ctypes.memmove(\n fileSystemFlags, ctypes.byref(fsFlags), ctypes.sizeof(ctypes.c_ulong)\n )\n # populate filesystem name\n ctypes.memmove(\n fileSystemNameBuffer,\n ret['fileSystemNameBuffer'],\n min(\n ctypes.sizeof(ctypes.c_wchar) * len(ret['fileSystemNameBuffer']),\n fileSystemNameSize,\n ),\n )\n return d1_onedrive.impl.drivers.dokan.const.DOKAN_SUCCESS"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getFileSecurity(\n self,\n fileName,\n 
securityInformation,\n securityDescriptor,\n lengthSecurityDescriptorBuffer,\n lengthNeeded,\n dokanFileInfo,\n ):\n \"\"\"Get security attributes of a file.\n\n :param fileName: name of file to get security for\n :type fileName: ctypes.c_wchar_p\n :param securityInformation: buffer for security information\n :type securityInformation: PSECURITY_INFORMATION\n :param securityDescriptor: buffer for security descriptor\n :type securityDescriptor: PSECURITY_DESCRIPTOR\n :param lengthSecurityDescriptorBuffer: length of descriptor buffer\n :type lengthSecurityDescriptorBuffer: ctypes.c_ulong\n :param lengthNeeded: length needed for the buffer\n :type lengthNeeded: ctypes.POINTER(ctypes.c_ulong)\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int\n\n \"\"\"\n return self.operations('getFileSecurity', fileName)", "response": "Get security attributes of a file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset security attributes of a file.", "response": "def setFileSecurity(\n self,\n fileName,\n securityInformation,\n securityDescriptor,\n lengthSecurityDescriptorBuffer,\n dokanFileInfo,\n ):\n \"\"\"Set security attributes of a file.\n\n :param fileName: name of file to set security for\n :type fileName: ctypes.c_wchar_p\n :param securityInformation: new security information\n :type securityInformation: PSECURITY_INFORMATION\n :param securityDescriptor: newsecurity descriptor\n :type securityDescriptor: PSECURITY_DESCRIPTOR\n :param lengthSecurityDescriptorBuffer: length of descriptor buffer\n :type lengthSecurityDescriptorBuffer: ctypes.c_ulong\n :param dokanFileInfo: used by Dokan\n :type dokanFileInfo: PDOKAN_FILE_INFO\n :return: error code\n :rtype: ctypes.c_int\n\n \"\"\"\n return self.operations('setFileSecurity', fileName)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a simple resource map with one metadata 
document and n data objects.", "response": "def createSimpleResourceMap(ore_pid, sci_meta_pid, data_pids):\n \"\"\"Create a simple resource map with one metadata document and n data objects.\"\"\"\n ore = ResourceMap()\n ore.initialize(ore_pid)\n ore.addMetadataDocument(sci_meta_pid)\n ore.addDataDocuments(data_pids, sci_meta_pid)\n return ore"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreading pids from in_stream and generate a resource map.", "response": "def pids2ore(in_stream, fmt='xml', base_url='https://cn.dataone.org/cn'):\n \"\"\"read pids from in_stream and generate a resource map.\n\n first pid is the ore_pid second is the sci meta pid remainder are data pids\n\n \"\"\"\n pids = []\n for line in in_stream:\n pid = line.strip()\n if len(pid) > 0:\n if not pid.startswith(\"# \"):\n pids.append(pid)\n if (len(pids)) < 2:\n raise ValueError(\"Insufficient identifiers provided.\")\n\n logging.info(\"Read %d identifiers\", len(pids))\n\n ore = ResourceMap(base_url=base_url)\n\n logging.info(\"ORE PID = %s\", pids[0])\n ore.initialize(pids[0])\n\n logging.info(\"Metadata PID = %s\", pids[1])\n ore.addMetadataDocument(pids[1])\n\n ore.addDataDocuments(pids[2:], pids[1])\n return ore.serialize_to_display(doc_format=fmt)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef submit(self, data, runtime_dir, argv):\n queue = 'ordinary'\n if data.process.scheduling_class == Process.SCHEDULING_CLASS_INTERACTIVE:\n queue = 'hipri'\n\n logger.debug(__(\n \"Connector '{}' running for Data with id {} ({}) in celery queue {}, EAGER is {}.\",\n self.__class__.__module__,\n data.id,\n repr(argv),\n queue,\n getattr(settings, 'CELERY_ALWAYS_EAGER', None)\n ))\n celery_run.apply_async((data.id, runtime_dir, argv), queue=queue)", "response": "Submit a new process."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsynchronize the local tree of Solr records for DataONE identifiers 
and queries with the reference tree.", "response": "def refresh(self):\n \"\"\"Synchronize the local tree of Solr records for DataONE identifiers and\n queries with the reference tree.\"\"\"\n if self._source_tree.cache_is_stale():\n self._source_tree.refresh()\n logging.info('Refreshing object tree')\n self._init_cache()\n self.sync_cache_with_source_tree()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_object_record(self, pid):\n try:\n return self._cache['records'][pid]\n except KeyError:\n raise d1_onedrive.impl.onedrive_exceptions.ONEDriveException('Unknown PID')", "response": "Get an object record from the object tree."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_object_record_with_sync(self, pid):\n try:\n return self._cache['records'][pid]\n except KeyError:\n return self._get_uncached_object_record(pid)", "response": "Get an object record with sync."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a cache item for a given PID.", "response": "def _create_cache_item_for_pid(self, cache_folder, pid):\n \"\"\"The source tree can contain identifiers that are no longer valid (or were\n never valid).\n\n Any items for which a Solr record cannot be retrieved are silently skipped.\n\n \"\"\"\n try:\n record = self._solr_client.get_solr_record(pid)\n except d1_onedrive.impl.onedrive_exceptions.ONEDriveException:\n pass\n else:\n self._create_cache_item(cache_folder, record)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn abstracted status of dependencies.", "response": "def dependency_status(data):\n \"\"\"Return abstracted status of dependencies.\n\n - ``STATUS_ERROR`` .. one dependency has error status or was deleted\n - ``STATUS_DONE`` .. all dependencies have done status\n - ``None`` .. 
other\n\n \"\"\"\n parents_statuses = set(\n DataDependency.objects.filter(\n child=data, kind=DataDependency.KIND_IO\n ).distinct('parent__status').values_list('parent__status', flat=True)\n )\n\n if not parents_statuses:\n return Data.STATUS_DONE\n\n if None in parents_statuses:\n # Some parents have been deleted.\n return Data.STATUS_ERROR\n\n if Data.STATUS_ERROR in parents_statuses:\n return Data.STATUS_ERROR\n\n if len(parents_statuses) == 1 and Data.STATUS_DONE in parents_statuses:\n return Data.STATUS_DONE\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef discover_engines(self, executor=None):\n if executor is None:\n executor = getattr(settings, 'FLOW_EXECUTOR', {}).get('NAME', 'resolwe.flow.executors.local')\n self.executor = self.load_executor(executor)\n logger.info(\n __(\"Loaded '{}' executor.\", str(self.executor.__class__.__module__).replace('.prepare', ''))\n )\n\n expression_engines = getattr(settings, 'FLOW_EXPRESSION_ENGINES', ['resolwe.flow.expression_engines.jinja'])\n self.expression_engines = self.load_expression_engines(expression_engines)\n logger.info(__(\n \"Found {} expression engines: {}\", len(self.expression_engines), ', '.join(self.expression_engines.keys())\n ))\n\n execution_engines = getattr(settings, 'FLOW_EXECUTION_ENGINES', ['resolwe.flow.execution_engines.bash'])\n self.execution_engines = self.load_execution_engines(execution_engines)\n logger.info(__(\n \"Found {} execution engines: {}\", len(self.execution_engines), ', '.join(self.execution_engines.keys())\n ))", "response": "Discover configured engines.\n\n :param executor: Optional executor module override"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef reset(self, keep_state=False):\n if not keep_state:\n self.state = state.ManagerState(state.MANAGER_STATE_PREFIX)\n self.state.reset()\n 
async_to_sync(consumer.run_consumer)(timeout=1)\n async_to_sync(self.sync_counter.reset)()", "response": "Reset the shared state and drain Django Channels."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmarshaling Django settings into a serializable object.", "response": "def _marshal_settings(self):\n \"\"\"Marshal Django settings into a serializable object.\n\n :return: The serialized settings.\n :rtype: dict\n \"\"\"\n result = {}\n for key in dir(settings):\n if any(map(key.startswith, ['FLOW_', 'RESOLWE_', 'CELERY_'])):\n result[key] = getattr(settings, key)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nselect a concrete connector and run the process through it.", "response": "def run(self, data, runtime_dir, argv):\n \"\"\"Select a concrete connector and run the process through it.\n\n :param data: The :class:`~resolwe.flow.models.Data` object that\n is to be run.\n :param runtime_dir: The directory the executor is run from.\n :param argv: The argument vector used to spawn the executor.\n \"\"\"\n process_scheduling = self.scheduling_class_map[data.process.scheduling_class]\n if 'DISPATCHER_MAPPING' in getattr(settings, 'FLOW_MANAGER', {}):\n class_name = settings.FLOW_MANAGER['DISPATCHER_MAPPING'][process_scheduling]\n else:\n class_name = getattr(settings, 'FLOW_MANAGER', {}).get('NAME', DEFAULT_CONNECTOR)\n\n data.scheduled = now()\n data.save(update_fields=['scheduled'])\n\n async_to_sync(self.sync_counter.inc)('executor')\n return self.connectors[class_name].submit(data, runtime_dir, argv)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_per_data_dir(self, dir_base, subpath):\n # Use Django settings here, because the state must be preserved\n # across events. 
This also implies the directory settings can't\n # be patched outside the manager and then just sent along in the\n # command packets.\n result = self.settings_actual.get('FLOW_EXECUTOR', {}).get(dir_base, '')\n return os.path.join(result, subpath)", "response": "Extend the given base directory with a per - data component."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nprepares the private execution directory for the given data object.", "response": "def _prepare_data_dir(self, data):\n \"\"\"Prepare destination directory where the data will live.\n\n :param data: The :class:`~resolwe.flow.models.Data` object for\n which to prepare the private execution directory.\n :return: The prepared data directory path.\n :rtype: str\n \"\"\"\n logger.debug(__(\"Preparing data directory for Data with id {}.\", data.id))\n\n with transaction.atomic():\n # Create a temporary random location and then override it with data\n # location id since object has to be created first.\n # TODO Find a better solution, e.g. 
defer the database constraint.\n temporary_location_string = uuid.uuid4().hex[:10]\n data_location = DataLocation.objects.create(subpath=temporary_location_string)\n data_location.subpath = str(data_location.id)\n data_location.save()\n data_location.data.add(data)\n\n output_path = self._get_per_data_dir('DATA_DIR', data_location.subpath)\n dir_mode = self.settings_actual.get('FLOW_EXECUTOR', {}).get('DATA_DIR_MODE', 0o755)\n os.mkdir(output_path, mode=dir_mode)\n # os.mkdir is not guaranteed to set the given mode\n os.chmod(output_path, dir_mode)\n return output_path"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nprepare the context for the executor.", "response": "def _prepare_context(self, data_id, data_dir, runtime_dir, **kwargs):\n \"\"\"Prepare settings and constants JSONs for the executor.\n\n Settings and constants provided by other ``resolwe`` modules and\n :class:`~django.conf.settings` are all inaccessible in the\n executor once it is deployed, so they need to be serialized into\n the runtime directory.\n\n :param data_id: The :class:`~resolwe.flow.models.Data` object id\n being prepared for.\n :param data_dir: The target execution directory for this\n :class:`~resolwe.flow.models.Data` object.\n :param runtime_dir: The target runtime support directory for\n this :class:`~resolwe.flow.models.Data` object; this is\n where the environment is serialized into.\n :param kwargs: Extra settings to include in the main settings\n file.\n \"\"\"\n files = {}\n secrets = {}\n\n settings_dict = {}\n settings_dict['DATA_DIR'] = data_dir\n settings_dict['REDIS_CHANNEL_PAIR'] = state.MANAGER_EXECUTOR_CHANNELS\n files[ExecutorFiles.EXECUTOR_SETTINGS] = settings_dict\n\n django_settings = {}\n django_settings.update(self.settings_actual)\n django_settings.update(kwargs)\n files[ExecutorFiles.DJANGO_SETTINGS] = django_settings\n\n # Add scheduling classes.\n files[ExecutorFiles.PROCESS_META] = {\n k: getattr(Process, k) for k in 
dir(Process)\n if k.startswith('SCHEDULING_CLASS_') and isinstance(getattr(Process, k), str)\n }\n\n # Add Data status constants.\n files[ExecutorFiles.DATA_META] = {\n k: getattr(Data, k) for k in dir(Data)\n if k.startswith('STATUS_') and isinstance(getattr(Data, k), str)\n }\n\n # Extend the settings with whatever the executor wants.\n self.executor.extend_settings(data_id, files, secrets)\n\n # Save the settings into the various files in the runtime dir.\n settings_dict[ExecutorFiles.FILE_LIST_KEY] = list(files.keys())\n for file_name in files:\n file_path = os.path.join(runtime_dir, file_name)\n with open(file_path, 'wt') as json_file:\n json.dump(files[file_name], json_file, cls=SettingsJSONifier)\n\n # Save the secrets in the runtime dir, with permissions to prevent listing the given\n # directory.\n secrets_dir = os.path.join(runtime_dir, ExecutorFiles.SECRETS_DIR)\n os.makedirs(secrets_dir, mode=0o300)\n for file_name, value in secrets.items():\n file_path = os.path.join(secrets_dir, file_name)\n\n # Set umask to 0 to ensure that we set the correct permissions.\n old_umask = os.umask(0)\n try:\n # We need to use os.open in order to correctly enforce file creation. 
Otherwise,\n # there is a race condition which can be used to create the file with different\n # ownership/permissions.\n file_descriptor = os.open(file_path, os.O_WRONLY | os.O_CREAT, mode=0o600)\n with os.fdopen(file_descriptor, 'w') as raw_file:\n raw_file.write(value)\n finally:\n os.umask(old_umask)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _prepare_executor(self, data, executor):\n logger.debug(__(\"Preparing executor for Data with id {}\", data.id))\n\n # Both of these imports are here only to get the packages' paths.\n import resolwe.flow.executors as executor_package\n\n exec_dir = os.path.dirname(inspect.getsourcefile(executor_package))\n dest_dir = self._get_per_data_dir('RUNTIME_DIR', data.location.subpath)\n dest_package_dir = os.path.join(dest_dir, 'executors')\n shutil.copytree(exec_dir, dest_package_dir)\n dir_mode = self.settings_actual.get('FLOW_EXECUTOR', {}).get('RUNTIME_DIR_MODE', 0o755)\n os.chmod(dest_dir, dir_mode)\n\n class_name = executor.rpartition('.executors.')[-1]\n return '.{}'.format(class_name), dest_dir", "response": "Prepare the executor class for the given data object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncopies the script into the destination directory.", "response": "def _prepare_script(self, dest_dir, program):\n \"\"\"Copy the script into the destination directory.\n\n :param dest_dir: The target directory where the script will be\n saved.\n :param program: The script text to be saved.\n :return: The name of the script file.\n :rtype: str\n \"\"\"\n script_name = ExecutorFiles.PROCESS_SCRIPT\n dest_file = os.path.join(dest_dir, script_name)\n with open(dest_file, 'wt') as dest_file_obj:\n dest_file_obj.write(program)\n os.chmod(dest_file, 0o700)\n return script_name"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\nasync def handle_control_event(self, 
message):\n cmd = message[WorkerProtocol.COMMAND]\n logger.debug(__(\"Manager worker got channel command '{}'.\", cmd))\n\n # Prepare settings for use; Django overlaid by state overlaid by\n # anything immediate in the current packet.\n immediates = {}\n if cmd == WorkerProtocol.COMMUNICATE:\n immediates = message.get(WorkerProtocol.COMMUNICATE_SETTINGS, {}) or {}\n override = self.state.settings_override or {}\n override.update(immediates)\n self.settings_actual = self._marshal_settings()\n self.settings_actual.update(override)\n\n if cmd == WorkerProtocol.COMMUNICATE:\n try:\n await database_sync_to_async(self._data_scan)(**message[WorkerProtocol.COMMUNICATE_EXTRA])\n except Exception:\n logger.exception(\"Unknown error occured while processing communicate control command.\")\n raise\n finally:\n await self.sync_counter.dec('communicate')\n\n elif cmd == WorkerProtocol.FINISH:\n try:\n data_id = message[WorkerProtocol.DATA_ID]\n data_location = DataLocation.objects.get(data__id=data_id)\n if not getattr(settings, 'FLOW_MANAGER_KEEP_DATA', False):\n try:\n def handle_error(func, path, exc_info):\n \"\"\"Handle permission errors while removing data directories.\"\"\"\n if isinstance(exc_info[1], PermissionError):\n os.chmod(path, 0o700)\n shutil.rmtree(path)\n\n # Remove secrets directory, but leave the rest of the runtime directory\n # intact. 
Runtime directory will be removed during data purge, when the\n # data object is removed.\n secrets_dir = os.path.join(\n self._get_per_data_dir('RUNTIME_DIR', data_location.subpath),\n ExecutorFiles.SECRETS_DIR\n )\n shutil.rmtree(secrets_dir, onerror=handle_error)\n except OSError:\n logger.exception(\"Manager exception while removing data runtime directory.\")\n\n if message[WorkerProtocol.FINISH_SPAWNED]:\n await database_sync_to_async(self._data_scan)(**message[WorkerProtocol.FINISH_COMMUNICATE_EXTRA])\n except Exception:\n logger.exception(\n \"Unknown error occured while processing finish control command.\",\n extra={'data_id': data_id}\n )\n raise\n finally:\n await self.sync_counter.dec('executor')\n\n elif cmd == WorkerProtocol.ABORT:\n await self.sync_counter.dec('executor')\n\n else:\n logger.error(__(\"Ignoring unknown manager control command '{}'.\", cmd))", "response": "Handle an event from the Channels layer callback do not call directly."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _ensure_counter(self):\n if not isinstance(self.sync_counter, self._SynchronizationManager):\n self.sync_counter = self._SynchronizationManager()", "response": "Ensure the sync counter is a valid non - dummy object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\nasync def execution_barrier(self):\n async def _barrier():\n \"\"\"Enter the sync block and exit the app afterwards.\"\"\"\n async with self.sync_counter:\n pass\n await consumer.exit_consumer()\n\n self._ensure_counter()\n await asyncio.wait([\n _barrier(),\n consumer.run_consumer(),\n ])\n self.sync_counter = self._SynchronizationManagerDummy()", "response": "Wait for all executors to finish."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\nasync def communicate(self, data_id=None, run_sync=False, save_settings=True):\n executor = getattr(settings, 'FLOW_EXECUTOR', 
{}).get('NAME', 'resolwe.flow.executors.local')\n logger.debug(__(\n \"Manager sending communicate command on '{}' triggered by Data with id {}.\",\n state.MANAGER_CONTROL_CHANNEL,\n data_id,\n ))\n\n saved_settings = self.state.settings_override\n if save_settings:\n saved_settings = self._marshal_settings()\n self.state.settings_override = saved_settings\n\n if run_sync:\n self._ensure_counter()\n await self.sync_counter.inc('communicate')\n try:\n await consumer.send_event({\n WorkerProtocol.COMMAND: WorkerProtocol.COMMUNICATE,\n WorkerProtocol.COMMUNICATE_SETTINGS: saved_settings,\n WorkerProtocol.COMMUNICATE_EXTRA: {\n 'data_id': data_id,\n 'executor': executor,\n },\n })\n except ChannelFull:\n logger.exception(\"ChannelFull error occurred while sending communicate message.\")\n await self.sync_counter.dec('communicate')\n\n if run_sync and not self.sync_counter.active:\n logger.debug(__(\n \"Manager on channel '{}' entering synchronization block.\",\n state.MANAGER_CONTROL_CHANNEL\n ))\n await self.execution_barrier()\n logger.debug(__(\n \"Manager on channel '{}' exiting synchronization block.\",\n state.MANAGER_CONTROL_CHANNEL\n ))", "response": "This method is used to send a COMMUNICATE command to the manager s channel workers."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _data_execute(self, data, program, executor):\n if not program:\n return\n\n logger.debug(__(\"Manager preparing Data with id {} for processing.\", data.id))\n\n # Prepare the executor's environment.\n try:\n executor_env_vars = self.get_executor().get_environment_variables()\n program = self._include_environment_variables(program, executor_env_vars)\n data_dir = self._prepare_data_dir(data)\n executor_module, runtime_dir = self._prepare_executor(data, executor)\n\n # Execute execution engine specific runtime preparation.\n execution_engine = data.process.run.get('language', None)\n volume_maps = 
self.get_execution_engine(execution_engine).prepare_runtime(runtime_dir, data)\n\n self._prepare_context(data.id, data_dir, runtime_dir, RUNTIME_VOLUME_MAPS=volume_maps)\n self._prepare_script(runtime_dir, program)\n\n argv = [\n '/bin/bash',\n '-c',\n self.settings_actual.get('FLOW_EXECUTOR', {}).get('PYTHON', '/usr/bin/env python')\n + ' -m executors ' + executor_module\n ]\n except PermissionDenied as error:\n data.status = Data.STATUS_ERROR\n data.process_error.append(\"Permission denied for process: {}\".format(error))\n data.save()\n return\n except OSError as err:\n logger.error(__(\n \"OSError occurred while preparing data {} (will skip): {}\",\n data.id, err\n ))\n return\n\n # Hand off to the run() method for execution.\n logger.info(__(\"Running {}\", runtime_dir))\n self.run(data, runtime_dir, argv)", "response": "Execute the data object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nscans for new Data objects and execute them.", "response": "def _data_scan(self, data_id=None, executor='resolwe.flow.executors.local', **kwargs):\n \"\"\"Scan for new Data objects and execute them.\n\n :param data_id: Optional id of Data object which (+ its\n children) should be scanned. If it is not given, all\n resolving objects are processed.\n :param executor: The fully qualified name of the executor to use\n for all :class:`~resolwe.flow.models.Data` objects\n discovered in this pass.\n \"\"\"\n def process_data_object(data):\n \"\"\"Process a single data object.\"\"\"\n # Lock for update. Note that we want this transaction to be as short as possible in\n # order to reduce contention and avoid deadlocks. This is why we do not lock all\n # resolving objects for update, but instead only lock one object at a time. 
This\n # allows managers running in parallel to process different objects.\n data = Data.objects.select_for_update().get(pk=data.pk)\n if data.status != Data.STATUS_RESOLVING:\n # The object might have already been processed while waiting for the lock to be\n # obtained. In this case, skip the object.\n return\n\n dep_status = dependency_status(data)\n\n if dep_status == Data.STATUS_ERROR:\n data.status = Data.STATUS_ERROR\n data.process_error.append(\"One or more inputs have status ERROR\")\n data.process_rc = 1\n data.save()\n return\n\n elif dep_status != Data.STATUS_DONE:\n return\n\n if data.process.run:\n try:\n execution_engine = data.process.run.get('language', None)\n # Evaluation by the execution engine may spawn additional data objects and\n # perform other queries on the database. Queries of all possible execution\n # engines need to be audited for possibilities of deadlocks in case any\n # additional locks are introduced. Currently, we only take an explicit lock on\n # the currently processing object.\n program = self.get_execution_engine(execution_engine).evaluate(data)\n except (ExecutionError, InvalidEngineError) as error:\n data.status = Data.STATUS_ERROR\n data.process_error.append(\"Error in process script: {}\".format(error))\n data.save()\n return\n\n # Set allocated resources:\n resource_limits = data.process.get_resource_limits()\n data.process_memory = resource_limits['memory']\n data.process_cores = resource_limits['cores']\n else:\n # If there is no run section, then we should not try to run anything. But the\n # program must not be set to None as then the process will be stuck in waiting\n # state.\n program = ''\n\n if data.status != Data.STATUS_DONE:\n # The data object may already be marked as done by the execution engine. 
In this\n # case we must not revert the status to STATUS_WAITING.\n data.status = Data.STATUS_WAITING\n data.save(render_name=True)\n\n # Actually run the object only if there was nothing with the transaction.\n transaction.on_commit(\n # Make sure the closure gets the right values here, since they're\n # changed in the loop.\n lambda d=data, p=program: self._data_execute(d, p, executor)\n )\n\n logger.debug(__(\"Manager processing communicate command triggered by Data with id {}.\", data_id))\n\n if is_testing():\n # NOTE: This is a work-around for Django issue #10827\n # (https://code.djangoproject.com/ticket/10827), same as in\n # TestCaseHelpers._pre_setup(). Because the worker is running\n # independently, it must clear the cache on its own.\n ContentType.objects.clear_cache()\n\n # Ensure settings overrides apply\n self.discover_engines(executor=executor)\n\n try:\n queryset = Data.objects.filter(status=Data.STATUS_RESOLVING)\n if data_id is not None:\n # Scan only given data object and its children.\n queryset = queryset.filter(Q(parents=data_id) | Q(id=data_id)).distinct()\n\n for data in queryset:\n try:\n with transaction.atomic():\n process_data_object(data)\n\n # All data objects created by the execution engine are commited after this\n # point and may be processed by other managers running in parallel. At the\n # same time, the lock for the current data object is released.\n except Exception as error: # pylint: disable=broad-except\n logger.exception(__(\n \"Unhandled exception in _data_scan while processing data object {}.\",\n data.pk\n ))\n\n # Unhandled error while processing a data object. We must set its\n # status to STATUS_ERROR to prevent the object from being retried\n # on next _data_scan run. 
We must perform this operation without\n # using the Django ORM as using the ORM may be the reason the error\n # occurred in the first place.\n error_msg = \"Internal error: {}\".format(error)\n process_error_field = Data._meta.get_field('process_error') # pylint: disable=protected-access\n max_length = process_error_field.base_field.max_length\n if len(error_msg) > max_length:\n error_msg = error_msg[:max_length - 3] + '...'\n\n try:\n with connection.cursor() as cursor:\n cursor.execute(\n \"\"\"\n UPDATE {table}\n SET\n status = %(status)s,\n process_error = process_error || (%(error)s)::varchar[]\n WHERE id = %(id)s\n \"\"\".format(\n table=Data._meta.db_table # pylint: disable=protected-access\n ),\n {\n 'status': Data.STATUS_ERROR,\n 'error': [error_msg],\n 'id': data.pk\n }\n )\n except Exception as error: # pylint: disable=broad-except\n # If object's state cannot be changed due to some database-related\n # issue, at least skip the object for this run.\n logger.exception(__(\n \"Unhandled exception in _data_scan while trying to emit error for {}.\",\n data.pk\n ))\n\n except IntegrityError as exp:\n logger.error(__(\"IntegrityError in manager {}\", exp))\n return"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn an expression engine instance.", "response": "def get_expression_engine(self, name):\n \"\"\"Return an expression engine instance.\"\"\"\n try:\n return self.expression_engines[name]\n except KeyError:\n raise InvalidEngineError(\"Unsupported expression engine: {}\".format(name))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns an execution engine instance.", "response": "def get_execution_engine(self, name):\n \"\"\"Return an execution engine instance.\"\"\"\n try:\n return self.execution_engines[name]\n except KeyError:\n raise InvalidEngineError(\"Unsupported execution engine: {}\".format(name))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 
script to\nextract subjects from a DataONE PEM encoded X. 509 certificate.", "response": "def extract_subjects(cert_pem):\n \"\"\"Extract subjects from a DataONE PEM (Base64) encoded X.509 v3 certificate.\n\n Args:\n cert_pem: str or bytes\n PEM (Base64) encoded X.509 v3 certificate\n\n Returns:\n 2-tuple:\n - The primary subject string, extracted from the certificate DN.\n - A set of equivalent identities, group memberships and inferred symbolic\n subjects extracted from the SubjectInfo (if present.)\n - All returned subjects are DataONE compliant serializations.\n - A copy of the primary subject is always included in the set of equivalent\n identities.\n\n \"\"\"\n primary_str, subject_info_xml = d1_common.cert.x509.extract_subjects(cert_pem)\n equivalent_set = {\n primary_str,\n d1_common.const.SUBJECT_AUTHENTICATED,\n d1_common.const.SUBJECT_PUBLIC,\n }\n if subject_info_xml is not None:\n equivalent_set |= d1_common.cert.subject_info.extract_subjects(\n subject_info_xml, primary_str\n )\n return primary_str, equivalent_set"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_collection_for_user(self, collection_id, user):\n collection_query = Collection.objects.filter(pk=collection_id)\n if not collection_query.exists():\n raise exceptions.ValidationError('Collection id does not exist')\n\n collection = collection_query.first()\n if not user.has_perm('add_collection', obj=collection):\n if user.is_authenticated:\n raise exceptions.PermissionDenied()\n else:\n raise exceptions.NotFound()\n\n return collection", "response": "Check that collection exists and user has add permission."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_entities(self, user, ids):\n queryset = get_objects_for_user(user, 'view_entity', Entity.objects.filter(id__in=ids))\n actual_ids = queryset.values_list('id', flat=True)\n missing_ids = 
list(set(ids) - set(actual_ids))\n if missing_ids:\n raise exceptions.ParseError(\n \"Entities with the following ids not found: {}\".format(', '.join(map(str, missing_ids)))\n )\n\n return queryset", "response": "Return entities queryset based on provided entity ids."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\napply permissions to data objects in Entity.", "response": "def set_content_permissions(self, user, obj, payload):\n \"\"\"Apply permissions to data objects in ``Entity``.\"\"\"\n # Data doesn't have \"ADD\" permission, so it has to be removed\n payload = remove_permission(payload, 'add')\n\n for data in obj.data.all():\n if user.has_perm('share_data', data):\n update_permission(data, payload)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndestroying a model instance.", "response": "def destroy(self, request, *args, **kwargs):\n \"\"\"Destroy a model instance.\n\n If ``delete_content`` flag is set in query parameters, also all\n Data objects contained in entity will be deleted.\n \"\"\"\n obj = self.get_object()\n user = request.user\n\n if strtobool(request.query_params.get('delete_content', 'false')):\n for data in obj.data.all():\n if user.has_perm('edit_data', data):\n data.delete()\n\n # If all data objects in an entity are removed, the entity may\n # have already been removed, so there is no need to call destroy.\n if not Entity.objects.filter(pk=obj.pk).exists():\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n # NOTE: Collection's ``destroy`` method should be skipped, so we\n # intentionally call its parent.\n return super(CollectionViewSet, self).destroy( # pylint: disable=no-member,bad-super-call\n request, *args, **kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd Entity to a collection.", "response": "def add_to_collection(self, request, pk=None):\n \"\"\"Add Entity to a collection.\"\"\"\n entity = self.get_object()\n\n # TODO use 
`self.get_ids` (and elsewhere). Backwards\n # incompatible because raised error's response contains\n # ``detail`` instead of ``error``).\n if 'ids' not in request.data:\n return Response({\"error\": \"`ids` parameter is required\"}, status=status.HTTP_400_BAD_REQUEST)\n\n for collection_id in request.data['ids']:\n self._get_collection_for_user(collection_id, request.user)\n\n for collection_id in request.data['ids']:\n entity.collections.add(collection_id)\n\n collection = Collection.objects.get(pk=collection_id)\n for data in entity.data.all():\n collection.data.add(data)\n\n return Response()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds data to Entity and its collection.", "response": "def add_data(self, request, pk=None):\n \"\"\"Add data to Entity and its collection.\"\"\"\n # add data to entity\n resp = super().add_data(request, pk)\n\n # add data to collections in which entity is\n entity = self.get_object()\n for collection in entity.collections.all():\n collection.data.add(*request.data['ids'])\n\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmoves samples from source to destination collection.", "response": "def move_to_collection(self, request, *args, **kwargs):\n \"\"\"Move samples from source to destination collection.\"\"\"\n ids = self.get_ids(request.data)\n src_collection_id = self.get_id(request.data, 'source_collection')\n dst_collection_id = self.get_id(request.data, 'destination_collection')\n\n src_collection = self._get_collection_for_user(src_collection_id, request.user)\n dst_collection = self._get_collection_for_user(dst_collection_id, request.user)\n\n entity_qs = self._get_entities(request.user, ids)\n entity_qs.move_to_collection(src_collection, dst_collection)\n\n return Response()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate an entity. 
Original queryset produces a temporary database table whose rows cannot be selected for an update. As a workaround, we patch get_queryset function to return only Entity objects without additional data that is not needed for the update.", "response": "def update(self, request, *args, **kwargs):\n \"\"\"Update an entity.\n\n Original queryset produces a temporary database table whose rows\n cannot be selected for an update. As a workaround, we patch\n get_queryset function to return only Entity objects without\n additional data that is not needed for the update.\n \"\"\"\n orig_get_queryset = self.get_queryset\n\n def patched_get_queryset():\n \"\"\"Patched get_queryset method.\"\"\"\n entity_ids = orig_get_queryset().values_list('id', flat=True)\n return Entity.objects.filter(id__in=entity_ids)\n\n self.get_queryset = patched_get_queryset\n resp = super().update(request, *args, **kwargs)\n self.get_queryset = orig_get_queryset\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nstarting the process execution.", "response": "async def start(self):\n \"\"\"Start process execution.\"\"\"\n # arguments passed to the Docker command\n command_args = {\n 'command': self.command,\n 'container_image': self.requirements.get('image', constants.DEFAULT_CONTAINER_IMAGE),\n }\n\n # Get limit defaults.\n limit_defaults = SETTINGS.get('FLOW_PROCESS_RESOURCE_DEFAULTS', {})\n\n # Set resource limits.\n limits = []\n # Each core is equivalent to 1024 CPU shares. The default for Docker containers\n # is 1024 shares (we don't need to explicitly set that).\n limits.append('--cpu-shares={}'.format(int(self.process['resource_limits']['cores']) * 1024))\n\n # Some SWAP is needed to avoid OOM signal. 
Swappiness is low to prevent\n # extensive usage of SWAP (this would reduce the performance).\n memory = self.process['resource_limits']['memory'] + DOCKER_MEMORY_HARD_LIMIT_BUFFER\n memory_swap = int(memory * DOCKER_MEMORY_SWAP_RATIO)\n\n limits.append('--memory={}m'.format(memory))\n limits.append('--memory-swap={}m'.format(memory_swap))\n limits.append('--memory-reservation={}m'.format(self.process['resource_limits']['memory']))\n limits.append('--memory-swappiness={}'.format(DOCKER_MEMORY_SWAPPINESS))\n\n # Set ulimits for interactive processes to prevent them from running too long.\n if self.process['scheduling_class'] == PROCESS_META['SCHEDULING_CLASS_INTERACTIVE']:\n # TODO: This is not very good as each child gets the same limit.\n limits.append('--ulimit cpu={}'.format(limit_defaults.get('cpu_time_interactive', 30)))\n\n command_args['limits'] = ' '.join(limits)\n\n # set container name\n self.container_name_prefix = SETTINGS.get('FLOW_EXECUTOR', {}).get('CONTAINER_NAME_PREFIX', 'resolwe')\n command_args['container_name'] = '--name={}'.format(self._generate_container_name())\n\n if 'network' in self.resources:\n # Configure Docker network mode for the container (if specified).\n # By default, current Docker versions use the 'bridge' mode which\n # creates a network stack on the default Docker bridge.\n network = SETTINGS.get('FLOW_EXECUTOR', {}).get('NETWORK', '')\n command_args['network'] = '--net={}'.format(network) if network else ''\n else:\n # No network if not specified.\n command_args['network'] = '--net=none'\n\n # Security options.\n security = []\n\n # Generate and set seccomp policy to limit syscalls.\n policy_file = tempfile.NamedTemporaryFile(mode='w')\n json.dump(SECCOMP_POLICY, policy_file)\n policy_file.file.flush()\n if not SETTINGS.get('FLOW_DOCKER_DISABLE_SECCOMP', False):\n security.append('--security-opt seccomp={}'.format(policy_file.name))\n self.temporary_files.append(policy_file)\n\n # Drop all capabilities and only add ones that 
are needed.\n security.append('--cap-drop=all')\n\n command_args['security'] = ' '.join(security)\n\n # Setup Docker volumes.\n def new_volume(kind, base_dir_name, volume, path=None, read_only=True):\n \"\"\"Generate a new volume entry.\n\n :param kind: Kind of volume, which is used for getting extra options from\n settings (the ``FLOW_DOCKER_VOLUME_EXTRA_OPTIONS`` setting)\n :param base_dir_name: Name of base directory setting for volume source path\n :param volume: Destination volume mount point\n :param path: Optional additional path atoms appended to source path\n :param read_only: True to make the volume read-only\n \"\"\"\n if path is None:\n path = []\n\n path = [str(atom) for atom in path]\n\n options = set(SETTINGS.get('FLOW_DOCKER_VOLUME_EXTRA_OPTIONS', {}).get(kind, '').split(','))\n options.discard('')\n # Do not allow modification of read-only option.\n options.discard('ro')\n options.discard('rw')\n\n if read_only:\n options.add('ro')\n else:\n options.add('rw')\n\n return {\n 'src': os.path.join(SETTINGS['FLOW_EXECUTOR'].get(base_dir_name, ''), *path),\n 'dest': volume,\n 'options': ','.join(options),\n }\n\n volumes = [\n new_volume(\n 'data', 'DATA_DIR', constants.DATA_VOLUME, [DATA_LOCATION['subpath']], read_only=False\n ),\n new_volume('data_all', 'DATA_DIR', constants.DATA_ALL_VOLUME),\n new_volume('upload', 'UPLOAD_DIR', constants.UPLOAD_VOLUME, read_only=False),\n new_volume(\n 'secrets',\n 'RUNTIME_DIR',\n constants.SECRETS_VOLUME,\n [DATA_LOCATION['subpath'], ExecutorFiles.SECRETS_DIR]\n ),\n ]\n\n # Generate dummy passwd and create mappings for it. This is required because some tools\n # inside the container may try to lookup the given UID/GID and will crash if they don't\n # exist. 
So we create minimal user/group files.\n passwd_file = tempfile.NamedTemporaryFile(mode='w')\n passwd_file.write('root:x:0:0:root:/root:/bin/bash\\n')\n passwd_file.write('user:x:{}:{}:user:/:/bin/bash\\n'.format(os.getuid(), os.getgid()))\n passwd_file.file.flush()\n self.temporary_files.append(passwd_file)\n\n group_file = tempfile.NamedTemporaryFile(mode='w')\n group_file.write('root:x:0:\\n')\n group_file.write('user:x:{}:user\\n'.format(os.getgid()))\n group_file.file.flush()\n self.temporary_files.append(group_file)\n\n volumes += [\n new_volume('users', None, '/etc/passwd', [passwd_file.name]),\n new_volume('users', None, '/etc/group', [group_file.name]),\n ]\n\n # Create volumes for tools.\n # NOTE: To prevent processes tampering with tools, all tools are mounted read-only\n self.tools_volumes = []\n for index, tool in enumerate(self.get_tools_paths()):\n self.tools_volumes.append(new_volume(\n 'tools',\n None,\n os.path.join('/usr/local/bin/resolwe', str(index)),\n [tool]\n ))\n\n volumes += self.tools_volumes\n\n # Create volumes for runtime (all read-only).\n runtime_volume_maps = SETTINGS.get('RUNTIME_VOLUME_MAPS', None)\n if runtime_volume_maps:\n for src, dst in runtime_volume_maps.items():\n volumes.append(new_volume(\n 'runtime',\n 'RUNTIME_DIR',\n dst,\n [DATA_LOCATION['subpath'], src],\n ))\n\n # Add any extra volumes verbatim.\n volumes += SETTINGS.get('FLOW_DOCKER_EXTRA_VOLUMES', [])\n\n # Make sure that tmp dir exists.\n os.makedirs(constants.TMPDIR, mode=0o755, exist_ok=True)\n\n # Create Docker --volume parameters from volumes.\n command_args['volumes'] = ' '.join(['--volume=\"{src}\":\"{dest}\":{options}'.format(**volume)\n for volume in volumes])\n\n # Set working directory to the data volume.\n command_args['workdir'] = '--workdir={}'.format(constants.DATA_VOLUME)\n\n # Change user inside the container.\n command_args['user'] = '--user={}:{}'.format(os.getuid(), os.getgid())\n\n # A non-login Bash shell should be used here (a subshell will 
be spawned later).\n command_args['shell'] = '/bin/bash'\n\n # Check if image exists locally. If not, command will exit with non-zero returncode\n check_command = '{command} image inspect {container_image}'.format(**command_args)\n\n logger.debug(\"Checking existence of docker image: {}\".format(command_args['container_image']))\n\n check_proc = await subprocess.create_subprocess_exec( # pylint: disable=no-member\n *shlex.split(check_command),\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE\n )\n await check_proc.communicate()\n\n if check_proc.returncode != 0:\n pull_command = '{command} pull {container_image}'.format(**command_args)\n\n logger.info(\"Pulling docker image: {}\".format(command_args['container_image']))\n\n pull_proc = await subprocess.create_subprocess_exec( # pylint: disable=no-member\n *shlex.split(pull_command),\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE\n )\n _, stderr = await pull_proc.communicate()\n\n if pull_proc.returncode != 0:\n error_msg = \"Docker failed to pull {} image.\".format(command_args['container_image'])\n if stderr:\n error_msg = '\\n'.join([error_msg, stderr.decode('utf-8')])\n raise RuntimeError(error_msg)\n\n docker_command = (\n '{command} run --rm --interactive {container_name} {network} {volumes} {limits} '\n '{security} {workdir} {user} {container_image} {shell}'.format(**command_args)\n )\n\n logger.info(\"Starting docker container with command: {}\".format(docker_command))\n start_time = time.time()\n\n # Workaround for pylint issue #1469\n # (https://github.com/PyCQA/pylint/issues/1469).\n self.proc = await subprocess.create_subprocess_exec( # pylint: disable=no-member\n *shlex.split(docker_command),\n limit=4 * (2 ** 20), # 4MB buffer size for line buffering\n stdin=subprocess.PIPE,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT\n )\n\n stdout = []\n\n async def wait_for_container():\n \"\"\"Wait for Docker container to start to avoid blocking the code that uses it.\"\"\"\n 
self.proc.stdin.write(('echo PING' + os.linesep).encode('utf-8'))\n await self.proc.stdin.drain()\n while True:\n line = await self.proc.stdout.readline()\n stdout.append(line)\n if line.rstrip() == b'PING':\n break\n if self.proc.stdout.at_eof():\n raise RuntimeError()\n\n try:\n await asyncio.wait_for(wait_for_container(), timeout=DOCKER_START_TIMEOUT)\n except (asyncio.TimeoutError, RuntimeError):\n error_msg = \"Docker container has not started for {} seconds.\".format(DOCKER_START_TIMEOUT)\n stdout = ''.join([line.decode('utf-8') for line in stdout if line])\n if stdout:\n error_msg = '\\n'.join([error_msg, stdout])\n raise RuntimeError(error_msg)\n\n end_time = time.time()\n logger.info(\"It took {:.2f}s for Docker container to start\".format(end_time - start_time))\n\n self.stdout = self.proc.stdout"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\nasync def run_script(self, script):\n # Create a Bash command to add all the tools to PATH.\n tools_paths = ':'.join([map_[\"dest\"] for map_ in self.tools_volumes])\n add_tools_path = 'export PATH=$PATH:{}'.format(tools_paths)\n # Spawn another child bash, to avoid running anything as PID 1, which has special\n # signal handling (e.g., cannot be SIGKILL-ed from inside).\n # A login Bash shell is needed to source /etc/profile.\n bash_line = '/bin/bash --login; exit $?' 
+ os.linesep\n script = os.linesep.join(['set -x', 'set +B', add_tools_path, script]) + os.linesep\n self.proc.stdin.write(bash_line.encode('utf-8'))\n await self.proc.stdin.drain()\n self.proc.stdin.write(script.encode('utf-8'))\n await self.proc.stdin.drain()\n self.proc.stdin.close()", "response": "Execute the script and save results."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nterminate a running script.", "response": "async def terminate(self):\n \"\"\"Terminate a running script.\"\"\"\n # Workaround for pylint issue #1469\n # (https://github.com/PyCQA/pylint/issues/1469).\n cmd = await subprocess.create_subprocess_exec( # pylint: disable=no-member\n *shlex.split('{} rm -f {}'.format(self.command, self._generate_container_name()))\n )\n await cmd.wait()\n await self.proc.wait()\n\n await super().terminate()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsend an update to manager and terminate the process if it fails.", "response": "async def _send_manager_command(self, *args, **kwargs):\n \"\"\"Send an update to manager and terminate the process if it fails.\"\"\"\n resp = await send_manager_command(*args, **kwargs)\n\n if resp is False:\n await self.terminate()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate the status of the current data object.", "response": "async def update_data_status(self, **kwargs):\n \"\"\"Update (PATCH) Data object.\n\n :param kwargs: The dictionary of\n :class:`~resolwe.flow.models.Data` attributes to be changed.\n \"\"\"\n await self._send_manager_command(ExecutorProtocol.UPDATE, extra_fields={\n ExecutorProtocol.UPDATE_CHANGESET: kwargs\n })"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nexecutes the script and save the results.", "response": "async def run(self, data_id, script):\n \"\"\"Execute the script and save results.\"\"\"\n logger.debug(\"Executor for Data with id {} has 
started.\".format(data_id))\n try:\n finish_fields = await self._run(data_id, script)\n except SystemExit as ex:\n raise ex\n except Exception as error: # pylint: disable=broad-except\n logger.exception(\"Unhandled exception in executor\")\n\n # Send error report.\n await self.update_data_status(process_error=[str(error)], status=DATA_META['STATUS_ERROR'])\n\n finish_fields = {\n ExecutorProtocol.FINISH_PROCESS_RC: 1,\n }\n\n if finish_fields is not None:\n await self._send_manager_command(ExecutorProtocol.FINISH, extra_fields=finish_fields)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nensure a new file is created and opened for writing.", "response": "def _create_file(self, filename):\n \"\"\"Ensure a new file is created and opened for writing.\"\"\"\n file_descriptor = os.open(filename, os.O_WRONLY | os.O_CREAT | os.O_EXCL)\n return os.fdopen(file_descriptor, 'w')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\nasync def _run(self, data_id, script):\n self.data_id = data_id\n\n # Fetch data instance to get any executor requirements.\n self.process = PROCESS\n requirements = self.process['requirements']\n self.requirements = requirements.get('executor', {}).get(self.name, {}) # pylint: disable=no-member\n self.resources = requirements.get('resources', {})\n\n logger.debug(\"Preparing output files for Data with id {}\".format(data_id))\n os.chdir(EXECUTOR_SETTINGS['DATA_DIR'])\n try:\n log_file = self._create_file('stdout.txt')\n json_file = self._create_file('jsonout.txt')\n except FileExistsError:\n logger.error(\"Stdout or jsonout file already exists.\")\n # Looks like executor was already run for this Data object,\n # so don't raise the error to prevent setting status to error.\n await self._send_manager_command(ExecutorProtocol.ABORT, expect_reply=False)\n return\n\n proc_pid = await self.start()\n\n await self.update_data_status(\n 
status=DATA_META['STATUS_PROCESSING'],\n process_pid=proc_pid\n )\n\n # Run process and handle intermediate results\n logger.info(\"Running program for Data with id {}\".format(data_id))\n logger.debug(\"The program for Data with id {} is: \\n{}\".format(data_id, script))\n await self.run_script(script)\n spawn_processes = []\n output = {}\n process_error, process_warning, process_info = [], [], []\n process_progress, process_rc = 0, 0\n\n # read process output\n try:\n stdout = self.get_stdout()\n while True:\n line = await stdout.readline()\n logger.debug(\"Process's output: {}\".format(line.strip()))\n\n if not line:\n break\n line = line.decode('utf-8')\n\n try:\n if line.strip().startswith('run'):\n # Save process and spawn if no errors\n log_file.write(line)\n log_file.flush()\n\n for obj in iterjson(line[3:].strip()):\n spawn_processes.append(obj)\n elif line.strip().startswith('export'):\n file_name = line[6:].strip()\n\n export_folder = SETTINGS['FLOW_EXECUTOR']['UPLOAD_DIR']\n unique_name = 'export_{}'.format(uuid.uuid4().hex)\n export_path = os.path.join(export_folder, unique_name)\n\n self.exported_files_mapper[self.data_id][file_name] = unique_name\n\n shutil.move(file_name, export_path)\n else:\n # If JSON, save to MongoDB\n updates = {}\n for obj in iterjson(line):\n for key, val in obj.items():\n if key.startswith('proc.'):\n if key == 'proc.error':\n process_error.append(val)\n if not process_rc:\n process_rc = 1\n updates['process_rc'] = process_rc\n updates['process_error'] = process_error\n updates['status'] = DATA_META['STATUS_ERROR']\n elif key == 'proc.warning':\n process_warning.append(val)\n updates['process_warning'] = process_warning\n elif key == 'proc.info':\n process_info.append(val)\n updates['process_info'] = process_info\n elif key == 'proc.rc':\n process_rc = int(val)\n updates['process_rc'] = process_rc\n if process_rc != 0:\n updates['status'] = DATA_META['STATUS_ERROR']\n elif key == 'proc.progress':\n process_progress = 
int(float(val) * 100)\n updates['process_progress'] = process_progress\n else:\n output[key] = val\n updates['output'] = output\n\n if updates:\n await self.update_data_status(**updates)\n # Process meta fields are collected in listener, so we can clear them.\n process_error, process_warning, process_info = [], [], []\n\n if process_rc > 0:\n log_file.close()\n json_file.close()\n await self._send_manager_command(ExecutorProtocol.FINISH, extra_fields={\n ExecutorProtocol.FINISH_PROCESS_RC: process_rc\n })\n return\n\n # Debug output\n # Not referenced in Data object\n json_file.write(line)\n json_file.flush()\n\n except ValueError as ex:\n # Ignore if not JSON\n log_file.write(line)\n log_file.flush()\n\n except MemoryError as ex:\n logger.error(\"Out of memory:\\n\\n{}\".format(ex))\n\n except IOError as ex:\n # TODO: if ex.errno == 28: no more free space\n raise ex\n finally:\n # Store results\n log_file.close()\n json_file.close()\n\n return_code = await self.end()\n\n if process_rc < return_code:\n process_rc = return_code\n\n # send a notification to the executor listener that we're done\n finish_fields = {\n ExecutorProtocol.FINISH_PROCESS_RC: process_rc\n }\n if spawn_processes and process_rc == 0:\n finish_fields[ExecutorProtocol.FINISH_SPAWN_PROCESSES] = spawn_processes\n finish_fields[ExecutorProtocol.FINISH_EXPORTED_FILES] = self.exported_files_mapper\n\n return finish_fields", "response": "Execute the script and save the results."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the total number of bytes of requested unprocessed replicas.", "response": "def get_total_size_of_queued_replicas():\n \"\"\"Return the total number of bytes of requested, unprocessed replicas.\"\"\"\n return (\n d1_gmn.app.models.ReplicationQueue.objects.filter(\n local_replica__info__status__status='queued'\n ).aggregate(Sum('size'))['size__sum']\n or 0\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 
3, explain what it does\ndef add_to_replication_queue(source_node_urn, sysmeta_pyxb):\n replica_info_model = d1_gmn.app.models.replica_info(\n status_str='queued', source_node_urn=source_node_urn\n )\n local_replica_model = d1_gmn.app.models.local_replica(\n pid=d1_common.xml.get_req_val(sysmeta_pyxb.identifier),\n replica_info_model=replica_info_model,\n )\n d1_gmn.app.models.replication_queue(\n local_replica_model=local_replica_model, size=sysmeta_pyxb.size\n )", "response": "Add a replication request issued by a CN to a queue that is processed asynchronously."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds standard arguments for DataONE utilities to a command line parser.", "response": "def add_arguments(parser, doc_str, add_base_url=True):\n \"\"\"Add standard arguments for DataONE utilities to a command line parser.\"\"\"\n parser.description = doc_str\n parser.formatter_class = argparse.RawDescriptionHelpFormatter\n parser.add_argument(\"--debug\", action=\"store_true\", help=\"Debug level logging\")\n parser.add_argument(\n \"--cert-pub\",\n dest=\"cert_pem_path\",\n action=\"store\",\n default=django.conf.settings.CLIENT_CERT_PATH,\n help=\"Path to PEM formatted public key of certificate\",\n )\n parser.add_argument(\n \"--cert-key\",\n dest=\"cert_key_path\",\n action=\"store\",\n default=django.conf.settings.CLIENT_CERT_PRIVATE_KEY_PATH,\n help=\"Path to PEM formatted private key of certificate\",\n )\n parser.add_argument(\n \"--public\", action=\"store_true\", help=\"Do not use certificate even if available\"\n )\n parser.add_argument(\n \"--disable-server-cert-validation\",\n action=\"store_true\",\n help=\"Do not validate the TLS/SSL server side certificate of the source node (insecure)\",\n )\n parser.add_argument(\n \"--timeout\",\n type=float,\n action=\"store\",\n default=DEFAULT_TIMEOUT_SEC,\n help=\"Timeout for DataONE API calls to the source MN\",\n )\n parser.add_argument(\n \"--retries\",\n 
type=int,\n action=\"store\",\n default=DEFAULT_RETRY_COUNT,\n help=\"Retry DataONE API calls that raise HTTP level exceptions\",\n )\n parser.add_argument(\n \"--page-size\",\n type=int,\n action=\"store\",\n default=DEFAULT_PAGE_SIZE,\n help=\"Number of objects to retrieve in each list method API call to source MN\",\n )\n parser.add_argument(\n \"--major\",\n type=int,\n action=\"store\",\n help=\"Skip automatic detection of API major version and use the provided version\",\n )\n parser.add_argument(\n \"--max-concurrent\",\n type=int,\n action=\"store\",\n default=DEFAULT_MAX_CONCURRENT_TASK_COUNT,\n help=\"Max number of concurrent DataONE API\",\n )\n if not add_base_url:\n parser.add_argument(\n \"--baseurl\",\n action=\"store\",\n default=django.conf.settings.DATAONE_ROOT,\n help=\"Remote MN or CN BaseURL\",\n )\n else:\n parser.add_argument(\"baseurl\", help=\"Remote MN or CN BaseURL\")"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a list of all members of a Resource Map.", "response": "def get_resource_map_members(pid):\n \"\"\"``pid`` is the PID of a Resource Map or the PID of a member of a Resource Map.\"\"\"\n if d1_gmn.app.did.is_resource_map_db(pid):\n return get_resource_map_members_by_map(pid)\n elif d1_gmn.app.did.is_resource_map_member(pid):\n return get_resource_map_members_by_member(pid)\n else:\n raise d1_common.types.exceptions.InvalidRequest(\n 0, 'Not a Resource Map or Resource Map member. 
pid=\"{}\"'.format(pid)\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_with_conversion(self, variable, value_string):\n self._assert_valid_variable(variable)\n try:\n v = ast.literal_eval(value_string)\n except (ValueError, SyntaxError):\n v = value_string\n if v is None or v == \"none\":\n self._variables[variable] = None\n else:\n try:\n type_converter = variable_type_map[variable]\n value_string = self._validate_variable_type(\n value_string, type_converter\n )\n value = type_converter(value_string)\n self._variables[variable] = value\n except ValueError:\n raise d1_cli.impl.exceptions.InvalidArguments(\n \"Invalid value for {}: {}\".format(variable, value_string)\n )", "response": "Convert user supplied string to Python type."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef log_setup(is_debug=False, is_multiprocess=False):\n format_str = (\n '%(asctime)s %(name)s %(module)s:%(lineno)d %(process)4d %(levelname)-8s %(message)s'\n if is_multiprocess\n else '%(asctime)s %(name)s %(module)s:%(lineno)d %(levelname)-8s %(message)s'\n )\n formatter = logging.Formatter(format_str, '%Y-%m-%d %H:%M:%S')\n console_logger = logging.StreamHandler(sys.stdout)\n console_logger.setFormatter(formatter)\n logging.getLogger('').addHandler(console_logger)\n if is_debug:\n logging.getLogger('').setLevel(logging.DEBUG)\n else:\n logging.getLogger('').setLevel(logging.INFO)", "response": "Setup a standardized log format for the DataONE Python stack."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_content_type(content_type):\n m = email.message.Message()\n m['Content-Type'] = content_type\n return m.get_content_type()", "response": "Extract the MIME type value from a content type string."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmerging two nested dicts.", "response": "def 
nested_update(d, u):\n \"\"\"Merge two nested dicts.\n\n Nested dicts are sometimes used for representing various recursive structures. When\n updating such a structure, it may be convenient to present the updated data as a\n corresponding recursive structure. This function will then apply the update.\n\n Args:\n d: dict\n dict that will be updated in-place. May or may not contain nested dicts.\n\n u: dict\n dict with contents that will be merged into ``d``. May or may not contain\n nested dicts.\n\n \"\"\"\n for k, v in list(u.items()):\n if isinstance(v, collections.Mapping):\n r = nested_update(d.get(k, {}), v)\n d[k] = r\n else:\n d[k] = u[k]\n return d"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef print_logging():\n root_logger = logging.getLogger()\n old_level_list = [h.level for h in root_logger.handlers]\n for h in root_logger.handlers:\n h.setLevel(logging.WARN)\n log_format = logging.Formatter('%(message)s')\n stream_handler = logging.StreamHandler(sys.stdout)\n stream_handler.setFormatter(log_format)\n stream_handler.setLevel(logging.DEBUG)\n root_logger.addHandler(stream_handler)\n yield\n root_logger.removeHandler(stream_handler)\n for h, level in zip(root_logger.handlers, old_level_list):\n h.setLevel(level)", "response": "Context manager that temporarily suppresses additional information such as timestamps\n when writing to loggers."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef save_json(py_obj, json_path):\n with open(json_path, 'w', encoding='utf-8') as f:\n f.write(serialize_to_normalized_pretty_json(py_obj))", "response": "Serialize a native object to JSON and save it normalized pretty printed to a\n file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nserialize a native object to normalized pretty printed JSON.", "response": "def serialize_to_normalized_pretty_json(py_obj):\n 
\"\"\"Serialize a native object to normalized, pretty printed JSON.\n\n The JSON string is normalized by sorting any dictionary keys.\n\n Args:\n py_obj: object\n Any object that can be represented in JSON. Some types, such as datetimes are\n automatically converted to strings.\n\n Returns:\n str: normalized, pretty printed JSON string.\n\n \"\"\"\n return json.dumps(py_obj, sort_keys=True, indent=2, cls=ToJsonCompatibleTypes)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef serialize_to_normalized_compact_json(py_obj):\n return json.dumps(\n py_obj, sort_keys=True, separators=(',', ':'), cls=ToJsonCompatibleTypes\n )", "response": "Serialize a native object to normalized compact JSON."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef format_sec_to_dhm(sec):\n rem_int, s_int = divmod(int(sec), 60)\n rem_int, m_int, = divmod(rem_int, 60)\n d_int, h_int, = divmod(rem_int, 24)\n return '{}d{:02d}h{:02d}m'.format(d_int, h_int, m_int)", "response": "Format seconds to days hours minutes."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef count(self, event_str, inc_int=1):\n self._event_dict.setdefault(event_str, 0)\n self._event_dict[event_str] += inc_int", "response": "Counts the number of times an event occurs in the event dict."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncounts an event and write a message to a logger.", "response": "def log_and_count(self, event_str, msg_str=None, inc_int=None):\n \"\"\"Count an event and write a message to a logger.\n\n Args:\n event_str: str\n The name of an event to count. Used as a key in the event dict. The same\n name will be used in the summary. This also becomes a part of the message\n logged by this function.\n\n msg_str: str\n Optional message with details about the events. 
The message is only written\n to the log. While the ``event_str`` functions as a key and must remain the\n same for the same type of event, ``msg_str`` may change between calls.\n\n inc_int: int\n Optional argument to increase the count for the event by more than 1.\n\n \"\"\"\n logger.info(\n ' - '.join(map(str, [v for v in (event_str, msg_str, inc_int) if v]))\n )\n self.count(event_str, inc_int or 1)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dump_to_log(self):\n if self._event_dict:\n logger.info('Events:')\n for event_str, count_int in sorted(self._event_dict.items()):\n logger.info(' {}: {}'.format(event_str, count_int))\n else:\n logger.info('No Events')", "response": "Write summary to logger."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nclears the database. Used for testing and debugging.", "response": "def delete_all_from_db():\n \"\"\"Clear the database.\n\n Used for testing and debugging.\n\n \"\"\"\n # The models.CASCADE property is set on all ForeignKey fields, so tables can\n # be deleted in any order without breaking constraints.\n for model in django.apps.apps.get_models():\n model.objects.all().delete()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a query parameter uniformly for GET and POST requests.", "response": "def get_query_param(request, key):\n \"\"\"Get query parameter uniformly for GET and POST requests.\"\"\"\n value = request.query_params.get(key) or request.data.get(key)\n if value is None:\n raise KeyError()\n return value"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef futurize_module(module_path, show_diff, write_update):\n logging.info('Futurizing module... 
path=\"{}\"'.format(module_path))\n ast_tree = back_to_the_futurize(module_path)\n return d1_dev.util.update_module_file_ast(\n ast_tree, module_path, show_diff, write_update\n )", "response": "2to3 uses AST, not Baron."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving comments from a single line import statement.", "response": "def _remove_single_line_import_comments(r):\n \"\"\"We previously used more groups for the import statements and named each group.\"\"\"\n logging.info('Removing single line import comments')\n import_r, remaining_r = split_by_last_import(r)\n new_import_r = redbaron.NodeList()\n for i, v in enumerate(import_r):\n if 1 < i < len(import_r) - 2:\n if not (\n import_r[i - 2].type != 'comment'\n and v.type == 'comment'\n and import_r[i + 2].type != 'comment'\n ) or _is_keep_comment(v):\n new_import_r.append(v)\n else:\n new_import_r.append(v)\n return new_import_r + remaining_r"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _update_init_all(module_path, r):\n module_dir_path = os.path.split(module_path)[0]\n module_list = []\n for item_name in os.listdir(module_dir_path):\n item_path = os.path.join(module_dir_path, item_name)\n if os.path.isfile(item_path) and item_name in ('__init__.py', 'setup.py'):\n continue\n if os.path.isfile(item_path) and not item_name.endswith('.py'):\n continue\n # if os.path.isdir(item_path) and not os.path.isfile(\n # os.path.join(item_path, '__init__.py')\n # ):\n # continue\n if os.path.isdir(item_path):\n continue\n module_list.append(re.sub(r'.py$', '', item_name).encode('utf-8'))\n module_literal_str = str(sorted(module_list))\n\n assignment_node_list = r('AssignmentNode', recursive=False)\n for n in assignment_node_list:\n if n.type == 'assignment' and n.target.value == '__all__':\n n.value = module_literal_str\n break\n else:\n r.node_list.append(\n redbaron.RedBaron('__all__ = 
{}\\n'.format(module_literal_str))\n )\n return r", "response": "Add or update __all__ in __init__.py file."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves any __all__ in __init__.py file.", "response": "def _remove_init_all(r):\n \"\"\"Remove any __all__ in __init__.py file.\"\"\"\n new_r = redbaron.NodeList()\n for n in r.node_list:\n if n.type == 'assignment' and n.target.value == '__all__':\n pass\n else:\n new_r.append(n)\n return new_r"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_object_list_json(request):\n # TODO: Add to documentation\n if \"f\" in request.GET:\n field_list = request.GET.getlist(\"f\")\n else:\n field_list = None\n\n result_dict = d1_gmn.app.views.util.query_object_list(request, \"object_list_json\")\n result_dict[\"fields\"] = field_list\n result_dict[\"objects\"] = d1_gmn.app.sysmeta_extract.extract_values_query(\n result_dict[\"query\"], field_list\n )\n del result_dict[\"query\"]\n\n return django.http.HttpResponse(\n d1_common.util.serialize_to_normalized_pretty_json(result_dict),\n d1_common.const.CONTENT_TYPE_JSON,\n )", "response": "Return a JSON representation of the object list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconfigure logging to send log records to the master.", "response": "def configure_logging(emit_list):\n \"\"\"Configure logging to send log records to the master.\"\"\"\n if 'sphinx' in sys.modules:\n module_base = 'resolwe.flow.executors'\n else:\n module_base = 'executors'\n logging_config = dict(\n version=1,\n formatters={\n 'json_formatter': {\n '()': JSONFormatter\n },\n },\n handlers={\n 'redis': {\n 'class': module_base + '.logger.RedisHandler',\n 'formatter': 'json_formatter',\n 'level': logging.INFO,\n 'emit_list': emit_list\n },\n 'console': {\n 'class': 'logging.StreamHandler',\n 'level': logging.WARNING\n },\n\n },\n root={\n 'handlers': 
['redis', 'console'],\n 'level': logging.DEBUG,\n },\n loggers={\n # Don't use redis logger to prevent circular dependency.\n module_base + '.manager_comm': {\n 'level': 'INFO',\n 'handlers': ['console'],\n 'propagate': False,\n },\n },\n )\n\n dictConfig(logging_config)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef format(self, record):\n data = record.__dict__.copy()\n\n data['data_id'] = DATA['id']\n data['data_location_id'] = DATA_LOCATION['id']\n data['hostname'] = socket.gethostname()\n\n # Get relative path, so listener can reconstruct the path to the actual code.\n data['pathname'] = os.path.relpath(data['pathname'], os.path.dirname(__file__))\n\n # Exception and Traceback cannot be serialized.\n data['exc_info'] = None\n\n # Ensure logging message is instantiated to a string.\n data['msg'] = str(data['msg'])\n\n return json.dumps(data)", "response": "Dump the record to JSON."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsend log message to the listener.", "response": "def emit(self, record):\n \"\"\"Send log message to the listener.\"\"\"\n future = asyncio.ensure_future(send_manager_command(\n ExecutorProtocol.LOG,\n extra_fields={\n ExecutorProtocol.LOG_MESSAGE: self.format(record),\n },\n expect_reply=False\n ))\n self.emit_list.append(future)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\nasync def is_object_synced_to_cn(self, client, pid):\n status = await client.describe(pid)\n if status == 200:\n self.progress_logger.event(\"SciObj already synced on CN\")\n return True\n elif status == 404:\n self.progress_logger.event(\"SciObj has not synced to CN\")\n return False\n self.progress_logger.event(\n \"CNRead.describe() returned unexpected status code. 
\"\n 'pid=\"{}\" status=\"{}\"'.format(pid, status)\n )\n return True", "response": "Check if object with pid has successfully synced to CN."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nissues a notification and request for sync for object with pid to the CN.", "response": "async def send_synchronization_request(self, client, pid):\n \"\"\"Issue a notification and request for sync for object with {pid} to the CN.\"\"\"\n # Skip CN call for debugging\n # status = 200\n status = await client.synchronize(pid)\n if status == 200:\n self.progress_logger.event(\"Issued sync request, CN accepted\")\n else:\n self.progress_logger.event(\n \"CNRead.synchronize() returned unexpected status code. \"\n 'pid=\"{}\" status=\"{}\"'.format(pid, status)\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a new secret returning its handle.", "response": "def create_secret(self, value, contributor, metadata=None, expires=None):\n \"\"\"Create a new secret, returning its handle.\n\n :param value: Secret value to store\n :param contributor: User owning the secret\n :param metadata: Optional metadata dictionary (must be JSON serializable)\n :param expires: Optional date/time of expiry (defaults to None, which means that\n the secret never expires)\n :return: Secret handle\n \"\"\"\n if metadata is None:\n metadata = {}\n\n secret = self.create(\n value=value,\n contributor=contributor,\n metadata=metadata,\n expires=expires,\n )\n return str(secret.handle)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nretrieve an existing secret s value.", "response": "def get_secret(self, handle, contributor):\n \"\"\"Retrieve an existing secret's value.\n\n :param handle: Secret handle\n :param contributor: User instance to perform contributor validation,\n which means that only secrets for the given contributor will be\n looked up.\n \"\"\"\n queryset = self.all()\n if 
contributor is not None:\n queryset = queryset.filter(contributor=contributor)\n secret = queryset.get(handle=handle)\n return secret.value"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning json schema for json validation.", "response": "def validation_schema(name):\n \"\"\"Return json schema for json validation.\"\"\"\n schemas = {\n 'processor': 'processSchema.json',\n 'descriptor': 'descriptorSchema.json',\n 'field': 'fieldSchema.json',\n 'type': 'typeSchema.json',\n }\n\n if name not in schemas:\n raise ValueError()\n\n field_schema_file = finders.find('flow/{}'.format(schemas['field']), all=True)[0]\n with open(field_schema_file, 'r') as fn:\n field_schema = fn.read()\n\n if name == 'field':\n return json.loads(field_schema.replace('{{PARENT}}', ''))\n\n schema_file = finders.find('flow/{}'.format(schemas[name]), all=True)[0]\n with open(schema_file, 'r') as fn:\n schema = fn.read()\n\n return json.loads(schema.replace('{{FIELD}}', field_schema).replace('{{PARENT}}', '/field'))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate_schema(instance, schema, test_required=True, data_location=None,\n skip_missing_data=False):\n \"\"\"Check if DictField values are consistent with our data types.\n\n Perform basic JSON schema validation and our custom validations:\n\n * check that required fields are given (if `test_required` is set\n to ``True``)\n * check if ``basic:file:`` and ``list:basic:file`` fields match\n regex given in schema (only if ``validate_regex`` is defined in\n schema for corresponding fields) and exist (only if\n ``data_location`` is given)\n * check if directories referenced in ``basic:dir:`` and\n ``list:basic:dir`` fields exist (only if ``data_location`` is\n given)\n * check that referenced ``Data`` objects (in ``data:``\n and ``list:data:`` fields) exist and are of type\n ````\n * check that referenced ``Storage`` objects (in 
``basic:json``\n fields) exist\n\n :param list instance: Instance to be validated\n :param list schema: Schema for validation\n :param bool test_required: Flag for testing if all required fields\n are present. It is useful if validation is run before ``Data``\n object is finished and there are some fields still missing\n (default: ``True``)\n :param :class:`~resolwe.flow.models.data.DataLocation` data_location:\n data location used for checking if files and directories exist\n (default: ``None``)\n :param bool skip_missing_data: Don't raise an error if referenced\n ``Data`` object does not exist\n :rtype: None\n :raises ValidationError: if ``instance`` doesn't match schema\n defined in ``schema``\n\n \"\"\"\n from .storage import Storage # Prevent circular import.\n\n path_prefix = None\n if data_location:\n path_prefix = data_location.get_path()\n\n def validate_refs(field):\n \"\"\"Validate reference paths.\"\"\"\n for ref_filename in field.get('refs', []):\n ref_path = os.path.join(path_prefix, ref_filename)\n if not os.path.exists(ref_path):\n raise ValidationError(\"Path referenced in `refs` ({}) does not exist.\".format(ref_path))\n if not (os.path.isfile(ref_path) or os.path.isdir(ref_path)):\n raise ValidationError(\n \"Path referenced in `refs` ({}) is neither a file or directory.\".format(ref_path))\n\n def validate_file(field, regex):\n \"\"\"Validate file name (and check that it exists).\"\"\"\n filename = field['file']\n\n if regex and not re.search(regex, filename):\n raise ValidationError(\n \"File name {} does not match regex {}\".format(filename, regex))\n\n if path_prefix:\n path = os.path.join(path_prefix, filename)\n if not os.path.exists(path):\n raise ValidationError(\"Referenced path ({}) does not exist.\".format(path))\n if not os.path.isfile(path):\n raise ValidationError(\"Referenced path ({}) is not a file.\".format(path))\n\n validate_refs(field)\n\n def validate_dir(field):\n \"\"\"Check that dirs and referenced files exist.\"\"\"\n 
dirname = field['dir']\n\n if path_prefix:\n path = os.path.join(path_prefix, dirname)\n if not os.path.exists(path):\n raise ValidationError(\"Referenced path ({}) does not exist.\".format(path))\n if not os.path.isdir(path):\n raise ValidationError(\"Referenced path ({}) is not a directory.\".format(path))\n\n validate_refs(field)\n\n def validate_data(data_pk, type_):\n \"\"\"Check that the `Data` object exists and is of the right type.\"\"\"\n from .data import Data # prevent circular import\n\n data_qs = Data.objects.filter(pk=data_pk).values('process__type')\n if not data_qs.exists():\n if skip_missing_data:\n return\n\n raise ValidationError(\n \"Referenced `Data` object does not exist (id:{})\".format(data_pk))\n data = data_qs.first()\n if not data['process__type'].startswith(type_):\n raise ValidationError(\n \"Data object of type `{}` is required, but type `{}` is given. \"\n \"(id:{})\".format(type_, data['process__type'], data_pk))\n\n def validate_range(value, interval, name):\n \"\"\"Check that given value is inside the specified range.\"\"\"\n if not interval:\n return\n\n if value < interval[0] or value > interval[1]:\n raise ValidationError(\n \"Value of field '{}' is out of range. 
It should be between {} and {}.\".format(\n name, interval[0], interval[1]\n )\n )\n\n is_dirty = False\n dirty_fields = []\n for _schema, _fields, _ in iterate_schema(instance, schema):\n name = _schema['name']\n is_required = _schema.get('required', True)\n\n if test_required and is_required and name not in _fields:\n is_dirty = True\n dirty_fields.append(name)\n\n if name in _fields:\n field = _fields[name]\n type_ = _schema.get('type', \"\")\n\n # Treat None as if the field is missing.\n if not is_required and field is None:\n continue\n\n try:\n jsonschema.validate([{\"type\": type_, \"value\": field}], TYPE_SCHEMA)\n except jsonschema.exceptions.ValidationError as ex:\n raise ValidationError(ex.message)\n\n choices = [choice['value'] for choice in _schema.get('choices', [])]\n allow_custom_choice = _schema.get('allow_custom_choice', False)\n if choices and not allow_custom_choice and field not in choices:\n raise ValidationError(\n \"Value of field '{}' must match one of predefined choices. 
\"\n \"Current value: {}\".format(name, field)\n )\n\n if type_ == 'basic:file:':\n validate_file(field, _schema.get('validate_regex'))\n\n elif type_ == 'list:basic:file:':\n for obj in field:\n validate_file(obj, _schema.get('validate_regex'))\n\n elif type_ == 'basic:dir:':\n validate_dir(field)\n\n elif type_ == 'list:basic:dir:':\n for obj in field:\n validate_dir(obj)\n\n elif type_ == 'basic:json:' and not Storage.objects.filter(pk=field).exists():\n raise ValidationError(\n \"Referenced `Storage` object does not exist (id:{})\".format(field))\n\n elif type_.startswith('data:'):\n validate_data(field, type_)\n\n elif type_.startswith('list:data:'):\n for data_id in field:\n validate_data(data_id, type_[5:]) # remove `list:` from type\n\n elif type_ == 'basic:integer:' or type_ == 'basic:decimal:':\n validate_range(field, _schema.get('range'), name)\n\n elif type_ == 'list:basic:integer:' or type_ == 'list:basic:decimal:':\n for obj in field:\n validate_range(obj, _schema.get('range'), name)\n\n try:\n # Check that schema definitions exist for all fields\n for _, _ in iterate_fields(instance, schema):\n pass\n except KeyError as ex:\n raise ValidationError(str(ex))\n\n if is_dirty:\n dirty_fields = ['\"{}\"'.format(field) for field in dirty_fields]\n raise DirtyError(\"Required fields {} not given.\".format(', '.join(dirty_fields)))", "response": "Validate that the given instance and schema are consistent with our data types."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _hydrate_values(output, output_schema, data):\n def hydrate_path(file_name):\n \"\"\"Hydrate file paths.\"\"\"\n from resolwe.flow.managers import manager\n\n class HydratedPath(str):\n \"\"\"String wrapper, which also stores the original filename.\"\"\"\n\n __slots__ = ('data_id', 'file_name')\n\n def __new__(cls, value=''):\n \"\"\"Initialize hydrated path.\"\"\"\n hydrated = str.__new__(cls, value)\n hydrated.data_id = 
data.id\n hydrated.file_name = file_name\n return hydrated\n\n return HydratedPath(manager.get_executor().resolve_data_path(data, file_name))\n\n def hydrate_storage(storage_id):\n \"\"\"Hydrate storage fields.\"\"\"\n from .storage import LazyStorageJSON # Prevent circular import.\n\n return LazyStorageJSON(pk=storage_id)\n\n for field_schema, fields in iterate_fields(output, output_schema):\n name = field_schema['name']\n value = fields[name]\n if 'type' in field_schema:\n if field_schema['type'].startswith('basic:file:'):\n value['file'] = hydrate_path(value['file'])\n value['refs'] = [hydrate_path(ref) for ref in value.get('refs', [])]\n\n elif field_schema['type'].startswith('list:basic:file:'):\n for obj in value:\n obj['file'] = hydrate_path(obj['file'])\n obj['refs'] = [hydrate_path(ref) for ref in obj.get('refs', [])]\n\n if field_schema['type'].startswith('basic:dir:'):\n value['dir'] = hydrate_path(value['dir'])\n value['refs'] = [hydrate_path(ref) for ref in value.get('refs', [])]\n\n elif field_schema['type'].startswith('list:basic:dir:'):\n for obj in value:\n obj['dir'] = hydrate_path(obj['dir'])\n obj['refs'] = [hydrate_path(ref) for ref in obj.get('refs', [])]\n\n elif field_schema['type'].startswith('basic:json:'):\n fields[name] = hydrate_storage(value)\n\n elif field_schema['type'].startswith('list:basic:json:'):\n fields[name] = [hydrate_storage(storage_id) for storage_id in value]", "response": "Hydrate basic file and basic. 
json values."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef hydrate_input_references(input_, input_schema, hydrate_values=True):\n from .data import Data # prevent circular import\n\n for field_schema, fields in iterate_fields(input_, input_schema):\n name = field_schema['name']\n value = fields[name]\n if 'type' in field_schema:\n if field_schema['type'].startswith('data:'):\n if value is None:\n continue\n\n try:\n data = Data.objects.get(id=value)\n except Data.DoesNotExist:\n fields[name] = {}\n continue\n\n output = copy.deepcopy(data.output)\n if hydrate_values:\n _hydrate_values(output, data.process.output_schema, data)\n output[\"__id\"] = data.id\n output[\"__type\"] = data.process.type\n output[\"__descriptor\"] = data.descriptor\n output[\"__entity_name\"] = None\n output[\"__output_schema\"] = data.process.output_schema\n\n entity = data.entity_set.values('name').first()\n if entity:\n output[\"__entity_name\"] = entity['name']\n\n fields[name] = output\n\n elif field_schema['type'].startswith('list:data:'):\n outputs = []\n for val in value:\n if val is None:\n continue\n\n try:\n data = Data.objects.get(id=val)\n except Data.DoesNotExist:\n outputs.append({})\n continue\n\n output = copy.deepcopy(data.output)\n if hydrate_values:\n _hydrate_values(output, data.process.output_schema, data)\n\n output[\"__id\"] = data.id\n output[\"__type\"] = data.process.type\n output[\"__descriptor\"] = data.descriptor\n output[\"__output_schema\"] = data.process.output_schema\n\n entity = data.entity_set.values('name').first()\n if entity:\n output[\"__entity_name\"] = entity['name']\n\n outputs.append(output)\n\n fields[name] = outputs", "response": "Hydrate input_ with linked data."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef hydrate_input_uploads(input_, input_schema, hydrate_values=True):\n from resolwe.flow.managers 
import manager\n\n files = []\n for field_schema, fields in iterate_fields(input_, input_schema):\n name = field_schema['name']\n value = fields[name]\n if 'type' in field_schema:\n if field_schema['type'] == 'basic:file:':\n files.append(value)\n\n elif field_schema['type'] == 'list:basic:file:':\n files.extend(value)\n\n urlregex = re.compile(r'^(https?|ftp)://[-A-Za-z0-9\\+&@#/%?=~_|!:,.;]*[-A-Za-z0-9\\+&@#/%=~_|]')\n for value in files:\n if 'file_temp' in value:\n if isinstance(value['file_temp'], str):\n # If file_temp not url, hydrate path.\n if not urlregex.search(value['file_temp']):\n value['file_temp'] = manager.get_executor().resolve_upload_path(value['file_temp'])\n else:\n # Something very strange happened.\n value['file_temp'] = 'Invalid value for file_temp in DB'", "response": "Hydrate input basic : upload types with upload location."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef hydrate_size(data, force=False):\n from .data import Data # prevent circular import\n\n def get_dir_size(path):\n \"\"\"Get directory size.\"\"\"\n total_size = 0\n for dirpath, _, filenames in os.walk(path):\n for file_name in filenames:\n file_path = os.path.join(dirpath, file_name)\n if not os.path.isfile(file_path): # Skip all \"not normal\" files (links, ...)\n continue\n total_size += os.path.getsize(file_path)\n return total_size\n\n def get_refs_size(obj, obj_path):\n \"\"\"Calculate size of all references of ``obj``.\n\n :param dict obj: Data object's output field (of type file/dir).\n :param str obj_path: Path to ``obj``.\n \"\"\"\n total_size = 0\n for ref in obj.get('refs', []):\n ref_path = data.location.get_path(filename=ref)\n if ref_path in obj_path:\n # It is a common case that ``obj['file']`` is also contained in\n # one of obj['ref']. 
In that case, we need to make sure that its\n # size is not counted twice:\n continue\n if os.path.isfile(ref_path):\n total_size += os.path.getsize(ref_path)\n elif os.path.isdir(ref_path):\n total_size += get_dir_size(ref_path)\n\n return total_size\n\n def add_file_size(obj):\n \"\"\"Add file size to the basic:file field.\"\"\"\n if data.status in [Data.STATUS_DONE, Data.STATUS_ERROR] and 'size' in obj and not force:\n return\n\n path = data.location.get_path(filename=obj['file'])\n if not os.path.isfile(path):\n raise ValidationError(\"Referenced file does not exist ({})\".format(path))\n\n obj['size'] = os.path.getsize(path)\n obj['total_size'] = obj['size'] + get_refs_size(obj, path)\n\n def add_dir_size(obj):\n \"\"\"Add directory size to the basic:dir field.\"\"\"\n if data.status in [Data.STATUS_DONE, Data.STATUS_ERROR] and 'size' in obj and not force:\n return\n\n path = data.location.get_path(filename=obj['dir'])\n if not os.path.isdir(path):\n raise ValidationError(\"Referenced dir does not exist ({})\".format(path))\n\n obj['size'] = get_dir_size(path)\n obj['total_size'] = obj['size'] + get_refs_size(obj, path)\n\n data_size = 0\n for field_schema, fields in iterate_fields(data.output, data.process.output_schema):\n name = field_schema['name']\n value = fields[name]\n if 'type' in field_schema:\n if field_schema['type'].startswith('basic:file:'):\n add_file_size(value)\n data_size += value.get('total_size', 0)\n elif field_schema['type'].startswith('list:basic:file:'):\n for obj in value:\n add_file_size(obj)\n data_size += obj.get('total_size', 0)\n elif field_schema['type'].startswith('basic:dir:'):\n add_dir_size(value)\n data_size += value.get('total_size', 0)\n elif field_schema['type'].startswith('list:basic:dir:'):\n for obj in value:\n add_dir_size(obj)\n data_size += obj.get('total_size', 0)\n\n data.size = data_size", "response": "Compute and store the total size of all files and directories referenced by the data object's output fields."} {"SOURCE": "codesearchnet", 
"instruction": "How would you explain what the following Python 3 function does\ndef render_descriptor(data):\n if not data.descriptor_schema:\n return\n\n # Set default values\n for field_schema, field, path in iterate_schema(data.descriptor, data.descriptor_schema.schema, 'descriptor'):\n if 'default' in field_schema and field_schema['name'] not in field:\n dict_dot(data, path, field_schema['default'])", "response": "Render data descriptor.\n\n The rendering is based on descriptor schema and input context.\n\n :param data: data instance\n :type data: :class:`resolwe.flow.models.Data` or :class:`dict`"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef render_template(process, template_string, context):\n from resolwe.flow.managers import manager\n\n # Get the appropriate expression engine. If none is defined, do not evaluate\n # any expressions.\n expression_engine = process.requirements.get('expression-engine', None)\n if not expression_engine:\n return template_string\n\n return manager.get_expression_engine(expression_engine).evaluate_block(template_string, context)", "response": "Render a template using the specified expression engine."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef json_path_components(path):\n if isinstance(path, str):\n path = path.split('.')\n\n return list(path)", "response": "Convert a JSON path to individual path components."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validate_process_subtype(supertype_name, supertype, subtype_name, subtype):\n errors = []\n for item in supertype:\n # Ensure that the item exists in subtype and has the same schema.\n for subitem in subtype:\n if item['name'] != subitem['name']:\n continue\n\n for key in set(item.keys()) | set(subitem.keys()):\n if key in ('label', 'description'):\n # Label and description can differ.\n continue\n elif key == 
'required':\n # A non-required item can be made required in subtype, but not the\n # other way around.\n item_required = item.get('required', True)\n subitem_required = subitem.get('required', False)\n\n if item_required and not subitem_required:\n errors.append(\"Field '{}' is marked as required in '{}' and optional in '{}'.\".format(\n item['name'],\n supertype_name,\n subtype_name,\n ))\n elif item.get(key, None) != subitem.get(key, None):\n errors.append(\"Schema for field '{}' in type '{}' does not match supertype '{}'.\".format(\n item['name'],\n subtype_name,\n supertype_name\n ))\n\n break\n else:\n errors.append(\"Schema for type '{}' is missing supertype '{}' field '{}'.\".format(\n subtype_name,\n supertype_name,\n item['name']\n ))\n\n return errors", "response": "Perform process subtype validation."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nperform process type validation.", "response": "def validate_process_types(queryset=None):\n \"\"\"Perform process type validation.\n\n :param queryset: Optional process queryset to validate\n :return: A list of validation error strings\n \"\"\"\n if not queryset:\n from .process import Process\n queryset = Process.objects.all()\n\n processes = {}\n for process in queryset:\n dict_dot(\n processes,\n process.type.replace(':', '.') + '__schema__',\n process.output_schema\n )\n\n errors = []\n for path, key, value in iterate_dict(processes, exclude=lambda key, value: key == '__schema__'):\n if '__schema__' not in value:\n continue\n\n # Validate with any parent types.\n for length in range(len(path), 0, -1):\n parent_type = '.'.join(path[:length] + ['__schema__'])\n try:\n parent_schema = dict_dot(processes, parent_type)\n except KeyError:\n continue\n\n errors += validate_process_subtype(\n supertype_name=':'.join(path[:length]),\n supertype=parent_schema,\n subtype_name=':'.join(path + [key]),\n subtype=value['__schema__']\n )\n\n return errors"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 function for\nfilling empty optional fields in input with default values.", "response": "def fill_with_defaults(process_input, input_schema):\n \"\"\"Fill empty optional fields in input with default values.\"\"\"\n for field_schema, fields, path in iterate_schema(process_input, input_schema):\n if 'default' in field_schema and field_schema['name'] not in fields:\n dict_dot(process_input, path, field_schema['default'])"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nformat the internal value.", "response": "def to_internal_value(self, data):\n \"\"\"Format the internal value.\"\"\"\n # When setting the contributor, it may be passed as an integer.\n if isinstance(data, dict) and isinstance(data.get('id', None), int):\n data = data['id']\n elif isinstance(data, int):\n pass\n else:\n raise ValidationError(\"Contributor must be an integer or a dictionary with key 'id'\")\n\n return self.Meta.model.objects.get(pk=data)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef extract_descriptor(self, obj):\n descriptor = []\n\n def flatten(current):\n \"\"\"Flatten descriptor.\"\"\"\n if isinstance(current, dict):\n for key in current:\n flatten(current[key])\n elif isinstance(current, list):\n for val in current:\n flatten(val)\n elif isinstance(current, (int, bool, float, str)):\n descriptor.append(str(current))\n\n flatten(obj.descriptor)\n\n return descriptor", "response": "Extract data from the descriptor."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _serialize_items(self, serializer, kind, items):\n if self.request and self.request.query_params.get('hydrate_{}'.format(kind), False):\n serializer = serializer(items, many=True, read_only=True)\n serializer.bind(kind, self)\n return serializer.data\n else:\n return [item.id for item in items]", "response": "Return serialized items or 
list of ids depending on hydrate_XXX query param."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns serialized list of entity names on data that user has view permission on.", "response": "def get_entity_names(self, data):\n \"\"\"Return serialized list of entity names on data that user has `view` permission on.\"\"\"\n entities = self._filter_queryset('view_entity', data.entity_set.all())\n return list(entities.values_list('name', flat=True))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns serialized list of collection objects on data that user has view permission on.", "response": "def get_collections(self, data):\n \"\"\"Return serialized list of collection objects on data that user has `view` permission on.\"\"\"\n collections = self._filter_queryset('view_collection', data.collection_set.all())\n\n from .collection import CollectionSerializer\n\n class CollectionWithoutDataSerializer(WithoutDataSerializerMixin, CollectionSerializer):\n \"\"\"Collection without data field serializer.\"\"\"\n\n return self._serialize_items(CollectionWithoutDataSerializer, 'collections', collections)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn serialized list of entity objects on data that user has view permission on.", "response": "def get_entities(self, data):\n \"\"\"Return serialized list of entity objects on data that user has `view` permission on.\"\"\"\n entities = self._filter_queryset('view_entity', data.entity_set.all())\n\n from .entity import EntitySerializer\n\n class EntityWithoutDataSerializer(WithoutDataSerializerMixin, EntitySerializer):\n \"\"\"Entity without data field serializer.\"\"\"\n\n return self._serialize_items(EntityWithoutDataSerializer, 'entities', entities)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nhandle the list_docker_images command.", 
"response": "def handle(self, *args, **options):\n \"\"\"Handle command list_docker_images.\"\"\"\n verbosity = int(options.get('verbosity'))\n\n # Check that the specified output format is valid\n if options['format'] != 'plain' and options['format'] != 'yaml':\n raise CommandError(\"Unknown output format: %s\" % options['format'])\n\n # Gather only unique latest custom Docker requirements that the processes are using\n # The 'image' field is optional, so be careful about that as well\n unique_docker_images = set(\n p.requirements['executor']['docker']['image']\n for p in Process.objects.filter(is_active=True).order_by(\n 'slug', '-version'\n ).distinct(\n 'slug'\n ).only(\n 'requirements'\n ).filter(\n requirements__icontains='docker'\n )\n if 'image' in p.requirements.get('executor', {}).get('docker', {})\n )\n\n # Add the default image.\n unique_docker_images.add(DEFAULT_CONTAINER_IMAGE)\n\n # Pull images if requested or just output the list in specified format\n if options['pull']:\n # Remove set of already pulled images.\n with PULLED_IMAGES_LOCK:\n unique_docker_images.difference_update(PULLED_IMAGES)\n\n # Get the desired 'docker' command from settings or use the default\n docker = getattr(settings, 'FLOW_DOCKER_COMMAND', 'docker')\n\n # Pull each image\n for img in unique_docker_images:\n ret = subprocess.call(\n shlex.split('{} pull {}'.format(docker, img)),\n stdout=None if verbosity > 0 else subprocess.DEVNULL,\n stderr=None if verbosity > 0 else subprocess.DEVNULL,\n )\n\n # Update set of pulled images.\n with PULLED_IMAGES_LOCK:\n PULLED_IMAGES.add(img)\n\n if ret != 0:\n errmsg = \"Failed to pull Docker image '{}'!\".format(img)\n\n if not options['ignore_pull_errors']:\n # Print error and stop execution\n raise CommandError(errmsg)\n else:\n # Print error, but keep going\n logger.error(errmsg)\n if verbosity > 0:\n self.stderr.write(errmsg)\n else:\n msg = \"Docker image '{}' pulled successfully!\".format(img)\n logger.info(msg)\n if verbosity > 
0:\n self.stdout.write(msg)\n else:\n # Sort the set of unique Docker images for nicer output.\n unique_docker_images = sorted(unique_docker_images)\n\n # Convert the set of unique Docker images into a list of dicts for easier output\n imgs = [\n dict(name=s[0], tag=s[1] if len(s) == 2 else 'latest')\n for s in (img.split(':') for img in unique_docker_images)\n ]\n\n # Output in YAML or plaintext (one image per line), as requested\n if options['format'] == 'yaml':\n out = yaml.safe_dump(imgs, default_flow_style=True, default_style=\"'\")\n else:\n out = functools.reduce(operator.add,\n ('{name}:{tag}\\n'.format(**i) for i in imgs), '')\n\n self.stdout.write(out, ending='')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets or create a Data object if similar already exists otherwise create it.", "response": "def get_or_create(self, request, *args, **kwargs):\n \"\"\"Get ``Data`` object if similar already exists, otherwise create it.\"\"\"\n kwargs['get_or_create'] = True\n return self.create(request, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nperform get_or_create - return existing object if found.", "response": "def perform_get_or_create(self, request, *args, **kwargs):\n \"\"\"Perform \"get_or_create\" - return existing object if found.\"\"\"\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n process = serializer.validated_data.get('process')\n process_input = request.data.get('input', {})\n\n fill_with_defaults(process_input, process.input_schema)\n\n checksum = get_data_checksum(process_input, process.slug, process.version)\n data_qs = Data.objects.filter(\n checksum=checksum,\n process__persistence__in=[Process.PERSISTENCE_CACHED, Process.PERSISTENCE_TEMP],\n )\n data_qs = get_objects_for_user(request.user, 'view_data', data_qs)\n if data_qs.exists():\n data = data_qs.order_by('created').last()\n serializer = 
self.get_serializer(data)\n return Response(serializer.data)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef ready(self):\n # Stop the startup code from running automatically from pytest unit tests.\n # When running tests in parallel with xdist, an instance of GMN is launched\n # before thread specific settings have been applied.\n # if hasattr(sys, '_launched_by_pytest'):\n # return\n self._assert_readable_file_if_set('CLIENT_CERT_PATH')\n self._assert_readable_file_if_set('CLIENT_CERT_PRIVATE_KEY_PATH')\n self._assert_dirs_exist('OBJECT_FORMAT_CACHE_PATH')\n\n self._assert_is_type('SCIMETA_VALIDATION_ENABLED', bool)\n self._assert_is_type('SCIMETA_VALIDATION_MAX_SIZE', int)\n self._assert_is_in('SCIMETA_VALIDATION_OVER_SIZE_ACTION', ('reject', 'accept'))\n\n self._warn_unsafe_for_prod()\n self._check_resource_map_create()\n if not d1_gmn.app.sciobj_store.is_existing_store():\n self._create_sciobj_store_root()\n\n self._add_xslt_mimetype()", "response": "Called once per Django process instance."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking that the dirs leading up to the given file path exist.", "response": "def _assert_dirs_exist(self, setting_name):\n \"\"\"Check that the dirs leading up to the given file path exist.\n\n Does not check if the file exists.\n\n \"\"\"\n v = self._get_setting(setting_name)\n if (not os.path.isdir(os.path.split(v)[0])) or os.path.isdir(v):\n self.raise_config_error(\n setting_name,\n v,\n str,\n 'a file path in an existing directory',\n is_none_allowed=False,\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _warn_unsafe_for_prod(self):\n safe_settings_list = [\n ('DEBUG', False),\n ('DEBUG_GMN', False),\n ('STAND_ALONE', False),\n ('DATABASES.default.ATOMIC_REQUESTS', True),\n ('SECRET_KEY', ''),\n ('STATIC_SERVER', False),\n ]\n for setting_str, setting_safe in 
safe_settings_list:\n setting_current = self._get_setting(setting_str)\n if setting_current != setting_safe:\n logger.warning(\n 'Setting is unsafe for use in production. setting=\"{}\" current=\"{}\" '\n 'safe=\"{}\"'.format(setting_str, setting_current, setting_safe)\n )", "response": "Warn on settings that are not safe for production."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a potentially nested dict setting.", "response": "def _get_setting(self, setting_dotted_name, default=None):\n \"\"\"Return the value of a potentially nested dict setting.\n\n E.g., 'DATABASES.default.NAME\n\n \"\"\"\n name_list = setting_dotted_name.split('.')\n setting_obj = getattr(django.conf.settings, name_list[0], default)\n # if len(name_list) == 1:\n # return setting_obj\n\n return functools.reduce(\n lambda o, a: o.get(a, default), [setting_obj] + name_list[1:]\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrefreshing the connection to Elasticsearch when worker is started.", "response": "def _refresh_connection(self):\n \"\"\"Refresh connection to Elasticsearch when worker is started.\n\n File descriptors (sockets) can be shared between multiple\n threads. If same connection is used by multiple threads at the\n same time, this can cause timeouts in some of the pushes. 
So\n connection needs to be reestablished in each thread to make sure\n that it is unique per thread.\n \"\"\"\n # Thread with same id can be created when one terminates, but it\n # is ok, as we are only concerned about concurent pushes.\n current_thread_id = threading.current_thread().ident\n\n if current_thread_id != self.connection_thread_id:\n prepare_connection()\n\n self.connection_thread_id = current_thread_id"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates unique document id for ElasticSearch.", "response": "def generate_id(self, obj):\n \"\"\"Generate unique document id for ElasticSearch.\"\"\"\n object_type = type(obj).__name__.lower()\n return '{}_{}'.format(object_type, self.get_object_id(obj))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef process_object(self, obj):\n document = self.document_class(meta={'id': self.generate_id(obj)})\n\n for field in document._doc_type.mapping: # pylint: disable=protected-access\n if field in ['users_with_permissions', 'groups_with_permissions', 'public_permission']:\n continue # These fields are handled separately\n\n try:\n # use get_X_value function\n get_value_function = getattr(self, 'get_{}_value'.format(field), None)\n if get_value_function:\n setattr(document, field, get_value_function(obj)) # pylint: disable=not-callable\n continue\n\n # use `mapping` dict\n if field in self.mapping:\n if callable(self.mapping[field]):\n setattr(document, field, self.mapping[field](obj))\n continue\n\n try:\n object_attr = dict_dot(obj, self.mapping[field])\n except (KeyError, AttributeError):\n object_attr = None\n\n if callable(object_attr):\n # use method on object\n setattr(document, field, object_attr(obj))\n else:\n # use attribute on object\n setattr(document, field, object_attr)\n continue\n\n # get value from the object\n try:\n object_value = dict_dot(obj, field)\n setattr(document, field, object_value)\n 
continue\n except KeyError:\n pass\n\n raise AttributeError(\"Cannot determine mapping for field {}\".format(field))\n\n except Exception: # pylint: disable=broad-except\n logger.exception(\n \"Error occurred while setting value of field '%s' in '%s' Elasticsearch index.\",\n field, self.__class__.__name__,\n extra={'object_type': self.object_type, 'obj_id': obj.pk}\n )\n\n permissions = self.get_permissions(obj)\n document.users_with_permissions = permissions['users']\n document.groups_with_permissions = permissions['groups']\n document.public_permission = permissions['public']\n\n self.push_queue.append(document)", "response": "Process the current object and push it to the ElasticSearch index."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate the mappings in elasticsearch.", "response": "def create_mapping(self):\n \"\"\"Create the mappings in elasticsearch.\"\"\"\n try:\n self.document_class.init()\n self._mapping_created = True\n except IllegalOperation as error:\n if error.args[0].startswith('You cannot update analysis configuration'):\n # Ignore mapping update errors, which are thrown even when the analysis\n # configuration stays the same.\n # TODO: Remove this when https://github.com/elastic/elasticsearch-dsl-py/pull/272 is merged.\n return\n\n raise"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npush built documents to Elasticsearch server.", "response": "def push(self):\n \"\"\"Push built documents to ElasticSearch.\"\"\"\n self._refresh_connection()\n\n # Check if we need to update mappings as this needs to be done\n # before we push anything to the Elasticsearch server.\n # This must be done even if the queue is empty, as otherwise ES\n # will fail when retrieving data.\n if not self._mapping_created:\n logger.debug(\"Pushing mapping for Elasticsearch index '%s'.\", self.__class__.__name__)\n self.create_mapping()\n\n if not self.push_queue:\n logger.debug(\"No documents to push, 
skipping push.\")\n return\n\n logger.debug(\"Found %s documents to push to Elasticsearch.\", len(self.push_queue))\n\n bulk(connections.get_connection(), (doc.to_dict(True) for doc in self.push_queue), refresh=True)\n self.push_queue = []\n\n logger.debug(\"Finished pushing builded documents to Elasticsearch server.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns users and groups with view permission on the current object.", "response": "def get_permissions(self, obj):\n \"\"\"Return users and groups with ``view`` permission on the current object.\n\n Return a dict with two keys - ``users`` and ``groups`` - which\n contain list of ids of users/groups with ``view`` permission.\n \"\"\"\n # TODO: Optimize this for bulk running\n filters = {\n 'object_pk': obj.id,\n 'content_type': ContentType.objects.get_for_model(obj),\n 'permission__codename__startswith': 'view',\n }\n return {\n 'users': list(\n UserObjectPermission.objects.filter(**filters).distinct('user').values_list('user_id', flat=True)\n ),\n 'groups': list(\n GroupObjectPermission.objects.filter(**filters).distinct('group').values_list('group', flat=True)\n ),\n 'public': UserObjectPermission.objects.filter(user__username=ANONYMOUS_USER_NAME, **filters).exists(),\n }"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef remove_object(self, obj):\n obj_id = self.generate_id(obj)\n es_obj = self.document_class.get(obj_id, ignore=[404])\n # Object may not exist in this index.\n if es_obj:\n es_obj.delete(refresh=True)", "response": "Remove current object from the ElasticSearch index."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates a new entry point for the current locale.", "response": "def generate_pyxb_binding(self, args):\n \"\"\"Args:\n\n args:\n\n \"\"\"\n pyxbgen_args = []\n pyxbgen_args.append('--schema-root=\\'{}\\''.format(self.schema_dir))\n 
pyxbgen_args.append('--binding-root=\\'{}\\''.format(self.binding_dir))\n pyxbgen_args.append(\n '--schema-stripped-prefix='\n '\\'https://repository.dataone.org/software/cicore/branches/D1_SCHEMA_v1.1/\\''\n )\n pyxbgen_args.extend(args)\n\n self.run_pyxbgen(pyxbgen_args)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef generate_version_file(self, schema_filename, binding_filename):\n version_filename = binding_filename + '_version.txt'\n version_path = os.path.join(self.binding_dir, version_filename)\n schema_path = os.path.join(self.schema_dir, schema_filename)\n try:\n tstamp, svnpath, svnrev, version = self.get_version_info_from_svn(\n schema_path\n )\n except TypeError:\n pass\n else:\n self.write_version_file(version_path, tstamp, svnpath, svnrev, version)", "response": "Given a DataONE schema generates a file that contains version information\n about the schema."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates DataLocation for each Data.", "response": "def set_data_location(apps, schema_editor):\n \"\"\"Create DataLocation for each Data.\"\"\"\n Data = apps.get_model('flow', 'Data')\n DataLocation = apps.get_model('flow', 'DataLocation')\n\n for data in Data.objects.all():\n if os.path.isdir(os.path.join(settings.FLOW_EXECUTOR['DATA_DIR'], str(data.id))):\n with transaction.atomic():\n # Manually set DataLocation id to preserve data directory.\n data_location = DataLocation.objects.create(id=data.id, subpath=str(data.id))\n data_location.data.add(data)\n\n # Increment DataLocation id's sequence\n if DataLocation.objects.exists():\n max_id = DataLocation.objects.order_by('id').last().id\n with connection.cursor() as cursor:\n cursor.execute(\n \"ALTER SEQUENCE flow_datalocation_id_seq RESTART WITH {};\".format(max_id + 1)\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nretrieve the next page of results.", 
"response": "def _loadMore(self, start=0, trys=0, validation=True):\n \"\"\"Retrieves the next page of results.\"\"\"\n self._log.debug(\"Loading page starting from %d\" % start)\n self._czero = start\n self._pageoffs = 0\n try:\n pyxb.RequireValidWhenParsing(validation)\n self._object_list = self._client.listObjects(\n start=start,\n count=self._pagesize,\n fromDate=self._fromDate,\n nodeId=self._nodeId,\n )\n except http.client.BadStatusLine as e:\n self._log.warning(\"Server responded with Bad Status Line. Retrying in 5sec\")\n self._client.connection.close()\n if trys > 3:\n raise e\n trys += 1\n self._loadMore(start, trys)\n except d1_common.types.exceptions.ServiceFailure as e:\n self._log.error(e)\n if trys > 3:\n raise e\n trys += 1\n self._loadMore(start, trys, validation=False)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncarving up a mime - type and returns a tuple of the type subtype and params.", "response": "def parse_mime_type(mime_type):\n \"\"\"Carves up a mime-type and returns a tuple of the (type, subtype, params) where\n 'params' is a dictionary of all the parameters for the media range. 
For example, the\n media range 'application/xhtml;q=0.5' would get parsed into:\n\n ('application', 'xhtml', {'q', '0.5'})\n\n \"\"\"\n parts = mime_type.split(\";\")\n params = dict([tuple([s.strip() for s in param.split(\"=\")]) for param in parts[1:]])\n full_type = parts[0].strip()\n # Java URLConnection class sends an Accept header that includes a single \"*\"\n # Turn it into a legal wildcard.\n if full_type == '*':\n full_type = '*/*'\n (type, subtype) = full_type.split(\"/\")\n return (type.strip(), subtype.strip(), params)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntakes a list of supported mime - types and finds the best match for all the media - types listed in header.", "response": "def best_match(supported, header):\n \"\"\"Takes a list of supported mime-types and finds the best match for all the media-\n ranges listed in header. The value of header must be a string that conforms to the\n format of the HTTP Accept: header. The value of 'supported' is a list of mime-types.\n\n >>> best_match(['application/xbel+xml', 'text/xml'], 'text/*;q=0.5,*/*; q=0.1')\n 'text/xml'\n\n \"\"\"\n parsed_header = [parse_media_range(r) for r in header.split(\",\")]\n weighted_matches = [\n (fitness_and_quality_parsed(mime_type, parsed_header), mime_type)\n for mime_type in supported\n ]\n weighted_matches.sort()\n return weighted_matches[-1][0][1] and weighted_matches[-1][1] or ''"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchunking delete, which should be used if deleting many objects. The reason why this method is needed is that deleting a lot of Data objects requires Django to fetch all of them into memory (fast path is not used) and this causes huge memory usage (and possibly OOM). 
:param chunk_size: Optional chunk size", "response": "def _delete_chunked(queryset, chunk_size=500):\n \"\"\"Chunked delete, which should be used if deleting many objects.\n\n The reason why this method is needed is that deleting a lot of Data objects\n requires Django to fetch all of them into memory (fast path is not used) and\n this causes huge memory usage (and possibly OOM).\n\n :param chunk_size: Optional chunk size\n \"\"\"\n while True:\n # Discover primary key to limit the current chunk. This is required because delete\n # cannot be called on a sliced queryset due to ordering requirement.\n with transaction.atomic():\n # Get offset of last item (needed because it may be less than the chunk size).\n offset = queryset.order_by('pk')[:chunk_size].count()\n if not offset:\n break\n\n # Fetch primary key of last item and use it to delete the chunk.\n last_instance = queryset.order_by('pk')[offset - 1]\n queryset.filter(pk__lte=last_instance.pk).delete()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef duplicate(self, contributor=None):\n bundle = [\n {'original': data, 'copy': data.duplicate(contributor=contributor)}\n for data in self\n ]\n\n bundle = rewire_inputs(bundle)\n duplicated = [item['copy'] for item in bundle]\n\n return duplicated", "response": "Duplicate ( make a copy ) Data objects."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save_storage(self, instance, schema):\n for field_schema, fields in iterate_fields(instance, schema):\n name = field_schema['name']\n value = fields[name]\n if field_schema.get('type', '').startswith('basic:json:'):\n if value and not self.pk:\n raise ValidationError(\n 'Data object must be `created` before creating `basic:json:` fields')\n\n if isinstance(value, int):\n # already in Storage\n continue\n\n if isinstance(value, str):\n file_path = self.location.get_path(filename=value) # pylint: 
disable=no-member\n if os.path.isfile(file_path):\n try:\n with open(file_path) as file_handler:\n value = json.load(file_handler)\n except json.JSONDecodeError:\n with open(file_path) as file_handler:\n content = file_handler.read()\n content = content.rstrip()\n raise ValidationError(\n \"Value of '{}' must be a valid JSON, current: {}\".format(name, content)\n )\n\n storage = self.storages.create( # pylint: disable=no-member\n name='Storage for data id {}'.format(self.pk),\n contributor=self.contributor,\n json=value,\n )\n\n # `value` is copied by value, so `fields[name]` must be changed\n fields[name] = storage.pk", "response": "Save basic : json values to a Storage collection."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nretrieve handles for all basic : secret fields on input.", "response": "def resolve_secrets(self):\n \"\"\"Retrieve handles for all basic:secret: fields on input.\n\n The process must have the ``secrets`` resource requirement\n specified in order to access any secrets. 
Otherwise this method\n will raise a ``PermissionDenied`` exception.\n\n :return: A dictionary of secrets where key is the secret handle\n and value is the secret value.\n \"\"\"\n secrets = {}\n for field_schema, fields in iterate_fields(self.input, self.process.input_schema): # pylint: disable=no-member\n if not field_schema.get('type', '').startswith('basic:secret:'):\n continue\n\n name = field_schema['name']\n value = fields[name]\n try:\n handle = value['handle']\n except KeyError:\n continue\n\n try:\n secrets[handle] = Secret.objects.get_secret(\n handle,\n contributor=self.contributor\n )\n except Secret.DoesNotExist:\n raise PermissionDenied(\"Access to secret not allowed or secret does not exist\")\n\n # If the process does not not have the right requirements it is not\n # allowed to access any secrets.\n allowed = self.process.requirements.get('resources', {}).get('secrets', False) # pylint: disable=no-member\n if secrets and not allowed:\n raise PermissionDenied(\n \"Process '{}' has secret inputs, but no permission to see secrets\".format(\n self.process.slug # pylint: disable=no-member\n )\n )\n\n return secrets"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsaves data and list : data references as parents.", "response": "def save_dependencies(self, instance, schema):\n \"\"\"Save data: and list:data: references as parents.\"\"\"\n def add_dependency(value):\n \"\"\"Add parent Data dependency.\"\"\"\n try:\n DataDependency.objects.update_or_create(\n parent=Data.objects.get(pk=value),\n child=self,\n defaults={'kind': DataDependency.KIND_IO},\n )\n except Data.DoesNotExist:\n pass\n\n for field_schema, fields in iterate_fields(instance, schema):\n name = field_schema['name']\n value = fields[name]\n\n if field_schema.get('type', '').startswith('data:'):\n add_dependency(value)\n elif field_schema.get('type', '').startswith('list:data:'):\n for data in value:\n add_dependency(data)"} {"SOURCE": "codesearchnet", 
"instruction": "Implement a function in Python 3 to\ncreate a new entity if flow_collection is defined in process.", "response": "def create_entity(self):\n \"\"\"Create entity if `flow_collection` is defined in process.\n\n Following rules applies for adding `Data` object to `Entity`:\n * Only add `Data object` to `Entity` if process has defined\n `flow_collection` field\n * Add object to existing `Entity`, if all parents that are part\n of it (but not necessary all parents), are part of the same\n `Entity`\n * If parents belong to different `Entities` or do not belong to\n any `Entity`, create new `Entity`\n\n \"\"\"\n entity_type = self.process.entity_type # pylint: disable=no-member\n entity_descriptor_schema = self.process.entity_descriptor_schema # pylint: disable=no-member\n entity_input = self.process.entity_input # pylint: disable=no-member\n\n if entity_type:\n data_filter = {}\n if entity_input:\n input_id = dict_dot(self.input, entity_input, default=lambda: None)\n if input_id is None:\n logger.warning(\"Skipping creation of entity due to missing input.\")\n return\n if isinstance(input_id, int):\n data_filter['data__pk'] = input_id\n elif isinstance(input_id, list):\n data_filter['data__pk__in'] = input_id\n else:\n raise ValueError(\n \"Cannot create entity due to invalid value of field {}.\".format(entity_input)\n )\n else:\n data_filter['data__in'] = self.parents.all() # pylint: disable=no-member\n\n entity_query = Entity.objects.filter(type=entity_type, **data_filter).distinct()\n entity_count = entity_query.count()\n\n if entity_count == 0:\n descriptor_schema = DescriptorSchema.objects.filter(\n slug=entity_descriptor_schema\n ).latest()\n entity = Entity.objects.create(\n contributor=self.contributor,\n descriptor_schema=descriptor_schema,\n type=entity_type,\n name=self.name,\n tags=self.tags,\n )\n assign_contributor_permissions(entity)\n\n elif entity_count == 1:\n entity = entity_query.first()\n copy_permissions(entity, self)\n\n else:\n 
logger.info(\"Skipping creation of entity due to multiple entities found.\")\n entity = None\n\n if entity:\n entity.data.add(self)\n # Inherit collections from entity.\n for collection in entity.collections.all():\n collection.data.add(self)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsave the data model.", "response": "def save(self, render_name=False, *args, **kwargs): # pylint: disable=keyword-arg-before-vararg\n \"\"\"Save the data model.\"\"\"\n if self.name != self._original_name:\n self.named_by_user = True\n\n create = self.pk is None\n if create:\n fill_with_defaults(self.input, self.process.input_schema) # pylint: disable=no-member\n\n if not self.name:\n self._render_name()\n else:\n self.named_by_user = True\n\n self.checksum = get_data_checksum(\n self.input, self.process.slug, self.process.version) # pylint: disable=no-member\n\n elif render_name:\n self._render_name()\n\n self.save_storage(self.output, self.process.output_schema) # pylint: disable=no-member\n\n if self.status != Data.STATUS_ERROR:\n hydrate_size(self)\n # If only specified fields are updated (e.g. 
in executor), size needs to be added\n if 'update_fields' in kwargs:\n kwargs['update_fields'].append('size')\n\n # Input Data objects are validated only upon creation as they can be deleted later.\n skip_missing_data = not create\n validate_schema(\n self.input, self.process.input_schema, skip_missing_data=skip_missing_data # pylint: disable=no-member\n )\n\n render_descriptor(self)\n\n if self.descriptor_schema:\n try:\n validate_schema(self.descriptor, self.descriptor_schema.schema) # pylint: disable=no-member\n self.descriptor_dirty = False\n except DirtyError:\n self.descriptor_dirty = True\n elif self.descriptor and self.descriptor != {}:\n raise ValueError(\"`descriptor_schema` must be defined if `descriptor` is given\")\n\n if self.status != Data.STATUS_ERROR:\n output_schema = self.process.output_schema # pylint: disable=no-member\n if self.status == Data.STATUS_DONE:\n validate_schema(\n self.output, output_schema, data_location=self.location, skip_missing_data=True\n )\n else:\n validate_schema(\n self.output, output_schema, data_location=self.location, test_required=False\n )\n\n with transaction.atomic():\n self._perform_save(*args, **kwargs)\n\n # We can only save dependencies after the data object has been saved. 
This\n # is why a transaction block is needed and the save method must be called first.\n if create:\n self.save_dependencies(self.input, self.process.input_schema) # pylint: disable=no-member\n self.create_entity()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes the data model.", "response": "def delete(self, *args, **kwargs):\n \"\"\"Delete the data model.\"\"\"\n # Store ids in memory as relations are also deleted with the Data object.\n storage_ids = list(self.storages.values_list('pk', flat=True)) # pylint: disable=no-member\n\n super().delete(*args, **kwargs)\n\n Storage.objects.filter(pk__in=storage_ids, data=None).delete()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a duplicate of this data object.", "response": "def duplicate(self, contributor=None):\n \"\"\"Duplicate (make a copy).\"\"\"\n if self.status not in [self.STATUS_DONE, self.STATUS_ERROR]:\n raise ValidationError('Data object must have done or error status to be duplicated')\n\n duplicate = Data.objects.get(id=self.id)\n duplicate.pk = None\n duplicate.slug = None\n duplicate.name = 'Copy of {}'.format(self.name)\n duplicate.duplicated = now()\n if contributor:\n duplicate.contributor = contributor\n\n duplicate._perform_save(force_insert=True) # pylint: disable=protected-access\n\n assign_contributor_permissions(duplicate)\n\n # Override fields that are automatically set on create.\n duplicate.created = self.created\n duplicate._perform_save() # pylint: disable=protected-access\n\n if self.location:\n self.location.data.add(duplicate) # pylint: disable=no-member\n\n duplicate.storages.set(self.storages.all()) # pylint: disable=no-member\n\n for migration in self.migration_history.order_by('created'): # pylint: disable=no-member\n migration.pk = None\n migration.data = duplicate\n migration.save(force_insert=True)\n\n # Inherit existing child dependencies.\n DataDependency.objects.bulk_create([\n 
DataDependency(child=duplicate, parent=dependency.parent, kind=dependency.kind)\n for dependency in DataDependency.objects.filter(child=self)\n ])\n # Inherit existing parent dependencies.\n DataDependency.objects.bulk_create([\n DataDependency(child=dependency.child, parent=duplicate, kind=dependency.kind)\n for dependency in DataDependency.objects.filter(parent=self)\n ])\n\n return duplicate"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrendering data name. The rendering is based on name template (`process.data_name`) and input context.", "response": "def _render_name(self):\n \"\"\"Render data name.\n\n The rendering is based on name template (`process.data_name`) and\n input context.\n\n \"\"\"\n if not self.process.data_name or self.named_by_user: # pylint: disable=no-member\n return\n\n inputs = copy.deepcopy(self.input)\n hydrate_input_references(inputs, self.process.input_schema, hydrate_values=False) # pylint: disable=no-member\n template_context = inputs\n\n try:\n name = render_template(\n self.process,\n self.process.data_name, # pylint: disable=no-member\n template_context\n )\n except EvaluationError:\n name = '?'\n\n self.name = name"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncomposing data location path.", "response": "def get_path(self, prefix=None, filename=None):\n \"\"\"Compose data location path.\"\"\"\n prefix = prefix or settings.FLOW_EXECUTOR['DATA_DIR']\n\n path = os.path.join(prefix, self.subpath)\n if filename:\n path = os.path.join(path, filename)\n\n return path"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomposes data runtime location path.", "response": "def get_runtime_path(self, filename=None):\n \"\"\"Compose data runtime location path.\"\"\"\n return self.get_path(prefix=settings.FLOW_EXECUTOR['RUNTIME_DIR'], filename=filename)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 
that\nreturns serialized list of data objects on entity that user has view permission on.", "response": "def get_data(self, entity):\n \"\"\"Return serialized list of data objects on entity that user has `view` permission on.\"\"\"\n data = self._filter_queryset('view_data', entity.data.all())\n\n return self._serialize_data(data)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_schema(self):\n process_type = self.metadata.process_type\n if not process_type.endswith(':'):\n process_type = '{}:'.format(process_type)\n\n schema = {\n 'slug': self.metadata.slug,\n 'name': self.metadata.name,\n 'type': process_type,\n 'version': self.metadata.version,\n 'data_name': '',\n 'requirements': {\n 'executor': {\n 'docker': {\n 'image': 'resolwe/base:ubuntu-18.04',\n },\n },\n },\n }\n\n if self.metadata.description is not None:\n schema['description'] = self.metadata.description\n if self.metadata.category is not None:\n schema['category'] = self.metadata.category\n if self.metadata.scheduling_class is not None:\n schema['scheduling_class'] = self.metadata.scheduling_class\n if self.metadata.persistence is not None:\n schema['persistence'] = self.metadata.persistence\n if self.metadata.requirements is not None:\n schema['requirements'] = self.metadata.requirements\n if self.metadata.data_name is not None:\n schema['data_name'] = self.metadata.data_name\n if self.metadata.entity is not None:\n schema['entity'] = self.metadata.entity\n\n if self.inputs:\n schema['input'] = []\n for field in self.inputs.values():\n schema['input'].append(field.to_schema())\n\n if self.outputs:\n schema['output'] = []\n for field in self.outputs.values():\n schema['output'].append(field.to_schema())\n\n schema['run'] = {\n 'language': 'python',\n 'program': self.source or '',\n }\n\n return schema", "response": "Return the process schema for this process."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is 
the following Python 3 function doing\ndef post_register_hook(self, verbosity=1):\n if not getattr(settings, 'FLOW_DOCKER_DONT_PULL', False):\n call_command('list_docker_images', pull=True, verbosity=verbosity)", "response": "Pull Docker images needed by processes after registering."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nresolves the data path for use with the executor.", "response": "def resolve_data_path(self, data=None, filename=None):\n \"\"\"Resolve data path for use with the executor.\n\n :param data: Data object instance\n :param filename: Filename to resolve\n :return: Resolved filename, which can be used to access the\n given data file in programs executed using this executor\n \"\"\"\n if data is None:\n return constants.DATA_ALL_VOLUME\n\n # Prefix MUST be set because ``get_path`` uses Django's settings,\n # if prefix is not set, to get path prefix. But the executor\n # shouldn't use Django's settings directly, so prefix is set\n # via a constant.\n return data.location.get_path(prefix=constants.DATA_ALL_VOLUME, filename=filename)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nresolve the upload path for use with the executor.", "response": "def resolve_upload_path(self, filename=None):\n \"\"\"Resolve upload path for use with the executor.\n\n :param filename: Filename to resolve\n :return: Resolved filename, which can be used to access the\n given uploaded file in programs executed using this\n executor\n \"\"\"\n if filename is None:\n return constants.UPLOAD_VOLUME\n\n return os.path.join(constants.UPLOAD_VOLUME, filename)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetermining if two datetimes are equal with fuzz factor.", "response": "def are_equal(a_dt, b_dt, round_sec=1):\n \"\"\"Determine if two datetimes are equal with fuzz factor.\n\n A naive datetime (no timezone information) is assumed to be in UTC.\n\n Args:\n 
a_dt: datetime\n Timestamp to compare.\n\n b_dt: datetime\n Timestamp to compare.\n\n round_sec: int or float\n Round the timestamps to the closest second divisible by this value before\n comparing them.\n\n E.g.:\n\n - ``n_round_sec`` = 0.1: nearest 10th of a second.\n - ``n_round_sec`` = 1: nearest second.\n - ``n_round_sec`` = 30: nearest half minute.\n\n Timestamps may lose resolution or otherwise change slightly as they go through\n various transformations and storage systems. This again may cause timestamps\n that\n have been processed in different systems to fail an exact equality compare even\n if\n they were initially the same timestamp. This rounding avoids such problems as\n long\n as the error introduced to the original timestamp is not higher than the\n rounding\n value. Of course, the rounding also causes a loss in resolution in the values\n compared, so should be kept as low as possible. The default value of 1 second\n should\n be a good tradeoff in most cases.\n\n Returns:\n bool\n - **True**: If the two datetimes are equal after being rounded by\n ``round_sec``.\n\n \"\"\"\n ra_dt = round_to_nearest(a_dt, round_sec)\n rb_dt = round_to_nearest(b_dt, round_sec)\n logger.debug('Rounded:')\n logger.debug('{} -> {}'.format(a_dt, ra_dt))\n logger.debug('{} -> {}'.format(b_dt, rb_dt))\n return normalize_datetime_to_utc(ra_dt) == normalize_datetime_to_utc(rb_dt)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef http_datetime_str_from_dt(dt):\n epoch_seconds = ts_from_dt(dt)\n return email.utils.formatdate(epoch_seconds, localtime=False, usegmt=True)", "response": "Format datetime to HTTP Full Date format."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing the HTTP Full Date formats and return as datetime.", "response": "def dt_from_http_datetime_str(http_full_datetime):\n \"\"\"Parse HTTP Full Date formats and return as datetime.\n\n Args:\n 
http_full_datetime : str\n Each of the allowed formats are supported:\n\n - Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123\n - Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036\n - Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format\n\n HTTP Full Dates are always in UTC.\n\n Returns:\n datetime\n The returned datetime is always timezone aware and in UTC.\n\n See Also:\n http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.3.1\n\n \"\"\"\n date_parts = list(email.utils.parsedate(http_full_datetime)[:6])\n year = date_parts[0]\n if year <= 99:\n year = year + 2000 if year < 50 else year + 1900\n return create_utc_datetime(year, *date_parts[1:])"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadjusts the datetime to UTC.", "response": "def normalize_datetime_to_utc(dt):\n \"\"\"Adjust datetime to UTC.\n\n Apply the timezone offset to the datetime and set the timezone to UTC.\n\n This is a no-op if the datetime is already in UTC.\n\n Args:\n dt : datetime\n - tz-aware: Used in the formatted string.\n - tz-naive: Assumed to be in UTC.\n\n Returns:\n datetime\n The returned datetime is always timezone aware and in UTC.\n\n Notes:\n This forces a new object to be returned, which fixes an issue with\n serialization to XML in PyXB. PyXB uses a mixin together with\n datetime to handle the XML xs:dateTime. That type keeps track of\n timezone information included in the original XML doc, which conflicts if we\n return it here as part of a datetime mixin.\n\n See Also:\n ``cast_naive_datetime_to_tz()``\n\n \"\"\"\n\n return datetime.datetime(\n *dt.utctimetuple()[:6], microsecond=dt.microsecond, tzinfo=datetime.timezone.utc\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncasts a naive datetime to a tz - aware datetime.", "response": "def cast_naive_datetime_to_tz(dt, tz=UTC()):\n \"\"\"If datetime is tz-naive, set it to ``tz``. 
If datetime is tz-aware, return it\n unmodified.\n\n Args:\n dt : datetime\n tz-naive or tz-aware datetime.\n\n tz : datetime.tzinfo\n The timezone to which to adjust tz-naive datetime.\n\n Returns:\n datetime\n tz-aware datetime.\n\n Warning:\n This will change the actual moment in time that is represented if the datetime is\n naive and represents a date and time not in ``tz``.\n\n See Also:\n ``normalize_datetime_to_utc()``\n\n \"\"\"\n if has_tz(dt):\n return dt\n return dt.replace(tzinfo=tz)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef round_to_nearest(dt, n_round_sec=1.0):\n ts = ts_from_dt(strip_timezone(dt)) + n_round_sec / 2.0\n res = dt_from_ts(ts - (ts % n_round_sec))\n return res.replace(tzinfo=dt.tzinfo)", "response": "Round datetime up or down to nearest divisor."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsubmit a new process with SLURM.", "response": "def submit(self, data, runtime_dir, argv):\n \"\"\"Run process with SLURM.\n\n For details, see\n :meth:`~resolwe.flow.managers.workload_connectors.base.BaseConnector.submit`.\n \"\"\"\n limits = data.process.get_resource_limits()\n logger.debug(__(\n \"Connector '{}' running for Data with id {} ({}).\",\n self.__class__.__module__,\n data.id,\n repr(argv)\n ))\n\n # Compute target partition.\n partition = getattr(settings, 'FLOW_SLURM_PARTITION_DEFAULT', None)\n if data.process.slug in getattr(settings, 'FLOW_SLURM_PARTITION_OVERRIDES', {}):\n partition = settings.FLOW_SLURM_PARTITION_OVERRIDES[data.process.slug]\n\n try:\n # Make sure the resulting file is executable on creation.\n script_path = os.path.join(runtime_dir, 'slurm.sh')\n file_descriptor = os.open(script_path, os.O_WRONLY | os.O_CREAT, mode=0o555)\n with os.fdopen(file_descriptor, 'wt') as script:\n script.write('#!/bin/bash\\n')\n script.write('#SBATCH --mem={}M\\n'.format(limits['memory'] + EXECUTOR_MEMORY_OVERHEAD))\n 
script.write('#SBATCH --cpus-per-task={}\\n'.format(limits['cores']))\n if partition:\n script.write('#SBATCH --partition={}\\n'.format(partition))\n\n # Render the argument vector into a command line.\n line = ' '.join(map(shlex.quote, argv))\n script.write(line + '\\n')\n\n command = ['/usr/bin/env', 'sbatch', script_path]\n subprocess.Popen(\n command,\n cwd=runtime_dir,\n stdin=subprocess.DEVNULL\n ).wait()\n except OSError as err:\n logger.error(__(\n \"OSError occurred while preparing SLURM script for Data {}: {}\",\n data.id, err\n ))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the list of files to purge.", "response": "def get_purge_files(root, output, output_schema, descriptor, descriptor_schema):\n \"\"\"Get files to purge.\"\"\"\n def remove_file(fn, paths):\n \"\"\"From paths remove fn and dirs before fn in dir tree.\"\"\"\n while fn:\n for i in range(len(paths) - 1, -1, -1):\n if fn == paths[i]:\n paths.pop(i)\n fn, _ = os.path.split(fn)\n\n def remove_tree(fn, paths):\n \"\"\"From paths remove fn and dirs before or after fn in dir tree.\"\"\"\n for i in range(len(paths) - 1, -1, -1):\n head = paths[i]\n while head:\n if fn == head:\n paths.pop(i)\n break\n head, _ = os.path.split(head)\n\n remove_file(fn, paths)\n\n def subfiles(root):\n \"\"\"Extend unreferenced list with all subdirs and files in top dir.\"\"\"\n subs = []\n for path, dirs, files in os.walk(root, topdown=False):\n path = path[len(root) + 1:]\n subs.extend(os.path.join(path, f) for f in files)\n subs.extend(os.path.join(path, d) for d in dirs)\n return subs\n\n unreferenced_files = subfiles(root)\n\n remove_file('jsonout.txt', unreferenced_files)\n remove_file('stderr.txt', unreferenced_files)\n remove_file('stdout.txt', unreferenced_files)\n\n meta_fields = [\n [output, output_schema],\n [descriptor, descriptor_schema]\n ]\n\n for meta_field, meta_field_schema in meta_fields:\n for field_schema, fields in 
iterate_fields(meta_field, meta_field_schema):\n if 'type' in field_schema:\n field_type = field_schema['type']\n field_name = field_schema['name']\n\n # Remove basic:file: entries\n if field_type.startswith('basic:file:'):\n remove_file(fields[field_name]['file'], unreferenced_files)\n\n # Remove list:basic:file: entries\n elif field_type.startswith('list:basic:file:'):\n for field in fields[field_name]:\n remove_file(field['file'], unreferenced_files)\n\n # Remove basic:dir: entries\n elif field_type.startswith('basic:dir:'):\n remove_tree(fields[field_name]['dir'], unreferenced_files)\n\n # Remove list:basic:dir: entries\n elif field_type.startswith('list:basic:dir:'):\n for field in fields[field_name]:\n remove_tree(field['dir'], unreferenced_files)\n\n # Remove refs entries\n if field_type.startswith('basic:file:') or field_type.startswith('basic:dir:'):\n for ref in fields[field_name].get('refs', []):\n remove_tree(ref, unreferenced_files)\n\n elif field_type.startswith('list:basic:file:') or field_type.startswith('list:basic:dir:'):\n for field in fields[field_name]:\n for ref in field.get('refs', []):\n remove_tree(ref, unreferenced_files)\n\n return set([os.path.join(root, filename) for filename in unreferenced_files])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\npurge all files referenced by meta data.", "response": "def location_purge(location_id, delete=False, verbosity=0):\n \"\"\"Print and conditionally delete files not referenced by meta data.\n\n :param location_id: Id of the\n :class:`~resolwe.flow.models.DataLocation` model that data\n objects reference to.\n :param delete: If ``True``, then delete unreferenced files.\n \"\"\"\n try:\n location = DataLocation.objects.get(id=location_id)\n except DataLocation.DoesNotExist:\n logger.warning(\"Data location does not exist\", extra={'location_id': location_id})\n return\n\n unreferenced_files = set()\n purged_data = Data.objects.none()\n referenced_by_data = 
location.data.exists()\n if referenced_by_data:\n if location.data.exclude(status__in=[Data.STATUS_DONE, Data.STATUS_ERROR]).exists():\n return\n\n # Perform cleanup.\n purge_files_sets = list()\n purged_data = location.data.all()\n for data in purged_data:\n purge_files_sets.append(get_purge_files(\n location.get_path(),\n data.output,\n data.process.output_schema,\n data.descriptor,\n getattr(data.descriptor_schema, 'schema', [])\n ))\n\n intersected_files = set.intersection(*purge_files_sets) if purge_files_sets else set()\n unreferenced_files.update(intersected_files)\n else:\n # Remove data directory.\n unreferenced_files.add(location.get_path())\n unreferenced_files.add(location.get_runtime_path())\n\n if verbosity >= 1:\n # Print unreferenced files\n if unreferenced_files:\n logger.info(__(\"Unreferenced files for location id {} ({}):\", location_id, len(unreferenced_files)))\n for name in unreferenced_files:\n logger.info(__(\" {}\", name))\n else:\n logger.info(__(\"No unreferenced files for location id {}\", location_id))\n\n # Go through unreferenced files and delete them.\n if delete:\n for name in unreferenced_files:\n if os.path.isfile(name) or os.path.islink(name):\n os.remove(name)\n elif os.path.isdir(name):\n shutil.rmtree(name)\n\n location.purged = True\n location.save()\n\n if not referenced_by_data:\n location.delete()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _location_purge_all(delete=False, verbosity=0):\n if DataLocation.objects.exists():\n for location in DataLocation.objects.filter(Q(purged=False) | Q(data=None)):\n location_purge(location.id, delete, verbosity)\n else:\n logger.info(\"No data locations\")", "response": "Purge all data locations."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\npurges all data locations and storage.", "response": "def purge_all(delete=False, verbosity=0):\n \"\"\"Purge all data 
locations.\"\"\"\n _location_purge_all(delete, verbosity)\n _storage_purge_all(delete, verbosity)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts value from an AST node.", "response": "def get_value(self, node):\n \"\"\"Convert value from an AST node.\"\"\"\n if not isinstance(node, ast.Str):\n raise TypeError(\"must be a string literal\")\n\n return node.s"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert value from an AST node.", "response": "def get_value(self, node):\n \"\"\"Convert value from an AST node.\"\"\"\n if not isinstance(node, ast.Attribute):\n raise TypeError(\"must be an attribute\")\n\n if node.value.id != self.choices.__name__:\n raise TypeError(\"must be an attribute of {}\".format(self.choices.__name__))\n\n return getattr(self.choices, node.attr)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts value from an AST node.", "response": "def get_value(self, node):\n \"\"\"Convert value from an AST node.\"\"\"\n if not isinstance(node, ast.Dict):\n raise TypeError(\"must be a dictionary\")\n\n evaluator = SafeEvaluator()\n try:\n value = evaluator.run(node)\n except Exception as ex:\n # TODO: Handle errors.\n raise ex\n\n try:\n # Ensure value is a serializable dictionary.\n value = json.loads(json.dumps(value))\n if not isinstance(value, dict):\n raise TypeError\n except (TypeError, ValueError):\n raise TypeError(\"must be serializable\")\n\n return value"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nvisit a class node containing a list of field definitions.", "response": "def visit_field_class(self, item, descriptor=None, fields=None):\n \"\"\"Visit a class node containing a list of field definitions.\"\"\"\n discovered_fields = collections.OrderedDict()\n field_groups = {}\n for node in item.body:\n if isinstance(node, ast.ClassDef):\n 
field_groups[node.name] = self.visit_field_class(node)\n continue\n\n if not isinstance(node, ast.Assign):\n continue\n if not isinstance(node.value, ast.Call):\n continue\n if not isinstance(node.targets[0], ast.Name):\n continue\n\n # Build accessible symbols table.\n symtable = {}\n # All field types.\n symtable.update({\n field.__name__: field\n for field in get_available_fields()\n })\n # Field group classes.\n symtable.update(field_groups)\n\n evaluator = SafeEvaluator(symtable=symtable)\n\n name = node.targets[0].id\n try:\n field = evaluator.run(node.value)\n except Exception as ex:\n # TODO: Handle errors.\n raise ex\n\n if descriptor is not None:\n field.contribute_to_class(descriptor, fields, name)\n else:\n discovered_fields[name] = field\n\n if descriptor is None:\n class Fields:\n \"\"\"Fields wrapper.\"\"\"\n\n for name, field in discovered_fields.items():\n setattr(Fields, name, field)\n\n return Fields"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef visit_ClassDef(self, node): # pylint: disable=invalid-name\n # Resolve everything as root scope contains everything from the process module.\n for base in node.bases:\n # Cover `from resolwe.process import ...`.\n if isinstance(base, ast.Name) and isinstance(base.ctx, ast.Load):\n base = getattr(runtime, base.id, None)\n # Cover `from resolwe import process`.\n elif isinstance(base, ast.Attribute) and isinstance(base.ctx, ast.Load):\n base = getattr(runtime, base.attr, None)\n else:\n continue\n\n if issubclass(base, runtime.Process):\n break\n else:\n return\n\n descriptor = ProcessDescriptor(source=self.source)\n\n # Available embedded classes.\n embedded_class_fields = {\n runtime.PROCESS_INPUTS_NAME: descriptor.inputs,\n runtime.PROCESS_OUTPUTS_NAME: descriptor.outputs,\n }\n\n # Parse metadata in class body.\n for item in node.body:\n if isinstance(item, ast.Assign):\n # Possible metadata.\n if (len(item.targets) == 1 and 
isinstance(item.targets[0], ast.Name)\n and isinstance(item.targets[0].ctx, ast.Store)\n and item.targets[0].id in PROCESS_METADATA):\n # Try to get the metadata value.\n value = PROCESS_METADATA[item.targets[0].id].get_value(item.value)\n setattr(descriptor.metadata, item.targets[0].id, value)\n elif (isinstance(item, ast.Expr) and isinstance(item.value, ast.Str)\n and descriptor.metadata.description is None):\n # Possible description string.\n descriptor.metadata.description = item.value.s\n elif isinstance(item, ast.ClassDef) and item.name in embedded_class_fields.keys():\n # Possible input/output declaration.\n self.visit_field_class(item, descriptor, embedded_class_fields[item.name])\n\n descriptor.validate()\n self.processes.append(descriptor)", "response": "Visit the class definition node."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse(self):\n root = ast.parse(self._source)\n\n visitor = ProcessVisitor(source=self._source)\n visitor.visit(root)\n return visitor.processes", "response": "Parse process.\n\n :return: A list of discovered process descriptors"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_permissions_class(permissions_name=None):\n def load_permissions(permissions_name):\n \"\"\"Look for a fully qualified flow permissions class.\"\"\"\n try:\n return import_module('{}'.format(permissions_name)).ResolwePermissions\n except AttributeError:\n raise AttributeError(\"'ResolwePermissions' class not found in {} module.\".format(\n permissions_name))\n except ImportError as ex:\n # The permissions module wasn't found. 
Display a helpful error\n # message listing all possible (built-in) permissions classes.\n permissions_dir = os.path.join(os.path.dirname(upath(__file__)), '..', 'perms')\n permissions_dir = os.path.normpath(permissions_dir)\n\n try:\n builtin_permissions = [\n name for _, name, _ in pkgutil.iter_modules([permissions_dir]) if name not in ['tests']]\n except EnvironmentError:\n builtin_permissions = []\n if permissions_name not in ['resolwe.auth.{}'.format(p) for p in builtin_permissions]:\n permissions_reprs = map(repr, sorted(builtin_permissions))\n err_msg = (\"{} isn't an available flow permissions class.\\n\"\n \"Try using 'resolwe.auth.XXX', where XXX is one of:\\n\"\n \" {}\\n\"\n \"Error was: {}\".format(permissions_name, \", \".join(permissions_reprs), ex))\n raise ImproperlyConfigured(err_msg)\n else:\n # If there's some other error, this must be an error in Django\n raise\n\n if permissions_name is None:\n permissions_name = settings.FLOW_API['PERMISSIONS']\n\n if permissions_name not in permissions_classes:\n permissions_classes[permissions_name] = load_permissions(permissions_name)\n\n return permissions_classes[permissions_name]", "response": "Load and cache permissions class."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_identifiers(sysmeta_pyxb):\n pid = d1_common.xml.get_opt_val(sysmeta_pyxb, 'identifier')\n sid = d1_common.xml.get_opt_val(sysmeta_pyxb, 'seriesId')\n obsoletes_pid = d1_common.xml.get_opt_val(sysmeta_pyxb, 'obsoletes')\n obsoleted_by_pid = d1_common.xml.get_opt_val(sysmeta_pyxb, 'obsoletedBy')\n return pid, sid, obsoletes_pid, obsoleted_by_pid", "response": "Get set of identifiers that provide revision context for SciObj."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsort objects by dependency.", "response": "def topological_sort(unsorted_dict):\n \"\"\"Sort objects by dependency.\n\n Sort a dict of obsoleting PID to obsoleted PID 
to a list of PIDs in order of\n obsolescence.\n\n Args:\n unsorted_dict : dict\n Dict that holds obsolescence information. Each ``key/value`` pair establishes\n that the PID in ``key`` identifies an object that obsoletes an object identifies\n by the PID in ``value``.\n\n\n Returns:\n tuple of sorted_list, unconnected_dict :\n\n ``sorted_list``: A list of PIDs ordered so that all PIDs that obsolete an object\n are listed after the object they obsolete.\n\n ``unconnected_dict``: A dict of PID to obsoleted PID of any objects that could not\n be added to a revision chain. These items will have obsoletes PIDs that directly\n or indirectly reference a PID that could not be sorted.\n\n Notes:\n ``obsoletes_dict`` is modified by the sort and on return holds any items that\n could not be sorted.\n\n The sort works by repeatedly iterating over an unsorted list of PIDs and\n moving PIDs to the sorted list as they become available. A PID is available to\n be moved to the sorted list if it does not obsolete a PID or if the PID it\n obsoletes is already in the sorted list.\n\n \"\"\"\n sorted_list = []\n sorted_set = set()\n found = True\n unconnected_dict = unsorted_dict.copy()\n while found:\n found = False\n for pid, obsoletes_pid in list(unconnected_dict.items()):\n if obsoletes_pid is None or obsoletes_pid in sorted_set:\n found = True\n sorted_list.append(pid)\n sorted_set.add(pid)\n del unconnected_dict[pid]\n return sorted_list, unconnected_dict"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a list of all PIDs in the revision chain.", "response": "def get_pids_in_revision_chain(client, did):\n \"\"\"Args: client: d1_client.cnclient.CoordinatingNodeClient or\n d1_client.mnclient.MemberNodeClient.\n\n did : str\n SID or a PID of any object in a revision chain.\n\n Returns:\n list of str:\n All PIDs in the chain. The returned list is in the same order as the chain. 
The\n initial PID is typically obtained by resolving a SID. If the given PID is not in\n a chain, a list containing the single object is returned.\n\n \"\"\"\n\n def _req(p):\n return d1_common.xml.get_req_val(p)\n\n def _opt(p, a):\n return d1_common.xml.get_opt_val(p, a)\n\n sysmeta_pyxb = client.getSystemMetadata(did)\n # Walk to tail\n while _opt(sysmeta_pyxb, 'obsoletes'):\n sysmeta_pyxb = client.getSystemMetadata(_opt(sysmeta_pyxb, 'obsoletes'))\n chain_pid_list = [_req(sysmeta_pyxb.identifier)]\n # Walk from tail to head, recording traversed PIDs\n while _opt(sysmeta_pyxb, 'obsoletedBy'):\n sysmeta_pyxb = client.getSystemMetadata(_opt(sysmeta_pyxb, 'obsoletedBy'))\n chain_pid_list.append(_req(sysmeta_pyxb.identifier))\n return chain_pid_list"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main():\n parser = argparse.ArgumentParser(\n description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter\n )\n parser.add_argument(\"path\", nargs=\"+\", help=\"File or directory path\")\n parser.add_argument(\"--exclude\", nargs=\"+\", help=\"Exclude glob patterns\")\n parser.add_argument(\n \"--no-recursive\",\n dest=\"recursive\",\n action=\"store_false\",\n help=\"Search directories recursively\",\n )\n parser.add_argument(\n \"--ignore-invalid\", action=\"store_true\", help=\"Ignore invalid paths\"\n )\n parser.add_argument(\n \"--pycharm\", action=\"store_true\", help=\"Enable PyCharm integration\"\n )\n parser.add_argument(\n \"--diff\",\n dest=\"show_diff\",\n action=\"store_true\",\n help=\"Show diff and do not modify any files\",\n )\n parser.add_argument(\n \"--dry-run\", action=\"store_true\", help=\"Process files but do not write results\"\n )\n parser.add_argument(\"--debug\", action=\"store_true\", help=\"Debug level logging\")\n\n args = parser.parse_args()\n d1_common.util.log_setup(args.debug)\n\n repo_path = d1_dev.util.find_repo_root_by_path(__file__)\n repo = git.Repo(repo_path)\n\n 
specified_file_path_list = get_specified_file_path_list(args)\n tracked_path_list = list(d1_dev.util.get_tracked_files(repo))\n format_path_list = sorted(\n set(specified_file_path_list).intersection(tracked_path_list)\n )\n\n progress_logger.start_task_type(\"Format modules\", len(format_path_list))\n\n for i, format_path in enumerate(format_path_list):\n progress_logger.start_task(\"Format modules\", i)\n format_all_docstr(args, format_path)\n\n progress_logger.end_task_type(\"Format modules\")", "response": "This function is called by the command line interface to remove unused imports. Unsafe!\n Only tested on our codebase which uses simple absolute imports of the form import\n a.b.c."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_docstr_list(red):\n docstr_list = []\n for n in red.find_all(\"string\"):\n if n.value.startswith('\"\"\"'):\n docstr_list.append(n)\n return docstr_list", "response": "Find all triple-quoted docstrings in module."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nunwraps a string into a list of consecutive blocks.", "response": "def unwrap(s, node_indent):\n \"\"\"Group lines of a docstring to blocks.\n\n For now, only groups markdown list sections.\n\n A block designates a list of consecutive lines that all start at the same\n indentation level.\n\n The lines of the docstring are iterated top to bottom. Each line is added to\n `block_list` until a line is encountered that breaks sufficiently with the previous\n line to be deemed to be the start of a new block. At that point, all lines\n currently\n in `block_list` are stripped and joined to a single line, which is added to\n `unwrap_list`.\n\n Some of the block breaks are easy to determine. E.g., a line that starts with \"- \"\n is the start of a new markdown style list item, so is always the start of a new\n block. 
But then there are things like this, which is a single block:\n\n - An example list with a second line\n\n And this, which is 3 single line blocks (due to the different indentation levels):\n\n Args:\n jwt_bu64: bytes\n JWT, encoded using a a URL safe flavor of Base64.\n\n \"\"\"\n\n def get_indent():\n if line_str.startswith('\"\"\"'):\n return node_indent\n return len(re.match(r\"^( *)\", line_str).group(1))\n\n def finish_block():\n if block_list:\n unwrap_list.append(\n (block_indent, (\" \".join([v.strip() for v in block_list])).strip())\n )\n block_list.clear()\n\n unwrap_list = []\n\n block_indent = None\n block_list = []\n\n for line_str in s.splitlines():\n line_str = line_str.rstrip()\n line_indent = get_indent()\n\n # A new block has been started. Record the indent of the first line in that\n # block to use as the indent for all the lines that will be put in this block.\n if not block_list:\n block_indent = line_indent\n\n # A blank line always starts a new block.\n if line_str == \"\":\n finish_block()\n\n # Indent any lines that are less indentend than the docstr node\n # if line_indent < node_indent:\n # line_indent = block_indent\n\n # A line that is indented less than the current block starts a new block.\n if line_indent < block_indent:\n finish_block()\n\n # A line that is the start of a markdown list starts a new block.\n elif line_str.strip().startswith((\"- \", \"* \")):\n finish_block()\n\n # A markdown title always starts a new block.\n elif line_str.strip().endswith(\":\"):\n finish_block()\n\n block_list.append(line_str)\n\n # Only make blocks for markdown list items. 
Write everything else as single line items.\n if not block_list[0].strip().startswith((\"- \", \"* \")):\n finish_block()\n\n # Finish the block that was in progress when the end of the docstring was reached.\n finish_block()\n\n return unwrap_list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrap a single line to one or more lines that start at indent_int and end at indent_int and force a break at the end of the last word that will fit before WRAP_MARGIN_INT.", "response": "def wrap(indent_int, unwrap_str):\n \"\"\"Wrap a single line to one or more lines that start at indent_int and end at the\n last word that will fit before WRAP_MARGIN_INT.\n\n If there are no word breaks (spaces) before WRAP_MARGIN_INT, force a break at\n WRAP_MARGIN_INT.\n\n \"\"\"\n with io.StringIO() as str_buf:\n is_rest_block = unwrap_str.startswith((\"- \", \"* \"))\n\n while unwrap_str:\n cut_pos = (unwrap_str + \" \").rfind(\" \", 0, WRAP_MARGIN_INT - indent_int)\n\n if cut_pos == -1:\n cut_pos = WRAP_MARGIN_INT\n\n this_str, unwrap_str = unwrap_str[:cut_pos], unwrap_str[cut_pos + 1 :]\n str_buf.write(\"{}{}\\n\".format(\" \" * indent_int, this_str))\n\n if is_rest_block:\n is_rest_block = False\n indent_int += 2\n\n return str_buf.getvalue()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a simple OAI -ORE Resource Map with one Science Metadata document and any number of Science Data objects.", "response": "def createSimpleResourceMap(ore_pid, scimeta_pid, sciobj_pid_list):\n \"\"\"Create a simple OAI-ORE Resource Map with one Science Metadata document and any\n number of Science Data objects.\n\n This creates a document that establishes an association between a Science Metadata\n object and any number of Science Data objects. 
The Science Metadata object contains\n information that is indexed by DataONE, allowing both the Science Metadata and the\n Science Data objects to be discoverable in DataONE Search. In search results, the\n objects will appear together and can be downloaded as a single package.\n\n Args:\n ore_pid: str\n Persistent Identifier (PID) to use for the new Resource Map\n\n scimeta_pid: str\n PID for an object that will be listed as the Science Metadata that is\n describing the Science Data objects.\n\n sciobj_pid_list: list of str\n List of PIDs that will be listed as the Science Data objects that are being\n described by the Science Metadata.\n\n Returns:\n ResourceMap : OAI-ORE Resource Map\n\n \"\"\"\n ore = ResourceMap()\n ore.initialize(ore_pid)\n ore.addMetadataDocument(scimeta_pid)\n ore.addDataDocuments(sciobj_pid_list, scimeta_pid)\n return ore"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a simple OAI -ORE Resource Map from a stream of PIDs.", "response": "def createResourceMapFromStream(in_stream, base_url=d1_common.const.URL_DATAONE_ROOT):\n \"\"\"Create a simple OAI-ORE Resource Map with one Science Metadata document and any\n number of Science Data objects, using a stream of PIDs.\n\n Args:\n in_stream:\n The first non-blank line is the PID of the resource map itself. 
Second line is\n the science metadata PID and remaining lines are science data PIDs.\n\n Example stream contents:\n\n ::\n\n PID_ORE_value\n sci_meta_pid_value\n data_pid_1\n data_pid_2\n data_pid_3\n\n base_url : str\n Root of the DataONE environment in which the Resource Map will be used.\n\n Returns:\n ResourceMap : OAI-ORE Resource Map\n\n \"\"\"\n pids = []\n for line in in_stream:\n pid = line.strip()\n if not pid or pid == \"#\" or pid.startswith(\"# \"):\n continue\n pids.append(pid)\n\n if len(pids) < 2:\n raise ValueError(\"Insufficient number of identifiers provided.\")\n\n logging.info(\"Read {} identifiers\".format(len(pids)))\n ore = ResourceMap(base_url=base_url)\n logging.info(\"ORE PID = {}\".format(pids[0]))\n ore.initialize(pids[0])\n logging.info(\"Metadata PID = {}\".format(pids[1]))\n ore.addMetadataDocument(pids[1])\n ore.addDataDocuments(pids[2:], pids[1])\n\n return ore"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef initialize(self, pid, ore_software_id=d1_common.const.ORE_SOFTWARE_ID):\n # Set nice prefixes for the namespaces\n for k in list(d1_common.const.ORE_NAMESPACE_DICT.keys()):\n self.bind(k, d1_common.const.ORE_NAMESPACE_DICT[k])\n # Create the ORE entity\n oid = self._pid_to_id(pid)\n ore = rdflib.URIRef(oid)\n self.add((ore, rdflib.RDF.type, ORE.ResourceMap))\n self.add((ore, DCTERMS.identifier, rdflib.term.Literal(pid)))\n self.add((ore, DCTERMS.creator, rdflib.term.Literal(ore_software_id)))\n # Add an empty aggregation\n ag = rdflib.URIRef(oid + \"#aggregation\")\n self.add((ore, ORE.describes, ag))\n self.add((ag, rdflib.RDF.type, ORE.Aggregation))\n self.add((ORE.Aggregation, rdflib.RDFS.isDefinedBy, ORE.term(\"\")))\n self.add(\n (ORE.Aggregation, rdflib.RDFS.label, rdflib.term.Literal(\"Aggregation\"))\n )\n self._ore_initialized = True", "response": "Create the basic ORE document structure."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the
following Python 3 code\ndef serialize_to_transport(self, doc_format=\"xml\", *args, **kwargs):\n return super(ResourceMap, self).serialize(\n format=doc_format, encoding=\"utf-8\", *args, **kwargs\n )", "response": "Serialize to a UTF-8 encoded XML document."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nserialize ResourceMap to an XML doc that is pretty printed for display.", "response": "def serialize_to_display(self, doc_format=\"pretty-xml\", *args, **kwargs):\n \"\"\"Serialize ResourceMap to an XML doc that is pretty printed for display.\n\n Args:\n doc_format: str\n One of: ``xml``, ``n3``, ``turtle``, ``nt``, ``pretty-xml``, ``trix``,\n ``trig`` and ``nquads``.\n\n args and kwargs:\n Optional arguments forwarded to rdflib.ConjunctiveGraph.serialize().\n\n Returns:\n str: Pretty printed Resource Map XML doc\n\n Note:\n Only the default, \"xml\", is automatically indexed by DataONE.\n\n \"\"\"\n return (\n super(ResourceMap, self)\n .serialize(format=doc_format, encoding=None, *args, **kwargs)\n .decode(\"utf-8\")\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the URIRef of the Aggregation entity", "response": "def getAggregation(self):\n \"\"\"Returns:\n\n str : URIRef of the Aggregation entity\n\n \"\"\"\n self._check_initialized()\n return [\n o for o in self.subjects(predicate=rdflib.RDF.type, object=ORE.Aggregation)\n ][0]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getObjectByPid(self, pid):\n self._check_initialized()\n opid = rdflib.term.Literal(pid)\n res = [o for o in self.subjects(predicate=DCTERMS.identifier, object=opid)]\n return res[0]", "response": "Returns the object identified by pid."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef addResource(self, pid):\n self._check_initialized()\n try:\n # is entry already in place?\n self.getObjectByPid(pid)\n return\n except
IndexError:\n pass\n # Entry not present, add it to the graph\n oid = self._pid_to_id(pid)\n obj = rdflib.URIRef(oid)\n ag = self.getAggregation()\n self.add((ag, ORE.aggregates, obj))\n self.add((obj, ORE.isAggregatedBy, ag))\n self.add((obj, DCTERMS.identifier, rdflib.term.Literal(pid)))", "response": "Add a resource to the Resource Map."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setDocuments(self, documenting_pid, documented_pid):\n self._check_initialized()\n documenting_id = self.getObjectByPid(documenting_pid)\n documented_id = self.getObjectByPid(documented_pid)\n self.add((documenting_id, CITO.documents, documented_id))", "response": "Add a CiTO (Citation Typing Ontology) triple asserting that documenting_pid documents documented_pid."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setDocumentedBy(self, documented_pid, documenting_pid):\n self._check_initialized()\n documented_id = self.getObjectByPid(documented_pid)\n documenting_id = self.getObjectByPid(documenting_pid)\n self.add((documented_id, CITO.isDocumentedBy, documenting_id))", "response": "Add a CiTO (Citation Typing Ontology) triple asserting that documented_pid is documented by documenting_pid."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding Science Data objects to the Resource Map and associating them with the Science Metadata object that documents them.", "response": "def addDataDocuments(self, scidata_pid_list, scimeta_pid=None):\n \"\"\"Add Science Data object(s)\n\n Args:\n scidata_pid_list : list of str\n List of one or more PIDs of Science Data objects\n\n scimeta_pid: str\n PID of a Science Metadata object that documents the Science Data objects.\n\n \"\"\"\n mpids = self.getAggregatedScienceMetadataPids()\n if scimeta_pid is None:\n if len(mpids) > 1:\n raise ValueError(\n \"No metadata PID specified and more than one choice available.\"\n )\n scimeta_pid = mpids[0]\n else:\n if scimeta_pid not in mpids:\n self.addMetadataDocument(scimeta_pid)\n\n for dpid in scidata_pid_list:\n self.addResource(dpid)\n self.setDocumentedBy(dpid, scimeta_pid)\n self.setDocuments(scimeta_pid, dpid)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the PID of the Resource Map itself.", "response": "def getResourceMapPid(self):\n \"\"\"Returns:\n\n str : PID of the Resource Map itself.\n\n \"\"\"\n ore = [\n o for o in self.subjects(predicate=rdflib.RDF.type, object=ORE.ResourceMap)\n ][0]\n pid = [str(o) for o in self.objects(predicate=DCTERMS.identifier, subject=ore)][\n 0\n ]\n return pid"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef getAllTriples(self):\n return [(str(s), str(p), str(o)) for s, p, o in self]", "response": "Returns a list of (subject, predicate, object) tuples."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getSubjectObjectsByPredicate(self, predicate):\n return sorted(\n set(\n [\n (str(s), str(o))\n for s, o in self.subject_objects(rdflib.term.URIRef(predicate))\n ]\n )\n )", "response": "Returns a list of (subject, object) tuples that match the predicate."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse an OAI-ORE Resource Map document.", "response": "def parseDoc(self, doc_str, format=\"xml\"):\n \"\"\"Parse an OAI-ORE Resource Map document.\n\n See Also: ``rdflib.ConjunctiveGraph.parse`` for documentation on arguments.\n\n \"\"\"\n self.parse(data=doc_str, format=format)\n self._ore_initialized = True\n return self"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _pid_to_id(self, pid):\n return d1_common.url.joinPathElements(\n self._base_url,\n self._version_tag,\n \"resolve\",\n d1_common.url.encodePathElement(pid),\n )",
"response": "Converts a pid to a URI that can be used as an OAI-ORE identifier."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmaking the script required for checking checksums from another machine.", "response": "def make_checksum_validation_script(stats_list):\n \"\"\"Make batch files required for checking checksums from another machine.\"\"\"\n if not os.path.exists('./hash_check'):\n os.mkdir('./hash_check')\n\n with open('./hash_check/curl.sh', 'w') as curl_f, open(\n './hash_check/md5.txt', 'w'\n ) as md5_f, open('./hash_check/sha1.txt', 'w') as sha1_f:\n\n curl_f.write('#!/usr/bin/env bash\\n\\n')\n\n for stats_dict in stats_list:\n for sysmeta_xml in stats_dict['largest_sysmeta_xml']:\n print(sysmeta_xml)\n sysmeta_pyxb = d1_common.types.dataoneTypes_v1_2.CreateFromDocument(\n sysmeta_xml\n )\n\n pid = sysmeta_pyxb.identifier.value()\n file_name = re.sub(r'\\W+', '_', pid)\n size = sysmeta_pyxb.size\n base_url = stats_dict['gmn_dict']['base_url']\n\n if size > 100 * 1024 * 1024:\n logging.info('Ignored large object.
size={} pid={}'.format(size, pid))\n continue\n\n curl_f.write('# {} {}\\n'.format(size, pid))\n curl_f.write(\n 'curl -o obj/{} {}/v1/object/{}\\n'.format(\n file_name, base_url, d1_common.url.encodePathElement(pid)\n )\n )\n\n if sysmeta_pyxb.checksum.algorithm == 'MD5':\n md5_f.write(\n '{} obj/{}\\n'.format(sysmeta_pyxb.checksum.value(), file_name)\n )\n else:\n sha1_f.write(\n '{} obj/{}\\n'.format(sysmeta_pyxb.checksum.value(), file_name)\n )\n\n with open('./hash_check/check.sh', 'w') as f:\n f.write('#!/usr/bin/env bash\\n\\n')\n f.write('mkdir -p obj\\n')\n f.write('./curl.sh\\n')\n f.write('sha1sum -c sha1.txt\\n')\n f.write('md5sum -c md5.txt\\n')"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef formfield(self, **kwargs):\n kwargs.update({\n 'max_value': 90,\n 'min_value': -90,\n })\n return super(LatitudeField, self).formfield(**kwargs)", "response": "Returns a formfield with max_value 90 and min_value -90."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef formfield(self, **kwargs):\n kwargs.update({\n 'max_value': 180,\n 'min_value': -180,\n })\n return super(LongitudeField, self).formfield(**kwargs)", "response": "Returns a formfield with max_value 180 and min_value -180."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef formfield(self, **kwargs):\n widget_kwargs = {\n 'lat_field': self.latitude_field_name,\n 'lon_field': self.longitude_field_name,\n }\n\n if self.data_field_name:\n widget_kwargs['data_field'] = self.data_field_name\n\n defaults = {\n 'form_class': OSMFormField,\n 'widget': OSMWidget(**widget_kwargs),\n }\n defaults.update(kwargs)\n\n return super(OSMField, self).formfield(**defaults)", "response": "Returns a new OSMFormField with the specified kwargs."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef
longitude_field_name(self):\n if self._lon_field_name is None:\n self._lon_field_name = self.name + '_lon'\n return self._lon_field_name", "response": "The name of the related :class:`LongitudeField`."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update_dependency_kinds(apps, schema_editor):\n DataDependency = apps.get_model('flow', 'DataDependency')\n for dependency in DataDependency.objects.all():\n # Assume dependency is of subprocess kind.\n dependency.kind = 'subprocess'\n\n # Check child inputs to determine if this is an IO dependency.\n child = dependency.child\n parent = dependency.parent\n\n for field_schema, fields in iterate_fields(child.input, child.process.input_schema):\n name = field_schema['name']\n value = fields[name]\n\n if field_schema.get('type', '').startswith('data:'):\n if value == parent.pk:\n dependency.kind = 'io'\n break\n elif field_schema.get('type', '').startswith('list:data:'):\n for data in value:\n if data == parent.pk:\n dependency.kind = 'io'\n break\n\n dependency.save()", "response": "Update historical dependency kinds as they may be wrong."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _wrap_jinja_filter(self, function):\n def wrapper(*args, **kwargs):\n \"\"\"Filter wrapper.\"\"\"\n try:\n return function(*args, **kwargs)\n except Exception: # pylint: disable=broad-except\n return NestedUndefined()\n\n # Copy over Jinja filter decoration attributes.\n for attribute in dir(function):\n if attribute.endswith('filter'):\n setattr(wrapper, attribute, getattr(function, attribute))\n\n return wrapper", "response": "Wrap a filter function to propagate exceptions as undefined values."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _register_custom_filters(self):\n custom_filters = self.settings.get('CUSTOM_FILTERS', [])\n if not
isinstance(custom_filters, list):\n raise KeyError(\"`CUSTOM_FILTERS` setting must be a list.\")\n\n for filter_module_name in custom_filters:\n try:\n filter_module = import_module(filter_module_name)\n except ImportError as error:\n raise ImproperlyConfigured(\n \"Failed to load custom filter module '{}'.\\n\"\n \"Error was: {}\".format(filter_module_name, error)\n )\n\n try:\n filter_map = getattr(filter_module, 'filters')\n if not isinstance(filter_map, dict):\n raise TypeError\n except (AttributeError, TypeError):\n raise ImproperlyConfigured(\n \"Filter module '{}' does not define a 'filters' dictionary\".format(filter_module_name)\n )\n self._environment.filters.update(filter_map)", "response": "Register any custom filter modules."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconfiguring the evaluation context.", "response": "def _evaluation_context(self, escape, safe_wrapper):\n \"\"\"Configure the evaluation context.\"\"\"\n self._escape = escape\n self._safe_wrapper = safe_wrapper\n\n try:\n yield\n finally:\n self._escape = None\n self._safe_wrapper = None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nevaluating a template block.", "response": "def evaluate_block(self, template, context=None, escape=None, safe_wrapper=None):\n \"\"\"Evaluate a template block.\"\"\"\n if context is None:\n context = {}\n\n try:\n with self._evaluation_context(escape, safe_wrapper):\n template = self._environment.from_string(template)\n return template.render(**context)\n except jinja2.TemplateError as error:\n raise EvaluationError(error.args[0])\n finally:\n self._escape = None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef evaluate_inline(self, expression, context=None, escape=None, safe_wrapper=None):\n if context is None:\n context = {}\n\n try:\n with self._evaluation_context(escape, safe_wrapper):\n compiled = 
self._environment.compile_expression(expression)\n return compiled(**context)\n except jinja2.TemplateError as error:\n raise EvaluationError(error.args[0])", "response": "Evaluate an inline expression."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_module_file(redbaron_tree, module_path, show_diff=False, dry_run=False):\n with tempfile.NamedTemporaryFile() as tmp_file:\n tmp_file.write(redbaron_tree_to_module_str(redbaron_tree))\n tmp_file.seek(0)\n if are_files_equal(module_path, tmp_file.name):\n logging.debug('Source unchanged')\n return False\n\n logging.debug('Source modified')\n\n tmp_file.seek(0)\n diff_update_file(module_path, tmp_file.read(), show_diff, dry_run)", "response": "Update the module_path with the contents of the tree."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_repo_root_by_path(path):\n repo = git.Repo(path, search_parent_directories=True)\n repo_path = repo.git.rev_parse('--show-toplevel')\n logging.info('Repository: {}'.format(repo_path))\n return repo_path", "response": "Given a path to an item in a git repository find the root of the repository."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _format_value(self, operation, key, indent):\n v = self._find_value(operation, key)\n if v == \"NOT_FOUND\":\n return []\n if not isinstance(v, list):\n v = [v]\n if not len(v):\n v = [None]\n key = key + \":\"\n lines = []\n for s in v:\n # Access control rules are stored in tuples.\n if isinstance(s, tuple):\n s = \"{}: {}\".format(*s)\n lines.append(\n \"{}{}{}{}\".format(\n \" \" * indent, key, \" \" * (TAB - indent - len(key) - 1), s\n )\n )\n key = \"\"\n return lines", "response": "Format the value of the key in the operation."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
add_cors_headers(response, request):\n opt_method_list = \",\".join(request.allowed_method_list + [\"OPTIONS\"])\n response[\"Allow\"] = opt_method_list\n response[\"Access-Control-Allow-Methods\"] = opt_method_list\n response[\"Access-Control-Allow-Origin\"] = request.META.get(\"Origin\", \"*\")\n response[\"Access-Control-Allow-Headers\"] = \"Authorization\"\n response[\"Access-Control-Allow-Credentials\"] = \"true\"", "response": "Add CORS headers to response."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef run_process(self, slug, inputs):\n def export_files(value):\n \"\"\"Export input files of spawned process.\"\"\"\n if isinstance(value, str) and os.path.isfile(value):\n # TODO: Use the protocol to export files and get the\n # process schema to check field type.\n print(\"export {}\".format(value))\n elif isinstance(value, dict):\n for item in value.values():\n export_files(item)\n elif isinstance(value, list):\n for item in value:\n export_files(item)\n\n export_files(inputs)\n print('run {}'.format(json.dumps({'process': slug, 'input': inputs}, separators=(',', ':'))))", "response": "Run a new process from a running process."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfind data object ID for given slug.", "response": "def get_data_id_by_slug(self, slug):\n \"\"\"Find data object ID for given slug.\n\n This method queries the Resolwe API and requires network access.\n \"\"\"\n resolwe_host = os.environ.get('RESOLWE_HOST_URL')\n url = urllib.parse.urljoin(resolwe_host, '/api/data?slug={}&fields=id'.format(slug))\n\n with urllib.request.urlopen(url, timeout=60) as f:\n data = json.loads(f.read().decode('utf-8'))\n\n if len(data) == 1:\n return data[0]['id']\n elif not data:\n raise ValueError('Data not found for slug {}'.format(slug))\n else:\n raise ValueError('More than one data object returned for slug {}'.format(slug))"} {"SOURCE": "codesearchnet", 
"instruction": "Implement a function in Python 3 to\nreturn a dictionary of the requirements of the current object.", "response": "def requirements(self):\n \"\"\"Process requirements.\"\"\"\n class dotdict(dict): # pylint: disable=invalid-name\n \"\"\"Dot notation access to dictionary attributes.\"\"\"\n\n def __getattr__(self, attr):\n value = self.get(attr)\n return dotdict(value) if isinstance(value, dict) else value\n\n return dotdict(self._meta.metadata.requirements)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef start(self, inputs):\n self.logger.info(\"Process is starting\")\n\n outputs = Outputs(self._meta.outputs)\n\n self.logger.info(\"Process is running\")\n try:\n self.run(inputs.freeze(), outputs)\n return outputs.freeze()\n except Exception as error:\n self.logger.exception(\"Exception while running process\")\n print(resolwe_runtime_utils.error(str(error)))\n raise\n except: # noqa\n self.logger.exception(\"Exception while running process\")\n print(resolwe_runtime_utils.error(\"Exception while running process\"))\n raise\n finally:\n self.logger.info(\"Process has finished\")", "response": "Start the process.\n\n :param inputs: An instance of `Inputs` describing the process inputs\n :return: An instance of `Outputs` describing the process outputs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nperform process discovery in given path and return a list of process schemas.", "response": "def discover_process(self, path):\n \"\"\"Perform process discovery in given path.\n\n This method will be called during process registration and\n should return a list of dictionaries with discovered process\n schemas.\n \"\"\"\n if not path.lower().endswith('.py'):\n return []\n\n parser = SafeParser(open(path).read())\n processes = parser.parse()\n return [process.to_schema() for process in processes]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function 
to\nevaluate the code needed to compute a given Data object.", "response": "def evaluate(self, data):\n \"\"\"Evaluate the code needed to compute a given Data object.\"\"\"\n return 'PYTHONPATH=\"{runtime}\" python3 -u -m resolwe.process {program} --slug {slug} --inputs {inputs}'.format(\n runtime=PYTHON_RUNTIME_VOLUME,\n program=PYTHON_PROGRAM_VOLUME,\n slug=shlex.quote(data.process.slug),\n inputs=PYTHON_INPUTS_VOLUME,\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef prepare_runtime(self, runtime_dir, data):\n # Copy over Python process runtime (resolwe.process).\n import resolwe.process as runtime_package\n\n src_dir = os.path.dirname(inspect.getsourcefile(runtime_package))\n dest_package_dir = os.path.join(runtime_dir, PYTHON_RUNTIME_DIRNAME, 'resolwe', 'process')\n shutil.copytree(src_dir, dest_package_dir)\n os.chmod(dest_package_dir, 0o755)\n\n # Write python source file.\n source = data.process.run.get('program', '')\n program_path = os.path.join(runtime_dir, PYTHON_PROGRAM_FILENAME)\n with open(program_path, 'w') as file:\n file.write(source)\n os.chmod(program_path, 0o755)\n\n # Write serialized inputs.\n inputs = copy.deepcopy(data.input)\n hydrate_input_references(inputs, data.process.input_schema)\n hydrate_input_uploads(inputs, data.process.input_schema)\n inputs_path = os.path.join(runtime_dir, PYTHON_INPUTS_FILENAME)\n\n # XXX: Skip serialization of LazyStorageJSON. 
We should support\n # LazyStorageJSON in Python processes on the new communication protocol\n def default(obj):\n \"\"\"Get default value.\"\"\"\n class_name = obj.__class__.__name__\n if class_name == 'LazyStorageJSON':\n return ''\n\n raise TypeError(f'Object of type {class_name} is not JSON serializable')\n\n with open(inputs_path, 'w') as file:\n json.dump(inputs, file, default=default)\n\n # Generate volume maps required to expose needed files.\n volume_maps = {\n PYTHON_RUNTIME_DIRNAME: PYTHON_RUNTIME_VOLUME,\n PYTHON_PROGRAM_FILENAME: PYTHON_PROGRAM_VOLUME,\n PYTHON_INPUTS_FILENAME: PYTHON_INPUTS_VOLUME,\n }\n\n return volume_maps", "response": "Prepare the runtime directory for the current process."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nvalidate the JWT and return the subject it contains.", "response": "def get_subject_with_local_validation(jwt_bu64, cert_obj):\n \"\"\"Validate the JWT and return the subject it contains.\n\n - The JWT is validated by checking that it was signed with a CN certificate.\n - The returned subject can be trusted for authz and authn operations.\n\n - Possible validation errors include:\n\n - A trusted (TLS/SSL) connection could not be made to the CN holding the\n signing certificate.\n - The JWT could not be decoded.\n - The JWT signature signature was invalid.\n - The JWT claim set contains invalid \"Not Before\" or \"Expiration Time\" claims.\n\n Args:\n jwt_bu64: bytes\n The JWT encoded using a a URL safe flavor of Base64.\n\n cert_obj: cryptography.Certificate\n Public certificate used for signing the JWT (typically the CN cert).\n\n Returns:\n - On successful validation, the subject contained in the JWT is returned.\n\n - If validation fails for any reason, errors are logged and None is returned.\n\n \"\"\"\n try:\n jwt_dict = validate_and_decode(jwt_bu64, cert_obj)\n except JwtException as e:\n return log_jwt_bu64_info(logging.error, str(e), jwt_bu64)\n try:\n return jwt_dict['sub']\n 
except LookupError:\n log_jwt_dict_info(logging.error, 'Missing \"sub\" key', jwt_dict)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_subject_with_remote_validation(jwt_bu64, base_url):\n cert_obj = d1_common.cert.x509.download_as_obj(base_url)\n return get_subject_with_local_validation(jwt_bu64, cert_obj)", "response": "Same as get_subject_with_local_validation, except that the signing certificate is downloaded from the CN at base_url."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_subject_with_file_validation(jwt_bu64, cert_path):\n cert_obj = d1_common.cert.x509.deserialize_pem_file(cert_path)\n return get_subject_with_local_validation(jwt_bu64, cert_obj)", "response": "Same as get_subject_with_local_validation except that the signing certificate\n is read from a local PEM file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_subject_without_validation(jwt_bu64):\n try:\n jwt_dict = get_jwt_dict(jwt_bu64)\n except JwtException as e:\n return log_jwt_bu64_info(logging.error, str(e), jwt_bu64)\n try:\n return jwt_dict['sub']\n except LookupError:\n log_jwt_dict_info(logging.error, 'Missing \"sub\" key', jwt_dict)", "response": "Extract subject from the JWT without validating the JWT."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_jwt_dict(jwt_bu64):\n jwt_tup = get_jwt_tup(jwt_bu64)\n try:\n jwt_dict = json.loads(jwt_tup[0].decode('utf-8'))\n jwt_dict.update(json.loads(jwt_tup[1].decode('utf-8')))\n jwt_dict['_sig_sha1'] = hashlib.sha1(jwt_tup[2]).hexdigest()\n except TypeError as e:\n raise JwtException('Decode failed.
error=\"{}\"'.format(e))\n return jwt_dict", "response": "Parse Base64 encoded JWT and return as a dict."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate_and_decode(jwt_bu64, cert_obj):\n try:\n return jwt.decode(\n jwt_bu64.strip(), cert_obj.public_key(), algorithms=['RS256'], verify=True\n )\n except jwt.InvalidTokenError as e:\n raise JwtException('Signature is invalid. error=\"{}\"'.format(str(e)))", "response": "Validate the JWT and return as a dict."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef log_jwt_dict_info(log, msg_str, jwt_dict):\n d = ts_to_str(jwt_dict)\n # Log known items in specific order, then the rest just sorted\n log_list = [(b, d.pop(a)) for a, b, c in CLAIM_LIST if a in d] + [\n (k, d[k]) for k in sorted(d)\n ]\n list(\n map(\n log,\n ['{}:'.format(msg_str)] + [' {}: {}'.format(k, v) for k, v in log_list],\n )\n )", "response": "Dump the JWT dictionary to log."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting timestamps in JWT to human readable dates.", "response": "def ts_to_str(jwt_dict):\n \"\"\"Convert timestamps in JWT to human readable dates.\n\n Args:\n jwt_dict: dict\n JWT with some keys containing timestamps.\n\n Returns:\n dict: Copy of input dict where timestamps have been replaced with human readable\n dates.\n\n \"\"\"\n d = ts_to_dt(jwt_dict)\n for k, v in list(d.items()):\n if isinstance(v, datetime.datetime):\n d[k] = v.isoformat().replace('T', ' ')\n return d"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts timestamps in JWT to datetime objects.", "response": "def ts_to_dt(jwt_dict):\n \"\"\"Convert timestamps in JWT to datetime objects.\n\n Args:\n jwt_dict: dict\n JWT with some keys containing timestamps.\n\n Returns:\n dict: Copy of input dict where timestamps have been replaced with\n datetime.datetime() objects.\n\n 
\"\"\"\n d = jwt_dict.copy()\n for k, v in [v[:2] for v in CLAIM_LIST if v[2]]:\n if k in jwt_dict:\n d[k] = d1_common.date_time.dt_from_ts(jwt_dict[k])\n return d"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef encode_bu64(b):\n s = base64.standard_b64encode(b)\n s = s.rstrip(b'=')\n s = s.replace(b'+', b'-')\n s = s.replace(b'/', b'_')\n return s", "response": "Encode bytes to a URL safe Base64 used by JWTs."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef decode_bu64(b):\n s = b\n s = s.replace(b'-', b'+')\n s = s.replace(b'_', b'/')\n p = len(s) % 4\n if p == 0:\n pass\n elif p == 2:\n s += b'=='\n elif p == 3:\n s += b'='\n else:\n raise ValueError('Illegal Base64url string')\n return base64.standard_b64decode(s)", "response": "Decode a URL safe flavor of Base64 used by JWTs back to bytes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nevaluate the code needed to compute a given Data object.", "response": "def evaluate(self, data):\n \"\"\"Evaluate the code needed to compute a given Data object.\"\"\"\n try:\n inputs = copy.deepcopy(data.input)\n hydrate_input_references(inputs, data.process.input_schema)\n hydrate_input_uploads(inputs, data.process.input_schema)\n\n # Include special 'proc' variable in the context.\n inputs['proc'] = {\n 'data_id': data.id,\n 'data_dir': self.manager.get_executor().resolve_data_path(),\n }\n\n # Include special 'requirements' variable in the context.\n inputs['requirements'] = data.process.requirements\n # Inject default values and change resources according to\n # the current Django configuration.\n inputs['requirements']['resources'] = data.process.get_resource_limits()\n\n script_template = data.process.run.get('program', '')\n\n # Get the appropriate expression engine.
If none is defined, do not evaluate\n # any expressions.\n expression_engine = data.process.requirements.get('expression-engine', None)\n if not expression_engine:\n return script_template\n\n return self.get_expression_engine(expression_engine).evaluate_block(\n script_template, inputs,\n escape=self._escape,\n safe_wrapper=SafeString,\n )\n except EvaluationError as error:\n raise ExecutionError('{}'.format(error))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nescape given value unless it is safe.", "response": "def _escape(self, value):\n \"\"\"Escape given value unless it is safe.\"\"\"\n if isinstance(value, SafeString):\n return value\n\n return shellescape.quote(value)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef open_sciobj_file_by_pid_ctx(pid, write=False):\n abs_path = get_abs_sciobj_file_path_by_pid(pid)\n with open_sciobj_file_by_path_ctx(abs_path, write) as sciobj_file:\n yield sciobj_file", "response": "Open the file containing the Science Object bytes of pid in the default SciObj store."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nopens the file containing the Science Object bytes at the custom location abs_path in the local filesystem and return the file handle and file_url.", "response": "def open_sciobj_file_by_path_ctx(abs_path, write=False):\n \"\"\"Open the file containing the Science Object bytes at the custom location\n ``abs_path`` in the local filesystem.\n\n If ``write`` is True, the file is opened for writing and any missing directories are\n created.
Return the file handle and file_url with the file location in a suitable\n form for storing in the DB.\n\n If nothing was written to the file, delete it.\n\n \"\"\"\n if write:\n d1_common.utils.filesystem.create_missing_directories_for_file(abs_path)\n try:\n with open(abs_path, 'wb' if write else 'rb') as sciobj_file:\n yield sciobj_file\n finally:\n if os.path.exists(abs_path) and not os.path.getsize(abs_path):\n os.unlink(abs_path)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nopens the file containing the Science Object bytes for a given PID in the local filesystem.", "response": "def open_sciobj_file_by_pid(pid, write=False):\n \"\"\"Open the file containing the Science Object bytes for ``pid`` in the local\n filesystem for read or write.\"\"\"\n abs_path = get_abs_sciobj_file_path_by_pid(pid)\n if write:\n d1_common.utils.filesystem.create_missing_directories_for_file(abs_path)\n return open_sciobj_file_by_path(abs_path, write)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef open_sciobj_file_by_path(abs_path, write=False):\n if write:\n d1_common.utils.filesystem.create_missing_directories_for_file(abs_path)\n return open(abs_path, 'wb' if write else 'rb')", "response": "Open a SciObj file for read or write."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the relative local path to the file holding an object.", "response": "def get_rel_sciobj_file_path(pid):\n \"\"\"Get the relative local path to the file holding an object's bytes.\n\n - The path is relative to settings.OBJECT_STORE_PATH\n - There is a one-to-one mapping between pid and path\n - The path is based on a SHA1 hash.
It's now possible to craft SHA1 collisions, but\n it's so unlikely that we ignore it for now\n - The path may or may not exist (yet).\n\n \"\"\"\n hash_str = hashlib.sha1(pid.encode('utf-8')).hexdigest()\n return os.path.join(hash_str[:2], hash_str[2:4], hash_str)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_abs_sciobj_file_path_by_url(file_url):\n assert_sciobj_store_exists()\n m = re.match(r'file://(.*?)/(.*)', file_url, re.IGNORECASE)\n if m.group(1) == RELATIVE_PATH_MAGIC_HOST_STR:\n return os.path.join(get_abs_sciobj_store_path(), m.group(2))\n assert os.path.isabs(m.group(2))\n return m.group(2)", "response": "Get the absolute path to the file holding an object s bytes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_gmn_version(base_url):\n home_url = d1_common.url.joinPathElements(base_url, 'home')\n try:\n response = requests.get(home_url, verify=False)\n except requests.exceptions.ConnectionError as e:\n return False, str(e)\n\n if not response.ok:\n return False, 'invalid /home. 
status={}'.format(response.status_code)\n\n soup = bs4.BeautifulSoup(response.content, 'html.parser')\n version_str = soup.find(string='GMN version:').find_next('td').string\n if version_str is None:\n return False, 'Parse failed'\n\n return True, version_str", "response": "Return the version currently running on a GMN instance."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef extract_subjects(subject_info_xml, primary_str):\n subject_info_pyxb = deserialize_subject_info(subject_info_xml)\n subject_info_tree = gen_subject_info_tree(subject_info_pyxb, primary_str)\n return subject_info_tree.get_subject_set()", "response": "Extract a set of authenticated subjects from a SubjectInfo XML document and primary subject."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef deserialize_subject_info(subject_info_xml):\n try:\n return d1_common.xml.deserialize(subject_info_xml)\n except ValueError as e:\n raise d1_common.types.exceptions.InvalidToken(\n 0,\n 'Could not deserialize SubjectInfo. 
subject_info=\"{}\", error=\"{}\"'.format(\n subject_info_xml, str(e)\n ),\n )", "response": "Deserialize SubjectInfo XML doc to native object."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting the flat self referential lists in the SubjectInfo PyXB object to a tree structure.", "response": "def gen_subject_info_tree(subject_info_pyxb, authn_subj, include_duplicates=False):\n \"\"\"Convert the flat, self referential lists in the SubjectInfo to a tree structure.\n\n Args:\n subject_info_pyxb: SubjectInfo PyXB object\n\n authn_subj: str\n The authenticated subject that becomes the root subject in the tree of\n subjects built from the SubjectInfo.\n\n Only subjects that are authenticated by a direct or indirect connection to\n this subject are included in the tree.\n\n include_duplicates:\n Include branches of the tree that contain subjects that have already been\n included via other branches.\n\n If the tree is intended for rendering, including the duplicates will\n provide a more complete view of the SubjectInfo.\n\n Returns:\n SubjectInfoNode : Tree of nodes holding information about subjects that are\n directly or indirectly connected to the authenticated subject in the root.\n\n \"\"\"\n\n class State:\n \"\"\"self.\"\"\"\n\n pass\n\n state = State()\n\n state.subject_info_pyxb = subject_info_pyxb\n state.include_duplicates = include_duplicates\n state.visited_set = set()\n state.tree = SubjectInfoNode(\"Root\", TYPE_NODE_TAG)\n\n _add_subject(state, state.tree, authn_subj)\n symbolic_node = state.tree.add_child(\"Symbolic\", TYPE_NODE_TAG)\n _add_subject(state, symbolic_node, d1_common.const.SUBJECT_AUTHENTICATED)\n _trim_tree(state)\n\n return state.tree"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _trim_tree(state):\n for n in list(state.tree.leaf_node_gen):\n if n.type_str == TYPE_NODE_TAG:\n n.parent.child_list.remove(n)\n return _trim_tree(state)", 
"response": "Trim empty leaf nodes from the tree."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_child(self, label_str, type_str):\n child_node = SubjectInfoNode(label_str, type_str)\n child_node.parent = self\n self.child_list.append(child_node)\n return child_node", "response": "Add a child node."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting path from root to this node.", "response": "def get_path_str(self, sep=os.path.sep, type_str=None):\n \"\"\"Get path from root to this node.\n\n Args:\n sep: str\n One or more characters to insert between each element in the path.\n Defaults to \"/\" on Unix and \"\\\" on Windows.\n\n type_str:\n SUBJECT_NODE_TAG, TYPE_NODE_TAG or None. If set, only include\n information from nodes of that type.\n\n Returns:\n str: String describing the path from the root to this node.\n\n \"\"\"\n return sep.join(\n list(\n reversed(\n [\n v.label_str\n for v in self.parent_gen\n if type_str in (None, v.type_str)\n ]\n )\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_leaf_node_path_list(self, sep=os.path.sep, type_str=None):\n return [v.get_path_str(sep, type_str) for v in self.leaf_node_gen]", "response": "Get paths for all leaf nodes for the tree rooted at this node."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets list of the labels of the nodes leading up to this node from the root.", "response": "def get_path_list(self, type_str=None):\n \"\"\"Get list of the labels of the nodes leading up to this node from the root.\n\n Args:\n type_str:\n SUBJECT_NODE_TAG, TYPE_NODE_TAG or None. 
If set, only include\n information from nodes of that type.\n\n Returns:\n list of str: The labels of the nodes leading up to this node from the root.\n\n \"\"\"\n return list(\n reversed(\n [v.label_str for v in self.parent_gen if type_str in (None, v.type_str)]\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_label_set(self, type_str=None):\n return {v.label_str for v in self.node_gen if type_str in (None, v.type_str)}", "response": "Returns a set of label_str for the tree rooted at this node."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef start_task_type(self, task_type_str, total_task_count):\n assert (\n task_type_str not in self._task_dict\n ), \"Task type has already been started\"\n self._task_dict[task_type_str] = {\n \"start_time\": time.time(),\n \"total_task_count\": total_task_count,\n \"task_idx\": 0,\n }", "response": "Called when about to start processing a new type of task."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef end_task_type(self, task_type_str):\n assert (\n task_type_str in self._task_dict\n ), \"Task type has not been started yet: {}\".format(task_type_str)\n self._log_progress()\n del self._task_dict[task_type_str]", "response": "Called when processing of all tasks of the given type is complete."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalling when processing is about to start on a single task of the given type.", "response": "def start_task(self, task_type_str, current_task_index=None):\n \"\"\"Call when processing is about to start on a single task of the given task\n type, typically at the top inside of the loop that processes the tasks.\n\n Args:\n task_type_str (str):\n The name of the task, used as a dict key and printed in the progress\n updates.\n\n current_task_index (int):\n If 
the task processing loop may skip or repeat tasks, the index of the\n current task must be provided here. This parameter can normally be left\n unset.\n\n \"\"\"\n assert (\n task_type_str in self._task_dict\n ), \"Task type has not been started yet: {}\".format(task_type_str)\n if current_task_index is not None:\n self._task_dict[task_type_str][\"task_idx\"] = current_task_index\n else:\n self._task_dict[task_type_str][\"task_idx\"] += 1\n self._log_progress_if_interval_elapsed()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nregister an event that occurred during processing of a task of the given type.", "response": "def event(self, event_name):\n \"\"\"Register an event that occurred during processing of a task of the given\n type.\n\n Args: event_name: str A name for a type of events. Events of the\n same type are displayed as a single entry and a total count of\n occurences.\n\n \"\"\"\n self._event_dict.setdefault(event_name, 0)\n self._event_dict[event_name] += 1\n self._log_progress_if_interval_elapsed()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_permissible_wcs(classname, f):\n # these are the problematic sectors\n classes = ['cu', 'db', 'sb', 'sd', 'mue', 'mutau', 'taue', 'dF0']\n if classname not in classes or f == 5:\n # for 5-flavour WET, nothing to do.\n # Neither for other classes (I, II, ...) 
because they exist either all\n # or not at all in WET-3 and WET-4 (they have specific flavours).\n return 'all'\n if f not in [3, 4]:\n raise ValueError(\"f must be 3, 4, or 5.\")\n if classname == 'dF0':\n sector = 'dF=0'\n else:\n sector = classname\n perm_keys = wcxf.Basis['WET-{}'.format(f), 'JMS'].sectors[sector].keys()\n all_keys = coeffs[sector]\n return [i for i, c in enumerate(all_keys) if c in perm_keys]", "response": "r Returns a list of sectors that are permitted to use in the given class."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef admeig(classname, f, m_u, m_d, m_s, m_c, m_b, m_e, m_mu, m_tau):\n args = f, m_u, m_d, m_s, m_c, m_b, m_e, m_mu, m_tau\n A = getattr(adm, 'adm_s_' + classname)(*args)\n perm_keys = get_permissible_wcs(classname, f)\n if perm_keys != 'all':\n # remove disallowed rows & columns if necessary\n A = A[perm_keys][:, perm_keys]\n w, v = np.linalg.eig(A.T)\n return w, v", "response": "Compute the eigenvalues and eigenvectors for a QCD anomalous dimension\n matrix that is defined in adm. adm_s_X where X is the name of the sector. Output analogous to np. linalg. 
eigenvectors."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getUs(classname, eta_s, f, alpha_s, alpha_e, m_u, m_d, m_s, m_c, m_b, m_e, m_mu, m_tau):\n w, v = admeig(classname, f, m_u, m_d, m_s, m_c, m_b, m_e, m_mu, m_tau)\n b0s = 11 - 2 * f / 3\n a = w / (2 * b0s)\n return v @ np.diag(eta_s**a) @ np.linalg.inv(v)", "response": "Get the QCD evolution matrix."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the QCD evolution matrix.", "response": "def getUe(classname, eta_s, f, alpha_s, alpha_e, m_u, m_d, m_s, m_c, m_b, m_e, m_mu, m_tau):\n \"\"\"Get the QCD evolution matrix.\"\"\"\n args = f, m_u, m_d, m_s, m_c, m_b, m_e, m_mu, m_tau\n A = getattr(adm, 'adm_e_' + classname)(*args)\n perm_keys = get_permissible_wcs(classname, f)\n if perm_keys != 'all':\n # remove disallowed rows & columns if necessary\n A = A[perm_keys][:, perm_keys]\n w, v = admeig(classname, *args)\n b0s = 11 - 2 * f / 3\n a = w / (2 * b0s)\n K = np.linalg.inv(v) @ A.T @ v\n for i in range(K.shape[0]):\n for j in range(K.shape[1]):\n if a[i] - a[j] != 1:\n K[i, j] *= (eta_s**(a[j] + 1) - eta_s**a[i]) / (a[i] - a[j] - 1)\n else:\n K[i, j] *= eta_s**a[i] * log(1 / eta_s)\n return -alpha_e / (2 * b0s * alpha_s) * v @ K @ np.linalg.inv(v)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run_sector(sector, C_in, eta_s, f, p_in, p_out, qed_order=1, qcd_order=1):\n Cdictout = OrderedDict()\n classname = sectors[sector]\n keylist = coeffs[sector]\n if sector == 'dF=0':\n perm_keys = get_permissible_wcs('dF0', f)\n else:\n perm_keys = get_permissible_wcs(sector, f)\n if perm_keys != 'all':\n # remove disallowed keys if necessary\n keylist = np.asarray(keylist)[perm_keys]\n C_input = np.array([C_in.get(key, 0) for key in keylist])\n if np.count_nonzero(C_input) == 0 or classname == 'inv':\n # nothing to do for SM-like WCs or RG invariant operators\n 
C_result = C_input\n else:\n C_scaled = np.asarray([C_input[i] * scale_C(key, p_in) for i, key in enumerate(keylist)])\n if qcd_order == 0:\n Us = np.eye(len(C_scaled))\n elif qcd_order == 1:\n Us = getUs(classname, eta_s, f, **p_in)\n if qed_order == 0:\n Ue = np.zeros(C_scaled.shape)\n elif qed_order == 1:\n if qcd_order == 0:\n Ue = getUe(classname, 1, f, **p_in)\n else:\n Ue = getUe(classname, eta_s, f, **p_in)\n C_out = (Us + Ue) @ C_scaled\n C_result = [C_out[i] / scale_C(key, p_out) for i, key in enumerate(keylist)]\n for j in range(len(C_result)):\n Cdictout[keylist[j]] = C_result[j]\n return Cdictout", "response": "r Solve the WET RGE for a specific sector."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _merge_region_trees(self, dst_tree, src_tree, pid):\n for k, v in list(src_tree.items()):\n # Prepend an underscore to the administrative area names, to make them\n # sort separately from the identifiers.\n # k = '_' + k\n if k not in dst_tree or dst_tree[k] is None:\n dst_tree[k] = {}\n dst_tree[k][pid] = None\n if v is not None:\n self._merge_region_trees(dst_tree[k], v, pid)", "response": "Merge the tree of files in one tree into another tree."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the region_tree item specified by path. An item can be a a folder (represented by a dictionary) or a PID (represented by None). This function is also used for determining which section of a path is within the region tree and which section should be passed to the next resolver. To support this, the logic is as follows: - If the path points to an item in the region tree, the item is returned and the path, having been fully consumed, is returned as an empty list. - If the path exits through a valid PID in the region tree, the PID is returned for the item and the section of the path that was not consumed within the region tree is returned. 
- If the path exits through a valid folder in the region tree, an \"invalid path\" PathException is raised. This is because only the PIDs are valid \"exit points\" in the tree. - If the path goes to an invalid location within the region tree, an \"invalid path\" PathException is raised.", "response": "def _get_region_tree_item_and_unconsumed_path(\n self, region_tree, path, parent_key=''\n ):\n \"\"\"Return the region_tree item specified by path. An item can be a a folder\n (represented by a dictionary) or a PID (represented by None).\n\n This function is also used for determining which section of a path is within\n the region tree and which section should be passed to the next resolver. To\n support this, the logic is as follows:\n\n - If the path points to an item in the region tree, the item is returned and\n the path, having been fully consumed, is returned as an empty list.\n\n - If the path exits through a valid PID in the region tree, the PID is returned\n for the item and the section of the path that was not consumed within the\n region tree is returned.\n\n - If the path exits through a valid folder in the region tree, an \"invalid\n path\" PathException is raised. 
This is because only the PIDs are valid \"exit\n points\" in the tree.\n\n - If the path goes to an invalid location within the region tree, an \"invalid\n path\" PathException is raised.\n\n \"\"\"\n # Handle valid item within region tree.\n if not path:\n if region_tree is None:\n return parent_key, []\n else:\n return region_tree, []\n # Handle valid exit through PID.\n if region_tree is None:\n return parent_key, path\n # Handle next level in path.\n if path[0] in list(region_tree.keys()):\n return self._get_region_tree_item_and_unconsumed_path(\n region_tree[path[0]], path[1:], path[0]\n )\n else:\n raise onedrive_exceptions.PathException('Invalid path')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\noverriding save method to catch handled errors and repackage them as 400 errors.", "response": "def save(self, **kwargs):\n \"\"\"Override save method to catch handled errors and repackage them as 400 errors.\"\"\"\n try:\n return super().save(**kwargs)\n except SlugError as error:\n raise ParseError(error)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_api_major_by_base_url(base_url, *client_arg_list, **client_arg_dict):\n api_major = 0\n client = d1_client.mnclient.MemberNodeClient(\n base_url, *client_arg_list, **client_arg_dict\n )\n node_pyxb = client.getCapabilities()\n for service_pyxb in node_pyxb.services.service:\n if service_pyxb.available:\n api_major = max(api_major, int(service_pyxb.version[-1]))\n return api_major", "response": "Read the Node document from a node and return an int containing the latest D1 APIVersion supported by the node."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete any unused subjects from the database.", "response": "def delete_unused_subjects():\n \"\"\"Delete any unused subjects from the database.\n\n This is not strictly required as any unused subjects will automatically be reused if\n needed 
in the future.\n\n \"\"\"\n # This causes Django to create a single join (check with query.query)\n query = d1_gmn.app.models.Subject.objects.all()\n query = query.filter(scienceobject_submitter__isnull=True)\n query = query.filter(scienceobject_rights_holder__isnull=True)\n query = query.filter(eventlog__isnull=True)\n query = query.filter(permission__isnull=True)\n query = query.filter(whitelistforcreateupdatedelete__isnull=True)\n\n logger.debug('Deleting {} unused subjects:'.format(query.count()))\n for s in query.all():\n logging.debug(' {}'.format(s.subject))\n\n query.delete()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef extract_subject_from_dn(cert_obj):\n return \",\".join(\n \"{}={}\".format(\n OID_TO_SHORT_NAME_DICT.get(v.oid.dotted_string, v.oid.dotted_string),\n rdn_escape(v.value),\n )\n for v in reversed(list(cert_obj.subject))\n )", "response": "Serialize a DN to a DataONE subject string."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_d1_dn_subject(common_name_str):\n return cryptography.x509.Name(\n [\n cryptography.x509.NameAttribute(\n cryptography.x509.oid.NameOID.COUNTRY_NAME, \"US\"\n ),\n cryptography.x509.NameAttribute(\n cryptography.x509.oid.NameOID.STATE_OR_PROVINCE_NAME, \"California\"\n ),\n cryptography.x509.NameAttribute(\n cryptography.x509.oid.NameOID.LOCALITY_NAME, \"San Francisco\"\n ),\n cryptography.x509.NameAttribute(\n cryptography.x509.oid.NameOID.ORGANIZATION_NAME, \"Root CA\"\n ),\n cryptography.x509.NameAttribute(\n cryptography.x509.oid.NameOID.COMMON_NAME, \"ca.ca.com\"\n ),\n ]\n )", "response": "Create the DN Subject for a DataONE environment."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngenerate a Certificate Signing Request (CSR). Args: private_key_bytes: bytes Private key with which the CSR will be signed. 
subject_name: str Certificate Subject Name fqdn_list: List of Fully Qualified Domain Names (FQDN) and/or IP addresses for which this certificate will provide authentication. E.g.: ['my.membernode.org', '1.2.3.4']", "response": "def generate_csr(private_key_bytes, subject_name, fqdn_list):\n \"\"\"Generate a Certificate Signing Request (CSR).\n\n Args:\n private_key_bytes: bytes\n Private key with which the CSR will be signed.\n\n subject_name: str\n Certificate Subject Name\n\n fqdn_list:\n List of Fully Qualified Domain Names (FQDN) and/or IP addresses for which\n this certificate will provide authentication.\n\n E.g.: ['my.membernode.org', '1.2.3.4']\n \"\"\"\n return (\n cryptography.x509.CertificateSigningRequestBuilder()\n .subject_name(subject_name)\n .add_extension(\n extension=cryptography.x509.SubjectAlternativeName(\n [cryptography.x509.DNSName(v) for v in fqdn_list]\n ),\n critical=False,\n )\n .sign(\n private_key=private_key_bytes,\n algorithm=cryptography.hazmat.primitives.hashes.SHA256(),\n backend=cryptography.hazmat.backends.default_backend(),\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef serialize_cert_to_pem(cert_obj):\n return cert_obj.public_bytes(\n encoding=cryptography.hazmat.primitives.serialization.Encoding.PEM\n )", "response": "Serialize a cryptography. Certificate object to PEM."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nextracts the SubjectInfo XML doc from a certificate.", "response": "def extract_subject_info_extension(cert_obj):\n \"\"\"Extract DataONE SubjectInfo XML doc from certificate.\n\n Certificates issued by DataONE may include an embedded XML doc containing\n additional information about the subject specified in the certificate DN. 
If\n present, the doc is stored as an extension with an OID specified by DataONE and\n formatted as specified in the DataONE SubjectInfo schema definition.\n\n Args:\n cert_obj: cryptography.Certificate\n\n Returns:\n str : SubjectInfo XML doc if present, else None\n\n \"\"\"\n try:\n subject_info_der = cert_obj.extensions.get_extension_for_oid(\n cryptography.x509.oid.ObjectIdentifier(DATAONE_SUBJECT_INFO_OID)\n ).value.value\n return str(pyasn1.codec.der.decoder.decode(subject_info_der)[0])\n except Exception as e:\n logging.debug('SubjectInfo not extracted. reason=\"{}\"'.format(e))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndownload a public certificate from a DataONE server as DER encoded bytes.", "response": "def download_as_der(\n base_url=d1_common.const.URL_DATAONE_ROOT,\n timeout_sec=d1_common.const.DEFAULT_HTTP_TIMEOUT,\n):\n \"\"\"Download public certificate from a TLS/SSL web server as DER encoded ``bytes``.\n\n If the certificate is being downloaded in order to troubleshoot validation issues,\n the download itself may fail due to the validation issue that is being investigated.\n To work around such chicken-and-egg problems, temporarily wrap calls to the\n download_* functions with the ``disable_cert_validation()`` context manager (also in\n this module).\n\n Args:\n base_url : str\n A full URL to a DataONE service endpoint or a server hostname\n timeout_sec : int or float\n Timeout for the SSL socket operations\n Returns:\n bytes: The server's public certificate as DER encoded bytes.\n\n \"\"\"\n # TODO: It is unclear which SSL and TLS protocols are supported by the method\n # currently being used. 
The current method and the two commented out below\n # should be compared to determine which has the best compatibility with current\n # versions of Python and current best practices for protocol selection.\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n sock.settimeout(timeout_sec)\n ssl_socket = ssl.wrap_socket(sock)\n url_obj = urllib.parse.urlparse(base_url)\n ssl_socket.connect((url_obj.netloc, 443))\n return ssl_socket.getpeercert(binary_form=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef download_as_pem(\n base_url=d1_common.const.URL_DATAONE_ROOT,\n timeout_sec=d1_common.const.DEFAULT_HTTP_TIMEOUT,\n):\n \"\"\"Download public certificate from a TLS/SSL web server as PEM encoded string.\n\n Also see download_as_der().\n\n Args:\n base_url : str\n A full URL to a DataONE service endpoint or a server hostname\n timeout_sec : int or float\n Timeout for the SSL socket operations\n\n Returns:\n str: The certificate as a PEM encoded string.\n\n \"\"\"\n return ssl.DER_cert_to_PEM_cert(download_as_der(base_url, timeout_sec))", "response": "Download a public certificate from a TLS or SSL web server as PEM encoded string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndownload a public certificate from a TLS or SSL web server as Certificate object.", "response": "def download_as_obj(\n base_url=d1_common.const.URL_DATAONE_ROOT,\n timeout_sec=d1_common.const.DEFAULT_HTTP_TIMEOUT,\n):\n \"\"\"Download public certificate from a TLS/SSL web server as Certificate object.\n\n Also see download_as_der().\n\n Args:\n base_url : str\n A full URL to a DataONE service endpoint or a server hostname\n timeout_sec : int or float\n Timeout for the SSL socket operations\n\n Returns:\n cryptography.Certificate\n\n \"\"\"\n return decode_der(download_as_der(base_url, timeout_sec))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function 
that\ndecodes a DER encoded certificate into a cryptography. Certificate object.", "response": "def decode_der(cert_der):\n \"\"\"Decode cert DER string to Certificate object.\n\n Args:\n cert_der : Certificate as a DER encoded string\n\n Returns:\n cryptography.Certificate()\n\n \"\"\"\n return cryptography.x509.load_der_x509_certificate(\n data=cert_der, backend=cryptography.hazmat.backends.default_backend()\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef extract_issuer_ca_cert_url(cert_obj):\n for extension in cert_obj.extensions:\n if extension.oid.dotted_string == AUTHORITY_INFO_ACCESS_OID:\n authority_info_access = extension.value\n for access_description in authority_info_access:\n if access_description.access_method.dotted_string == CA_ISSUERS_OID:\n return access_description.access_location.value", "response": "Extract issuer CA certificate URL from certificate."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nserializing a private key to PEM.", "response": "def serialize_private_key_to_pem(private_key, passphrase_bytes=None):\n \"\"\"Serialize private key to PEM.\n\n Args:\n private_key:\n passphrase_bytes:\n\n Returns:\n bytes: PEM encoded private key\n\n \"\"\"\n return private_key.private_bytes(\n encoding=cryptography.hazmat.primitives.serialization.Encoding.PEM,\n format=cryptography.hazmat.primitives.serialization.PrivateFormat.TraditionalOpenSSL,\n encryption_algorithm=cryptography.hazmat.primitives.serialization.BestAvailableEncryption(\n passphrase_bytes\n )\n if passphrase_bytes is not None\n else cryptography.hazmat.primitives.serialization.NoEncryption(),\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngenerate a private key for a single resource.", "response": "def generate_private_key(key_size=2048):\n \"\"\"Generate a private key\"\"\"\n return 
cryptography.hazmat.primitives.asymmetric.rsa.generate_private_key(\n public_exponent=65537,\n key_size=key_size,\n backend=cryptography.hazmat.backends.default_backend(),\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nextracts public key from certificate as PEM encoded PKCS#1 public key.", "response": "def get_public_key_pem(cert_obj):\n \"\"\"Extract public key from certificate as PEM encoded PKCS#1.\n\n Args:\n cert_obj: cryptography.Certificate\n\n Returns:\n bytes: PEM encoded PKCS#1 public key.\n\n \"\"\"\n return cert_obj.public_key().public_bytes(\n encoding=cryptography.hazmat.primitives.serialization.Encoding.PEM,\n format=cryptography.hazmat.primitives.serialization.PublicFormat.PKCS1,\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading CSR from PEM encoded file", "response": "def load_csr(pem_path):\n \"\"\"Load CSR from PEM encoded file\"\"\"\n with open(pem_path, \"rb\") as f:\n return cryptography.x509.load_pem_x509_csr(\n data=f.read(), backend=cryptography.hazmat.backends.default_backend()\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load_private_key(pem_path, passphrase_bytes=None):\n with open(pem_path, \"rb\") as f:\n return cryptography.hazmat.primitives.serialization.load_pem_private_key(\n data=f.read(),\n password=passphrase_bytes,\n backend=cryptography.hazmat.backends.default_backend(),\n )", "response": "Load a private key from PEM encoded file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nserializing certificate to DER.", "response": "def serialize_cert_to_der(cert_obj):\n \"\"\"Serialize certificate to DER.\n\n Args:\n cert_obj: cryptography.Certificate\n\n Returns:\n bytes: DER encoded certificate\n\n \"\"\"\n return cert_obj.public_bytes(\n cryptography.hazmat.primitives.serialization.Encoding.DER\n )"} {"SOURCE": "codesearchnet", "instruction": 
"Write a Python 3 script to\ndump basic certificate values to the log.", "response": "def log_cert_info(logger, msg_str, cert_obj):\n \"\"\"Dump basic certificate values to the log.\n\n Args:\n logger: Logger\n Logger to which to write the certificate values.\n\n msg_str: str\n A message to write to the log before the certificate values.\n\n cert_obj: cryptography.Certificate\n Certificate containing values to log.\n\n Returns:\n None\n\n \"\"\"\n list(\n map(\n logger,\n [\"{}:\".format(msg_str)]\n + [\n \" {}\".format(v)\n for v in [\n \"Subject: {}\".format(\n _get_val_str(cert_obj, [\"subject\", \"value\"], reverse=True)\n ),\n \"Issuer: {}\".format(\n _get_val_str(cert_obj, [\"issuer\", \"value\"], reverse=True)\n ),\n \"Not Valid Before: {}\".format(\n cert_obj.not_valid_before.isoformat()\n ),\n \"Not Valid After: {}\".format(cert_obj.not_valid_after.isoformat()),\n \"Subject Alt Names: {}\".format(\n _get_ext_val_str(\n cert_obj, \"SUBJECT_ALTERNATIVE_NAME\", [\"value\", \"value\"]\n )\n ),\n \"CRL Distribution Points: {}\".format(\n _get_ext_val_str(\n cert_obj,\n \"CRL_DISTRIBUTION_POINTS\",\n [\"value\", \"full_name\", \"value\", \"value\"],\n )\n ),\n \"Authority Access Location: {}\".format(\n extract_issuer_ca_cert_url(cert_obj) or \"\"\n ),\n ]\n ],\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a standard certificate extension by its name.", "response": "def get_extension_by_name(cert_obj, extension_name):\n \"\"\"Get a standard certificate extension by attribute name.\n\n Args:\n cert_obj: cryptography.Certificate\n Certificate containing a standard extension.\n\n extension_name : str\n Extension name. 
E.g., 'SUBJECT_DIRECTORY_ATTRIBUTES'.\n\n Returns:\n Cryptography.Extension\n\n \"\"\"\n try:\n return cert_obj.extensions.get_extension_for_oid(\n getattr(cryptography.x509.oid.ExtensionOID, extension_name)\n )\n except cryptography.x509.ExtensionNotFound:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a list of values from nested objects by attribute names.", "response": "def _get_val_list(obj, path_list, reverse=False):\n \"\"\"Extract values from nested objects by attribute names.\n\n Objects contain attributes which are named references to objects. This will descend\n down a tree of nested objects, starting at the given object, following the given\n path.\n\n Args:\n obj: object\n Any type of object\n\n path_list: list\n Attribute names\n\n reverse: bool\n Reverse the list of values before concatenation.\n\n Returns:\n list of objects\n\n \"\"\"\n try:\n y = getattr(obj, path_list[0])\n except AttributeError:\n return []\n if len(path_list) == 1:\n return [y]\n else:\n val_list = [x for a in y for x in _get_val_list(a, path_list[1:], reverse)]\n if reverse:\n val_list.reverse()\n return val_list"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_val_str(obj, path_list=None, reverse=False):\n val_list = _get_val_list(obj, path_list or [], reverse)\n return \"\" if obj is None else \" / \".join(map(str, val_list))", "response": "Extract values from nested objects by attribute names and concatenate their\n string representations."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _Bern_to_JMS_I(C, qq):\n if qq in ['sb', 'db', 'ds']:\n dd = 'dd'\n ij = '{}{}'.format(dflav[qq[0]] + 1, dflav[qq[1]] + 1)\n elif qq == 'cu':\n dd = 'uu'\n ij = '{}{}'.format(uflav[qq[0]] + 1, uflav[qq[1]] + 1)\n else:\n raise ValueError(\"not in Bern_I: \".format(qq))\n ji = ij[1] + ij[0]\n d = 
{\"V{}LL_{}{}\".format(dd, ij, ij): C['1' + 2 * qq],\n \"S1{}RR_{}{}\".format(dd, ji, ji): C['2' + 2 * qq].conjugate() + C['3' + 2 * qq].conjugate() / 3,\n \"S8{}RR_{}{}\".format(dd, ji, ji): 2 * C['3' + 2 * qq].conjugate(),\n \"V1{}LR_{}{}\".format(dd, ij, ij): -C['4' + 2 * qq] / 6 - C['5' + 2 * qq] / 2,\n \"V8{}LR_{}{}\".format(dd, ij, ij): -C['4' + 2 * qq],\n \"V{}RR_{}{}\".format(dd, ij, ij): C['1p' + 2 * qq],\n \"S1{}RR_{}{}\".format(dd, ij, ij): C['2p' + 2 * qq] + C['3p' + 2 * qq] / 3,\n \"S8{}RR_{}{}\".format(dd, ij, ij): 2 * C['3p' + 2 * qq],\n }\n if qq == 'cu':\n # here we need to convert some operators that are not in the basis\n for VXY in ['VuuRR', 'V1uuLR', 'V8uuLR', 'VuuLL']:\n d[VXY + '_1212'] = d.pop(VXY + '_2121').conjugate()\n return d", "response": "From Bern to JMS basis for $\\ Delta F = 2$ operators."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _BernI_to_FormFlavor_I(C, qq):\n qqf = qq[::-1] # FormFlavour uses \"bs\" instead of \"sb\" etc.\n if qq in ['sb', 'db', 'ds']:\n return {\n 'CVLL_' + 2*qqf: C[\"1\" + 2*qq],\n 'CSLL_' + 2*qqf: C[\"2\" + 2*qq] + 1 / 2 * C[\"3\" + 2*qq],\n 'CTLL_' + 2*qqf: -1 / 8 * C[\"3\" + 2*qq],\n 'CVLR_' + 2*qqf: -1 / 2 * C[\"5\" + 2*qq],\n 'CVRR_' + 2*qqf: C[\"1p\" + 2*qq],\n 'CSRR_' + 2*qqf: C[\"2p\" + 2*qq] + 1 / 2 * C[\"3p\" + 2*qq],\n 'CTRR_' + 2*qqf: -1 / 8 * C[\"3p\" + 2*qq],\n 'CSLR_' + 2*qqf: C[\"4\" + 2*qq]\n }\n elif qq == 'cu':\n return {\n 'CVLL_' + 2*qq: C[\"1\" + 2*qq].conjugate(),\n 'CSLL_' + 2*qq: C[\"2\" + 2*qq] + 1 / 2 * C[\"3\" + 2*qq].conjugate(),\n 'CTLL_' + 2*qq: -1 / 8 * C[\"3\" + 2*qq].conjugate(),\n 'CVLR_' + 2*qq: -1 / 2 * C[\"5\" + 2*qq].conjugate(),\n 'CVRR_' + 2*qq: C[\"1p\" + 2*qq].conjugate(),\n 'CSRR_' + 2*qq: C[\"2p\" + 2*qq].conjugate() + 1 / 2 * C[\"3p\" + 2*qq].conjugate(),\n 'CTRR_' + 2*qq: -1 / 8 * C[\"3p\" + 2*qq],\n 'CSLR_' + 2*qq: C[\"4\" + 2*qq].conjugate()\n }\n else:\n raise ValueError(\"{} not in 
FormFlavor_I\".format(qq))", "response": "From BernI to FormFlavorI basis for $\\ Delta F = 2$ operators."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _Bern_to_Fierz_III_IV_V(C, qqqq):\n # 2nd != 4th, color-octet redundant\n if qqqq in ['sbss', 'dbdd', 'dbds', 'sbsd', 'bsbd', 'dsdd']:\n return {\n 'F' + qqqq + '1': C['1' + qqqq] + 16 * C['3' + qqqq],\n 'F' + qqqq + '1p': C['1p' + qqqq] + 16 * C['3p' + qqqq],\n 'F' + qqqq + '3': C['1' + qqqq] + 4 * C['3' + qqqq],\n 'F' + qqqq + '3p': C['1p' + qqqq] + 4 * C['3p' + qqqq],\n 'F' + qqqq + '5': C['5p' + qqqq] + 64 * C['9p' + qqqq],\n 'F' + qqqq + '5p': C['5' + qqqq] + 64 * C['9' + qqqq],\n 'F' + qqqq + '7': C['5p' + qqqq] + 16 * C['9p' + qqqq],\n 'F' + qqqq + '7p': C['5' + qqqq] + 16 * C['9' + qqqq],\n 'F' + qqqq + '9': C['7p' + qqqq] - 16 * C['9p' + qqqq],\n 'F' + qqqq + '9p': C['7' + qqqq] - 16 * C['9' + qqqq],\n }\n if qqqq in ['dbbb', 'sbbb', 'dsss']: # 2nd = 4th, color-octet redundant\n return {\n 'F' + qqqq + '1': C['1' + qqqq] + 16 * C['3' + qqqq],\n 'F' + qqqq + '1p': C['1p' + qqqq] + 16 * C['3p' + qqqq],\n 'F' + qqqq + '3': C['1' + qqqq] + 4 * C['3' + qqqq],\n 'F' + qqqq + '3p': C['1p' + qqqq] + 4 * C['3p' + qqqq],\n 'F' + qqqq + '5': C['5' + qqqq] + 64 * C['9' + qqqq],\n 'F' + qqqq + '5p': C['5p' + qqqq] + 64 * C['9p' + qqqq],\n 'F' + qqqq + '7': C['5' + qqqq] + 16 * C['9' + qqqq],\n 'F' + qqqq + '7p': C['5p' + qqqq] + 16 * C['9p' + qqqq],\n 'F' + qqqq + '9': C['7' + qqqq] - 16 * C['9' + qqqq],\n 'F' + qqqq + '9p': C['7p' + qqqq] - 16 * C['9p' + qqqq],\n }\n # generic case\n if qqqq in ['sbuu', 'sbdd', 'sbuu', 'sbuc', 'sbcu', 'sbcc',\n 'dbuu', 'dbss', 'dbuu', 'dbuc', 'dbcu', 'dbcc',\n 'dsuu', 'dsbb', 'dsuu', 'dsuc', 'dscu', 'dscc',]:\n return {\n 'F' + qqqq + '1': C['1' + qqqq] - C['2' + qqqq] / 6 + 16 * C['3' + qqqq] - (8 * C['4' + qqqq]) / 3,\n 'F' + qqqq + '10': -8 * C['10' + qqqq] + C['8' + qqqq] / 2,\n 'F' + qqqq + '10p': -8 * C['10p' + 
qqqq] + C['8p' + qqqq] / 2,\n 'F' + qqqq + '1p': C['1p' + qqqq] - C['2p' + qqqq] / 6 + 16 * C['3p' + qqqq] - (8 * C['4p' + qqqq]) / 3,\n 'F' + qqqq + '2': C['2' + qqqq] / 2 + 8 * C['4' + qqqq],\n 'F' + qqqq + '2p': C['2p' + qqqq] / 2 + 8 * C['4p' + qqqq],\n 'F' + qqqq + '3': C['1' + qqqq] - C['2' + qqqq] / 6 + 4 * C['3' + qqqq] - (2 * C['4' + qqqq]) / 3,\n 'F' + qqqq + '3p': C['1p' + qqqq] - C['2p' + qqqq] / 6 + 4 * C['3p' + qqqq] - (2 * C['4p' + qqqq]) / 3,\n 'F' + qqqq + '4': C['2' + qqqq] / 2 + 2 * C['4' + qqqq],\n 'F' + qqqq + '4p': C['2p' + qqqq] / 2 + 2 * C['4p' + qqqq],\n 'F' + qqqq + '5': -((32 * C['10' + qqqq]) / 3) + C['5' + qqqq] - C['6' + qqqq] / 6 + 64 * C['9' + qqqq],\n 'F' + qqqq + '5p': -((32 * C['10p' + qqqq]) / 3) + C['5p' + qqqq] - C['6p' + qqqq] / 6 + 64 * C['9p' + qqqq],\n 'F' + qqqq + '6': 32 * C['10' + qqqq] + C['6' + qqqq] / 2,\n 'F' + qqqq + '6p': 32 * C['10p' + qqqq] + C['6p' + qqqq] / 2,\n 'F' + qqqq + '7': -((8 * C['10' + qqqq]) / 3) + C['5' + qqqq] - C['6' + qqqq] / 6 + 16 * C['9' + qqqq],\n 'F' + qqqq + '7p': -((8 * C['10p' + qqqq]) / 3) + C['5p' + qqqq] - C['6p' + qqqq] / 6 + 16 * C['9p' + qqqq],\n 'F' + qqqq + '8': 8 * C['10' + qqqq] + C['6' + qqqq] / 2,\n 'F' + qqqq + '8p': 8 * C['10p' + qqqq] + C['6p' + qqqq] / 2,\n 'F' + qqqq + '9': (8 * C['10' + qqqq]) / 3 + C['7' + qqqq] - C['8' + qqqq] / 6 - 16 * C['9' + qqqq],\n 'F' + qqqq + '9p': (8 * C['10p' + qqqq]) / 3 + C['7p' + qqqq] - C['8p' + qqqq] / 6 - 16 * C['9p' + qqqq],\n }\n raise ValueError(\"Case not implemented: {}\".format(qqqq))", "response": "From Bern to 4 - quark Fierz basis for Classes III and IV and V."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef Fierz_to_JMS_lep(C, ddll):\n if ddll[:2] == 'uc':\n s = str(uflav[ddll[0]] + 1)\n b = str(uflav[ddll[1]] + 1)\n q = 'u'\n else:\n s = str(dflav[ddll[0]] + 1)\n b = str(dflav[ddll[1]] + 1)\n q = 'd'\n l = str(lflav[ddll[4:ddll.find('n')]] + 1)\n lp = 
str(lflav[ddll[ddll.find('_',5)+1:len(ddll)]] + 1)\n ind = ddll.replace('l_','').replace('nu_','')\n d = {\n \"Ve\" + q + \"LL\" + '_' + l + lp + s + b : -C['F' + ind + '10'] + C['F' + ind + '9'],\n \"V\" + q + \"eLR\" + '_' + s + b + l + lp : C['F' + ind + '10'] + C['F' + ind + '9'],\n \"Se\" + q + \"RR\" + '_' + l + lp + s + b : C['F' + ind + 'P'] + C['F' + ind + 'S'],\n \"Se\" + q + \"RL\" + '_' + lp + l + b + s : -C['F' + ind + 'P'].conjugate() + C['F' + ind + 'S'].conjugate(),\n \"Te\" + q + \"RR\" + '_' + lp + l + b + s : C['F' + ind + 'T'].conjugate() - C['F' + ind + 'T5'].conjugate(),\n \"Te\" + q + \"RR\" + '_' + l + lp + s + b : C['F' + ind + 'T'] + C['F' + ind + 'T5'],\n \"Ve\" + q + \"LR\" + '_' + l + lp + s + b : -C['F' + ind + '10p'] + C['F' + ind + '9p'],\n \"Ve\" + q + \"RR\" + '_' + l + lp + s + b : C['F' + ind + '10p'] + C['F' + ind + '9p'],\n \"Se\" + q + \"RL\" + '_' + l + lp + s + b : C['F' + ind + 'Pp'] + C['F' + ind + 'Sp'],\n \"Se\" + q + \"RR\" + '_' + lp + l + b + s : -C['F' + ind + 'Pp'].conjugate() + C['F' + ind + 'Sp'].conjugate(),\n }\n return symmetrize_JMS_dict(d)", "response": "From Fierz to JMS basis for Class V. 
ddll should be of the form sbl_enu_tau dbl_munu_e etc."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef Fierz_to_Bern_nunu(C, ddll):\n ind = ddll.replace('l_','').replace('nu_','')\n dic = {\n\n 'nu1' + ind : C['F' + ind + 'nu'],\n 'nu1p' + ind : C['F' + ind + 'nup']\n }\n return dic", "response": "From semileptonic Fierz basis to Bern semileptonic basis for Class V. ddll should be of the form sbl_enu_tau dbl_munu_e"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef Bern_to_Fierz_nunu(C,ddll):\n ind = ddll.replace('l_','').replace('nu_','')\n return {\n 'F' + ind + 'nu': C['nu1' + ind],\n 'F' + ind + 'nup': C['nu1p' + ind],\n }", "response": "From semileptonic Bern basis to Fierz basis for Class V. ddll should be of the form sbl_enu_tau dbl_munu_e"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef Fierz_to_Flavio_lep(C, ddll, parameters, norm_gf=True):\n p = parameters\n V = ckmutil.ckm.ckm_tree(p[\"Vus\"], p[\"Vub\"], p[\"Vcb\"], p[\"delta\"])\n if ddll[:2] == 'sb':\n xi = V[2, 2] * V[2, 1].conj()\n elif ddll[:2] == 'db':\n xi = V[2, 2] * V[2, 0].conj()\n elif ddll[:2] == 'ds':\n xi = V[2, 1] * V[2, 0].conj()\n elif ddll[:2] == 'uc':\n xi = V[1, 2].conj() * V[0, 2]\n else:\n raise ValueError(\"Unexpected flavours: {}\".format(ddll[:2]))\n q1, q2 = ddll[:2]\n l1 = ddll[4:ddll.find('n')]\n l2 = ddll[ddll.find('_', 5) + 1:]\n ind = q1 + q2 + l1 + l2\n # flavio has indices within currents inverted\n indfl = q2 + q1 + l2 + l1\n e = sqrt(4* pi * parameters['alpha_e'])\n if ddll[:2] == 'sb' or ddll[:2] == 'db':\n mq = parameters['m_b']\n elif ddll[:2] == 'ds':\n mq = parameters['m_s']\n elif ddll[:2] == 'uc':\n mq = parameters['m_c']\n else:\n raise KeyError(\"Not sure what to do with quark mass for flavour {}\".format(ddll[:2]))\n dic = {\n \"C9_\" + indfl : (16 * pi**2) / e**2 * 
C['F' + ind + '9'],\n \"C9p_\" + indfl : (16 * pi**2) / e**2 * C['F' + ind + '9p'],\n \"C10_\" + indfl : (16 * pi**2) / e**2 * C['F' + ind + '10'],\n \"C10p_\" + indfl : (16 * pi**2) / e**2 * C['F' + ind + '10p'],\n \"CS_\" + indfl : (16 * pi**2) / e**2 / mq * C['F' + ind + 'S'],\n \"CSp_\" + indfl : (16 * pi**2) / e**2 / mq * C['F' + ind + 'Sp'],\n \"CP_\" + indfl : (16 * pi**2) / e**2 / mq * C['F' + ind + 'P'],\n \"CPp_\" + indfl : (16 * pi**2) / e**2 / mq * C['F' + ind + 'Pp'],\n }\n if norm_gf:\n prefactor = sqrt(2)/p['GF']/xi/4\n else:\n prefactor = 1 / xi\n return {k: prefactor * v for k,v in dic.items()}", "response": "From semileptonic Fierz basis to Flavio semileptonic basis for Class V ddll should be of the form sb_enu_tau dbl_munu_e etc."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef Flavio_to_Fierz_nunu(C, ddll, parameters, norm_gf=True):\n p = parameters\n V = ckmutil.ckm.ckm_tree(p[\"Vus\"], p[\"Vub\"], p[\"Vcb\"], p[\"delta\"])\n if ddll[:2] == 'sb':\n xi = V[2, 2] * V[2, 1].conj()\n elif ddll[:2] == 'db':\n xi = V[2, 2] * V[2, 0].conj()\n elif ddll[:2] == 'ds':\n xi = V[2, 1] * V[2, 0].conj()\n else:\n raise ValueError(\"Unexpected flavours: {}\".format(ddll[:2]))\n q1, q2 = ddll[:2]\n l1 = ddll[4:ddll.find('n')]\n l2 = ddll[ddll.find('_', 5) + 1:]\n ind = q1 + q2 + l1 + l2\n # flavio has indices within currents inverted\n indnu = q2 + q1 + 'nu' + l2 + 'nu' + l1\n e = sqrt(4* pi * parameters['alpha_e'])\n dic = {\n 'F' + ind + 'nu': C[\"CL_\" + indnu] / ((8 * pi**2) / e**2),\n 'F' + ind + 'nup': C[\"CR_\" + indnu] / ((8 * pi**2) / e**2),\n }\n if norm_gf:\n prefactor = sqrt(2)/p['GF']/xi/4\n else:\n prefactor = 1 / xi\n return {k: v / prefactor for k, v in dic.items()}", "response": "From Flavio semileptonic basis to leptonic Fierz basis for Class V. 
ddll should be of the form sb_enu_tau dbl_munu_e etc."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef Fierz_to_EOS_lep(C, ddll, parameters):\n p = parameters\n V = ckmutil.ckm.ckm_tree(p[\"Vus\"], p[\"Vub\"], p[\"Vcb\"], p[\"delta\"])\n Vtb = V[2,2]\n Vts = V[2,1]\n ind = ddll.replace('l_','').replace('nu_','')\n ind2 = ddll.replace('l_','').replace('nu_','')[2::]\n e = sqrt(4* pi * parameters['alpha_e'])\n dic = {\n 'b->s' + ind2 + '::c9' : (16 * pi**2) / e**2 * C['F' + ind + '9'],\n 'b->s' + ind2 + \"::c9'\" : (16 * pi**2) / e**2 * C['F' + ind + '9p'],\n 'b->s' + ind2 + \"::c10\" : (16 * pi**2) / e**2 * C['F' + ind + '10'],\n 'b->s' + ind2 + \"::c10'\" : (16 * pi**2) / e**2 * C['F' + ind + '10p'],\n 'b->s' + ind2 + \"::cS\" : (16 * pi**2) / e**2 * C['F' + ind + 'S'],\n 'b->s' + ind2 + \"::cS'\" : (16 * pi**2) / e**2 * C['F' + ind + 'Sp'],\n 'b->s' + ind2 + \"::cP\" : (16 * pi**2) / e**2 * C['F' + ind + 'P'],\n 'b->s' + ind2 + \"::cP'\" : (16 * pi**2) / e**2 * C['F' + ind + 'Pp'],\n 'b->s' + ind2 + \"::cT\" : (16 * pi**2) / e**2 * C['F' + ind + 'T'],\n 'b->s' + ind2 + \"::cT5\" : (16 * pi**2) / e**2 * C['F' + ind + 'T5']\n }\n prefactor = sqrt(2)/p['GF']/Vtb/Vts.conj()/4\n return {k: prefactor * v for k,v in dic.items()}", "response": "From semileptonic Fierz basis to EOS semileptonic basis for Class V ddll should be of the form sbl_enu_tau dbl_munu_e etc."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef Fierz_to_JMS_chrom(C, qq):\n if qq[0] in dflav:\n s = dflav[qq[0]] + 1\n b = dflav[qq[1]] + 1\n return {'dgamma_{}{}'.format(s, b): C['F7gamma' + qq],\n 'dG_{}{}'.format(s, b): C['F8g' + qq],\n 'dgamma_{}{}'.format(b, s): C['F7pgamma' + qq].conjugate(),\n 'dG_{}{}'.format(b, s): C['F8pg' + qq].conjugate(),\n }\n else:\n u = uflav[qq[0]] + 1\n c = uflav[qq[1]] + 1\n return {'ugamma_{}{}'.format(u, c): C['F7gamma' + qq],\n 'uG_{}{}'.format(u, 
c): C['F8g' + qq],\n 'ugamma_{}{}'.format(c, u): C['F7pgamma' + qq].conjugate(),\n 'uG_{}{}'.format(c, u): C['F8pg' + qq].conjugate(),\n }", "response": "From chromomagnetic Fierz to JMS basis for Class V."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef Flavio_to_Fierz_chrom(C, qq, parameters):\n p = parameters\n V = ckmutil.ckm.ckm_tree(p[\"Vus\"], p[\"Vub\"], p[\"Vcb\"], p[\"delta\"])\n if qq == 'sb':\n xi = V[2, 2] * V[2, 1].conj()\n elif qq == 'db':\n xi = V[2, 2] * V[2, 0].conj()\n elif qq == 'ds':\n xi = V[2, 1] * V[2, 0].conj()\n elif qq == 'uc':\n xi = V[1, 2].conj() * V[0, 2]\n else:\n raise ValueError(\"Unexpected flavours: {}\".format(qq))\n qqfl = qq[::-1]\n e = sqrt(4 * pi * parameters['alpha_e'])\n gs = sqrt(4 * pi * parameters['alpha_s'])\n if qq == 'sb' or qq == 'db':\n mq = parameters['m_b']\n elif qq == 'ds':\n mq = parameters['m_s']\n elif qq == 'uc':\n mq = parameters['m_c']\n else:\n raise KeyError(\"Not sure what to do with quark mass for flavour {}\".format(qq))\n dic = {\n 'F7gamma' + qq: C[\"C7_\" + qqfl] / ((16 * pi**2) / e / mq),\n 'F8g' + qq: C[\"C8_\" + qqfl] / ((16 * pi**2) / gs / mq),\n 'F7pgamma' + qq: C[\"C7p_\" + qqfl] / ((16 * pi**2) / e / mq),\n 'F8pg' + qq: C[\"C8p_\" + qqfl] / ((16 * pi**2) / gs / mq)\n }\n prefactor = sqrt(2)/p['GF']/xi/4\n return {k: v / prefactor for k, v in dic.items()}", "response": "From Flavio to chromomagnetic Fierz basis for Class V. 
qq should be of the form sb db etc."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef JMS_to_FormFlavor_chrom(C, qq, parameters):\n e = sqrt(4 * pi * parameters['alpha_e'])\n gs = sqrt(4 * pi * parameters['alpha_s'])\n if qq[0] in dflav.keys():\n s = dflav[qq[0]]\n b = dflav[qq[1]]\n return {\n 'CAR_' + qq : C['dgamma'][s, b] / e,\n 'CGR_' + qq : C['dG'][s, b] / gs,\n 'CAL_' + qq : C['dgamma'][b, s].conj() / e,\n 'CGL_' + qq : C['dG'][b, s].conj() / gs,\n }\n if qq[0] in llflav.keys():\n l1 = llflav[qq[0]]\n l2 = llflav[qq[1]]\n return {\n 'CAR_' + qq : C['egamma'][l1, l2] / e,\n 'CAL_' + qq : C['egamma'][l2, l1].conj() / gs,\n }\n if qq[0] in uflav.keys():\n u = uflav[qq[0]]\n c = uflav[qq[1]]\n return {\n 'CAR_' + qq : C['ugamma'][u, c] / e,\n 'CGR_' + qq : C['uG'][u, c] / gs,\n 'CAL_' + qq : C['ugamma'][c, u].conj() / e,\n 'CGL_' + qq : C['uG'][c, u].conj() / gs,\n }\n else:\n return 'not in FormFlav_chrom'", "response": "From JMS to chromomagnetic FormFlavor basis for Class V. qq should be of the form sb ds uu mt em e mu etc."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving an object from a revision chain.", "response": "def cut_from_chain(sciobj_model):\n \"\"\"Remove an object from a revision chain.\n\n The object can be at any location in the chain, including the head or tail.\n\n Preconditions:\n - The object with the pid is verified to exist and to be a member of an\n revision chain. 
E.g., with:\n\n d1_gmn.app.views.asserts.is_existing_object(pid)\n d1_gmn.app.views.asserts.is_in_revision_chain(pid)\n\n Postconditions:\n - The given object is a standalone object with empty obsoletes, obsoletedBy and\n seriesId fields.\n - The previously adjacent objects in the chain are adjusted to close any gap that\n was created or remove dangling reference at the head or tail.\n - If the object was the last object in the chain and the chain has a SID, the SID\n reference is shifted over to the new last object in the chain.\n\n \"\"\"\n if _is_head(sciobj_model):\n old_pid = sciobj_model.obsoletes.did\n _cut_head_from_chain(sciobj_model)\n elif _is_tail(sciobj_model):\n old_pid = sciobj_model.obsoleted_by.did\n _cut_tail_from_chain(sciobj_model)\n else:\n old_pid = sciobj_model.obsoleted_by.did\n _cut_embedded_from_chain(sciobj_model)\n _update_sid_to_last_existing_pid_map(old_pid)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nresolves sid to a PID.", "response": "def resolve_sid(sid):\n \"\"\"Get the PID to which the ``sid`` currently maps.\n\n Preconditions:\n - ``sid`` is verified to exist. 
E.g., with d1_gmn.app.views.asserts.is_sid().\n\n \"\"\"\n return d1_gmn.app.models.Chain.objects.get(sid__did=sid).head_pid.did"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef is_obsoletes_pid(pid):\n return d1_gmn.app.models.ScienceObject.objects.filter(obsoletes__did=pid).exists()", "response": "Return True if the given PID is referenced in the obsoletes field of any object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns True if the given PID is referenced in the obsoletedBy field of any object.", "response": "def is_obsoleted_by_pid(pid):\n \"\"\"Return True if ``pid`` is referenced in the obsoletedBy field of any object.\n\n This will return True even if the PID is in the obsoletes field of an object that\n does not exist on the local MN, such as replica that is in an incomplete chain.\n\n \"\"\"\n return d1_gmn.app.models.ScienceObject.objects.filter(\n obsoleted_by__did=pid\n ).exists()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmerges two chains. For use when it becomes known that two chains that were created separately actually are separate sections of the same chain E.g.: - A obsoleted by X is created. A has no SID. X does not exist yet. A chain is created for A. - B obsoleting Y is created. B has SID. Y does not exist yet. A chain is created for B. - C obsoleting X, obsoleted by Y is created. C tells us that X and Y are in the same chain, which means that A and B are in the same chain. At this point, the two chains need to be merged. Merging the chains causes A to take on the SID of B.", "response": "def _merge_chains(chain_model_a, chain_model_b):\n \"\"\"Merge two chains.\n\n For use when it becomes known that two chains that were created separately\n actually are separate sections of the same chain\n\n E.g.:\n\n - A obsoleted by X is created. A has no SID. X does not exist yet. 
A chain is\n created for A.\n - B obsoleting Y is created. B has SID. Y does not exist yet. A chain is created\n for B.\n - C obsoleting X, obsoleted by Y is created. C tells us that X and Y are in the\n same chain, which means that A and B are in the same chain. At this point, the\n two chains need to be merged. Merging the chains causes A to take on the SID of\n B.\n\n \"\"\"\n _set_chain_sid(\n chain_model_a, d1_gmn.app.did.get_did_by_foreign_key(chain_model_b.sid)\n )\n for member_model in _get_all_chain_member_queryset_by_chain(chain_model_b):\n member_model.chain = chain_model_a\n member_model.save()\n chain_model_b.delete()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _set_chain_sid(chain_model, sid):\n if not sid:\n return\n if chain_model.sid and chain_model.sid.did != sid:\n raise d1_common.types.exceptions.ServiceFailure(\n 0,\n 'Attempted to modify existing SID. '\n 'existing_sid=\"{}\", new_sid=\"{}\"'.format(chain_model.sid.did, sid),\n )\n chain_model.sid = d1_gmn.app.did.get_or_create_did(sid)\n chain_model.save()", "response": "Set or update the SID for the chain."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_chain_by_pid(pid):\n try:\n return d1_gmn.app.models.ChainMember.objects.get(pid__did=pid).chain\n except d1_gmn.app.models.ChainMember.DoesNotExist:\n pass", "response": "Find chain by pid Return None if not found."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_chain_by_sid(sid):\n try:\n return d1_gmn.app.models.Chain.objects.get(sid__did=sid)\n except d1_gmn.app.models.Chain.DoesNotExist:\n pass", "response": "Return the chain object for the given sid."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate the SID to the last existing object in the chain to which pid is set.", "response": "def 
_update_sid_to_last_existing_pid_map(pid):\n \"\"\"Set chain head PID to the last existing object in the chain to which ``pid``\n belongs. If SID has been set for chain, it resolves to chain head PID.\n\n Intended to be called in MNStorage.delete() and other chain manipulation.\n\n Preconditions:\n - ``pid`` must exist and be verified to be a PID.\n d1_gmn.app.views.asserts.is_existing_object()\n\n \"\"\"\n last_pid = _find_head_or_latest_connected(pid)\n chain_model = _get_chain_by_pid(last_pid)\n if not chain_model:\n return\n chain_model.head_pid = d1_gmn.app.did.get_or_create_did(last_pid)\n chain_model.save()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _create_chain(pid, sid):\n chain_model = d1_gmn.app.models.Chain(\n # sid=d1_gmn.app.models.did(sid) if sid else None,\n head_pid=d1_gmn.app.did.get_or_create_did(pid)\n )\n chain_model.save()\n _add_pid_to_chain(chain_model, pid)\n _set_chain_sid(chain_model, sid)\n return chain_model", "response": "Create the initial chain structure for a new standalone object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\npopulates entity type from attached descriptor schema.", "response": "def populate_entity_type(apps, schema_editor):\n \"\"\"Populate entity type from attached descriptor schema.\"\"\"\n Entity = apps.get_model('flow', 'Entity')\n\n for entity in Entity.objects.all():\n if entity.descriptor_schema is not None:\n entity.type = entity.descriptor_schema.slug\n entity.save()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef serialize_gen(\n obj_pyxb, encoding='utf-8', pretty=False, strip_prolog=False, xslt_url=None\n):\n \"\"\"Serialize PyXB object to XML.\n\n Args:\n obj_pyxb: PyXB object\n PyXB object to serialize.\n\n encoding: str\n Encoding to use for XML doc bytes\n\n pretty: bool\n True: Use pretty print formatting 
for human readability.\n\n strip_prolog:\n True: remove any XML prolog (e.g., ````),\n from the resulting XML doc.\n\n xslt_url: str\n If specified, add a processing instruction to the XML doc that specifies the\n download location for an XSLT stylesheet.\n\n Returns:\n XML document\n\n \"\"\"\n assert d1_common.type_conversions.is_pyxb(obj_pyxb)\n assert encoding in (None, 'utf-8', 'UTF-8')\n try:\n obj_dom = obj_pyxb.toDOM()\n except pyxb.ValidationError as e:\n raise ValueError(\n 'Unable to serialize PyXB to XML. error=\"{}\"'.format(e.details())\n )\n except pyxb.PyXBException as e:\n raise ValueError('Unable to serialize PyXB to XML. error=\"{}\"'.format(str(e)))\n\n if xslt_url:\n xslt_processing_instruction = obj_dom.createProcessingInstruction(\n 'xml-stylesheet', 'type=\"text/xsl\" href=\"{}\"'.format(xslt_url)\n )\n root = obj_dom.firstChild\n obj_dom.insertBefore(xslt_processing_instruction, root)\n\n if pretty:\n xml_str = obj_dom.toprettyxml(indent=' ', encoding=encoding)\n # Remove empty lines in the result caused by a bug in toprettyxml()\n if encoding is None:\n xml_str = re.sub(r'^\\s*$\\n', r'', xml_str, flags=re.MULTILINE)\n else:\n xml_str = re.sub(b'^\\s*$\\n', b'', xml_str, flags=re.MULTILINE)\n else:\n xml_str = obj_dom.toxml(encoding)\n if strip_prolog:\n if encoding is None:\n xml_str = re.sub(r'^<\\?(.*)\\?>', r'', xml_str)\n else:\n xml_str = re.sub(b'^<\\?(.*)\\?>', b'', xml_str)\n\n return xml_str.strip()", "response": "Serialize PyXB object to XML."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nserialize PyXB object to XML bytes with UTF - 8 encoding for transport over the network storage and other machine usage.", "response": "def serialize_for_transport(obj_pyxb, pretty=False, strip_prolog=False, xslt_url=None):\n \"\"\"Serialize PyXB object to XML ``bytes`` with UTF-8 encoding for transport over the\n network, filesystem storage and other machine usage.\n\n Args:\n obj_pyxb: PyXB object\n PyXB object to 
serialize.\n\n pretty: bool\n True: Use pretty print formatting for human readability.\n\n strip_prolog:\n True: remove any XML prolog (e.g., ````),\n from the resulting XML doc.\n\n xslt_url: str\n If specified, add a processing instruction to the XML doc that specifies the\n download location for an XSLT stylesheet.\n\n Returns:\n bytes: UTF-8 encoded XML document\n\n See Also:\n ``serialize_for_display()``\n\n \"\"\"\n return serialize_gen(obj_pyxb, 'utf-8', pretty, strip_prolog, xslt_url)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef serialize_to_xml_str(obj_pyxb, pretty=True, strip_prolog=False, xslt_url=None):\n return serialize_gen(obj_pyxb, None, pretty, strip_prolog, xslt_url)", "response": "Serialize PyXB object to pretty printed XML str for display."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reformat_to_pretty_xml(doc_xml):\n assert isinstance(doc_xml, str)\n dom_obj = xml.dom.minidom.parseString(doc_xml)\n pretty_xml = dom_obj.toprettyxml(indent=' ')\n # Remove empty lines in the result caused by a bug in toprettyxml()\n return re.sub(r'^\\s*$\\n', r'', pretty_xml, flags=re.MULTILINE)", "response": "Pretty print XML doc."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns True if two XML docs are semantically equivalent.", "response": "def are_equivalent(a_xml, b_xml, encoding=None):\n \"\"\"Return True if two XML docs are semantically equivalent, else False.\n\n - TODO: Include test for tails. 
Skipped for now because tails are not used in any\n D1 types.\n\n \"\"\"\n assert isinstance(a_xml, str)\n assert isinstance(b_xml, str)\n a_tree = str_to_etree(a_xml, encoding)\n b_tree = str_to_etree(b_xml, encoding)\n return are_equal_or_superset(a_tree, b_tree) and are_equal_or_superset(\n b_tree, a_tree\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn True if superset_tree is equal to or a superset of base_tree.", "response": "def are_equal_or_superset(superset_tree, base_tree):\n \"\"\"Return True if ``superset_tree`` is equal to or a superset of ``base_tree``\n\n - Checks that all elements and attributes in ``superset_tree`` are present and\n contain the same values as in ``base_tree``. For elements, also checks that the\n order is the same.\n - Can be used for checking if one XML document is based on another, as long as all\n the information in ``base_tree`` is also present and unmodified in\n ``superset_tree``.\n\n \"\"\"\n try:\n _compare_attr(superset_tree, base_tree)\n _compare_text(superset_tree, base_tree)\n except CompareError as e:\n logger.debug(str(e))\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef are_equal_xml(a_xml, b_xml):\n a_dom = xml.dom.minidom.parseString(a_xml)\n b_dom = xml.dom.minidom.parseString(b_xml)\n return are_equal_elements(a_dom.documentElement, b_dom.documentElement)", "response": "Normalize and compare XML documents for equality."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef are_equal_elements(a_el, b_el):\n if a_el.tagName != b_el.tagName:\n return False\n if sorted(a_el.attributes.items()) != sorted(b_el.attributes.items()):\n return False\n if len(a_el.childNodes) != len(b_el.childNodes):\n return False\n for a_child_el, b_child_el in zip(a_el.childNodes, b_el.childNodes):\n if a_child_el.nodeType != b_child_el.nodeType:\n return False\n if 
(\n a_child_el.nodeType == a_child_el.TEXT_NODE\n and a_child_el.data != b_child_el.data\n ):\n return False\n if a_child_el.nodeType == a_child_el.ELEMENT_NODE and not are_equal_elements(\n a_child_el, b_child_el\n ):\n return False\n return True", "response": "Normalize and compare ElementTrees for equality."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef format_diff_pyxb(a_pyxb, b_pyxb):\n return '\\n'.join(\n difflib.ndiff(\n serialize_to_xml_str(a_pyxb).splitlines(),\n serialize_to_xml_str(b_pyxb).splitlines(),\n )\n )", "response": "Create a diff between two PyXB objects."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef format_diff_xml(a_xml, b_xml):\n return '\\n'.join(\n difflib.ndiff(\n reformat_to_pretty_xml(a_xml).splitlines(),\n reformat_to_pretty_xml(b_xml).splitlines(),\n )\n )", "response": "Create a diff between two XML documents."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets an optional attribute value from a PyXB element.", "response": "def get_opt_attr(obj_pyxb, attr_str, default_val=None):\n \"\"\"Get an optional attribute value from a PyXB element.\n\n The attributes for elements that are optional according to the schema and\n not set in the PyXB object are present and set to None.\n\n PyXB validation will fail if required elements are missing.\n\n Args:\n obj_pyxb: PyXB object\n attr_str: str\n Name of an attribute that the PyXB object may contain.\n\n default_val: any object\n Value to return if the attribute is not present.\n\n Returns:\n str : Value of the attribute if present, else ``default_val``.\n\n \"\"\"\n v = getattr(obj_pyxb, attr_str, default_val)\n return v if v is not None else default_val"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget an optional Simple Content value from a PyXB element.", "response": "def get_opt_val(obj_pyxb, 
attr_str, default_val=None):\n \"\"\"Get an optional Simple Content value from a PyXB element.\n\n The attributes for elements that are optional according to the schema and\n not set in the PyXB object are present and set to None.\n\n PyXB validation will fail if required elements are missing.\n\n Args:\n obj_pyxb: PyXB object\n\n attr_str: str\n Name of an attribute that the PyXB object may contain.\n\n default_val: any object\n Value to return if the attribute is not present.\n\n Returns:\n str : Value of the attribute if present, else ``default_val``.\n\n \"\"\"\n try:\n return get_req_val(getattr(obj_pyxb, attr_str))\n except (ValueError, AttributeError):\n return default_val"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef resolwe_exception_handler(exc, context):\n response = exception_handler(exc, context)\n\n if isinstance(exc, ValidationError):\n if response is None:\n response = Response({})\n response.status_code = 400\n response.data['error'] = exc.message\n\n return response", "response": "Handle exceptions raised in API and make them nicer."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef find_valid_combinations(cert_file_name_list, jwt_file_name_list):\n for cert_file_name in cert_file_name_list:\n cert_pem = '' # self.test_files.load_utf8_to_str(cert_file_name)\n cert_obj = d1_common.cert.x509.deserialize_pem(cert_pem)\n # d1_common.cert.x509.log_cert_info(logging.info, 'CERT', cert_obj)\n for jwt_file_name in jwt_file_name_list:\n jwt_bu64 = '' # self.test_files.load_utf8_to_str(jwt_file_name)\n # d1_common.cert.jwt.log_jwt_bu64_info(logging.info, 'JWT', jwt_bu64)\n is_ok = False\n try:\n d1_common.cert.jwt.validate_and_decode(jwt_bu64, cert_obj)\n except d1_common.cert.jwt.JwtException as e:\n logging.info('Invalid. 
msg=\"{}\"'.format(str(e)))\n        else:\n            is_ok = True\n        logging.info(\n            '{} {} {}'.format(\n                '***' if is_ok else '   ', cert_file_name, jwt_file_name\n            )\n        )", "response": "Given a list of cert and JWT file names, print a list showing each combination, with indicators for combinations where the JWT signature was successfully validated with the cert."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parseUrl(url):\n    scheme, netloc, url, params, query, fragment = urllib.parse.urlparse(url)\n    query_dict = {\n        k: sorted(v) if len(v) > 1 else v[0]\n        for k, v in list(urllib.parse.parse_qs(query).items())\n    }\n    return {\n        'scheme': scheme,\n        'netloc': netloc,\n        'url': url,\n        'params': params,\n        'query': query_dict,\n        'fragment': fragment,\n    }", "response": "Parse a URL into a dict containing scheme, netloc, url, params, query, and fragment keys."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nencode a URL path element according to RFC3986.", "response": "def encodePathElement(element):\n    \"\"\"Encode a URL path element according to RFC3986.\"\"\"\n    return urllib.parse.quote(\n        (\n            element.encode('utf-8')\n            if isinstance(element, str)\n            else str(element)\n            if isinstance(element, int)\n            else element\n        ),\n        safe=d1_common.const.URL_PATHELEMENT_SAFE_CHARS,\n    )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nencoding a URL query element according to RFC3986.", "response": "def encodeQueryElement(element):\n    \"\"\"Encode a URL query element according to RFC3986.\"\"\"\n    return urllib.parse.quote(\n        (\n            element.encode('utf-8')\n            if isinstance(element, str)\n            else str(element)\n            if isinstance(element, int)\n            else element\n        ),\n        safe=d1_common.const.URL_QUERYELEMENT_SAFE_CHARS,\n    )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef urlencode(query, doseq=0):\n    if hasattr(query, 
\"items\"):\n        # Remove None parameters from query. Dictionaries are mutable, so we can\n        # remove the items directly. dict.keys() creates a copy of the\n        # dictionary keys, making it safe to remove elements from the dictionary\n        # while iterating.\n        for k in list(query.keys()):\n            if query[k] is None:\n                del query[k]\n        # mapping objects\n        query = list(query.items())\n    else:\n        # Remove None parameters from query. Tuples are immutable, so we have to\n        # build a new version that does not contain the elements we want to remove,\n        # and replace the original with it.\n        query = list(filter((lambda x: x[1] is not None), query))\n        # it's a bother at times that strings and string-like objects are\n        # sequences...\n        try:\n            # non-sequence items should not work with len()\n            # non-empty strings will fail this\n            if len(query) and not isinstance(query[0], tuple):\n                raise TypeError\n            # zero-length sequences of all types will get here and succeed,\n            # but that's a minor nit - since the original implementation\n            # allowed empty dicts that type of behavior probably should be\n            # preserved for consistency\n        except TypeError:\n            ty, va, tb = sys.exc_info()\n            raise TypeError(\n                \"not a valid non-string sequence or mapping object\"\n            ).with_traceback(tb)\n\n    l = []\n    if not doseq:\n        # preserve old behavior\n        for k, v in query:\n            k = encodeQueryElement(str(k))\n            v = encodeQueryElement(str(v))\n            l.append(k + '=' + v)\n    else:\n        for k, v in query:\n            k = encodeQueryElement(str(k))\n            if isinstance(v, str):\n                v = encodeQueryElement(v)\n                l.append(k + '=' + v)\n            elif isinstance(v, bytes):\n                # decode bytes to str; \"replace\" loses information but avoids\n                # raising UnicodeError on non-ASCII input\n                v = encodeQueryElement(v.decode(\"ASCII\", \"replace\"))\n                l.append(k + '=' + v)\n            else:\n                try:\n                    # is this a sufficient test for sequence-ness?\n                    len(v)\n                except TypeError:\n                    # not a sequence\n                    v = encodeQueryElement(str(v))\n                    l.append(k + '=' 
+ v)\n                else:\n                    # loop over the sequence\n                    for elt in v:\n                        l.append(k + '=' + encodeQueryElement(str(elt)))\n    return '&'.join(sorted(l))", "response": "Modified version of urllib.urlencode that conforms to RFC3986."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nattempt to create a valid CN BaseURL when one or more sections of the URL are missing.", "response": "def makeCNBaseURL(url):\n    \"\"\"Attempt to create a valid CN BaseURL when one or more sections of the URL are\n    missing.\"\"\"\n    o = urllib.parse.urlparse(url, scheme=d1_common.const.DEFAULT_CN_PROTOCOL)\n    if o.netloc and o.path:\n        netloc = o.netloc\n        path = o.path\n    elif o.netloc:\n        netloc = o.netloc\n        path = d1_common.const.DEFAULT_CN_PATH\n    elif o.path:\n        s = o.path.split('/', 1)\n        netloc = s[0]\n        if len(s) == 1:\n            path = d1_common.const.DEFAULT_CN_PATH\n        else:\n            path = s[1]\n    else:\n        netloc = d1_common.const.DEFAULT_CN_HOST\n        path = d1_common.const.DEFAULT_CN_PATH\n    return urllib.parse.urlunparse(\n        (o.scheme, netloc, path, o.params, o.query, o.fragment)\n    )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nattempting to create a valid MN BaseURL when one or more sections of the URL are missing.", "response": "def makeMNBaseURL(url):\n    \"\"\"Attempt to create a valid MN BaseURL when one or more sections of the URL are\n    missing.\"\"\"\n    o = urllib.parse.urlparse(url, scheme=d1_common.const.DEFAULT_MN_PROTOCOL)\n    if o.netloc and o.path:\n        netloc = o.netloc\n        path = o.path\n    elif o.netloc:\n        netloc = o.netloc\n        path = d1_common.const.DEFAULT_MN_PATH\n    elif o.path:\n        s = o.path.split('/', 1)\n        netloc = s[0]\n        if len(s) == 1:\n            path = d1_common.const.DEFAULT_MN_PATH\n        else:\n            path = s[1]\n    else:\n        netloc = d1_common.const.DEFAULT_MN_HOST\n        path = d1_common.const.DEFAULT_MN_PATH\n    return urllib.parse.urlunparse(\n        (o.scheme, netloc, path, o.params, o.query, o.fragment)\n    )"} {"SOURCE": "codesearchnet", "instruction": "How would you code a 
function in Python 3 to\ngive two URLs return a list of any mismatches.", "response": "def find_url_mismatches(a_url, b_url):\n    \"\"\"Given two URLs, return a list of any mismatches.\n\n    If the list is empty, the URLs are equivalent. Implemented by parsing and comparing\n    the elements. See RFC 1738 for details.\n\n    \"\"\"\n    diff_list = []\n    a_parts = urllib.parse.urlparse(a_url)\n    b_parts = urllib.parse.urlparse(b_url)\n    # scheme\n    if a_parts.scheme.lower() != b_parts.scheme.lower():\n        diff_list.append(\n            'Schemes differ. a=\"{}\" b=\"{}\"'.format(\n                a_parts.scheme.lower(), b_parts.scheme.lower()\n            )\n        )\n    # netloc\n    if a_parts.netloc.lower() != b_parts.netloc.lower():\n        diff_list.append(\n            'Network locations differ. a=\"{}\" b=\"{}\"'.format(\n                a_parts.netloc.lower(), b_parts.netloc.lower()\n            )\n        )\n    # path\n    if a_parts.path != b_parts.path:\n        diff_list.append(\n            'Paths differ: a=\"{}\" b=\"{}\"'.format(a_parts.path, b_parts.path)\n        )\n    # fragment\n    if a_parts.fragment != b_parts.fragment:\n        diff_list.append(\n            'Fragments differ. a=\"{}\" b=\"{}\"'.format(a_parts.fragment, b_parts.fragment)\n        )\n    # param\n    a_param_list = sorted(a_parts.params.split(\";\"))\n    b_param_list = sorted(b_parts.params.split(\";\"))\n    if a_param_list != b_param_list:\n        diff_list.append(\n            'Parameters differ. a=\"{}\" b=\"{}\"'.format(\n                ', '.join(a_param_list), ', '.join(b_param_list)\n            )\n        )\n    # query\n    a_query_dict = urllib.parse.parse_qs(a_parts.query)\n    b_query_dict = urllib.parse.parse_qs(b_parts.query)\n    if len(list(a_query_dict.keys())) != len(list(b_query_dict.keys())):\n        diff_list.append(\n            'Number of query keys differs. a={} b={}'.format(\n                len(list(a_query_dict.keys())), len(list(b_query_dict.keys()))\n            )\n        )\n    for a_key in a_query_dict:\n        if a_key not in b_query_dict:\n            diff_list.append(\n                'Query key in first missing in second. 
a_key=\"{}\"'.format(a_key)\n )\n elif sorted(a_query_dict[a_key]) != sorted(b_query_dict[a_key]):\n diff_list.append(\n 'Query values differ. key=\"{}\" a_value=\"{}\" b_value=\"{}\"'.format(\n a_key, sorted(a_query_dict[a_key]), sorted(b_query_dict[a_key])\n )\n )\n for b_key in b_query_dict:\n if b_key not in a_query_dict:\n diff_list.append(\n 'Query key in second missing in first. b_key=\"{}\"'.format(b_key)\n )\n return diff_list"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef resolve(self, pid):\n client = d1_cli.impl.client.CLICNClient(\n **self._cn_client_connect_params_from_session()\n )\n object_location_list_pyxb = client.resolve(pid)\n for location in object_location_list_pyxb.objectLocation:\n d1_cli.impl.util.print_info(location.url)", "response": "Get Object Locations for Object."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting an object from the specified object location.", "response": "def science_object_get(self, pid, path):\n \"\"\"First try the MN set in the session.\n\n Then try to resolve via the CN set in the session.\n\n \"\"\"\n mn_client = d1_cli.impl.client.CLIMNClient(\n **self._mn_client_connect_params_from_session()\n )\n try:\n response = mn_client.get(pid)\n except d1_common.types.exceptions.DataONEException:\n pass\n else:\n self._output(response, path)\n return\n\n cn_client = d1_cli.impl.client.CLICNClient(\n **self._cn_client_connect_params_from_session()\n )\n object_location_list_pyxb = cn_client.resolve(pid)\n for location in object_location_list_pyxb.objectLocation:\n try:\n params = self._mn_client_connect_params_from_session()\n params[\"base_url\"] = location.baseURL\n mn_client = d1_cli.impl.client.CLIMNClient(**params)\n response = mn_client.get(pid)\n except d1_common.types.exceptions.DataONEException:\n pass\n else:\n self._output(response, path)\n return\n\n raise d1_cli.impl.exceptions.CLIError(\"Could not find object: 
{}\".format(pid))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a new Science Object on a Member Node.", "response": "def science_object_create(self, pid, path, format_id=None):\n    \"\"\"Create a new Science Object on a Member Node.\"\"\"\n    self._queue_science_object_create(pid, path, format_id)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef science_object_update(self, pid_old, path, pid_new, format_id=None):\n    self._queue_science_object_update(pid_old, path, pid_new, format_id)", "response": "Obsolete a Science Object on a Member Node by replacing it with a different one."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _output(self, file_like_object, path=None):\n    if not path:\n        self._output_to_display(file_like_object)\n    else:\n        self._output_to_file(file_like_object, path)", "response": "Display or save a file-like object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nperforming a SOLR search for a single object.", "response": "def _search_solr(self, line):\n    \"\"\"Perform a SOLR search.\"\"\"\n    try:\n        query_str = self._create_solr_query(line)\n        client = d1_cli.impl.client.CLICNClient(\n            **self._cn_client_connect_params_from_session()\n        )\n        object_list_pyxb = client.search(\n            queryType=d1_common.const.DEFAULT_SEARCH_ENGINE,\n            query=query_str,\n            start=self._session.get(d1_cli.impl.session.START_NAME),\n            rows=self._session.get(d1_cli.impl.session.COUNT_NAME),\n        )\n        d1_cli.impl.util.print_info(self._pretty(object_list_pyxb.toxml(\"utf-8\")))\n    except d1_common.types.exceptions.ServiceFailure as e:\n        e = \"%\".join(str(e).splitlines())  # Flatten line\n        regexp = re.compile(\n            r\"errorCode: (?P<error_code>\\d+)%.*%Status code: (?P<status_code>\\d+)\"\n        )\n        result = regexp.search(e)\n        if (\n            (result is not None)\n            and (result.group(\"error_code\") == \"500\")\n            and (result.group(\"status_code\") == \"400\")\n        ):  # noqa: E129\n            result 
= re.search(\n                r\"description (?P<description>[^<]+)\", e\n            )\n            msg = re.sub(\n                \"&([^;]+);\",\n                lambda m: chr(html.entities.name2codepoint[m.group(1)]),\n                result.group(\"description\"),\n            )\n            d1_cli.impl.util.print_info(\"Warning: %s\" % msg)\n        else:\n            d1_cli.impl.util.print_error(\"Unexpected error:\\n%s\" % str(e))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a Solr query string from a line of text.", "response": "def _create_solr_query(self, line):\n    \"\"\"Actual search - easier to test. \"\"\"\n    p0 = \"\"\n    if line:\n        p0 = line.strip()\n    p1 = self._query_string_to_solr_filter(line)\n    p2 = self._object_format_to_solr_filter(line)\n    p3 = self._time_span_to_solr_filter()\n    result = p0 + p1 + p2 + p3\n    return result.strip()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\napply func to list or tuple obj element - wise and directly otherwise.", "response": "def apply_filter_list(func, obj):\n    \"\"\"Apply `func` to list or tuple `obj` element-wise and directly otherwise.\"\"\"\n    if isinstance(obj, (list, tuple)):\n        return [func(item) for item in obj]\n    return func(obj)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting data object field.", "response": "def _get_data_attr(data, attr):\n    \"\"\"Get data object field.\"\"\"\n    if isinstance(data, dict):\n        # `Data` object's id is hydrated as `__id` in expression engine\n        data = data['__id']\n\n    data_obj = Data.objects.get(id=data)\n\n    return getattr(data_obj, attr)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef input_(data, field_path):\n    data_obj = Data.objects.get(id=data['__id'])\n\n    inputs = copy.deepcopy(data_obj.input)\n    # XXX: Optimize by hydrating only the required field (major refactoring).\n    hydrate_input_references(inputs, data_obj.process.input_schema)\n    hydrate_input_uploads(inputs, data_obj.process.input_schema)\n\n    return dict_dot(inputs, field_path)", "response": 
"Return a hydrated value of the input field."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns HydratedPath object for file - type field.", "response": "def _get_hydrated_path(field):\n    \"\"\"Return HydratedPath object for file-type field.\"\"\"\n    # Get only file path if whole file object is given.\n    if isinstance(field, str) and hasattr(field, 'file_name'):\n        # field is already actually a HydratedPath object\n        return field\n\n    if isinstance(field, dict) and 'file' in field:\n        hydrated_path = field['file']\n\n    if not hasattr(hydrated_path, 'file_name'):\n        raise TypeError(\"Filter argument must be a valid file-type field.\")\n\n    return hydrated_path"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns file's url based on base url set in settings.", "response": "def get_url(field):\n    \"\"\"Return file's url based on base url set in settings.\"\"\"\n    hydrated_path = _get_hydrated_path(field)\n    base_url = getattr(settings, 'RESOLWE_HOST_URL', 'localhost')\n    return \"{}/data/{}/{}\".format(base_url, hydrated_path.data_id, hydrated_path.file_name)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef descriptor(obj, path=''):\n    if isinstance(obj, dict):\n        # Current object is hydrated, so we need to get descriptor from\n        # dict representation.\n        desc = obj['__descriptor']\n    else:\n        desc = obj.descriptor\n\n    resp = dict_dot(desc, path)\n\n    if isinstance(resp, list) or isinstance(resp, dict):\n        return json.dumps(resp)\n\n    return resp", "response": "Return the descriptor of given object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _close_open_date_ranges(self, record):\n    date_ranges = (('beginDate', 'endDate'),)\n    for begin, end in date_ranges:\n        if begin in record and end in record:\n            return\n        elif begin in record:\n            record[end] = record[begin]\n 
elif end in record:\n            record[begin] = record[end]", "response": "Close open date ranges in a record."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nescaping Solr query syntax reserved characters in the term.", "response": "def escapeQueryTerm(self, term):\n    \"\"\"\n    + - && || ! ( ) { } [ ] ^ \" ~ * ? : \\\n    \"\"\"\n    reserved = [\n        '+',\n        '-',\n        '&',\n        '|',\n        '!',\n        '(',\n        ')',\n        '{',\n        '}',\n        '[',\n        ']',\n        '^',\n        '\"',\n        '~',\n        '*',\n        '?',\n        ':',\n    ]\n    term = term.replace('\\\\', '\\\\\\\\')\n    for c in reserved:\n        term = term.replace(c, \"\\\\%s\" % c)\n    return term"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef prepareQueryTerm(self, field, term):\n    if term == \"*\":\n        return term\n    addstar = False\n    if term[len(term) - 1] == '*':\n        addstar = True\n        term = term[0 : len(term) - 1]\n    term = self.escapeQueryTerm(term)\n    if addstar:\n        term = '%s*' % term\n    if self.getSolrType(field) in ['string', 'text', 'text_ws']:\n        return '\"%s\"' % term\n    return term", "response": "Prepare a query term for inclusion in a query."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncoerces the value of the key to the type specified by ftype.", "response": "def coerceType(self, ftype, value):\n    \"\"\"Returns unicode(value) after trying to coerce it into the SOLR field type.\n\n    @param ftype(string) The SOLR field type for the value\n    @param value(any) The value that is to be represented as Unicode text.\n\n    \"\"\"\n    if value is None:\n        return None\n    if ftype == 'string':\n        return str(value)\n    elif ftype == 'text':\n        return str(value)\n    elif ftype == 'int':\n        try:\n            v = int(value)\n            return str(v)\n        except Exception:\n            return None\n    elif ftype == 'float':\n        try:\n            v = float(value)\n            return str(v)\n        except Exception:\n            return None\n    elif ftype == 'date':\n        try:\n            v = datetime.datetime.strptime(value, '%b %d %Y %I:%M%p')\n            return v.isoformat()\n        except 
Exception:\n return None\n return str(value)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getSolrType(self, field):\n ftype = 'string'\n try:\n ftype = self.fieldtypes[field]\n return ftype\n except Exception:\n pass\n fta = field.split('_')\n if len(fta) > 1:\n ft = fta[len(fta) - 1]\n try:\n ftype = self.fieldtypes[ft]\n # cache the type so it's used next time\n self.fieldtypes[field] = ftype\n except Exception:\n pass\n return ftype", "response": "Returns the SOLR type of the specified field name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the number of entries that match query.", "response": "def count(self, q='*:*', fq=None):\n \"\"\"Return the number of entries that match query.\"\"\"\n params = {'q': q, 'rows': '0'}\n if fq is not None:\n params['fq'] = fq\n res = self.search(params)\n hits = res['response']['numFound']\n return hits"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a dictionary of ids and values for the current index.", "response": "def getIds(self, query='*:*', fq=None, start=0, rows=1000):\n \"\"\"Returns a dictionary of: matches: number of matches failed: if true, then an\n exception was thrown start: starting index ids: [id, id, ...]\n\n See also the SOLRSearchResponseIterator class\n\n \"\"\"\n params = {'q': query, 'start': str(start), 'rows': str(rows), 'wt': 'python'}\n if fq is not None:\n params['fq'] = fq\n request = urllib.parse.urlencode(params, doseq=True)\n data = None\n response = {'matches': 0, 'start': start, 'failed': True, 'ids': []}\n try:\n rsp = self.doPost(self.solrBase + '', request, self.formheaders)\n data = eval(rsp.read())\n except Exception:\n pass\n if data is None:\n return response\n response['failed'] = False\n response['matches'] = data['response']['numFound']\n for doc in data['response']['docs']:\n response['ids'].append(doc['id'][0])\n return 
response"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nretrieve the specified document.", "response": "def get(self, id):\n    \"\"\"Retrieves the specified document.\"\"\"\n    params = {'q': 'id:%s' % str(id), 'wt': 'python'}\n    request = urllib.parse.urlencode(params, doseq=True)\n    data = None\n    try:\n        rsp = self.doPost(self.solrBase + '', request, self.formheaders)\n        data = eval(rsp.read())\n    except Exception:\n        pass\n    if data is not None and data['response']['numFound'] > 0:\n        return data['response']['docs'][0]\n    return None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getFields(self, numTerms=1):\n    if self._fields is not None:\n        return self._fields\n    params = {'numTerms': str(numTerms), 'wt': 'python'}\n    request = urllib.parse.urlencode(params, doseq=True)\n    rsp = self.doPost(self.solrBase + '/admin/luke', request, self.formheaders)\n    data = eval(rsp.read())\n    self._fields = data\n    return data", "response": "Retrieve a list of fields."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving the unique values for a field along with their usage counts.", "response": "def fieldValues(self, name, q=\"*:*\", fq=None, maxvalues=-1):\n    \"\"\"Retrieve the unique values for a field, along with their usage counts.\n    http://localhost:8080/solr/select/?q=*:*&rows=0&facet=true&inde\n    nt=on&wt=python&facet.field=genus_s&facet.limit=10&facet.zeros=false&fa\n    cet.sort=false.\n\n    @param name(string) Name of field to retrieve values for\n    @param q(string) Query identifying the records from which values will be retrieved\n    @param fq(string) Filter query restricting operation of query\n    @param maxvalues(int) Maximum number of values to retrieve. Default is -1,\n    which causes retrieval of all values.\n    @return dict of {fieldname: [[value, count], ... 
], }\n\n \"\"\"\n params = {\n 'q': q,\n 'rows': '0',\n 'facet': 'true',\n 'facet.field': name,\n 'facet.limit': str(maxvalues),\n 'facet.zeros': 'false',\n 'wt': 'python',\n 'facet.sort': 'false',\n }\n if fq is not None:\n params['fq'] = fq\n request = urllib.parse.urlencode(params, doseq=True)\n rsp = self.doPost(self.solrBase + '', request, self.formheaders)\n data = eval(rsp.read())\n return data['facet_counts']['facet_fields']"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the python type of the specified field name.", "response": "def getftype(self, name):\n \"\"\"Returns the python type for the specified field name. The field list is\n cached so multiple calls do not invoke a getFields request each time.\n\n @param name(string) The name of the SOLR field\n @returns Python type of the field.\n\n \"\"\"\n fields = self.getFields()\n try:\n fld = fields['fields'][name]\n except Exception:\n return str\n if fld['type'] in ['string', 'text', 'stext', 'text_ws']:\n return str\n if fld['type'] in ['sint', 'integer', 'long', 'slong']:\n return int\n if fld['type'] in ['sdouble', 'double', 'sfloat', 'float']:\n return float\n if fld['type'] in ['boolean']:\n return bool\n return fld['type']"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngenerate a histogram of values from a string field.", "response": "def fieldAlphaHistogram(\n self, name, q='*:*', fq=None, nbins=10, includequeries=True\n ):\n \"\"\"Generates a histogram of values from a string field. Output is:\n\n [[low, high, count, query], ... 
] Bin edges are determined by equal division\n    of the field's values\n\n    \"\"\"\n    oldpersist = self.persistent\n    self.persistent = True\n    bins = []\n    qbin = []\n    fvals = []\n    try:\n        # get total number of values for the field\n        # TODO: this is a slow mechanism to retrieve the number of distinct values\n        # Need to replace this with something more efficient.\n        ## Can probably replace with a range of alpha chars - need to check on\n        ## case sensitivity\n        fvals = self.fieldValues(name, q, fq, maxvalues=-1)\n        nvalues = len(fvals[name]) // 2\n        if nvalues < nbins:\n            nbins = nvalues\n        if nvalues == nbins:\n            # Use equivalence instead of range queries to retrieve the values\n            for i in range(0, nbins):\n                bin = [fvals[name][i * 2], fvals[name][i * 2], 0]\n                binq = '%s:%s' % (name, self.prepareQueryTerm(name, bin[0]))\n                qbin.append(binq)\n                bins.append(bin)\n        else:\n            delta = nvalues // nbins\n            if delta == 1:\n                # Use equivalence queries, except the last one which includes the\n                # remainder of terms\n                for i in range(0, nbins - 2):\n                    bin = [fvals[name][i * 2], fvals[name][i * 2], 0]\n                    binq = '%s:%s' % (name, self.prepareQueryTerm(name, bin[0]))\n                    qbin.append(binq)\n                    bins.append(bin)\n                term = fvals[name][(nbins - 1) * 2]\n                bin = [term, fvals[name][((nvalues - 1) * 2)], 0]\n                binq = '%s:[%s TO *]' % (name, self.prepareQueryTerm(name, term))\n                qbin.append(binq)\n                bins.append(bin)\n            else:\n                # Use range for all terms\n                # now need to page through all the values and get those at the edges\n                coffset = 0.0\n                delta = float(nvalues) / float(nbins)\n                for i in range(0, nbins):\n                    idxl = int(coffset) * 2\n                    idxu = (int(coffset + delta) * 2) - 2\n                    bin = [fvals[name][idxl], fvals[name][idxu], 0]\n                    # logging.info(str(bin))\n                    binq = ''\n                    try:\n                        if i == 0:\n                            binq = '%s:[* TO %s]' % (\n                                name,\n                                self.prepareQueryTerm(name, bin[1]),\n                            )\n                        elif i == nbins - 1:\n                            binq = '%s:[%s TO *]' % (\n                                name,\n                                self.prepareQueryTerm(name, bin[0]),\n                            )\n                        else:\n                            binq = '%s:[%s TO %s]' % (\n                                name,\n                                self.prepareQueryTerm(name, 
bin[0]),\n self.prepareQueryTerm(name, bin[1]),\n )\n except Exception:\n self.logger.exception('Exception 1 in fieldAlphaHistogram:')\n qbin.append(binq)\n bins.append(bin)\n coffset = coffset + delta\n # now execute the facet query request\n params = {\n 'q': q,\n 'rows': '0',\n 'facet': 'true',\n 'facet.field': name,\n 'facet.limit': '1',\n 'facet.mincount': 1,\n 'wt': 'python',\n }\n request = urllib.parse.urlencode(params, doseq=True)\n for sq in qbin:\n try:\n request = request + '&%s' % urllib.parse.urlencode(\n {'facet.query': self.encoder(sq)[0]}\n )\n except Exception:\n self.logger.exception('Exception 2 in fieldAlphaHistogram')\n rsp = self.doPost(self.solrBase + '', request, self.formheaders)\n data = eval(rsp.read())\n for i in range(0, len(bins)):\n v = data['facet_counts']['facet_queries'][qbin[i]]\n bins[i][2] = v\n if includequeries:\n bins[i].append(qbin[i])\n finally:\n self.persistent = oldpersist\n if not self.persistent:\n self.conn.close()\n return bins"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fieldHistogram(\n self, name, q=\"*:*\", fq=None, nbins=10, minmax=None, includequeries=True\n ):\n \"\"\"Generates a histogram of values. 
Expects the field to be integer or floating\n    point.\n\n    @param name(string) Name of the field to compute\n    @param q(string) The query identifying the set of records for the histogram\n    @param fq(string) Filter query to restrict application of query\n    @param nbins(int) Number of bins in resulting histogram\n\n    @return list of [binmin, binmax, n, binquery]\n\n    \"\"\"\n    oldpersist = self.persistent\n    self.persistent = True\n    ftype = self.getftype(name)\n    if ftype == str:\n        ##handle text histograms over here\n        bins = self.fieldAlphaHistogram(\n            name, q=q, fq=fq, nbins=nbins, includequeries=includequeries\n        )\n        self.persistent = oldpersist\n        if not self.persistent:\n            self.conn.close()\n        return bins\n    bins = []\n    qbin = []\n    fvals = self.fieldValues(name, q, fq, maxvalues=nbins + 1)\n    if len(fvals[name]) < 3:\n        return bins\n    nvalues = len(fvals[name]) // 2\n    if nvalues < nbins:\n        nbins = nvalues\n    minoffset = 1\n    if ftype == float:\n        minoffset = 0.00001\n    try:\n        if minmax is None:\n            minmax = self.fieldMinMax(name, q=q, fq=fq)\n        # logging.info(\"MINMAX = %s\" % str(minmax))\n        minmax[0] = float(minmax[0])\n        minmax[1] = float(minmax[1])\n        delta = (minmax[1] - minmax[0]) / nbins\n        for i in range(0, nbins):\n            binmin = minmax[0] + (i * delta)\n            bin = [binmin, binmin + delta, 0]\n            if ftype == int:\n                bin[0] = int(bin[0])\n                bin[1] = int(bin[1])\n                if i == 0:\n                    binq = '%s:[* TO %d]' % (name, bin[1])\n                elif i == nbins - 1:\n                    binq = '%s:[%d TO *]' % (name, bin[0] + minoffset)\n                    bin[0] = bin[0] + minoffset\n                    if bin[1] < bin[0]:\n                        bin[1] = bin[0]\n                else:\n                    binq = '%s:[%d TO %d]' % (name, bin[0] + minoffset, bin[1])\n                    bin[0] = bin[0] + minoffset\n            else:\n                if i == 0:\n                    binq = '%s:[* TO %f]' % (name, bin[1])\n                elif i == nbins - 1:\n                    binq = '%s:[%f TO *]' % (name, bin[0] + minoffset)\n                else:\n                    binq = '%s:[%f TO %f]' % (name, bin[0] + minoffset, bin[1])\n            qbin.append(binq)\n            bins.append(bin)\n\n        # now execute the facet query request\n        params = {\n            'q': q,\n            'rows': '0',\n            'facet': 'true',\n 
'facet.field': name,\n 'facet.limit': '1',\n 'facet.mincount': 1,\n 'wt': 'python',\n }\n request = urllib.parse.urlencode(params, doseq=True)\n for sq in qbin:\n request = request + '&%s' % urllib.parse.urlencode({'facet.query': sq})\n rsp = self.doPost(self.solrBase + '', request, self.formheaders)\n data = eval(rsp.read())\n for i in range(0, len(bins)):\n v = data['facet_counts']['facet_queries'][qbin[i]]\n bins[i][2] = v\n if includequeries:\n bins[i].append(qbin[i])\n finally:\n self.persistent = oldpersist\n if not self.persistent:\n self.conn.close()\n return bins", "response": "Generates a histogram of values for a single field."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _nextPage(self, offset):\n self.logger.debug(\"Iterator crecord=%s\" % str(self.crecord))\n params = {\n 'q': self.q,\n 'start': str(offset),\n 'rows': str(self.pagesize),\n 'fl': self.fields,\n 'explainOther': '',\n 'hl.fl': '',\n }\n if self.fq is not None:\n params['fq'] = self.fq\n self.res = self.client.search(params)\n self._numhits = int(self.res['response']['numFound'])", "response": "Retrieves the next set of results from the service."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _nextPage(self, offset):\n self.logger.debug(\"Iterator crecord=%s\" % str(self.crecord))\n\n params = {\n 'q': self.q,\n 'rows': '0',\n 'facet': 'true',\n 'facet.field': self.field,\n 'facet.limit': str(self.pagesize),\n 'facet.offset': str(offset),\n 'facet.zeros': 'false',\n 'wt': 'python',\n }\n if self.fq is not None:\n params['fq'] = self.fq\n request = urllib.parse.urlencode(params, doseq=True)\n rsp = self.client.doPost(\n self.client.solrBase + '', request, self.client.formheaders\n )\n data = eval(rsp.read())\n try:\n self.res = data['facet_counts']['facet_fields'][self.field]\n self.logger.debug(self.res)\n except Exception:\n self.res = []\n self.index = 0", "response": 
"Retrieves the next set of results from the service."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_log(request):\n query = d1_gmn.app.models.EventLog.objects.all().order_by('timestamp', 'id')\n if not d1_gmn.app.auth.is_trusted_subject(request):\n query = d1_gmn.app.db_filter.add_access_policy_filter(\n request, query, 'sciobj__id'\n )\n query = d1_gmn.app.db_filter.add_redact_annotation(request, query)\n query = d1_gmn.app.db_filter.add_datetime_filter(\n request, query, 'timestamp', 'fromDate', 'gte'\n )\n query = d1_gmn.app.db_filter.add_datetime_filter(\n request, query, 'timestamp', 'toDate', 'lt'\n )\n query = d1_gmn.app.db_filter.add_string_filter(\n request, query, 'event__event', 'event'\n )\n if d1_gmn.app.views.util.is_v1_api(request):\n query = d1_gmn.app.db_filter.add_string_begins_with_filter(\n request, query, 'sciobj__pid__did', 'pidFilter'\n )\n elif d1_gmn.app.views.util.is_v2_api(request):\n query = d1_gmn.app.db_filter.add_sid_or_string_begins_with_filter(\n request, query, 'sciobj__pid__did', 'idFilter'\n )\n else:\n assert False, 'Unable to determine API version'\n total_int = query.count()\n query, start, count = d1_gmn.app.views.slice.add_slice_filter(\n request, query, total_int\n )\n return {\n 'query': query,\n 'start': start,\n 'count': count,\n 'total': total_int,\n 'type': 'log',\n }", "response": "Get log records from the log table."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_node(request):\n api_major_int = 2 if d1_gmn.app.views.util.is_v2_api(request) else 1\n node_pretty_xml = d1_gmn.app.node.get_pretty_xml(api_major_int)\n return django.http.HttpResponse(node_pretty_xml, d1_common.const.CONTENT_TYPE_XML)", "response": "Return a pretty - printed node."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_replica(request, 
pid):\n _assert_node_is_authorized(request, pid)\n sciobj = d1_gmn.app.models.ScienceObject.objects.get(pid__did=pid)\n content_type_str = d1_gmn.app.object_format_cache.get_content_type(\n sciobj.format.format\n )\n # Replica is always a local file that can be handled with FileResponse()\n response = django.http.FileResponse(\n d1_gmn.app.sciobj_store.open_sciobj_file_by_pid(pid), content_type_str\n )\n d1_gmn.app.views.headers.add_sciobj_properties_headers_to_response(response, sciobj)\n # Log the replication of this object.\n d1_gmn.app.event_log.log_replicate_event(pid, request)\n return response", "response": "MNReplication. getReplica \u2192 OctetStream."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef post_refresh_system_metadata(request):\n d1_gmn.app.views.assert_db.post_has_mime_parts(\n request,\n (\n ('field', 'pid'),\n ('field', 'serialVersion'),\n ('field', 'dateSysMetaLastModified'),\n ),\n )\n d1_gmn.app.views.assert_db.is_existing_object(request.POST['pid'])\n d1_gmn.app.models.sysmeta_refresh_queue(\n request.POST['pid'],\n request.POST['serialVersion'],\n request.POST['dateSysMetaLastModified'],\n 'queued',\n ).save()\n return d1_gmn.app.views.util.http_response_with_boolean_true_type()", "response": "MNStorage. 
systemMetadataChanged \u2192 boolean."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef post_object_list(request):\n d1_gmn.app.views.assert_db.post_has_mime_parts(\n request, (('field', 'pid'), ('file', 'object'), ('file', 'sysmeta'))\n )\n url_pid = request.POST['pid']\n sysmeta_pyxb = d1_gmn.app.sysmeta.deserialize(request.FILES['sysmeta'])\n d1_gmn.app.views.assert_sysmeta.obsoletes_not_specified(sysmeta_pyxb)\n d1_gmn.app.views.assert_sysmeta.matches_url_pid(sysmeta_pyxb, url_pid)\n d1_gmn.app.views.assert_sysmeta.is_valid_sid_for_new_standalone(sysmeta_pyxb)\n d1_gmn.app.views.create.create_sciobj(request, sysmeta_pyxb)\n return url_pid", "response": "MNStorage. create method for POST requests."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef put_object(request, old_pid):\n if django.conf.settings.REQUIRE_WHITELIST_FOR_UPDATE:\n d1_gmn.app.auth.assert_create_update_delete_permission(request)\n d1_gmn.app.util.coerce_put_post(request)\n d1_gmn.app.views.assert_db.post_has_mime_parts(\n request, (('field', 'newPid'), ('file', 'object'), ('file', 'sysmeta'))\n )\n d1_gmn.app.views.assert_db.is_valid_pid_to_be_updated(old_pid)\n sysmeta_pyxb = d1_gmn.app.sysmeta.deserialize(request.FILES['sysmeta'])\n new_pid = request.POST['newPid']\n d1_gmn.app.views.assert_sysmeta.matches_url_pid(sysmeta_pyxb, new_pid)\n d1_gmn.app.views.assert_sysmeta.obsoletes_matches_pid_if_specified(\n sysmeta_pyxb, old_pid\n )\n sysmeta_pyxb.obsoletes = old_pid\n sid = d1_common.xml.get_opt_val(sysmeta_pyxb, 'seriesId')\n d1_gmn.app.views.assert_sysmeta.is_valid_sid_for_chain(old_pid, sid)\n d1_gmn.app.views.create.create_sciobj(request, sysmeta_pyxb)\n # The create event for the new object is added in create_sciobj(). 
The update\n # event on the old object is added here.\n d1_gmn.app.event_log.log_update_event(\n old_pid,\n request,\n timestamp=d1_common.date_time.normalize_datetime_to_utc(\n sysmeta_pyxb.dateUploaded\n ),\n )\n d1_gmn.app.sysmeta.update_modified_timestamp(old_pid)\n return new_pid", "response": "MNStorage.update(session, pid, object, newPid, sysmeta) \u2192 Identifier."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef post_generate_identifier(request):\n d1_gmn.app.views.assert_db.post_has_mime_parts(request, (('field', 'scheme'),))\n if request.POST['scheme'] != 'UUID':\n raise d1_common.types.exceptions.InvalidRequest(\n 0, 'Only the UUID scheme is currently supported'\n )\n fragment = request.POST.get('fragment', None)\n while True:\n pid = (fragment if fragment else '') + uuid.uuid4().hex\n if not d1_gmn.app.models.ScienceObject.objects.filter(pid__did=pid).exists():\n return pid", "response": "Generate identifier for the current post."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef post_replicate(request):\n d1_gmn.app.views.assert_db.post_has_mime_parts(\n request, (('field', 'sourceNode'), ('file', 'sysmeta'))\n )\n sysmeta_pyxb = d1_gmn.app.sysmeta.deserialize(request.FILES['sysmeta'])\n d1_gmn.app.local_replica.assert_request_complies_with_replication_policy(\n sysmeta_pyxb\n )\n pid = d1_common.xml.get_req_val(sysmeta_pyxb.identifier)\n d1_gmn.app.views.assert_db.is_valid_pid_for_create(pid)\n d1_gmn.app.local_replica.add_to_replication_queue(\n request.POST['sourceNode'], sysmeta_pyxb\n )\n return d1_gmn.app.views.util.http_response_with_boolean_true_type()", "response": "MNReplication. 
replicate - Replicates the sourceNode."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nregisters this field with a specific process.", "response": "def contribute_to_class(self, process, fields, name):\n \"\"\"Register this field with a specific process.\n\n :param process: Process descriptor instance\n :param fields: Fields registry to use\n :param name: Field name\n \"\"\"\n self.name = name\n self.process = process\n fields[name] = self"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a dictionary that represents this field.", "response": "def to_schema(self):\n \"\"\"Return field schema for this field.\"\"\"\n if not self.name or not self.process:\n raise ValueError(\"field is not registered with process\")\n\n schema = {\n 'name': self.name,\n 'type': self.get_field_type(),\n }\n if self.required is not None:\n schema['required'] = self.required\n if self.label is not None:\n schema['label'] = self.label\n if self.description is not None:\n schema['description'] = self.description\n if self.default is not None:\n schema['default'] = self.default\n if self.hidden is not None:\n schema['hidden'] = self.hidden\n if self.choices is not None:\n for choice, label in self.choices:\n schema.setdefault('choices', []).append({\n 'label': label,\n 'value': choice,\n })\n\n return schema"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef clean(self, value):\n if value is None:\n value = self.default\n\n try:\n value = self.to_python(value)\n self.validate(value)\n except ValidationError as error:\n raise ValidationError(\"invalid value for {}: {}\".format(\n self.name,\n error.args[0]\n ))\n return value", "response": "Run validators and return the clean value."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_output(self, value):\n return 
json.loads(resolwe_runtime_utils.save(self.name, str(value)))", "response": "Convert value to process output format."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts value if needed.", "response": "def to_python(self, value):\n \"\"\"Convert value if needed.\"\"\"\n if isinstance(value, str):\n return value\n elif isinstance(value, dict):\n try:\n value = value['url']\n except KeyError:\n raise ValidationError(\"dictionary must contain an 'url' element\")\n\n if not isinstance(value, str):\n raise ValidationError(\"field's url element must be a string\")\n\n return value\n elif value is not None:\n raise ValidationError(\"field must be a string or a dict\")"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nimports the file into the working directory.", "response": "def import_file(self, imported_format=None, progress_from=0.0, progress_to=None):\n \"\"\"Import field source file to working directory.\n\n :param imported_format: Import file format (extracted, compressed or both)\n :param progress_from: Initial progress value\n :param progress_to: Final progress value\n :return: Destination file path (if extracted and compressed, extracted path given)\n \"\"\"\n if not hasattr(resolwe_runtime_utils, 'import_file'):\n raise RuntimeError('Requires resolwe-runtime-utils >= 2.0.0')\n\n if imported_format is None:\n imported_format = resolwe_runtime_utils.ImportedFormat.BOTH\n\n return resolwe_runtime_utils.import_file(\n src=self.file_temp,\n file_name=self.path,\n imported_format=imported_format,\n progress_from=progress_from,\n progress_to=progress_to\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert value if needed.", "response": "def to_python(self, value):\n \"\"\"Convert value if needed.\"\"\"\n if isinstance(value, FileDescriptor):\n return value\n elif isinstance(value, str):\n return FileDescriptor(value)\n elif 
isinstance(value, dict):\n try:\n path = value['file']\n except KeyError:\n raise ValidationError(\"dictionary must contain a 'file' element\")\n\n if not isinstance(path, str):\n raise ValidationError(\"field's file element must be a string\")\n\n size = value.get('size', None)\n if size is not None and not isinstance(size, int):\n raise ValidationError(\"field's size element must be an integer\")\n\n total_size = value.get('total_size', None)\n if total_size is not None and not isinstance(total_size, int):\n raise ValidationError(\"field's total_size element must be an integer\")\n\n is_remote = value.get('is_remote', None)\n if is_remote is not None and not isinstance(is_remote, bool):\n raise ValidationError(\"field's is_remote element must be a boolean\")\n\n file_temp = value.get('file_temp', None)\n if file_temp is not None and not isinstance(file_temp, str):\n raise ValidationError(\"field's file_temp element must be a string\")\n\n refs = value.get('refs', None)\n if refs is not None and not isinstance(refs, list):\n # TODO: Validate that all refs are strings.\n raise ValidationError(\"field's refs element must be a list of strings\")\n\n return FileDescriptor(\n path,\n size=size,\n total_size=total_size,\n is_remote=is_remote,\n file_temp=file_temp,\n refs=refs,\n )\n elif value is not None:\n raise ValidationError(\"field must be a FileDescriptor, string or a dict\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert value to process output format.", "response": "def to_output(self, value):\n \"\"\"Convert value to process output format.\"\"\"\n return json.loads(resolwe_runtime_utils.save_file(self.name, value.path, *value.refs))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts value if needed.", "response": "def to_python(self, value):\n \"\"\"Convert value if needed.\"\"\"\n if isinstance(value, DirDescriptor):\n return value\n elif isinstance(value, 
str):\n return DirDescriptor(value)\n elif isinstance(value, dict):\n try:\n path = value['dir']\n except KeyError:\n raise ValidationError(\"dictionary must contain a 'dir' element\")\n\n if not isinstance(path, str):\n raise ValidationError(\"field's dir element must be a string\")\n\n size = value.get('size', None)\n if size is not None and not isinstance(size, int):\n raise ValidationError(\"field's size element must be an integer\")\n\n total_size = value.get('total_size', None)\n if total_size is not None and not isinstance(total_size, int):\n raise ValidationError(\"field's total_size element must be an integer\")\n\n refs = value.get('refs', None)\n if refs is not None and not isinstance(refs, list):\n # TODO: Validate that all refs are strings.\n raise ValidationError(\"field's refs element must be a list of strings\")\n\n return DirDescriptor(\n path,\n size=size,\n total_size=total_size,\n refs=refs,\n )\n elif value is not None:\n raise ValidationError(\"field must be a DirDescriptor, string or a dict\")"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nregisters this field with a specific process.", "response": "def contribute_to_class(self, process, fields, name):\n \"\"\"Register this field with a specific process.\n\n :param process: Process descriptor instance\n :param fields: Fields registry to use\n :param name: Field name\n \"\"\"\n super().contribute_to_class(process, fields, name)\n\n self.inner.name = name\n self.inner.process = process"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts value to process output format.", "response": "def to_output(self, value):\n \"\"\"Convert value to process output format.\"\"\"\n return {self.name: [self.inner.to_output(v)[self.name] for v in value]}"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get(self, key):\n self._populate_cache()\n 
if key not in self._cache:\n raise AttributeError(\"DataField has no member {}\".format(key))\n\n return self._cache[key]", "response": "Return given key from cache."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting value if needed.", "response": "def to_python(self, value):\n \"\"\"Convert value if needed.\"\"\"\n cache = None\n if value is None:\n return None\n\n if isinstance(value, DataDescriptor):\n return value\n elif isinstance(value, dict):\n # Allow pre-hydrated data objects.\n cache = value\n try:\n value = cache['__id']\n except KeyError:\n raise ValidationError(\"dictionary must contain an '__id' element\")\n elif not isinstance(value, int):\n raise ValidationError(\"field must be a DataDescriptor, data object primary key or dict\")\n\n return DataDescriptor(value, self, cache)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef contribute_to_class(self, process, fields, name):\n # Use order-preserving definition namespace (__dict__) to respect the\n # order of GroupField's fields definition.\n for field_name in self.field_group.__dict__:\n if field_name.startswith('_'):\n continue\n\n field = getattr(self.field_group, field_name)\n field.contribute_to_class(process, self.fields, field_name)\n\n super().contribute_to_class(process, fields, name)", "response": "Register this field with a specific process."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert value if needed.", "response": "def to_python(self, value):\n \"\"\"Convert value if needed.\"\"\"\n if isinstance(value, GroupDescriptor):\n value = value._value # pylint: disable=protected-access\n\n result = {}\n for name, field in self.fields.items():\n result[name] = field.to_python(value.get(name, None))\n\n return GroupDescriptor(result)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
to_schema(self):\n schema = super().to_schema()\n if self.disabled is not None:\n schema['disabled'] = self.disabled\n if self.collapsed is not None:\n schema['collapsed'] = self.collapsed\n\n group = []\n for field in self.fields.values():\n group.append(field.to_schema())\n schema['group'] = group\n\n return schema", "response": "Return field schema for this field."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef update_constants():\n global MANAGER_CONTROL_CHANNEL, MANAGER_EXECUTOR_CHANNELS # pylint: disable=global-statement\n global MANAGER_LISTENER_STATS, MANAGER_STATE_PREFIX # pylint: disable=global-statement\n redis_prefix = getattr(settings, 'FLOW_MANAGER', {}).get('REDIS_PREFIX', '')\n\n MANAGER_CONTROL_CHANNEL = '{}.control'.format(redis_prefix)\n MANAGER_EXECUTOR_CHANNELS = ManagerChannelPair(\n '{}.result_queue'.format(redis_prefix),\n '{}.result_queue_response'.format(redis_prefix),\n )\n MANAGER_STATE_PREFIX = '{}.state'.format(redis_prefix)\n MANAGER_LISTENER_STATS = '{}.listener_stats'.format(redis_prefix)", "response": "Recreate channel name constants with changed settings."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef destroy_channels(self):\n for item_name in dir(self):\n item = getattr(self, item_name)\n if isinstance(item, self.RedisAtomicBase):\n self.redis.delete(item.item_name)", "response": "Destroy Redis channels managed by this state instance."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef render_to_image_file(\n self, image_out_path, width_pixels=None, height_pixels=None, dpi=90\n ):\n \"\"\"Render the SubjectInfo to an image file.\n\n Args:\n image_out_path : str\n Path to where the image will be written. 
Valid extensions are\n ``.svg,`` ``.pdf``, and ``.png``.\n\n width_pixels : int\n Width of image to write.\n\n height_pixels : int\n Height of image to write, in pixels.\n\n dpi:\n Dots Per Inch to declare in image file. This does not change the\n resolution of the image but may change the size of the image when\n rendered.\n\n Returns:\n None\n\n \"\"\"\n self._render_type = \"file\"\n self._tree.render(\n file_name=image_out_path,\n w=width_pixels,\n h=height_pixels,\n dpi=dpi,\n units=\"px\",\n tree_style=self._get_tree_style(),\n )", "response": "Render the SubjectInfo to an image file."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbrowsing and edit the SubjectInfo in a simple Qt5 based UI.", "response": "def browse_in_qt5_ui(self):\n \"\"\"Browse and edit the SubjectInfo in a simple Qt5 based UI.\"\"\"\n self._render_type = \"browse\"\n self._tree.show(tree_style=self._get_tree_style())"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncopies SubjectInfoTree to a ETE Tree.", "response": "def _gen_etetoolkit_tree(self, node, subject_info_tree):\n \"\"\"Copy SubjectInfoTree to a ETE Tree.\"\"\"\n for si_node in subject_info_tree.child_list:\n if si_node.type_str == TYPE_NODE_TAG:\n child = self._add_type_node(node, si_node.label_str)\n elif si_node.type_str == SUBJECT_NODE_TAG:\n child = self._add_subject_node(node, si_node.label_str)\n else:\n raise AssertionError(\n 'Unknown node type. 
type_str=\"{}\"'.format(si_node.type_str)\n )\n self._gen_etetoolkit_tree(child, si_node)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _add_type_node(self, node, label):\n child = node.add_child(name=label)\n child.add_feature(TYPE_NODE_TAG, True)\n return child", "response": "Add a node representing a SubjectInfo type."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a node containing a subject string.", "response": "def _add_subject_node(self, node, subj_str):\n \"\"\"Add a node containing a subject string.\"\"\"\n child = node.add_child(name=subj_str)\n child.add_feature(SUBJECT_NODE_TAG, True)\n return child"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the path from the root to node as a list of node names.", "response": "def _get_node_path(self, node):\n \"\"\"Return the path from the root to ``node`` as a list of node names.\"\"\"\n path = []\n while node.up:\n path.append(node.name)\n node = node.up\n return list(reversed(path))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _layout(self, node):\n\n def set_edge_style():\n \"\"\"Set the style for edges and make the node invisible.\"\"\"\n node_style = ete3.NodeStyle()\n node_style[\"vt_line_color\"] = EDGE_COLOR\n node_style[\"hz_line_color\"] = EDGE_COLOR\n node_style[\"vt_line_width\"] = EDGE_WIDTH\n node_style[\"hz_line_width\"] = EDGE_WIDTH\n node_style[\"size\"] = 0\n node.set_style(node_style)\n\n def style_subject_node(color=\"Black\"):\n \"\"\"Specify the appearance of Subject nodes.\"\"\"\n face = ete3.TextFace(node.name, fsize=SUBJECT_NODE_FONT_SIZE, fgcolor=color)\n set_face_margin(face)\n node.add_face(face, column=0, position=\"branch-right\")\n\n def style_type_node(color=\"Black\"):\n \"\"\"Specify the appearance of Type nodes.\"\"\"\n face = 
ete3.CircleFace(\n radius=TYPE_NODE_RADIUS,\n color=TYPE_NODE_COLOR_DICT.get(node.name, \"White\"),\n style=\"circle\",\n label={\n \"text\": node.name,\n \"color\": color,\n \"fontsize\": (\n TYPE_NODE_FONT_SIZE_FILE\n if self._render_type == \"file\"\n else TYPE_NODE_FONT_SIZE_BROWSE\n ),\n },\n )\n set_face_margin(face)\n node.add_face(face, column=0, position=\"branch-right\")\n\n def set_face_margin(face):\n \"\"\"Add margins to Face object.\n\n - Add space between inner_border and border on TextFace.\n - Add space outside bounding area of CircleFace.\n\n \"\"\"\n face.margin_left = 5\n face.margin_right = 5\n # face.margin_top = 5\n # face.margin_bottom = 5\n\n set_edge_style()\n\n if hasattr(node, SUBJECT_NODE_TAG):\n style_subject_node()\n elif hasattr(node, TYPE_NODE_TAG):\n style_type_node()\n else:\n raise AssertionError(\"Unknown node type\")", "response": "This method renders the node in the ETE file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef extend_settings(self, data_id, files, secrets):\n process = Data.objects.get(pk=data_id).process\n if process.requirements.get('resources', {}).get('secrets', False):\n raise PermissionDenied(\n \"Process which requires access to secrets cannot be run using the local executor\"\n )\n\n return super().extend_settings(data_id, files, secrets)", "response": "Prevent processes requiring access to secrets from being run."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_finder(import_path):\n finder_class = import_string(import_path)\n if not issubclass(finder_class, BaseProcessesFinder):\n raise ImproperlyConfigured(\n 'Finder \"{}\" is not a subclass of \"{}\"'.format(finder_class, BaseProcessesFinder))\n return finder_class()", "response": "Get a process finder."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a list of sub - directories.", 
"response": "def _find_folders(self, folder_name):\n \"\"\"Return a list of sub-directories.\"\"\"\n found_folders = []\n for app_config in apps.get_app_configs():\n folder_path = os.path.join(app_config.path, folder_name)\n if os.path.isdir(folder_path):\n found_folders.append(folder_path)\n return found_folders"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sanity(request, sysmeta_pyxb):\n _does_not_contain_replica_sections(sysmeta_pyxb)\n _is_not_archived(sysmeta_pyxb)\n _obsoleted_by_not_specified(sysmeta_pyxb)\n if 'HTTP_VENDOR_GMN_REMOTE_URL' in request.META:\n return\n _has_correct_file_size(request, sysmeta_pyxb)\n _is_supported_checksum_algorithm(sysmeta_pyxb)\n _is_correct_checksum(request, sysmeta_pyxb)", "response": "Check that sysmeta_pyxb is suitable for creating a new object and matches the\n uploaded sciobj bytes."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef is_valid_sid_for_new_standalone(sysmeta_pyxb):\n sid = d1_common.xml.get_opt_val(sysmeta_pyxb, 'seriesId')\n if not d1_gmn.app.did.is_valid_sid_for_new_standalone(sid):\n raise d1_common.types.exceptions.IdentifierNotUnique(\n 0,\n 'Identifier is already in use as {}. did=\"{}\"'.format(\n d1_gmn.app.did.classify_identifier(sid), sid\n ),\n identifier=sid,\n )", "response": "Assert that any SID in sysmeta_pyxb can be assigned to a new standalone\n object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef is_valid_sid_for_chain(pid, sid):\n if not d1_gmn.app.did.is_valid_sid_for_chain(pid, sid):\n existing_sid = d1_gmn.app.revision.get_sid_by_pid(pid)\n raise d1_common.types.exceptions.IdentifierNotUnique(\n 0,\n 'A different SID is already assigned to the revision chain to which '\n 'the object being created or updated belongs. 
A SID cannot be changed '\n 'once it has been assigned to a chain. '\n 'existing_sid=\"{}\", new_sid=\"{}\", pid=\"{}\"'.format(existing_sid, sid, pid),\n )", "response": "Assert that a SID can be assigned to the single object or to the chain."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _does_not_contain_replica_sections(sysmeta_pyxb):\n if len(getattr(sysmeta_pyxb, 'replica', [])):\n raise d1_common.types.exceptions.InvalidSystemMetadata(\n 0,\n 'A replica section was included. A new object object created via '\n 'create() or update() cannot already have replicas. pid=\"{}\"'.format(\n d1_common.xml.get_req_val(sysmeta_pyxb.identifier)\n ),\n identifier=d1_common.xml.get_req_val(sysmeta_pyxb.identifier),\n )", "response": "Assert that sysmeta_pyxb does not contain any replica information."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _is_not_archived(sysmeta_pyxb):\n if _is_archived(sysmeta_pyxb):\n raise d1_common.types.exceptions.InvalidSystemMetadata(\n 0,\n 'Archived flag was set. A new object created via create() or update() '\n 'cannot already be archived. 
pid=\"{}\"'.format(\n d1_common.xml.get_req_val(sysmeta_pyxb.identifier)\n ),\n identifier=d1_common.xml.get_req_val(sysmeta_pyxb.identifier),\n )", "response": "Assert that sysmeta_pyxb does not have the archived flag set."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_d1_env_by_base_url(cn_base_url):\n for k, v in D1_ENV_DICT.items():\n if v['base_url'].startswith(cn_base_url):\n return D1_ENV_DICT[k]", "response": "Given the BaseURL for a CN, return the DataONE environment dict for the CN's\n environment."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef match_all(d_SMEFT, parameters=None):\n p = default_parameters.copy()\n if parameters is not None:\n # if parameters are passed in, overwrite the default values\n p.update(parameters)\n C = wilson.util.smeftutil.wcxf2arrays_symmetrized(d_SMEFT)\n C['vT'] = 246.22\n C_WET = match_all_array(C, p)\n C_WET = wilson.translate.wet.rotate_down(C_WET, p)\n C_WET = wetutil.unscale_dict_wet(C_WET)\n d_WET = wilson.util.smeftutil.arrays2wcxf(C_WET)\n basis = wcxf.Basis['WET', 'JMS']\n keys = set(d_WET.keys()) & set(basis.all_wcs)\n d_WET = {k: d_WET[k] for k in keys}\n return d_WET", "response": "Match the SMEFT Warsaw basis onto the WET JMS basis."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a list of HTTP responses for the debug mode.", "response": "def _debug_mode_responses(self, request, response):\n \"\"\"Extra functionality available in debug mode.\n\n - If pretty printed output was requested, force the content type to text. 
This\n causes the browser to not try to format the output in any way.\n - If SQL profiling is turned on, return a page with SQL query timing\n information instead of the actual response.\n\n \"\"\"\n if django.conf.settings.DEBUG_GMN:\n if 'pretty' in request.GET:\n response['Content-Type'] = d1_common.const.CONTENT_TYPE_TEXT\n if (\n 'HTTP_VENDOR_PROFILE_SQL' in request.META\n or django.conf.settings.DEBUG_PROFILE_SQL\n ):\n response_list = []\n for query in django.db.connection.queries:\n response_list.append('{}\\n{}'.format(query['time'], query['sql']))\n return django.http.HttpResponse(\n '\\n\\n'.join(response_list), d1_common.const.CONTENT_TYPE_TEXT\n )\n return response"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef queryResponse(\n self, queryEngine, query_str, vendorSpecific=None, do_post=False, **kwargs\n ):\n \"\"\"CNRead.query(session, queryEngine, query) \u2192 OctetStream\n https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_APIs.html#CNRead.query MNQuery.query(session,\n queryEngine, query) \u2192 OctetStream http://jenkins.\n\n -1.dataone.org/jenkins/job/API%20Documentation%20-%20trunk/ws/api-\n documentation/build/html/apis/MN_APIs.html#MNQuery.query.\n\n Args:\n queryEngine:\n query_str:\n vendorSpecific:\n do_post:\n **kwargs:\n\n Returns:\n\n \"\"\"\n self._log.debug(\n 'Solr query: {}'.format(\n ', '.join(['{}={}'.format(k, v) for (k, v) in list(locals().items())])\n )\n )\n return (self.POST if do_post else self.GET)(\n ['query', queryEngine, query_str], headers=vendorSpecific, **kwargs\n )", "response": "Query the ISO for the specified resource set."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsee Also : queryResponse", "response": "def query(\n self, queryEngine, query_str, vendorSpecific=None, do_post=False, **kwargs\n ):\n \"\"\"See Also: queryResponse()\n\n Args:\n queryEngine:\n query_str:\n vendorSpecific:\n do_post:\n 
**kwargs:\n\n Returns:\n\n \"\"\"\n response = self.queryResponse(\n queryEngine, query_str, vendorSpecific, do_post, **kwargs\n )\n if self._content_type_is_json(response):\n return self._read_json_response(response)\n else:\n return self._read_stream_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getQueryEngineDescriptionResponse(self, queryEngine, vendorSpecific=None, **kwargs):\n return self.GET(['query', queryEngine], query=kwargs, headers=vendorSpecific)", "response": "CNRead.getQueryEngineDescription(session, queryEngine) \u2192\n QueryEngineDescription https://releases.dataone.org/online/api-\n documentation-v2.0.1/apis/CN_APIs.html#CNRead.getQueryEngineDescription\n MNQuery.getQueryEngineDescription(session, queryEngine) \u2192 QueryEngineDescription\n http://jenkins-1.dataone.org/jenkins/job/API%20D ocumentation%20-%20trunk/ws.\n\n /api-documentation/build/html/apis/MN_APIs.h\n tml#MNQuery.getQueryEngineDescription.\n\n Args:\n queryEngine:\n **kwargs:\n\n Returns:"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nseeing Also : getQueryEngineDescriptionResponse", "response": "def getQueryEngineDescription(self, queryEngine, **kwargs):\n \"\"\"See Also: getQueryEngineDescriptionResponse()\n\n Args:\n queryEngine:\n **kwargs:\n\n Returns:\n\n \"\"\"\n response = self.getQueryEngineDescriptionResponse(queryEngine, **kwargs)\n return self._read_dataone_type_response(response, 'QueryEngineDescription')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nderiving cache key for given object.", "response": "def _get_cache_key(self, obj):\n \"\"\"Derive cache key for given object.\"\"\"\n if obj is not None:\n # Make sure that key is REALLY unique.\n return '{}-{}'.format(id(self), obj.pk)\n\n return \"{}-None\".format(id(self))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following 
Python 3 code\ndef set(self, obj, build_kwargs):\n if build_kwargs is None:\n build_kwargs = {}\n\n cached = {}\n if 'queryset' in build_kwargs:\n cached = {\n 'model': build_kwargs['queryset'].model,\n 'pks': list(build_kwargs['queryset'].values_list('pk', flat=True)),\n }\n\n elif 'obj' in build_kwargs:\n cached = {\n 'obj': build_kwargs['obj'],\n }\n\n if not hasattr(self._thread_local, 'cache'):\n self._thread_local.cache = {}\n self._thread_local.cache[self._get_cache_key(obj)] = cached", "response": "Set the cached value for the object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets cached value and clean cache.", "response": "def take(self, obj):\n \"\"\"Get cached value and clean cache.\"\"\"\n cached = self._thread_local.cache[self._get_cache_key(obj)]\n build_kwargs = {}\n\n if 'model' in cached and 'pks' in cached:\n build_kwargs['queryset'] = cached['model'].objects.filter(pk__in=cached['pks'])\n\n elif 'obj' in cached:\n if cached['obj'].__class__.objects.filter(pk=cached['obj'].pk).exists():\n build_kwargs['obj'] = cached['obj']\n else:\n # Object was deleted in the meantime.\n build_kwargs['queryset'] = cached['obj'].__class__.objects.none()\n\n self._clean_cache(obj)\n\n return build_kwargs"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef connect(self, signal, **kwargs):\n signal.connect(self, **kwargs)\n self.connections.append((signal, kwargs))", "response": "Connect a specific signal type to this receiver."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndisconnecting all connected signal types from this receiver.", "response": "def disconnect(self):\n \"\"\"Disconnect all connected signal types from this receiver.\"\"\"\n for signal, kwargs in self.connections:\n signal.disconnect(self, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconnecting signals needed for 
dependency updates.", "response": "def connect(self, index):\n \"\"\"Connect signals needed for dependency updates.\n\n Pre- and post-delete signals have to be handled separately, as:\n\n * in the pre-delete signal we have the information which\n objects to rebuild, but affected relations are still\n present, so rebuild would reflect in the wrong (outdated)\n indices\n * in the post-delete signal indices can be rebuilt correctly,\n but there is no information which objects to rebuild, as\n affected relations were already deleted\n\n To bypass this, the list of objects should be stored in the\n pre-delete signal and indexing should be triggered in the\n post-delete signal.\n \"\"\"\n self.index = index\n\n signal = ElasticSignal(self, 'process', pass_kwargs=True)\n signal.connect(post_save, sender=self.model)\n signal.connect(pre_delete, sender=self.model)\n\n pre_delete_signal = ElasticSignal(self, 'process_predelete', pass_kwargs=True)\n pre_delete_signal.connect(pre_delete, sender=self.model)\n\n post_delete_signal = ElasticSignal(self, 'process_delete', pass_kwargs=True)\n post_delete_signal.connect(post_delete, sender=self.model)\n\n return [signal, pre_delete_signal, post_delete_signal]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef connect(self, index):\n # Determine which model is the target model as either side of the relation\n # may be passed as `field`.\n if index.object_type == self.field.rel.model:\n self.model = self.field.rel.related_model\n self.accessor = self.field.rel.field.attname\n else:\n self.model = self.field.rel.model\n if self.field.rel.symmetrical:\n # Symmetrical m2m relation on self has no reverse accessor.\n raise NotImplementedError(\n 'Dependencies on symmetrical M2M relations are not supported due '\n 'to strange handling of the m2m_changed signal which only makes '\n 'half of the relation visible during signal execution. 
For now you '\n 'need to use symmetrical=False on the M2M field definition.'\n )\n else:\n self.accessor = self.field.rel.get_accessor_name()\n\n # Connect signals.\n signals = super().connect(index)\n\n m2m_signal = ElasticSignal(self, 'process_m2m', pass_kwargs=True)\n m2m_signal.connect(m2m_changed, sender=self.field.through)\n signals.append(m2m_signal)\n\n # If the relation has a custom through model, we need to subscribe to it.\n if not self.field.rel.through._meta.auto_created: # pylint: disable=protected-access\n signal = ElasticSignal(self, 'process_m2m_through_save', pass_kwargs=True)\n signal.connect(post_save, sender=self.field.rel.through)\n signals.append(signal)\n\n signal = ElasticSignal(self, 'process_m2m_through_pre_delete', pass_kwargs=True)\n signal.connect(pre_delete, sender=self.field.rel.through)\n signals.append(signal)\n\n signal = ElasticSignal(self, 'process_m2m_through_post_delete', pass_kwargs=True)\n signal.connect(post_delete, sender=self.field.rel.through)\n signals.append(signal)\n\n return signals", "response": "Connect signals needed for dependency updates."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndetermines if dependent object should be processed.", "response": "def _filter(self, objects, **kwargs):\n \"\"\"Determine if dependent object should be processed.\"\"\"\n for obj in objects:\n if self.filter(obj, **kwargs) is False:\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_build_kwargs(self, obj, pk_set=None, action=None, update_fields=None, reverse=None, **kwargs):\n if action is None:\n # Check filter before rebuilding index.\n if not self._filter([obj], update_fields=update_fields):\n return\n\n queryset = getattr(obj, self.accessor).all()\n\n # Special handling for relations to self.\n if self.field.rel.model == self.field.rel.related_model:\n queryset = queryset.union(getattr(obj, 
self.field.rel.get_accessor_name()).all())\n\n return {'queryset': queryset}\n else:\n # Update to relation itself, only update the object in question.\n if self.field.rel.model == self.field.rel.related_model:\n # Special case, self-reference, update both ends of the relation.\n pks = set()\n if self._filter(self.model.objects.filter(pk__in=pk_set)):\n pks.add(obj.pk)\n\n if self._filter(self.model.objects.filter(pk__in=[obj.pk])):\n pks.update(pk_set)\n\n return {'queryset': self.index.object_type.objects.filter(pk__in=pks)}\n elif isinstance(obj, self.model):\n # Need to switch the role of object and pk_set.\n result = {'queryset': self.index.object_type.objects.filter(pk__in=pk_set)}\n pk_set = {obj.pk}\n else:\n result = {'obj': obj}\n\n if action != 'post_clear':\n # Check filter before rebuilding index.\n if not self._filter(self.model.objects.filter(pk__in=pk_set)):\n return\n\n return result", "response": "Prepare arguments for rebuilding indices."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrender the queryset of influenced objects and cache it.", "response": "def process_predelete(self, obj, pk_set=None, action=None, update_fields=None, **kwargs):\n \"\"\"Render the queryset of influenced objects and cache it.\"\"\"\n build_kwargs = self._get_build_kwargs(obj, pk_set, action, update_fields, **kwargs)\n self.delete_cache.set(obj, build_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrecreate queryset from the index and rebuild the index.", "response": "def process_delete(self, obj, pk_set=None, action=None, update_fields=None, **kwargs):\n \"\"\"Recreate queryset from the index and rebuild the index.\"\"\"\n build_kwargs = self.delete_cache.take(obj)\n\n if build_kwargs:\n self.index.build(**build_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nprocessing signals from dependencies. 
m2m add signal is processed in two parts.", "response": "def process_m2m(self, obj, pk_set=None, action=None, update_fields=None, cache_key=None, **kwargs):\n \"\"\"Process signals from dependencies.\n\n Remove signal is processed in two parts. For details see:\n :func:`~Dependency.connect`\n \"\"\"\n if action not in (None, 'post_add', 'pre_remove', 'post_remove', 'post_clear'):\n return\n\n if action == 'post_remove':\n build_kwargs = self.remove_cache.take(cache_key)\n else:\n build_kwargs = self._get_build_kwargs(obj, pk_set, action, update_fields, **kwargs)\n\n if action == 'pre_remove':\n self.remove_cache.set(cache_key, build_kwargs)\n return\n\n if build_kwargs:\n self.index.build(**build_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprocessing custom M2M through model actions.", "response": "def _process_m2m_through(self, obj, action):\n \"\"\"Process custom M2M through model actions.\"\"\"\n source = getattr(obj, self.field.rel.field.m2m_field_name())\n target = getattr(obj, self.field.rel.field.m2m_reverse_field_name())\n\n pk_set = set()\n if target:\n pk_set.add(target.pk)\n\n self.process_m2m(source, pk_set, action=action, reverse=False, cache_key=obj)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nprocesses M2M through model post save for custom through model.", "response": "def process_m2m_through_save(self, obj, created=False, **kwargs):\n \"\"\"Process M2M post save for custom through model.\"\"\"\n # We are only interested in signals that establish relations.\n if not created:\n return\n\n self._process_m2m_through(obj, 'post_add')"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nprocesses signals from dependencies.", "response": "def process(self, obj, pk_set=None, action=None, update_fields=None, **kwargs):\n \"\"\"Process signals from dependencies.\"\"\"\n build_kwargs = self._get_build_kwargs(obj, pk_set, action, 
update_fields, **kwargs)\n\n if build_kwargs:\n self.index.build(**build_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _connect_signal(self, index):\n post_save_signal = ElasticSignal(index, 'build')\n post_save_signal.connect(post_save, sender=index.object_type)\n self.signals.append(post_save_signal)\n\n post_delete_signal = ElasticSignal(index, 'remove_object')\n post_delete_signal.connect(post_delete, sender=index.object_type)\n self.signals.append(post_delete_signal)\n\n # Connect signals for all dependencies.\n for dependency in index.get_dependencies():\n # Automatically convert m2m fields to dependencies.\n if isinstance(dependency, (models.ManyToManyField, ManyToManyDescriptor)):\n dependency = ManyToManyDependency(dependency)\n elif not isinstance(dependency, Dependency):\n raise TypeError(\"Unsupported dependency type: {}\".format(repr(dependency)))\n\n signal = dependency.connect(index)\n self.signals.extend(signal)", "response": "Connect signals for building indexes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register_signals(self):\n for index in self.indexes:\n if index.object_type:\n self._connect_signal(index)", "response": "Register signals for all indexes."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef discover_indexes(self):\n self.indexes = []\n\n for app_config in apps.get_app_configs():\n indexes_path = '{}.elastic_indexes'.format(app_config.name)\n try:\n indexes_module = import_module(indexes_path)\n\n for attr_name in dir(indexes_module):\n attr = getattr(indexes_module, attr_name)\n if inspect.isclass(attr) and issubclass(attr, BaseIndex) and attr is not BaseIndex:\n # Make sure that parallel tests have different indices.\n if is_testing():\n index = attr.document_class._index._name # pylint: disable=protected-access\n 
testing_postfix = '_test_{}_{}'.format(TESTING_UUID, os.getpid())\n\n if not index.endswith(testing_postfix):\n # Replace current postfix with the new one.\n if attr.testing_postfix:\n index = index[:-len(attr.testing_postfix)]\n index = index + testing_postfix\n attr.testing_postfix = testing_postfix\n\n attr.document_class._index._name = index # pylint: disable=protected-access\n\n index = attr()\n\n # Apply any extensions defined for the given index. Currently index extensions are\n # limited to extending \"mappings\".\n for extension in composer.get_extensions(attr):\n mapping = getattr(extension, 'mapping', {})\n index.mapping.update(mapping)\n\n self.indexes.append(index)\n except ImportError as ex:\n if not re.match('No module named .*elastic_indexes.*', str(ex)):\n raise", "response": "Find all indexes and add them to the _index_builders attribute."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build(self, obj=None, queryset=None, push=True):\n for index in self.indexes:\n index.build(obj, queryset, push)", "response": "Trigger building of the indexes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npushing built documents to ElasticSearch.", "response": "def push(self, index=None):\n \"\"\"Push built documents to ElasticSearch.\n\n If ``index`` is specified, only that index will be pushed.\n \"\"\"\n for ind in self.indexes:\n if index and not isinstance(ind, index):\n continue\n ind.push()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete all entries from ElasticSearch.", "response": "def delete(self, skip_mapping=False):\n \"\"\"Delete all entries from ElasticSearch.\"\"\"\n for index in self.indexes:\n index.destroy()\n if not skip_mapping:\n index.create_mapping()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeleting all indexes from Elasticsearch and index builder.", "response": "def destroy(self):\n 
\"\"\"Delete all indexes from Elasticsearch and index builder.\"\"\"\n self.unregister_signals()\n for index in self.indexes:\n index.destroy()\n self.indexes = []"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _set_initial(self, C_in, scale_in):\n self.C_in = C_in\n self.scale_in = scale_in", "response": "Set the initial values for parameters and Wilson coefficients at\n the scale scale_in."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _set_initial_wcxf(self, wc, get_smpar=True):\n if wc.eft != 'SMEFT':\n raise ValueError(\"Wilson coefficients use wrong EFT.\")\n if wc.basis != 'Warsaw':\n raise ValueError(\"Wilson coefficients use wrong basis.\")\n self.scale_in = wc.scale\n C = wilson.util.smeftutil.wcxf2arrays_symmetrized(wc.dict)\n # fill in zeros for missing WCs\n for k, s in smeftutil.C_keys_shape.items():\n if k not in C and k not in smeftutil.SM_keys:\n if s == 1:\n C[k] = 0\n else:\n C[k] = np.zeros(s)\n if self.C_in is None:\n self.C_in = C\n else:\n self.C_in.update(C)\n if get_smpar:\n self.C_in.update(self._get_sm_scale_in())", "response": "Load the initial values for the Wilson coefficients from a Wilson coefficients object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _to_wcxf(self, C_out, scale_out):\n C = self._rotate_defaultbasis(C_out)\n d = wilson.util.smeftutil.arrays2wcxf_nonred(C)\n basis = wcxf.Basis['SMEFT', 'Warsaw']\n all_wcs = set(basis.all_wcs) # to speed up lookup\n d = {k: v for k, v in d.items() if k in all_wcs and v != 0}\n d = wcxf.WC.dict2values(d)\n wc = wcxf.WC('SMEFT', 'Warsaw', scale_out, d)\n return wc", "response": "Return the Wilson coefficients C_out as a wcxf. 
WC instance."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _rgevolve(self, scale_out, **kwargs):\n self._check_initial()\n return rge.smeft_evolve(C_in=self.C_in,\n scale_in=self.scale_in,\n scale_out=scale_out,\n **kwargs)", "response": "Solve the SMEFT RGEs from the initial scale to scale_out."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncomputing the leading logarithmic approximation to the solution of the SMEFT RGEs from the initial scale to scale_out.", "response": "def _rgevolve_leadinglog(self, scale_out):\n \"\"\"Compute the leading logarithmic approximation to the solution\n of the SMEFT RGEs from the initial scale to `scale_out`.\n Returns a dictionary with parameters and Wilson coefficients.\n Much faster but less precise that `rgevolve`.\n \"\"\"\n self._check_initial()\n return rge.smeft_evolve_leadinglog(C_in=self.C_in,\n scale_in=self.scale_in,\n scale_out=scale_out)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _run_sm_scale_in(self, C_out, scale_sm=91.1876):\n # initialize an empty SMEFT instance\n smeft_sm = SMEFT(wc=None)\n C_in_sm = smeftutil.C_array2dict(np.zeros(9999))\n # set the SM parameters to the values obtained from smpar.smeftpar\n C_SM = smpar.smeftpar(scale_sm, C_out, basis='Warsaw')\n SM_keys = set(smeftutil.SM_keys) # to speed up lookup\n C_SM = {k: v for k, v in C_SM.items() if k in SM_keys}\n # set the Wilson coefficients at the EW scale to C_out\n C_in_sm.update(C_out)\n C_in_sm.update(C_SM)\n smeft_sm._set_initial(C_in_sm, scale_sm)\n # run up (with 1% relative precision, ignore running of Wilson coefficients)\n C_SM_high = smeft_sm._rgevolve(self.scale_in, newphys=False, rtol=0.001, atol=1)\n C_SM_high = self._rotate_defaultbasis(C_SM_high)\n return {k: v for k, v in C_SM_high.items() if k in SM_keys}", "response": "Get the SM parameters at the EW scale using an estimate C_out 
and run them to the\n input scale."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncomputing the SM MS-bar parameters at the electroweak scale.", "response": "def get_smpar(self, accuracy='integrate', scale_sm=91.1876):\n \"\"\"Compute the SM MS-bar parameters at the electroweak scale.\n\n This method can be used to validate the accuracy of the iterative\n extraction of SM parameters. If successful, the values returned by this\n method should agree with the values in the dictionary\n `wilson.run.smeft.smpar.p`.\"\"\"\n if accuracy == 'integrate':\n C_out = self._rgevolve(scale_sm)\n elif accuracy == 'leadinglog':\n C_out = self._rgevolve_leadinglog(scale_sm)\n else:\n raise ValueError(\"'{}' is not a valid value of 'accuracy' (must be either 'integrate' or 'leadinglog').\".format(accuracy))\n return smpar.smpar(C_out)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrunning the SM up and returning the SM parameters at the input scale.", "response": "def _get_sm_scale_in(self, scale_sm=91.1876):\n \"\"\"Get an estimate of the SM parameters at the input scale by running\n them from the EW scale using constant values for the Wilson coefficients\n (corresponding to their leading log approximated values at the EW\n scale).\n\n Note that this is not guaranteed to work and will fail if some of the\n Wilson coefficients (the ones affecting the extraction of SM parameters)\n are large.\"\"\"\n # initialize a copy of ourselves\n _smeft = SMEFT(self.wc, get_smpar=False)\n # Step 1: run the SM up, using the WCs at scale_input as (constant) estimate\n _smeft.C_in.update(self._run_sm_scale_in(self.C_in, scale_sm=scale_sm))\n # Step 2: run the WCs down in LL approximation\n C_out = _smeft._rgevolve_leadinglog(scale_sm)\n # Step 3: run the SM up again, this time using the WCs at scale_sm as (constant) estimate\n return self._run_sm_scale_in(C_out, scale_sm=scale_sm)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of 
the following Python 3 code\ndef run(self, scale, accuracy='integrate', **kwargs):\n if accuracy == 'integrate':\n C_out = self._rgevolve(scale, **kwargs)\n elif accuracy == 'leadinglog':\n C_out = self._rgevolve_leadinglog(scale)\n else:\n raise ValueError(\"'{}' is not a valid value of 'accuracy' (must be either 'integrate' or 'leadinglog').\".format(accuracy))\n return self._to_wcxf(C_out, scale)", "response": "Returns the Wilson coefficients evolved to the specified scale."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a continuous solution to the RGE as RGsolution instance.", "response": "def run_continuous(self, scale):\n \"\"\"Return a continuous solution to the RGE as `RGsolution` instance.\"\"\"\n if scale == self.scale_in:\n raise ValueError(\"The scale must be different from the input scale\")\n elif scale < self.scale_in:\n scale_min = scale\n scale_max = self.scale_in\n elif scale > self.scale_in:\n scale_max = scale\n scale_min = self.scale_in\n fun = rge.smeft_evolve_continuous(C_in=self.C_in,\n scale_in=self.scale_in,\n scale_out=scale)\n return wilson.classes.RGsolution(fun, scale_min, scale_max)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns SQL filter for filtering by fields in unique_with attribute.", "response": "def _get_unique_constraints(self, instance):\n \"\"\"Return SQL filter for filtering by fields in ``unique_with`` attribute.\n\n Filter is returned as tuple of two elements where first one is\n placeholder which is safe to insert into SQL query and second\n one may include potentially dangerous values and must be passed\n to SQL query in ``params`` attribute to make sure it is properly\n escaped.\n \"\"\"\n constraints_expression = []\n constraints_values = {}\n for field_name in self.unique_with:\n if constants.LOOKUP_SEP in field_name:\n raise NotImplementedError(\n '`unique_with` constraint does not support lookups by 
related models.'\n )\n\n field = instance._meta.get_field(field_name) # pylint: disable=protected-access\n field_value = getattr(instance, field_name)\n\n # Convert value to the database representation.\n field_db_value = field.get_prep_value(field_value)\n\n constraint_key = 'unique_' + field_name\n constraints_expression.append(\"{} = %({})s\".format(\n connection.ops.quote_name(field.column),\n constraint_key\n ))\n constraints_values[constraint_key] = field_db_value\n\n if not constraints_expression:\n return '', []\n\n constraints_expression = 'AND ' + ' AND '.join(constraints_expression)\n return constraints_expression, constraints_values"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_populate_from_value(self, instance):\n if hasattr(self.populate_from, '__call__'):\n # ResolweSlugField(populate_from=lambda instance: ...)\n return self.populate_from(instance)\n else:\n # ResolweSlugField(populate_from='foo')\n attr = getattr(instance, self.populate_from)\n return attr() if callable(attr) else attr", "response": "Get the value from populate_from attribute."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nensures slug uniqueness before save.", "response": "def pre_save(self, instance, add):\n \"\"\"Ensure slug uniqueness before save.\"\"\"\n slug = self.value_from_object(instance)\n\n # We don't want to change slug defined by user.\n predefined_slug = bool(slug)\n\n if not slug and self.populate_from:\n slug = self._get_populate_from_value(instance)\n\n if slug:\n slug = slugify(slug)\n\n if not slug:\n slug = None\n\n if not self.blank:\n slug = instance._meta.model_name # pylint: disable=protected-access\n elif not self.null:\n slug = ''\n\n if slug:\n # Make sure that auto-generated slug with added sequence\n # won't exceed maximal length.\n # Validation of predefined slugs is handled by Django.\n if not predefined_slug:\n slug = slug[:(self.max_length - 
MAX_SLUG_SEQUENCE_DIGITS - 1)]\n\n constraints_placeholder, constraints_values = self._get_unique_constraints(instance)\n\n instance_pk_name = instance._meta.pk.name # pylint: disable=protected-access\n\n # Safe values - make sure that there is no chance of SQL injection.\n query_params = {\n 'constraints_placeholder': constraints_placeholder,\n 'slug_column': connection.ops.quote_name(self.column),\n 'slug_len': len(slug),\n 'table_name': connection.ops.quote_name(self.model._meta.db_table), # pylint: disable=protected-access\n 'pk_neq_placeholder': 'AND {} != %(instance_pk)s'.format(instance_pk_name) if instance.pk else ''\n }\n\n # SQL injection unsafe values - will be escaped.\n # Keys prefixed with `unique_` are reserved for `constraints_values` dict.\n query_escape_params = {\n 'slug': slug,\n 'slug_regex': '^{}(-[0-9]*)?$'.format(slug),\n }\n query_escape_params.update(constraints_values)\n if instance.pk:\n query_escape_params['instance_pk'] = instance.pk\n\n with connection.cursor() as cursor:\n # TODO: Slowest part of this query is `MAX` function. 
It can\n # be optimized by indexing slug column by slug sequence.\n # https://www.postgresql.org/docs/9.4/static/indexes-expressional.html\n cursor.execute(\n \"\"\"\n SELECT\n CASE\n WHEN (\n EXISTS(\n SELECT 1 FROM {table_name} WHERE (\n {slug_column} = %(slug)s\n {pk_neq_placeholder}\n {constraints_placeholder}\n )\n )\n ) THEN MAX(slug_sequence) + 1\n ELSE NULL\n END\n FROM (\n SELECT COALESCE(\n NULLIF(\n RIGHT({slug_column}, -{slug_len}-1),\n ''\n ),\n '1'\n )::text::integer AS slug_sequence\n FROM {table_name} WHERE (\n {slug_column} ~ %(slug_regex)s\n {pk_neq_placeholder}\n {constraints_placeholder}\n )\n ) AS tmp\n \"\"\".format(**query_params),\n params=query_escape_params\n )\n result = cursor.fetchone()[0]\n\n if result is not None:\n if predefined_slug:\n raise SlugError(\n \"Slug '{}' (version {}) is already taken.\".format(slug, instance.version)\n )\n\n if len(str(result)) > MAX_SLUG_SEQUENCE_DIGITS:\n raise SlugError(\n \"Auto-generated slug sequence too long - please choose a different slug.\"\n )\n\n slug = '{}-{}'.format(slug, result)\n\n # Make the updated slug available as instance attribute.\n setattr(instance, self.name, slug)\n\n return slug"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nperform process discovery in given path.", "response": "def discover_process(self, path):\n \"\"\"Perform process discovery in given path.\n\n This method will be called during process registration and\n should return a list of dictionaries with discovered process\n schemas.\n \"\"\"\n if not path.lower().endswith(('.yml', '.yaml')):\n return []\n\n with open(path) as fn:\n schemas = yaml.load(fn, Loader=yaml.FullLoader)\n if not schemas:\n # TODO: Logger.\n # self.stderr.write(\"Could not read YAML file {}\".format(schema_file))\n return []\n\n process_schemas = []\n for schema in schemas:\n if 'run' not in schema:\n continue\n\n # NOTE: This currently assumes that 'bash' is the default.\n if 
schema['run'].get('language', 'bash') != 'workflow':\n continue\n\n process_schemas.append(schema)\n\n return process_schemas"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _evaluate_expressions(self, expression_engine, step_id, values, context):\n if expression_engine is None:\n return values\n\n processed = {}\n for name, value in values.items():\n if isinstance(value, str):\n value = value.strip()\n try:\n expression = expression_engine.get_inline_expression(value)\n if expression is not None:\n # Inline expression.\n value = expression_engine.evaluate_inline(expression, context)\n else:\n # Block expression.\n value = expression_engine.evaluate_block(value, context)\n except EvaluationError as error:\n raise ExecutionError('Error while evaluating expression for step \"{}\":\\n{}'.format(\n step_id, error\n ))\n elif isinstance(value, dict):\n value = self._evaluate_expressions(expression_engine, step_id, value, context)\n\n processed[name] = value\n\n return processed", "response": "Recursively evaluate expressions in a dictionary of values."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nevaluates the code needed to compute a given Data object.", "response": "def evaluate(self, data):\n \"\"\"Evaluate the code needed to compute a given Data object.\"\"\"\n expression_engine = data.process.requirements.get('expression-engine', None)\n if expression_engine is not None:\n expression_engine = self.get_expression_engine(expression_engine)\n\n # Parse steps.\n steps = data.process.run.get('program', None)\n if steps is None:\n return\n\n if not isinstance(steps, list):\n raise ExecutionError('Workflow program must be a list of steps.')\n\n # Expression engine evaluation context.\n context = {\n 'input': data.input,\n 'steps': collections.OrderedDict(),\n }\n\n for index, step in enumerate(steps):\n try:\n step_id = step['id']\n step_slug = step['run']\n 
except KeyError as error:\n raise ExecutionError('Incorrect definition of step \"{}\", missing property \"{}\".'.format(\n step.get('id', index), error\n ))\n\n # Fetch target process.\n process = Process.objects.filter(slug=step_slug).order_by('-version').first()\n if not process:\n raise ExecutionError('Incorrect definition of step \"{}\", invalid process \"{}\".'.format(\n step_id, step_slug\n ))\n\n # Process all input variables.\n step_input = step.get('input', {})\n if not isinstance(step_input, dict):\n raise ExecutionError('Incorrect definition of step \"{}\", input must be a dictionary.'.format(\n step_id\n ))\n\n data_input = self._evaluate_expressions(expression_engine, step_id, step_input, context)\n\n # Create the data object.\n data_object = Data.objects.create(\n process=process,\n contributor=data.contributor,\n tags=data.tags,\n input=data_input,\n )\n DataDependency.objects.create(\n parent=data,\n child=data_object,\n kind=DataDependency.KIND_SUBPROCESS,\n )\n\n # Copy permissions.\n copy_permissions(data, data_object)\n\n # Copy collections.\n for collection in data.collection_set.all():\n collection.data.add(data_object)\n\n context['steps'][step_id] = data_object.pk\n\n # Immediately set our status to done and output all data object identifiers.\n data.output = {\n 'steps': list(context['steps'].values()),\n }\n data.status = Data.STATUS_DONE"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a connection to the Redis server.", "response": "async def init():\n \"\"\"Create a connection to the Redis server.\"\"\"\n global redis_conn # pylint: disable=global-statement,invalid-name\n conn = await aioredis.create_connection(\n 'redis://{}:{}'.format(\n SETTINGS.get('FLOW_EXECUTOR', {}).get('REDIS_CONNECTION', {}).get('host', 'localhost'),\n SETTINGS.get('FLOW_EXECUTOR', {}).get('REDIS_CONNECTION', {}).get('port', 56379)\n ),\n db=int(SETTINGS.get('FLOW_EXECUTOR', {}).get('REDIS_CONNECTION', 
{}).get('db', 1))\n )\n redis_conn = aioredis.Redis(conn)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\nasync def send_manager_command(cmd, expect_reply=True, extra_fields={}):\n packet = {\n ExecutorProtocol.DATA_ID: DATA['id'],\n ExecutorProtocol.COMMAND: cmd,\n }\n packet.update(extra_fields)\n\n logger.debug(\"Sending command to listener: {}\".format(json.dumps(packet)))\n\n # TODO what happens here if the push fails? we don't have any realistic recourse,\n # so just let it explode and stop processing\n queue_channel = EXECUTOR_SETTINGS['REDIS_CHANNEL_PAIR'][0]\n try:\n await redis_conn.rpush(queue_channel, json.dumps(packet))\n except Exception:\n logger.error(\"Error sending command to manager:\\n\\n{}\".format(traceback.format_exc()))\n raise\n\n if not expect_reply:\n return\n\n for _ in range(_REDIS_RETRIES):\n response = await redis_conn.blpop(QUEUE_RESPONSE_CHANNEL, timeout=1)\n if response:\n break\n else:\n # NOTE: If there's still no response after a few seconds, the system is broken\n # enough that it makes sense to give up; we're isolated here, so if the manager\n # doesn't respond, we can't really do much more than just crash\n raise RuntimeError(\"No response from the manager after {} retries.\".format(_REDIS_RETRIES))\n\n _, item = response\n result = json.loads(item.decode('utf-8'))[ExecutorProtocol.RESULT]\n assert result in [ExecutorProtocol.RESULT_OK, ExecutorProtocol.RESULT_ERROR]\n\n if result == ExecutorProtocol.RESULT_OK:\n return True\n\n return False", "response": "Send a properly formatted command to the manager."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nextracts values from one SciObj.", "response": "def extract_values_query(query, field_list, out_stream=None):\n \"\"\"Get list of dicts where each dict holds values from one SciObj.\n\n Args:\n field_list: list of str\n List of field names for which to return 
values. Must be strings from\n FIELD_NAME_TO_generate_dict.keys().\n\n If None, return all fields.\n\n filter_arg_dict: dict\n Dict of arguments to pass to ``ScienceObject.objects.filter()``.\n\n out_stream: open file-like object\n If provided, the JSON doc is streamed out instead of buffered in memory.\n\n Returns:\n list of dict: The keys in the returned dict correspond to the field names in\n ``field_list``.\n\n Raises:\n raise d1_common.types.exceptions.InvalidRequest() if ``field_list`` contains any\n invalid field names. A list of the invalid fields is included in the exception.\n\n \"\"\"\n lookup_dict, generate_dict = _split_field_list(field_list)\n\n query, annotate_key_list = _annotate_query(query, generate_dict)\n # return query, annotate_key_list\n #\n # query, annotate_key_list = _create_query(filter_arg_dict, generate_dict)\n\n lookup_list = [v[\"lookup_str\"] for k, v in lookup_dict.items()] + annotate_key_list\n\n if out_stream is None:\n return _create_sciobj_list(query, lookup_list, lookup_dict, generate_dict)\n else:\n return _write_stream(query, lookup_list, lookup_dict, generate_dict, out_stream)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef extract_values(field_list=None, filter_arg_dict=None, out_stream=None):\n lookup_dict, generate_dict = _split_field_list(field_list)\n query, annotate_key_list = _create_query(filter_arg_dict, generate_dict)\n\n lookup_list = [v[\"lookup_str\"] for k, v in lookup_dict.items()] + annotate_key_list\n\n if out_stream is None:\n return _create_sciobj_list(query, lookup_list, lookup_dict, generate_dict)\n else:\n return _write_stream(query, lookup_list, lookup_dict, generate_dict, out_stream)", "response": "Extract values from one SciObj."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nraise an exception if the given list contains invalid field names.", "response": "def assert_invalid_field_list(field_list):\n \"\"\"raise 
d1_common.types.exceptions.InvalidRequest() if ``field_list`` contains any\n invalid field names. A list of the invalid fields is included in the exception.\n\n - Implicitly called by ``extract_values()``.\n\n \"\"\"\n if field_list is not None:\n invalid_field_list = [\n v for v in field_list if v not in get_valid_field_name_list()\n ]\n if invalid_field_list:\n raise d1_common.types.exceptions.InvalidRequest(\n 0, \"Invalid fields: {}\".format(\", \".join(invalid_field_list))\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding annotations to the query to retrieve values required by field value generate functions.", "response": "def _annotate_query(query, generate_dict):\n \"\"\"Add annotations to the query to retrieve values required by field value generate\n functions.\"\"\"\n annotate_key_list = []\n for field_name, annotate_dict in generate_dict.items():\n for annotate_name, annotate_func in annotate_dict[\"annotate_dict\"].items():\n query = annotate_func(query)\n annotate_key_list.append(annotate_name)\n return query, annotate_key_list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a dict where the keys are the requested field names from the values returned by Django.", "response": "def _value_list_to_sciobj_dict(\n sciobj_value_list, lookup_list, lookup_dict, generate_dict\n):\n \"\"\"Create a dict where the keys are the requested field names, from the values\n returned by Django.\"\"\"\n\n sciobj_dict = {}\n\n # for sciobj_value, lookup_str in zip(sciobj_value_list, lookup_list):\n lookup_to_value_dict = {k: v for k, v in zip(lookup_list, sciobj_value_list)}\n\n for field_name, r_dict in lookup_dict.items():\n if r_dict[\"lookup_str\"] in lookup_to_value_dict.keys():\n sciobj_dict[field_name] = lookup_to_value_dict[r_dict[\"lookup_str\"]]\n\n for field_name, annotate_dict in generate_dict.items():\n for final_name, generate_func in annotate_dict[\"generate_dict\"].items():\n 
sciobj_dict[field_name] = generate_func(lookup_to_value_dict)\n\n return sciobj_dict"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _split_field_list(field_list):\n lookup_dict = {}\n generate_dict = {}\n\n for field_name in field_list or FIELD_NAME_TO_EXTRACT_DICT.keys():\n try:\n extract_dict = FIELD_NAME_TO_EXTRACT_DICT[field_name]\n except KeyError:\n assert_invalid_field_list(field_list)\n else:\n if \"lookup_str\" in extract_dict:\n lookup_dict[field_name] = extract_dict\n else:\n generate_dict[field_name] = extract_dict\n\n return lookup_dict, generate_dict", "response": "Split the list of fields for which to extract values into dicts grouped by extraction\n method."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the PyXB binding to use when handling a request.", "response": "def dataoneTypes(request):\n \"\"\"Return the PyXB binding to use when handling a request.\"\"\"\n if is_v1_api(request):\n return d1_common.types.dataoneTypes_v1_1\n elif is_v2_api(request) or is_diag_api(request):\n return d1_common.types.dataoneTypes_v2_0\n else:\n raise d1_common.types.exceptions.ServiceFailure(\n 0, 'Unknown version designator in URL. url=\"{}\"'.format(request.path)\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing an ISO 8601 date-time and returning a datetime with the timezone adjusted to UTC.", "response": "def parse_and_normalize_url_date(date_str):\n \"\"\"Parse an ISO 8601 date-time with optional timezone.\n\n - Return a datetime with the timezone adjusted to UTC.\n - A naive date-time is assumed to already be in UTC.\n\n \"\"\"\n if date_str is None:\n return None\n try:\n return d1_common.date_time.dt_from_iso8601_str(date_str)\n except d1_common.date_time.iso8601.ParseError as e:\n raise d1_common.types.exceptions.InvalidRequest(\n 0,\n 'Invalid date format for URL parameter. 
date=\"{}\" error=\"{}\"'.format(\n date_str, str(e)\n ),\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving the specified document.", "response": "def get(self, doc_id):\n \"\"\"Retrieve the specified document.\"\"\"\n resp_dict = self._get_query(q='id:{}'.format(doc_id))\n if resp_dict['response']['numFound'] > 0:\n return resp_dict['response']['docs'][0]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nretrieve a list of identifiers for documents matching the query.", "response": "def get_ids(self, start=0, rows=1000, **query_dict):\n \"\"\"Retrieve a list of identifiers for documents matching the query.\"\"\"\n resp_dict = self._get_query(start=start, rows=rows, **query_dict)\n return {\n 'matches': resp_dict['response']['numFound'],\n 'start': start,\n 'ids': [d['id'] for d in resp_dict['response']['docs']],\n }"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the number of entries that match query_dict.", "response": "def count(self, **query_dict):\n \"\"\"Return the number of entries that match query.\"\"\"\n param_dict = query_dict.copy()\n param_dict['count'] = 0\n resp_dict = self._get_query(**param_dict)\n return resp_dict['response']['numFound']"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_field_values(self, name, maxvalues=-1, sort=True, **query_dict):\n param_dict = query_dict.copy()\n param_dict.update(\n {\n 'rows': '0',\n 'facet': 'true',\n 'facet.field': name,\n 'facet.limit': str(maxvalues),\n 'facet.zeros': 'false',\n 'facet.sort': str(sort).lower(),\n }\n )\n resp_dict = self._post_query(**param_dict)\n result_dict = resp_dict['facet_counts']['facet_fields']\n result_dict['numFound'] = resp_dict['response']['numFound']\n return result_dict", "response": "Retrieve the unique values for a field."} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate a brief explanation for the following Python 3 code\ndef field_alpha_histogram(\n self, name, n_bins=10, include_queries=True, **query_dict\n ):\n \"\"\"Generates a histogram of values from a string field.\n\n Output is: [[low, high, count, query], ... ]. Bin edges is determined by equal\n division of the fields.\n\n \"\"\"\n bin_list = []\n q_bin = []\n try:\n # get total number of values for the field\n # TODO: this is a slow mechanism to retrieve the number of distinct values\n # Need to replace this with something more efficient.\n # Can probably replace with a range of alpha chars - need to check on\n # case sensitivity\n param_dict = query_dict.copy()\n f_vals = self.get_field_values(name, maxvalues=-1, **param_dict)\n n_values = len(f_vals[name]) // 2\n if n_values < n_bins:\n n_bins = n_values\n if n_values == n_bins:\n # Use equivalence instead of range queries to retrieve the\n # values\n for i in range(n_bins):\n a_bin = [f_vals[name][i * 2], f_vals[name][i * 2], 0]\n bin_q = '{}:{}'.format(\n name, self._prepare_query_term(name, a_bin[0])\n )\n q_bin.append(bin_q)\n bin_list.append(a_bin)\n else:\n delta = n_values / n_bins\n if delta == 1:\n # Use equivalence queries, except the last one which includes the\n # remainder of terms\n for i in range(n_bins - 1):\n a_bin = [f_vals[name][i * 2], f_vals[name][i * 2], 0]\n bin_q = '{}:{}'.format(\n name, self._prepare_query_term(name, a_bin[0])\n )\n q_bin.append(bin_q)\n bin_list.append(a_bin)\n term = f_vals[name][(n_bins - 1) * 2]\n a_bin = [term, f_vals[name][((n_values - 1) * 2)], 0]\n bin_q = '{}:[{} TO *]'.format(\n name, self._prepare_query_term(name, term)\n )\n q_bin.append(bin_q)\n bin_list.append(a_bin)\n else:\n # Use range for all terms\n # now need to page through all the values and get those at\n # the edges\n c_offset = 0.0\n delta = float(n_values) / float(n_bins)\n for i in range(n_bins):\n idx_l = int(c_offset) * 2\n idx_u = (int(c_offset + delta) * 2) - 2\n a_bin = 
[f_vals[name][idx_l], f_vals[name][idx_u], 0]\n # logger.info(str(a_bin))\n try:\n if i == 0:\n bin_q = '{}:[* TO {}]'.format(\n name, self._prepare_query_term(name, a_bin[1])\n )\n elif i == n_bins - 1:\n bin_q = '{}:[{} TO *]'.format(\n name, self._prepare_query_term(name, a_bin[0])\n )\n else:\n bin_q = '{}:[{} TO {}]'.format(\n name,\n self._prepare_query_term(name, a_bin[0]),\n self._prepare_query_term(name, a_bin[1]),\n )\n except Exception:\n self._log.exception('Exception:')\n raise\n q_bin.append(bin_q)\n bin_list.append(a_bin)\n c_offset = c_offset + delta\n # now execute the facet query request\n param_dict = query_dict.copy()\n param_dict.update(\n {\n 'rows': '0',\n 'facet': 'true',\n 'facet.field': name,\n 'facet.limit': '1',\n 'facet.mincount': 1,\n 'facet.query': [sq.encode('utf-8') for sq in q_bin],\n }\n )\n resp_dict = self._post_query(**param_dict)\n for i in range(len(bin_list)):\n v = resp_dict['facet_counts']['facet_queries'][q_bin[i]]\n bin_list[i][2] = v\n if include_queries:\n bin_list[i].append(q_bin[i])\n except Exception:\n self._log.exception('Exception')\n raise\n return bin_list", "response": "Generates a histogram of values from a string field."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_docs(self, docs):\n return self.query(\n 'solr',\n '{}'.format(\n ''.join([self._format_add(fields) for fields in docs])\n ),\n do_post=True,\n )", "response": "docs is a list of fields that are a dictionary of name : value for a record."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _coerce_type(self, field_type, value):\n if value is None:\n return None\n if field_type == 'string':\n return str(value)\n elif field_type == 'text':\n return str(value)\n elif field_type == 'int':\n try:\n v = int(value)\n return str(v)\n except:\n return None\n elif field_type == 'float':\n try:\n v = float(value)\n return str(v)\n except:\n 
return None\n elif field_type == 'date':\n try:\n v = datetime.datetime(\n value['year'],\n value['month'],\n value['day'],\n value['hour'],\n value['minute'],\n value['second'],\n )\n v = v.strftime('%Y-%m-%dT%H:%M:%S.0Z')\n return v\n except:\n return None\n return str(value)", "response": "Coerce the value into the Solr field type."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_solr_type(self, field):\n field_type = 'string'\n try:\n field_type = FIELD_TYPE_CONVERSION_MAP[field]\n return field_type\n except:\n pass\n fta = field.split('_')\n if len(fta) > 1:\n ft = fta[len(fta) - 1]\n try:\n field_type = FIELD_TYPE_CONVERSION_MAP[ft]\n # cache the type so it's used next time\n FIELD_TYPE_CONVERSION_MAP[field] = field_type\n except:\n pass\n return field_type", "response": "Returns the Solr type of the specified field name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_query(self, **query_dict):\n param_dict = query_dict.copy()\n return self._send_query(do_post=False, **param_dict)", "response": "Perform a GET query against Solr and return the response as a Python dict."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nperforms a POST query against Solr and return the response as a Python dict.", "response": "def _post_query(self, **query_dict):\n \"\"\"Perform a POST query against Solr and return the response as a Python\n dict.\"\"\"\n param_dict = query_dict.copy()\n return self._send_query(do_post=True, **param_dict)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nperforming a query against Solr and return the response as a Python dict.", "response": "def _send_query(self, do_post=False, **query_dict):\n \"\"\"Perform a query against Solr and return the response as a Python dict.\"\"\"\n # self._prepare_query_term()\n param_dict = 
query_dict.copy()\n param_dict.setdefault('wt', 'json')\n param_dict.setdefault('q', '*:*')\n param_dict.setdefault('fl', '*')\n return self.query('solr', '', do_post=do_post, query=param_dict)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _prepare_query_term(self, field, term):\n if term == \"*\":\n return term\n add_star = False\n if term[len(term) - 1] == '*':\n add_star = True\n term = term[0 : len(term) - 1]\n term = self._escape_query_term(term)\n if add_star:\n term = '{}*'.format(term)\n if self._get_solr_type(field) in ['string', 'text', 'text_ws']:\n return '\"{}\"'.format(term)\n return term", "response": "Prepare a query term for inclusion in a query."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nescape a query term for inclusion in a query.", "response": "def _escape_query_term(self, term):\n \"\"\"Escape a query term for inclusion in a query.\n\n - Also see: prepare_query_term().\n\n \"\"\"\n term = term.replace('\\\\', '\\\\\\\\')\n for c in RESERVED_CHAR_LIST:\n term = term.replace(c, r'\\{}'.format(c))\n return term"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nretrieves the next page of results from the service.", "response": "def _next_page(self, offset):\n \"\"\"Retrieves the next set of results from the service.\"\"\"\n self._log.debug(\"Iterator c_record={}\".format(self.c_record))\n page_size = self.page_size\n if (offset + page_size) > self.max_records:\n page_size = self.max_records - offset\n param_dict = self.query_dict.copy()\n param_dict.update(\n {\n 'start': str(offset),\n 'rows': str(page_size),\n 'explainOther': '',\n 'hl.fl': '',\n }\n )\n self.res = self.client.search(**param_dict)\n self._num_hits = int(self.res['response']['numFound'])"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nretrieves the next page of results from the 
service.", "response": "def _next_page(self, offset):\n \"\"\"Retrieves the next set of results from the service.\"\"\"\n self._log.debug(\"Iterator c_record={}\".format(self.c_record))\n param_dict = self.query_dict.copy()\n param_dict.update(\n {\n 'rows': '0',\n 'facet': 'true',\n 'facet.limit': str(self.page_size),\n 'facet.offset': str(offset),\n 'facet.zeros': 'false',\n }\n )\n resp_dict = self.client._post_query(**param_dict)\n try:\n self.res = resp_dict['facet_counts']['facet_fields'][self.field]\n self._log.debug(self.res)\n except Exception:\n self.res = []\n self.index = 0"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmigrating flow_collection field to entity_type.", "response": "def migrate_flow_collection(apps, schema_editor):\n \"\"\"Migrate 'flow_collection' field to 'entity_type'.\"\"\"\n Process = apps.get_model('flow', 'Process')\n DescriptorSchema = apps.get_model('flow', 'DescriptorSchema')\n\n for process in Process.objects.all():\n process.entity_type = process.flow_collection\n process.entity_descriptor_schema = process.flow_collection\n\n if (process.entity_descriptor_schema is not None and\n not DescriptorSchema.objects.filter(slug=process.entity_descriptor_schema).exists()):\n raise LookupError(\n \"Descriptor schema '{}' referenced in 'entity_descriptor_schema' not \"\n \"found.\".format(process.entity_descriptor_schema)\n )\n\n process.save()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmap a DataONE API version tag to a PyXB binding.", "response": "def get_pyxb_binding_by_api_version(api_major, api_minor=0):\n \"\"\"Map DataONE API version tag to PyXB binding.\n\n Given a DataONE API major version number, return the PyXB binding that can\n serialize and deserialize DataONE XML docs of that version.\n\n Args:\n 
api_major, api_minor: str or int\n DataONE API major and minor version numbers.\n\n - If ``api_major`` is an integer, it is combined with ``api_minor`` to form an\n exact version.\n\n - If ``api_major`` is a string of ``v1`` or ``v2``, ``api_minor`` is ignored\n and the latest PyXB binding available for the ``api_major`` version is\n returned.\n\n Returns:\n PyXB binding: E.g., ``d1_common.types.dataoneTypes_v1_1``.\n\n \"\"\"\n try:\n return VERSION_TO_BINDING_DICT[api_major, api_minor]\n except KeyError:\n raise ValueError(\n 'Unknown DataONE API version: {}.{}'.format(api_major, api_minor)\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nextract a DataONE API version tag from an MN or CN service endpoint URL.", "response": "def extract_version_tag_from_url(url):\n \"\"\"Extract a DataONE API version tag from an MN or CN service endpoint URL.\n\n Args:\n url : str\n Service endpoint URL. E.g.: ``https://mn.example.org/path/v2/object/pid``.\n\n Returns:\n str : Valid version tags are currently ``v1`` or ``v2``.\n\n \"\"\"\n m = re.search(r'(/|^)(v\\d)(/|$)', url)\n if not m:\n return None\n return m.group(2)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef str_to_v1_str(xml_str):\n if str_is_v1(xml_str):\n return xml_str\n etree_obj = str_to_etree(xml_str)\n strip_v2_elements(etree_obj)\n etree_replace_namespace(etree_obj, d1_common.types.dataoneTypes_v1.Namespace)\n return etree_to_str(etree_obj)", "response": "Convert an API v2 XML doc to an API v1 XML doc."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting an API v1 XML doc to an API v2 XML doc.", "response": "def str_to_v2_str(xml_str):\n \"\"\"Convert an API v1 XML doc to an API v2 XML doc.\n\n All v1 elements are valid for v2, so only the namespace changes.\n\n Args:\n xml_str : str\n API v1 XML doc. E.g.: ``SystemMetadata v1``.\n\n Returns:\n str : API v2 XML doc. 
E.g.: ``SystemMetadata v2``.\n\n \"\"\"\n if str_is_v2(xml_str):\n return xml_str\n etree_obj = str_to_etree(xml_str)\n etree_replace_namespace(etree_obj, d1_common.types.dataoneTypes_v2_0.Namespace)\n return etree_to_str(etree_obj)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks if the XML string is well formed.", "response": "def str_is_well_formed(xml_str):\n \"\"\"\n Args:\n xml_str : str\n DataONE API XML doc.\n\n Returns:\n bool: **True** if XML doc is well formed.\n \"\"\"\n try:\n str_to_etree(xml_str)\n except xml.etree.ElementTree.ParseError:\n return False\n else:\n return True"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef pyxb_is_v1(pyxb_obj):\n # TODO: Will not detect v1.2 as v1.\n return (\n pyxb_obj._element().name().namespace()\n == d1_common.types.dataoneTypes_v1.Namespace\n )", "response": "Returns True if pyxb_obj holds an API v1 type."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pyxb_is_v2(pyxb_obj):\n return (\n pyxb_obj._element().name().namespace()\n == d1_common.types.dataoneTypes_v2_0.Namespace\n )", "response": "Returns True if pyxb_obj holds an API v2 type."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef replace_namespace_with_prefix(tag_str, ns_reverse_dict=None):\n ns_reverse_dict = ns_reverse_dict or NS_REVERSE_DICT\n for namespace_str, prefix_str in ns_reverse_dict.items():\n tag_str = tag_str.replace(\n '{{{}}}'.format(namespace_str), '{}:'.format(prefix_str)\n )\n return tag_str", "response": "Convert XML tag names from the fully qualified ``{namespace}tag`` form to the prefixed form, e.g., ``ore:ResourceMap``."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef strip_system_metadata(etree_obj):\n for series_id_el in etree_obj.findall('seriesId'):\n 
etree_obj.remove(series_id_el)\n for media_type_el in etree_obj.findall('mediaType'):\n etree_obj.remove(media_type_el)\n for file_name_el in etree_obj.findall('fileName'):\n etree_obj.remove(file_name_el)", "response": "Remove, in place, elements and attributes that are only valid in v2 types from a v1 SystemMetadata."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a new replica from a sysmeta_pyxb and a sciobj bytestream.", "response": "def _create_replica(self, sysmeta_pyxb, sciobj_bytestream):\n \"\"\"GMN handles replicas differently from native objects, with the main\n differences being related to handling of restrictions related to revision chains\n and SIDs.\n\n So this create sequence differs significantly from the regular one that is\n accessed through MNStorage.create().\n\n \"\"\"\n pid = d1_common.xml.get_req_val(sysmeta_pyxb.identifier)\n self._assert_is_pid_of_local_unprocessed_replica(pid)\n self._check_and_create_replica_revision(sysmeta_pyxb, \"obsoletes\")\n self._check_and_create_replica_revision(sysmeta_pyxb, \"obsoletedBy\")\n sciobj_url = d1_gmn.app.sciobj_store.get_rel_sciobj_file_url_by_pid(pid)\n sciobj_model = d1_gmn.app.sysmeta.create_or_update(sysmeta_pyxb, sciobj_url)\n self._store_science_object_bytes(pid, sciobj_bytestream)\n d1_gmn.app.event_log.create_log_entry(\n sciobj_model, \"create\", \"0.0.0.0\", \"[replica]\", \"[replica]\"\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nimporting only the Science Objects specified by a text file. 
The file must be UTF-8 encoded and contain one PID or SID per line.", "response": "async def restricted_import(self, async_client, node_type):\n \"\"\"Import only the Science Objects specified by a text file.\n\n The file must be UTF-8 encoded and contain one PID or SID per line.\n\n \"\"\"\n item_task_name = \"Importing objects\"\n pid_path = self.options['pid_path']\n\n if not os.path.exists(pid_path):\n raise FileNotFoundError('File does not exist: {}'.format(pid_path))\n\n with open(pid_path, encoding='UTF-8') as pid_file:\n self.progress_logger.start_task_type(\n item_task_name, len(pid_file.readlines())\n )\n pid_file.seek(0)\n\n for pid in pid_file.readlines():\n pid = pid.strip()\n self.progress_logger.start_task(item_task_name)\n\n # Ignore any blank lines in the file\n if not pid:\n continue\n\n await self.import_aggregated(async_client, pid)\n\n self.progress_logger.end_task_type(item_task_name)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\nasync def import_aggregated(self, async_client, pid):\n self._logger.info('Importing: {}'.format(pid))\n\n task_set = set()\n\n object_info_pyxb = d1_common.types.dataoneTypes.ObjectInfo()\n object_info_pyxb.identifier = pid\n task_set.add(self.import_object(async_client, object_info_pyxb))\n\n result_set, task_set = await asyncio.wait(task_set)\n\n assert len(result_set) == 1\n assert not task_set\n\n sysmeta_pyxb = result_set.pop().result()\n\n if not sysmeta_pyxb:\n # Import was skipped\n return\n\n assert d1_common.xml.get_req_val(sysmeta_pyxb.identifier) == pid\n\n if d1_gmn.app.did.is_resource_map_db(pid):\n for member_pid in d1_gmn.app.resource_map.get_resource_map_members_by_map(\n pid\n ):\n self.progress_logger.event(\"Importing aggregated SciObj\")\n self._logger.info('Importing aggregated SciObj. 
pid=\"{}\"'.format(pid))\n await self.import_aggregated(async_client, member_pid)", "response": "Imports the SciObj at pid."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the proxy location URL for an object.", "response": "async def get_object_proxy_location(self, client, pid):\n \"\"\"If object is proxied, return the proxy location URL.\n\n If object is local, return None.\n\n \"\"\"\n try:\n return (await client.describe(pid)).get(\"DataONE-Proxy\")\n except d1_common.types.exceptions.DataONEException:\n # Workaround for older GMNs that return 500 instead of 404 for describe()\n pass"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a dict of arguments that will be passed to listObjects.", "response": "def get_list_objects_arg_dict(self, node_type):\n \"\"\"Create a dict of arguments that will be passed to listObjects().\n\n If {node_type} is a CN, add filtering to include only objects from this GMN\n instance in the ObjectList returned by CNCore.listObjects().\n\n \"\"\"\n arg_dict = {\n # Restrict query for faster debugging\n # \"fromDate\": datetime.datetime(2017, 1, 1),\n # \"toDate\": datetime.datetime(2017, 1, 10),\n }\n if node_type == \"cn\":\n arg_dict[\"nodeId\"] = django.conf.settings.NODE_IDENTIFIER\n return arg_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_log_records_arg_dict(self, node_type):\n arg_dict = {\n # Restrict query for faster debugging\n # \"fromDate\": datetime.datetime(2017, 1, 1),\n # \"toDate\": datetime.datetime(2017, 1, 3),\n }\n if node_type == \"cn\":\n arg_dict[\"nodeId\"] = django.conf.settings.NODE_IDENTIFIER\n return arg_dict", "response": "Create a dict of arguments that will be passed to getLogRecords."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns True if the node at base_url is a CN and False if it is an MN, raising a DataONEException if it is neither", 
"response": "async def is_cn(self, client):\n \"\"\"Return True if node at {base_url} is a CN, False if it is an MN.\n\n Raise a DataONEException if it's not a functional CN or MN.\n\n \"\"\"\n node_pyxb = await client.get_capabilities()\n return d1_common.type_conversions.pyxb_get_type_name(node_pyxb) == \"NodeList\""} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\nasync def probe_node_type_major(self, client):\n try:\n node_pyxb = await self.get_node_doc(client)\n except d1_common.types.exceptions.DataONEException as e:\n raise django.core.management.base.CommandError(\n \"Could not find a functional CN or MN at the provided BaseURL. \"\n 'base_url=\"{}\" error=\"{}\"'.format(\n self.options[\"baseurl\"], e.friendly_format()\n )\n )\n\n is_cn = d1_common.type_conversions.pyxb_get_type_name(node_pyxb) == \"NodeList\"\n\n if is_cn:\n self.assert_is_known_node_id(\n node_pyxb, django.conf.settings.NODE_IDENTIFIER\n )\n self._logger.info(\n \"Importing from CN: {}. 
filtered on MN: {}\".format(\n d1_common.xml.get_req_val(\n self.find_node(node_pyxb, self.options[\"baseurl\"]).identifier\n ),\n django.conf.settings.NODE_IDENTIFIER,\n )\n )\n return \"cn\", \"v2\"\n else:\n self._logger.info(\n \"Importing from MN: {}\".format(\n d1_common.xml.get_req_val(node_pyxb.identifier)\n )\n )\n return \"mn\", self.find_node_api_version(node_pyxb)", "response": "Determine if import source node is a CN or MN and which major version API to\n use."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_node(self, node_list_pyxb, base_url):\n for node_pyxb in node_list_pyxb.node:\n if node_pyxb.baseURL == base_url:\n return node_pyxb", "response": "Search NodeList for Node that has base_url.\n\n Return matching Node or None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinding the highest API major version supported by node.", "response": "def find_node_api_version(self, node_pyxb):\n \"\"\"Find the highest API major version supported by node.\"\"\"\n max_major = 0\n for s in node_pyxb.services.service:\n max_major = max(max_major, int(s.version[1:]))\n return max_major"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsearch NodeList for Node with id.", "response": "def find_node_by_id(self, node_list_pyxb, node_id):\n \"\"\"Search NodeList for Node with {node_id}.\n\n Return matching Node or None\n\n \"\"\"\n for node_pyxb in node_list_pyxb.node:\n # if node_pyxb.baseURL == base_url:\n if d1_common.xml.get_req_val(node_pyxb.identifier) == node_id:\n return node_pyxb"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrunning the celery process executor.", "response": "def celery_run(data_id, runtime_dir, argv):\n \"\"\"Run process executor.\n\n :param data_id: The id of the :class:`~resolwe.flow.models.Data`\n object to be processed.\n :param runtime_dir: The directory from which to run the 
executor.\n :param argv: The argument vector used to run the executor.\n \"\"\"\n subprocess.Popen(\n argv,\n cwd=runtime_dir,\n stdin=subprocess.DEVNULL\n ).wait()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef archive_sciobj(pid):\n sciobj_model = d1_gmn.app.model_util.get_sci_model(pid)\n sciobj_model.is_archived = True\n sciobj_model.save()\n _update_modified_timestamp(sciobj_model)", "response": "Set the status of an object to archived."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates or updates the database representation of a System Metadata object and closely related internal state.", "response": "def create_or_update(sysmeta_pyxb, sciobj_url=None):\n \"\"\"Create or update database representation of a System Metadata object and closely\n related internal state.\n\n - If ``sciobj_url`` is not passed on create, storage in the internal sciobj store\n is assumed\n - If ``sciobj_url`` is passed on create, it can reference a location in the\n internal sciobj store, or an arbitrary location on disk, or a remote web server.\n See the sciobj_store module for more information\n - If ``sciobj_url`` is not passed on update, the sciobj location remains unchanged\n - If ``sciobj_url`` is passed on update, the sciobj location is updated\n\n Preconditions:\n - All values in ``sysmeta_pyxb`` must be valid for the operation being performed\n\n \"\"\"\n # TODO: Make sure that old sections are removed if not included in update.\n\n pid = d1_common.xml.get_req_val(sysmeta_pyxb.identifier)\n\n if sciobj_url is None:\n sciobj_url = d1_gmn.app.sciobj_store.get_rel_sciobj_file_url_by_pid(pid)\n\n try:\n sci_model = d1_gmn.app.model_util.get_sci_model(pid)\n except d1_gmn.app.models.ScienceObject.DoesNotExist:\n sci_model = d1_gmn.app.models.ScienceObject()\n sci_model.pid = d1_gmn.app.did.get_or_create_did(pid)\n sci_model.url = sciobj_url\n 
sci_model.serial_version = sysmeta_pyxb.serialVersion\n sci_model.uploaded_timestamp = d1_common.date_time.normalize_datetime_to_utc(\n sysmeta_pyxb.dateUploaded\n )\n\n _base_pyxb_to_model(sci_model, sysmeta_pyxb)\n\n sci_model.save()\n\n if _has_media_type_pyxb(sysmeta_pyxb):\n _media_type_pyxb_to_model(sci_model, sysmeta_pyxb)\n\n _access_policy_pyxb_to_model(sci_model, sysmeta_pyxb)\n\n if _has_replication_policy_pyxb(sysmeta_pyxb):\n _replication_policy_pyxb_to_model(sci_model, sysmeta_pyxb)\n\n replica_pyxb_to_model(sci_model, sysmeta_pyxb)\n revision_pyxb_to_model(sci_model, sysmeta_pyxb, pid)\n\n sci_model.save()\n\n return sci_model"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating or update the database representation of sysmeta_pyxb access policy.", "response": "def _access_policy_pyxb_to_model(sci_model, sysmeta_pyxb):\n \"\"\"Create or update the database representation of the sysmeta_pyxb access policy.\n\n If called without an access policy, any existing permissions on the object\n are removed and the access policy for the rights holder is recreated.\n\n Preconditions:\n - Each subject has been verified to a valid DataONE account.\n - Subject has changePermission for object.\n\n Postconditions:\n - The Permission and related tables contain the new access policy.\n\n Notes:\n - There can be multiple rules in a policy and each rule can contain multiple\n subjects. So there are two ways that the same subject can be specified multiple\n times in a policy. If this happens, multiple, conflicting action levels may be\n provided for the subject. This is handled by checking for an existing row for\n the subject for this object and updating it if it contains a lower action\n level. 
The end result is that there is one row for each subject, for each\n object and this row contains the highest action level.\n\n \"\"\"\n _delete_existing_access_policy(sysmeta_pyxb)\n # Add an implicit allow rule with all permissions for the rights holder.\n allow_rights_holder = d1_common.types.dataoneTypes.AccessRule()\n permission = d1_common.types.dataoneTypes.Permission(\n d1_gmn.app.auth.CHANGEPERMISSION_STR\n )\n allow_rights_holder.permission.append(permission)\n allow_rights_holder.subject.append(\n d1_common.xml.get_req_val(sysmeta_pyxb.rightsHolder)\n )\n top_level = _get_highest_level_action_for_rule(allow_rights_holder)\n _insert_permission_rows(sci_model, allow_rights_holder, top_level)\n # Create db entries for all subjects for which permissions have been granted.\n if _has_access_policy_pyxb(sysmeta_pyxb):\n for allow_rule in sysmeta_pyxb.accessPolicy.allow:\n top_level = _get_highest_level_action_for_rule(allow_rule)\n _insert_permission_rows(sci_model, allow_rule, top_level)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_arguments(self, parser):\n parser.description = __doc__\n parser.formatter_class = argparse.RawDescriptionHelpFormatter\n parser.add_argument(\"--debug\", action=\"store_true\", help=\"Debug level logging\")\n parser.add_argument(\n \"--force\",\n action=\"store_true\",\n help=\"Import even if local database is not empty\",\n )\n parser.add_argument(\"--clear\", action=\"store_true\", help=\"Clear local database\")\n parser.add_argument(\n \"--cert-pub\",\n dest=\"cert_pem_path\",\n action=\"store\",\n help=\"Path to PEM formatted public key of certificate\",\n )\n parser.add_argument(\n \"--cert-key\",\n dest=\"cert_key_path\",\n action=\"store\",\n help=\"Path to PEM formatted private key of certificate\",\n )\n parser.add_argument(\n \"--public\",\n action=\"store_true\",\n help=\"Do not use certificate even if available\",\n )\n parser.add_argument(\n 
\"--timeout\",\n type=float,\n action=\"store\",\n default=DEFAULT_TIMEOUT_SEC,\n help=\"Timeout for D1 API call to the source MN\",\n )\n parser.add_argument(\n \"--workers\",\n type=int,\n action=\"store\",\n default=DEFAULT_N_WORKERS,\n help=\"Max number workers making concurrent connections to the source MN\",\n )\n parser.add_argument(\n \"--object-page-size\",\n type=int,\n action=\"store\",\n default=d1_common.const.DEFAULT_SLICE_SIZE,\n help=\"Number of objects to retrieve in each listObjects() call\",\n )\n parser.add_argument(\n \"--log-page-size\",\n type=int,\n action=\"store\",\n default=d1_common.const.DEFAULT_SLICE_SIZE,\n help=\"Number of log records to retrieve in each getLogRecords() call\",\n )\n parser.add_argument(\n \"--major\",\n type=int,\n action=\"store\",\n help=\"Use API major version instead of finding by connecting to CN\",\n )\n parser.add_argument(\n \"--only-log\", action=\"store_true\", help=\"Only import event logs\"\n )\n parser.add_argument(\n \"--max-obj\",\n type=int,\n action=\"store\",\n help=\"Limit number of objects to import\",\n )\n parser.add_argument(\"baseurl\", help=\"Source MN BaseURL\")", "response": "Adds command line arguments to the given parser."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _create_log_entry(self, log_record_pyxb):\n event_log_model = d1_gmn.app.event_log.create_log_entry(\n d1_gmn.app.model_util.get_sci_model(\n d1_common.xml.get_req_val(log_record_pyxb.identifier)\n ),\n log_record_pyxb.event,\n log_record_pyxb.ipAddress,\n log_record_pyxb.userAgent,\n log_record_pyxb.subject.value(),\n )\n event_log_model.timestamp = d1_common.date_time.normalize_datetime_to_utc(\n log_record_pyxb.dateLogged\n )\n event_log_model.save()", "response": "Create a log entry from a log record."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndownloading bytes from the source Science Object store to the store.", 
"response": "def _download_source_sciobj_bytes_to_store(self, client, pid):\n \"\"\"Args:\n\n client: pid:\n\n \"\"\"\n if d1_gmn.app.sciobj_store.is_existing_sciobj_file(pid):\n self._events.log_and_count(\n \"Skipped download of existing sciobj bytes\", 'pid=\"{}\"'.format(pid)\n )\n else:\n with d1_gmn.app.sciobj_store.open_sciobj_file_by_pid(\n pid, write=True\n ) as sciobj_file:\n client.get_and_save(pid, sciobj_file)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _assert_path_is_dir(self, dir_path):\n if not os.path.isdir(dir_path):\n raise django.core.management.base.CommandError(\n 'Invalid dir path. path=\"{}\"'.format(dir_path)\n )", "response": "Checks that the path is a directory."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef HEAD(self, rest_path_list, **kwargs):\n kwargs.setdefault(\"allow_redirects\", False)\n return self._request(\"HEAD\", rest_path_list, **kwargs)", "response": "Send a HEAD request. See requests. request for optional parameters."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsending a POST request to the specified path list.", "response": "def POST(self, rest_path_list, **kwargs):\n \"\"\"Send a POST request with optional streaming multipart encoding. See\n requests.sessions.request for optional parameters. To post regular data, pass a\n string, iterator or generator as the ``data`` argument. 
To post a multipart\n stream, pass a dictionary of multipart elements as the ``fields`` argument.\n E.g.:\n\n fields = {\n 'field0': 'value',\n 'field1': 'value',\n 'field2': ('filename.xml', open('file.xml', 'rb'), 'application/xml')\n }\n\n :returns: Response object\n\n \"\"\"\n fields = kwargs.pop(\"fields\", None)\n if fields is not None:\n return self._send_mmp_stream(\"POST\", rest_path_list, fields, **kwargs)\n else:\n return self._request(\"POST\", rest_path_list, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef PUT(self, rest_path_list, **kwargs):\n fields = kwargs.pop(\"fields\", None)\n if fields is not None:\n return self._send_mmp_stream(\"PUT\", rest_path_list, fields, **kwargs)\n else:\n return self._request(\"PUT\", rest_path_list, **kwargs)", "response": "Send a PUT request with optional streaming multipart encoding. See post for optional parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets request as cURL command line for debugging.", "response": "def get_curl_command_line(self, method, url, **kwargs):\n \"\"\"Get request as cURL command line for debugging.\"\"\"\n if kwargs.get(\"query\"):\n url = \"{}?{}\".format(url, d1_common.url.urlencode(kwargs[\"query\"]))\n curl_list = [\"curl\"]\n if method.lower() == \"head\":\n curl_list.append(\"--head\")\n else:\n curl_list.append(\"-X {}\".format(method))\n for k, v in sorted(list(kwargs[\"headers\"].items())):\n curl_list.append('-H \"{}: {}\"'.format(k, v))\n curl_list.append(\"{}\".format(url))\n return \" \".join(curl_list)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndump the request and response objects for logging and debugging.", "response": "def dump_request_and_response(self, response):\n \"\"\"Return a string containing a nicely formatted representation of the request\n and response objects for logging and debugging.\n\n - Note: 
Does not work if the request or response body is a MultipartEncoder\n object.\n\n \"\"\"\n if response.reason is None:\n response.reason = \"\"\n return d1_client.util.normalize_request_response_dump(\n requests_toolbelt.utils.dump.dump_response(response)\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _timeout_to_float(self, timeout):\n if timeout is not None:\n try:\n timeout_float = float(timeout)\n except ValueError:\n raise ValueError(\n 'timeout_sec must be a valid number or None. timeout=\"{}\"'.format(\n timeout\n )\n )\n if timeout_float:\n return timeout_float", "response": "Convert timeout to float."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprinting the RedBaron syntax tree for a Python module.", "response": "def main():\n \"\"\"Print the RedBaron syntax tree for a Python module.\"\"\"\n parser = argparse.ArgumentParser(\n description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter\n )\n parser.add_argument(\"path\", help=\"Python module path\")\n args = parser.parse_args()\n\n r = d1_dev.util.redbaron_module_path_to_tree(args.path)\n print(r.help(True))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\njoins a base and a relative path and return an absolute path to the resulting location.", "response": "def abs_path_from_base(base_path, rel_path):\n \"\"\"Join a base and a relative path and return an absolute path to the resulting\n location.\n\n Args:\n base_path: str\n Relative or absolute path to prepend to ``rel_path``.\n\n rel_path: str\n Path relative to the location of the module file from which this function is called.\n\n Returns:\n str : Absolute path to the location specified by ``rel_path``.\n\n \"\"\"\n # noinspection PyProtectedMember\n return os.path.abspath(\n os.path.join(\n os.path.dirname(sys._getframe(1).f_code.co_filename), base_path, rel_path\n )\n )"} 
{"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts a path that is relative to the module file from which this function is called to an absolute path.", "response": "def abs_path(rel_path):\n \"\"\"Convert a path that is relative to the module from which this function is called,\n to an absolute path.\n\n Args:\n rel_path: str\n Path relative to the location of the module file from which this function is called.\n\n Returns:\n str : Absolute path to the location specified by ``rel_path``.\n\n \"\"\"\n # noinspection PyProtectedMember\n return os.path.abspath(\n os.path.join(os.path.dirname(sys._getframe(1).f_code.co_filename), rel_path)\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the current state of the wrapper as XML", "response": "def get_xml(self, encoding='unicode'):\n \"\"\"Returns:\n\n str : Current state of the wrapper as XML\n\n \"\"\"\n return xml.etree.ElementTree.tostring(self._root_el, encoding)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_pretty_xml(self, encoding='unicode'):\n return d1_common.xml.reformat_to_pretty_xml(\n xml.etree.ElementTree.tostring(self._root_el, encoding)\n )", "response": "Returns the current state of the wrapper as a pretty printed XML string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_xml_below_element(self, el_name, el_idx=0, encoding='unicode'):\n return xml.etree.ElementTree.tostring(\n self.get_element_by_name(el_name, el_idx), encoding\n )", "response": "Returns the XML fragment rooted at el_name below el_idx."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_element_by_xpath(self, xpath_str, namespaces=None):\n try:\n return self._root_el.findall('.' 
+ xpath_str, namespaces)\n except (ValueError, xml.etree.ElementTree.ParseError) as e:\n raise SimpleXMLWrapperException(\n 'XPath select raised exception. xpath_str=\"{}\" error=\"{}\"'.format(\n xpath_str, str(e)\n )\n )", "response": "Returns the list of elements matching xpath_str."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_element_by_name(self, el_name, el_idx=0):\n el_list = self.get_element_list_by_name(el_name)\n try:\n return el_list[el_idx]\n except IndexError:\n raise SimpleXMLWrapperException(\n 'Element not found. element_name=\"{}\" requested_idx={} '\n 'available_elements={}'.format(el_name, el_idx, len(el_list))\n )", "response": "Returns the element with the specified name."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_element_by_attr_key(self, attr_key, el_idx=0):\n el_list = self.get_element_list_by_attr_key(attr_key)\n try:\n return el_list[el_idx]\n except IndexError:\n raise SimpleXMLWrapperException(\n 'Element with tag not found. 
tag_name=\"{}\" requested_idx={} '\n 'available_elements={}'.format(attr_key, el_idx, len(el_list))\n )", "response": "Returns the element with the specified attribute key."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the text of the element with the given name.", "response": "def set_element_text(self, el_name, el_text, el_idx=0):\n \"\"\"\n Args:\n el_name : str\n Name of element to update.\n\n el_text : str\n Text to set for element.\n\n el_idx : int\n Index of element to use in the event that there are multiple sibling\n elements with the same name.\n \"\"\"\n self.get_element_by_name(el_name, el_idx).text = el_text"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the text of the element with the given attribute key.", "response": "def set_element_text_by_attr_key(self, attr_key, el_text, el_idx=0):\n \"\"\"\n Args:\n attr_key : str\n Name of attribute for which to search\n\n el_text : str\n Text to set for element.\n\n el_idx : int\n Index of element to use in the event that there are multiple sibling\n elements with the same name.\n \"\"\"\n self.get_element_by_attr_key(attr_key, el_idx).text = el_text"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_attr_value(self, attr_key, el_idx=0):\n return self.get_element_by_attr_key(attr_key, el_idx).attrib[attr_key]", "response": "Return the value of the selected attribute in the selected element."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_attr_text(self, attr_key, attr_val, el_idx=0):\n self.get_element_by_attr_key(attr_key, el_idx).attrib[attr_key] = attr_val", "response": "Set the value of the selected attribute of the selected element."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the text of the selected element as a datetime. 
datetime object.", "response": "def get_element_dt(self, el_name, tz=None, el_idx=0):\n \"\"\"Return the text of the selected element as a ``datetime.datetime`` object.\n\n The element text must be a ISO8601 formatted datetime\n\n Args:\n el_name : str\n Name of element to use.\n\n tz : datetime.tzinfo\n Timezone in which to return the datetime.\n\n - Without a timezone, other contextual information is required in order to\n determine the exact represented time.\n - If dt has timezone: The ``tz`` parameter is ignored.\n - If dt is naive (without timezone): The timezone is set to ``tz``.\n - ``tz=None``: Prevent naive dt from being set to a timezone. Without a\n timezone, other contextual information is required in order to determine\n the exact represented time.\n - ``tz=d1_common.date_time.UTC()``: Set naive dt to UTC.\n\n el_idx : int\n Index of element to use in the event that there are multiple sibling\n elements with the same name.\n\n Returns:\n datetime.datetime\n\n \"\"\"\n return iso8601.parse_date(self.get_element_by_name(el_name, el_idx).text, tz)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the text of the selected element to an ISO8601 formatted datetime.", "response": "def set_element_dt(self, el_name, dt, tz=None, el_idx=0):\n \"\"\"Set the text of the selected element to an ISO8601 formatted datetime.\n\n Args:\n el_name : str\n Name of element to update.\n\n dt : datetime.datetime\n Date and time to set\n\n tz : datetime.tzinfo\n Timezone to set\n\n - Without a timezone, other contextual information is required in order to\n determine the exact represented time.\n - If dt has timezone: The ``tz`` parameter is ignored.\n - If dt is naive (without timezone): The timezone is set to ``tz``.\n - ``tz=None``: Prevent naive dt from being set to a timezone. 
Without a\n timezone, other contextual information is required in order to determine\n the exact represented time.\n - ``tz=d1_common.date_time.UTC()``: Set naive dt to UTC.\n\n el_idx : int\n Index of element to use in the event that there are multiple sibling\n elements with the same name.\n\n \"\"\"\n dt = d1_common.date_time.cast_naive_datetime_to_tz(dt, tz)\n self.get_element_by_name(el_name, el_idx).text = dt.isoformat()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreplace the selected element with a new element, root_el.", "response": "def replace_by_etree(self, root_el, el_idx=0):\n \"\"\"Replace element.\n\n Select element that has the same name as ``root_el``, then replace the selected\n element with ``root_el``\n\n ``root_el`` can be a single element or the root of an element tree.\n\n Args:\n root_el : element\n New element that will replace the existing element.\n\n \"\"\"\n el = self.get_element_by_name(root_el.tag, el_idx)\n el[:] = list(root_el)\n el.attrib = root_el.attrib"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef replace_by_xml(self, xml_str, el_idx=0):\n root_el = self.parse_xml(xml_str)\n self.replace_by_etree(root_el, el_idx)", "response": "Replace the selected element with an element parsed from xml_str."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncompile SQL for this function.", "response": "def as_sql(self, compiler, connection): # pylint: disable=arguments-differ\n \"\"\"Compile SQL for this function.\"\"\"\n sql, params = super().as_sql(compiler, connection)\n params.append(self.path)\n return sql, params"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef with_json_path(self, path, field=None):\n if field is None:\n field = '_'.join(['json'] + json_path_components(path))\n\n kwargs = {field: JsonGetPath('json', path)}\n return self.defer('json').annotate(**kwargs)", "response": "Annotate 
Storage objects with a specific JSON path."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_json_path(self, path):\n return self.with_json_path(path, field='result').values_list('result', flat=True)", "response": "Return only a specific JSON path of the object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads json field from Storage object.", "response": "def _get_storage(self):\n \"\"\"Load `json` field from `Storage` object.\"\"\"\n if self._json is None:\n self._json = Storage.objects.get(**self._kwargs).json"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmigrating process schema. :param process: Process instance :param schema: Process schema to migrate :param from_state: Database model state :return: True if the process was migrated, False otherwise", "response": "def migrate_process_schema(self, process, schema, from_state):\n \"\"\"Migrate process schema.\n\n :param process: Process instance\n :param schema: Process schema to migrate\n :param from_state: Database model state\n :return: True if the process was migrated, False otherwise\n \"\"\"\n container = dict_dot(schema, '.'.join(self.field[:-1]), default=list)\n\n # Ignore processes, which already contain the target field with the\n # target schema.\n for field in container:\n if field['name'] == self.field[-1]:\n if field == self.schema:\n return False\n else:\n raise ValueError(\n \"Failed to migrate schema for process '{process}' as the field '{field}' \"\n \"already exists and has an incompatible schema\".format(\n process=process.slug,\n field=self.field[-1]\n )\n )\n\n # Add field to container.\n container.append(self.schema)\n return True"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef migrate_data(self, data, from_state):\n if not self.default:\n return\n\n self.default.prepare(data, 
from_state)\n for instance in data:\n value = self.default.get_default_for(instance, from_state)\n if not value and not self.schema.get('required', True):\n continue\n\n # Set default value.\n container = getattr(instance, self.schema_type, {})\n dict_dot(container, '.'.join(self.field), value)\n setattr(instance, self.schema_type, container)\n instance.save()", "response": "Migrate data objects.\n\n :param data: Queryset containing all data objects that need\n to be migrated\n :param from_state: Database model state"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef migrate_process_schema(self, process, schema, from_state):\n container = dict_dot(schema, '.'.join(self.field[:-1]), default=list)\n\n # Ignore processes, which already contain the target field.\n migrate = False\n for field in container:\n if field['name'] == self.field[-1]:\n field['name'] = self.new_field\n migrate = True\n break\n elif field['name'] == self.new_field:\n # Already has target field.\n migrate = False\n break\n else:\n if not self.skip_no_field:\n raise ValueError(\n \"Unable to rename: there is no field with name '{field}' or '{new_field}'.\".format(\n field=self.field[-1],\n new_field=self.new_field,\n )\n )\n\n return migrate", "response": "Migrate the process schema."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef migrate_data(self, data, from_state):\n for instance in data:\n if instance.status == 'ER':\n continue\n\n container = getattr(instance, self.schema_type, {})\n schema = container.pop(self.field[-1])\n container[self.new_field] = schema\n\n setattr(instance, self.schema_type, container)\n instance.save()", "response": "Migrate data objects.\n\n :param data: Queryset containing all data objects that need\n to be migrated\n :param from_state: Database model state"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 
3 function doing\ndef migrate_process_schema(self, process, schema, from_state):\n if process.type == self.new_type:\n return False\n\n process.type = self.new_type\n return True", "response": "Migrate the process instance to the new process type."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nserves the home page. The root of the web server should redirect here.", "response": "def home(request):\n \"\"\"Home page.\n\n Root of web server should redirect to here.\n\n \"\"\"\n if request.path.endswith('/'):\n return django.http.HttpResponseRedirect(request.path[:-1])\n\n return django.http.HttpResponse(\n generate_status_xml(), d1_common.const.CONTENT_TYPE_XML\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef error_404(request, exception):\n return django.http.HttpResponseNotFound(\n d1_common.types.exceptions.NotFound(\n 0,\n 'Invalid API endpoint',\n # Include the regexes the URL was tested against\n # traceInformation=str(exception),\n nodeId=django.conf.settings.NODE_IDENTIFIER,\n ).serialize_to_transport(xslt_url=django.urls.base.reverse('home_xslt')),\n d1_common.const.CONTENT_TYPE_XML,\n )", "response": "Handle 404s outside of the valid API URL endpoints."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the total free space available on the disk on which the object store resides (in bytes).", "response": "def get_obj_store_free_space_bytes():\n \"\"\"Return total free space available on the disk on which the object storage resides\n (in bytes)\"\"\"\n obj_store_path = django.conf.settings.OBJECT_STORE_PATH\n if platform.system() == 'Windows':\n free_bytes = ctypes.c_ulonglong(0)\n ctypes.windll.kernel32.GetDiskFreeSpaceExW(\n ctypes.c_wchar_p(obj_store_path), None, None, ctypes.pointer(free_bytes)\n )\n return free_bytes.value\n else:\n return os.statvfs(obj_store_path).f_bfree * os.statvfs(obj_store_path).f_frsize"} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef m2Lambda_to_vMh2(m2, Lambda, C):\n try:\n v = (sqrt(2 * m2 / Lambda) + 3 * m2**(3 / 2) /\n (sqrt(2) * Lambda**(5 / 2)) * C['phi'])\n except ValueError:\n v = 0\n Mh2 = 2 * m2 * (1 - m2 / Lambda * (3 * C['phi'] - 4 * Lambda * C['phiBox'] +\n Lambda * C['phiD']))\n return {'v': v, 'Mh2': Mh2}", "response": "Function to numerically determine the physical Higgs VEV and mass\n given the parameters of the Higgs potential."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfunctioning to numerically determine the parameters of the Higgs potential given the physical Higgs VEV and mass.", "response": "def vMh2_to_m2Lambda(v, Mh2, C):\n \"\"\"Function to numerically determine the parameters of the Higgs potential\n given the physical Higgs VEV and mass.\"\"\"\n if C['phi'] == 0 and C['phiBox'] == 0 and C['phiD'] == 0:\n return _vMh2_to_m2Lambda_SM(v, Mh2)\n else:\n def f0(x): # we want the root of this function\n m2, Lambda = x\n d = m2Lambda_to_vMh2(m2=m2.real, Lambda=Lambda.real,\n C=C)\n return np.array([d['v'] - v, d['Mh2'] - Mh2])\n dSM = _vMh2_to_m2Lambda_SM(v, Mh2)\n x0 = np.array([dSM['m2'], dSM['Lambda']])\n try:\n xres = scipy.optimize.newton_krylov(f0, x0)\n except (scipy.optimize.nonlin.NoConvergence, ValueError) as e:\n warnings.warn('Standard optimization method did not converge. The GMRES method is used instead.', Warning)\n try:\n xres = scipy.optimize.newton_krylov(f0, x0, method='gmres',\n f_tol=1e-7)\n except (scipy.optimize.nonlin.NoConvergence, ValueError) as e:\n raise ValueError(\"No solution for m^2 and Lambda found. 
This problem can be caused by very large values for one or several Wilson coefficients.\")\n return {'m2': xres[0], 'Lambda': xres[1]}"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef smeftpar(scale, C, basis):\n # start with a zero dict and update it with the input values\n MW = p['m_W']\n # MZ = p['m_Z']\n GF = p['GF']\n Mh = p['m_h']\n vb = sqrt(1 / sqrt(2) / GF)\n v = vb # TODO\n _d = vMh2_to_m2Lambda(v=v, Mh2=Mh**2, C=C)\n m2 = _d['m2'].real\n Lambda = _d['Lambda'].real\n gsbar = sqrt(4 * pi * p['alpha_s'])\n gs = (1 - C['phiG'] * (v**2)) * gsbar\n gbar = 2 * MW / v\n g = gbar * (1 - C['phiW'] * (v**2))\n ebar = sqrt(4 * pi * p['alpha_e'])\n gp = get_gpbar(ebar, gbar, v, C)\n c = {}\n c['m2'] = m2\n c['Lambda'] = Lambda\n c['g'] = g\n c['gp'] = gp\n c['gs'] = gs\n K = ckmutil.ckm.ckm_tree(p['Vus'], p['Vub'], p['Vcb'], p['delta'])\n if basis == 'Warsaw':\n Mu = K.conj().T @ np.diag([p['m_u'], p['m_c'], p['m_t']])\n Md = np.diag([p['m_d'], p['m_s'], p['m_b']])\n elif basis == 'Warsaw up':\n Mu = np.diag([p['m_u'], p['m_c'], p['m_t']])\n Md = K @ np.diag([p['m_d'], p['m_s'], p['m_b']])\n else:\n raise ValueError(\"Basis '{}' not supported\".format(basis))\n Me = np.diag([p['m_e'], p['m_mu'], p['m_tau']])\n c['Gd'] = Md / (v / sqrt(2)) + C['dphi'] * (v**2) / 2\n c['Gu'] = Mu / (v / sqrt(2)) + C['uphi'] * (v**2) / 2\n c['Ge'] = Me / (v / sqrt(2)) + C['ephi'] * (v**2) / 2\n return c", "response": "Get the running parameters in SMEFT."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the running effective SM parameters.", "response": "def smpar(C):\n \"\"\"Get the running effective SM parameters.\"\"\"\n m2 = C['m2'].real\n Lambda = C['Lambda'].real\n v = (sqrt(2 * m2 / Lambda) + 3 * m2**(3 / 2) /\n (sqrt(2) * Lambda**(5 / 2)) * C['phi'])\n GF = 1 / (sqrt(2) * v**2) # TODO\n Mh2 = 2 * m2 * (1 - m2 / Lambda * (3 * C['phi'] - 4 * Lambda * C['phiBox'] +\n Lambda * 
C['phiD']))\n eps = C['phiWB'] * (v**2)\n gb = (C['g'] / (1 - C['phiW'] * (v**2))).real\n gpb = (C['gp'] / (1 - C['phiB'] * (v**2))).real\n gsb = (C['gs'] / (1 - C['phiG'] * (v**2))).real\n MW = gb * v / 2\n ZG0 = 1 + C['phiD'] * (v**2) / 4\n MZ = (sqrt(gb**2 + gpb**2) / 2 * v\n * (1 + eps * gb * gpb / (gb**2 + gpb**2)) * ZG0)\n Mnup = -(v**2) * C['llphiphi']\n Mep = v / sqrt(2) * (C['Ge'] - C['ephi'] * (v**2) / 2)\n Mup = v / sqrt(2) * (C['Gu'] - C['uphi'] * (v**2) / 2)\n Mdp = v / sqrt(2) * (C['Gd'] - C['dphi'] * (v**2) / 2)\n UeL, Me, UeR = ckmutil.diag.msvd(Mep)\n UuL, Mu, UuR = ckmutil.diag.msvd(Mup)\n UdL, Md, UdR = ckmutil.diag.msvd(Mdp)\n UnuL, Mnu = ckmutil.diag.mtakfac(Mnup)\n eb = (gb * gpb / sqrt(gb**2 + gpb**2) *\n (1 - eps * gb * gpb / (gb**2 + gpb**2)))\n K = UuL.conj().T @ UdL\n # U = UeL.conj().T @ UnuL\n sm = {}\n sm['GF'] = GF\n sm['alpha_e'] = eb**2 / (4 * pi)\n sm['alpha_s'] = gsb**2 / (4 * pi)\n sm['Vub'] = abs(K[0, 2])\n sm['Vcb'] = abs(K[1, 2])\n sm['Vus'] = abs(K[0, 1])\n sm['delta'] = phase(-K[0, 0] * K[0, 2].conj()\n / (K[1, 0] * K[1, 2].conj()))\n # sm['U'] = Uu\n sm['m_W'] = MW\n sm['m_Z'] = MZ\n sm['m_h'] = sqrt(abs(Mh2))\n sm['m_u'] = Mu[0]\n sm['m_c'] = Mu[1]\n sm['m_t'] = Mu[2]\n sm['m_d'] = Md[0]\n sm['m_s'] = Md[1]\n sm['m_b'] = Md[2]\n sm['m_e'] = Me[0]\n sm['m_mu'] = Me[1]\n sm['m_tau'] = Me[2]\n return {k: v.real for k, v in sm.items()}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef scale_8(b):\n a = np.array(b, copy=True, dtype=complex)\n for i in range(3):\n a[0, 0, 1, i] = 1/2 * b[0, 0, 1, i]\n a[0, 0, 2, i] = 1/2 * b[0, 0, 2, i]\n a[0, 1, 1, i] = 1/2 * b[0, 1, 1, i]\n a[0, 1, 2, i] = 2/3 * b[0, 1, 2, i] - 1/6 * b[0, 2, 1, i] - 1/6 * b[1, 0, 2, i] + 1/6 * b[1, 2, 0, i]\n a[0, 2, 1, i] = - (1/6) * b[0, 1, 2, i] + 2/3 * b[0, 2, 1, i] + 1/6 * b[1, 0, 2, i] + 1/3 * b[1, 2, 0, i]\n a[0, 2, 2, i] = 1/2 * b[0, 2, 2, i]\n a[1, 0, 2, i] = - (1/6) * b[0, 1, 2, i] 
+ 1/6 * b[0, 2, 1, i] + 2/3 * b[1, 0, 2, i] - 1/6 * b[1, 2, 0, i]\n a[1, 1, 2, i] = 1/2 * b[1, 1, 2, i]\n a[1, 2, 0, i] = 1/6 * b[0, 1, 2, i] + 1/3 * b[0, 2, 1, i] - 1/6 * b[1, 0, 2, i] + 2/3 * b[1, 2, 0, i]\n a[1, 2, 2, i] = 1/2 * b[1, 2, 2, i]\n return a", "response": "Translations necessary for class-8 coefficients\n to go from a basis with only non-redundant WCxf\n operators to a basis where the Wilson coefficients are symmetrized like\n the operators."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting a dictionary with Wilson coefficient names as keys and numbers or numpy arrays as values to a dictionary with a Wilson coefficient name followed by underscore and numeric indices as keys and numbers as values. This is needed for the output in WCxf format.", "response": "def arrays2wcxf(C):\n \"\"\"Convert a dictionary with Wilson coefficient names as keys and\n numbers or numpy arrays as values to a dictionary with a Wilson coefficient\n name followed by underscore and numeric indices as keys and numbers as\n values. This is needed for the output in WCxf format.\"\"\"\n d = {}\n for k, v in C.items():\n if np.shape(v) == () or np.shape(v) == (1,):\n d[k] = v\n else:\n ind = np.indices(v.shape).reshape(v.ndim, v.size).T\n for i in ind:\n name = k + '_' + ''.join([str(int(j) + 1) for j in i])\n d[name] = v[tuple(i)]\n return d"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef wcxf2arrays(d):\n C = {}\n for k, v in d.items():\n name = k.split('_')[0]\n s = C_keys_shape[name]\n if s == 1:\n C[k] = v\n else:\n ind = k.split('_')[-1]\n if name not in C:\n C[name] = np.zeros(s, dtype=complex)\n C[name][tuple([int(i) - 1 for i in ind])] = v\n return C", "response": "Convert a dictionary with a Wilson coefficient\n name followed by underscore and numeric indices as keys and numbers as values\n to a dictionary with Wilson coefficient names as keys and numpy arrays as values. 
This is needed for parsing a Wilson coefficient tree in WCxf format."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd arrays with zeros for missing Wilson coefficient keys", "response": "def add_missing(C):\n \"\"\"Add arrays with zeros for missing Wilson coefficient keys\"\"\"\n C_out = C.copy()\n for k in (set(WC_keys) - set(C.keys())):\n C_out[k] = np.zeros(C_keys_shape[k])\n return C_out"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a 1D array containing C values to a dictionary.", "response": "def C_array2dict(C):\n \"\"\"Convert a 1D array containing C values to a dictionary.\"\"\"\n d = OrderedDict()\n i=0\n for k in C_keys:\n s = C_keys_shape[k]\n if s == 1:\n j = i+1\n d[k] = C[i]\n else:\n j = i \\\n + reduce(operator.mul, s, 1)\n d[k] = C[i:j].reshape(s)\n i = j\n return d"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef C_dict2array(C):\n return np.hstack([np.asarray(C[k]).ravel() for k in C_keys])", "response": "Convert an OrderedDict containing C values to a 1D array."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef symmetrize_nonred(C):\n C_symm = {}\n for i, v in C.items():\n if i in C_symm_keys[0]:\n C_symm[i] = v.real\n elif i in C_symm_keys[1] + C_symm_keys[3]:\n C_symm[i] = v # nothing to do\n elif i in C_symm_keys[2]:\n C_symm[i] = symmetrize_2(C[i])\n elif i in C_symm_keys[4]:\n C_symm[i] = symmetrize_4(C[i])\n C_symm[i] = C_symm[i] / _d_4\n elif i in C_symm_keys[5]:\n C_symm[i] = symmetrize_5(C[i])\n elif i in C_symm_keys[6]:\n C_symm[i] = symmetrize_6(C[i])\n C_symm[i] = C_symm[i] / _d_6\n elif i in C_symm_keys[7]:\n C_symm[i] = symmetrize_7(C[i])\n C_symm[i] = C_symm[i] / _d_7\n elif i in C_symm_keys[8]:\n C_symm[i] = scale_8(C[i])\n C_symm[i] = symmetrize_8(C_symm[i])\n elif i in C_symm_keys[9]:\n C_symm[i] = symmetrize_9(C[i])\n return C_symm", 
"response": "Symmetrize the Wilson coefficient arrays, taking into account the symmetry factors\n that occur when transitioning from a basis with only non-redundant operators to a basis\n where the Wilson coefficients are symmetrized like the operators."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef unscale_dict(C):\n C_out = {k: _scale_dict[k] * v for k, v in C.items()}\n for k in C_symm_keys[8]:\n C_out['qqql'] = unscale_8(C_out['qqql'])\n return C_out", "response": "Undo the scaling applied in scale_dict."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef wcxf2arrays_symmetrized(d):\n C = wcxf2arrays(d)\n C = symmetrize_nonred(C)\n C = add_missing(C)\n return C", "response": "Convert a dictionary with Wilson coefficient names followed by underscore and numeric indices as keys and numbers as values to a dictionary with Wilson coefficient names as keys and symmetrized numpy arrays as values."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nnudging manager at the end of every Data object save event.", "response": "def commit_signal(data_id):\n \"\"\"Nudge manager at the end of every Data object save event.\"\"\"\n if not getattr(settings, 'FLOW_MANAGER_DISABLE_AUTO_CALLS', False):\n immediate = getattr(settings, 'FLOW_MANAGER_SYNC_AUTO_CALLS', False)\n async_to_sync(manager.communicate)(data_id=data_id, save_settings=False, run_sync=immediate)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nruns newly created (spawned) processes.", "response": "def manager_post_save_handler(sender, instance, created, **kwargs):\n \"\"\"Run newly created (spawned) processes.\"\"\"\n if instance.status == Data.STATUS_DONE or instance.status == Data.STATUS_ERROR or created:\n # Run manager at the end of the potential transaction. 
Otherwise\n # tasks are sent to workers before transaction ends and therefore\n # workers cannot access objects created inside transaction.\n transaction.on_commit(lambda: commit_signal(instance.id))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete Entity when last Data object is deleted.", "response": "def delete_entity(sender, instance, **kwargs):\n \"\"\"Delete Entity when last Data object is deleted.\"\"\"\n # 1 means that the last Data object is going to be deleted.\n Entity.objects.annotate(num_data=Count('data')).filter(data=instance, num_data=1).delete()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_relation(sender, instance, **kwargs):\n def process_signal(relation_id):\n \"\"\"Get the relation and delete it if it has no entities left.\"\"\"\n try:\n relation = Relation.objects.get(pk=relation_id)\n except Relation.DoesNotExist:\n return\n\n if relation.entities.count() == 0:\n relation.delete()\n\n # Wait for partitions to be recreated.\n transaction.on_commit(lambda: process_signal(instance.relation_id))", "response": "Delete the relation when the last Entity is removed."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstart the actual execution; instantiate the executor and run.", "response": "async def run_executor():\n \"\"\"Start the actual execution; instantiate the executor and run.\"\"\"\n parser = argparse.ArgumentParser(description=\"Run the specified executor.\")\n parser.add_argument('module', help=\"The module from which to instantiate the concrete executor.\")\n args = parser.parse_args()\n\n module_name = '{}.run'.format(args.module)\n class_name = 'FlowExecutor'\n\n module = import_module(module_name, __package__)\n executor = getattr(module, class_name)()\n with open(ExecutorFiles.PROCESS_SCRIPT, 'rt') as script_file:\n await executor.run(DATA['id'], script_file.read())"} {"SOURCE": "codesearchnet",
"instruction": "How would you code a function in Python 3 to\nvalidate file at sysmeta_pyxb against schema selected via formatId and raise InvalidRequest if invalid.", "response": "def assert_valid(sysmeta_pyxb, pid):\n \"\"\"Validate file at {sciobj_path} against schema selected via formatId and raise\n InvalidRequest if invalid.\n\n Validation is only performed when:\n\n - SciMeta validation is enabled\n - and Object size is below size limit for validation\n - and formatId designates object as a Science Metadata object which is recognized\n and parsed by DataONE CNs\n - and XML Schema (XSD) files for formatId are present on local system\n\n \"\"\"\n if not (_is_validation_enabled() and _is_installed_scimeta_format_id(sysmeta_pyxb)):\n return\n\n if _is_above_size_limit(sysmeta_pyxb):\n if _is_action_accept():\n return\n else:\n raise d1_common.types.exceptions.InvalidRequest(\n 0,\n 'Science Metadata file is above size limit for validation and this '\n 'node has been configured to reject unvalidated Science Metadata '\n 'files. For more information, see the SCIMETA_VALIDATE* settings. 
'\n 'size={} size_limit={}'.format(\n sysmeta_pyxb.size, django.conf.settings.SCIMETA_VALIDATION_MAX_SIZE\n ),\n )\n\n with d1_gmn.app.sciobj_store.open_sciobj_file_by_pid_ctx(pid) as sciobj_file:\n try:\n d1_scimeta.xml_schema.validate(sysmeta_pyxb.formatId, sciobj_file.read())\n except d1_scimeta.xml_schema.SciMetaValidationError as e:\n raise d1_common.types.exceptions.InvalidRequest(0, str(e))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts to internal value.", "response": "def to_internal_value(self, data):\n \"\"\"Convert to internal value.\"\"\"\n user = getattr(self.context.get('request'), 'user')\n queryset = self.get_queryset()\n permission = get_full_perm('view', queryset.model)\n try:\n return get_objects_for_user(\n user,\n permission,\n queryset.filter(**{self.slug_field: data}),\n ).latest()\n except ObjectDoesNotExist:\n self.fail(\n 'does_not_exist',\n slug_name=self.slug_field,\n value=smart_text(data),\n model_name=queryset.model._meta.model_name, # pylint: disable=protected-access\n )\n except (TypeError, ValueError):\n self.fail('invalid')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef dumps(dct, as_dict=False, **kwargs):\n\n result_ = TypeSerializer().serialize(json.loads(json.dumps(dct, default=json_serial),\n use_decimal=True))\n if as_dict:\n return next(six.iteritems(result_))[1]\n else:\n return json.dumps(next(six.iteritems(result_))[1], **kwargs)", "response": "Dump the dict to DynamoDB format."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads dynamodb json format to a python dict", "response": "def loads(s, as_dict=False, *args, **kwargs):\n \"\"\" Loads dynamodb json format to a python dict.\n :param s - the json string or dict (with the as_dict variable set to True) to convert\n :returns python dict object\n \"\"\"\n if as_dict or (not isinstance(s, six.string_types)):\n 
s = json.dumps(s)\n kwargs['object_hook'] = object_hook\n return json.loads(s, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the size ( in meters ) of a tile", "response": "def tileSize(self, zoom):\n \"Returns the size (in meters) of a tile\"\n assert zoom in range(0, len(self.RESOLUTIONS))\n return self.tileSizePx * self.RESOLUTIONS[int(zoom)]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the bounds of a tile in LV03 ( EPSG : 21781 )", "response": "def tileBounds(self, zoom, tileCol, tileRow):\n \"Returns the bounds of a tile in LV03 (EPSG:21781)\"\n assert zoom in range(0, len(self.RESOLUTIONS))\n\n # 0,0 at top left: y axis down and x axis right\n tileSize = self.tileSize(zoom)\n minX = self.MINX + tileCol * tileSize\n maxX = self.MINX + (tileCol + 1) * tileSize\n if self.originCorner == 'bottom-left':\n minY = self.MINY + tileRow * tileSize\n maxY = self.MINY + (tileRow + 1) * tileSize\n elif self.originCorner == 'top-left':\n minY = self.MAXY - (tileRow + 1) * tileSize\n maxY = self.MAXY - tileRow * tileSize\n return [minX, minY, maxX, maxY]"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a tile address based on a zoom level and \\ a point in the tile", "response": "def tileAddress(self, zoom, point):\n \"Returns a tile address based on a zoom level and \\\n a point in the tile\"\n [x, y] = point\n assert x <= self.MAXX and x >= self.MINX\n assert y <= self.MAXY and y >= self.MINY\n assert zoom in range(0, len(self.RESOLUTIONS))\n\n tileS = self.tileSize(zoom)\n offsetX = abs(x - self.MINX)\n if self.originCorner == 'bottom-left':\n offsetY = abs(y - self.MINY)\n elif self.originCorner == 'top-left':\n offsetY = abs(self.MAXY - y)\n col = offsetX / tileS\n row = offsetY / tileS\n # We are exactly on the edge of a tile and the extent\n if x in (self.MINX, self.MAXX) and col.is_integer():\n col = max(0, col - 1)\n if y 
in (self.MINY, self.MAXY) and row.is_integer():\n row = max(0, row - 1)\n return [\n int(math.floor(col)),\n int(math.floor(row))\n ]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining if an extent intersects this instance extent", "response": "def intersectsExtent(self, extent):\n \"Determine if an extent intersects this instance extent\"\n return \\\n self.extent[0] <= extent[2] and self.extent[2] >= extent[0] and \\\n self.extent[1] <= extent[3] and self.extent[3] >= extent[1]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nyield the tileBounds zoom tileCol and tileRow", "response": "def iterGrid(self, minZoom, maxZoom):\n \"Yields the tileBounds, zoom, tileCol and tileRow\"\n assert minZoom in range(0, len(self.RESOLUTIONS))\n assert maxZoom in range(0, len(self.RESOLUTIONS))\n assert minZoom <= maxZoom\n\n for zoom in xrange(minZoom, maxZoom + 1):\n [minRow, minCol, maxRow, maxCol] = self.getExtentAddress(zoom)\n for row in xrange(minRow, maxRow + 1):\n for col in xrange(minCol, maxCol + 1):\n tileBounds = self.tileBounds(zoom, col, row)\n yield (tileBounds, zoom, col, row)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the number of tiles over x at a given zoom level", "response": "def numberOfXTilesAtZoom(self, zoom):\n \"Returns the number of tiles over x at a given zoom level\"\n [minRow, minCol, maxRow, maxCol] = self.getExtentAddress(zoom)\n return maxCol - minCol + 1"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef numberOfTilesAtZoom(self, zoom):\n \"Returns the total number of tile at a given zoom level\"\n [minRow, minCol, maxRow, maxCol] = self.getExtentAddress(zoom)\n return (maxCol - minCol + 1) * (maxRow - minRow + 1)", "response": "Returns the total number of tile at a given zoom level"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 
3 where it\nreturns the total number of tiles for this instance extent", "response": "def totalNumberOfTiles(self, minZoom=None, maxZoom=None):\n \"Return the total number of tiles for this instance extent\"\n nbTiles = 0\n minZoom = minZoom or 0\n if maxZoom:\n maxZoom = maxZoom + 1\n else:\n maxZoom = len(self.RESOLUTIONS)\n for zoom in xrange(minZoom, maxZoom):\n nbTiles += self.numberOfTilesAtZoom(zoom)\n return nbTiles"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the zoom level for a given resolution", "response": "def getZoom(self, resolution):\n \"Return the zoom level for a given resolution\"\n assert resolution in self.RESOLUTIONS\n return self.RESOLUTIONS.index(resolution)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _getZoomLevelRange(self, resolution, unit='meters'):\n \"Return lower and higher zoom level given a resolution\"\n assert unit in ('meters', 'degrees')\n if unit == 'meters' and self.unit == 'degrees':\n resolution = resolution / self.metersPerUnit\n elif unit == 'degrees' and self.unit == 'meters':\n resolution = resolution * EPSG4326_METERS_PER_UNIT\n lo = 0\n hi = len(self.RESOLUTIONS)\n while lo < hi:\n mid = (lo + hi) // 2\n if resolution > self.RESOLUTIONS[mid]:\n hi = mid\n else:\n lo = mid + 1\n return lo, hi", "response": "Return lower and higher zoom level given a resolution"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the closest zoom level for a given resolution", "response": "def getClosestZoom(self, resolution, unit='meters'):\n \"\"\"\n Return the closest zoom level for a given resolution\n Parameters:\n resolution -- max. 
resolution\n unit -- unit for output (default='meters')\n \"\"\"\n lo, hi = self._getZoomLevelRange(resolution, unit)\n if lo == 0:\n return lo\n if hi == len(self.RESOLUTIONS):\n return hi - 1\n before = self.RESOLUTIONS[lo - 1]\n if abs(self.RESOLUTIONS[lo] - resolution) < abs(before - resolution):\n return lo\n return lo - 1"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getCeilingZoom(self, resolution, unit='meters'):\n if resolution in self.RESOLUTIONS:\n return self.getZoom(resolution)\n lo, hi = self._getZoomLevelRange(resolution, unit)\n if lo == 0 or lo == hi:\n return lo\n if hi == len(self.RESOLUTIONS):\n return hi - 1\n return lo + 1", "response": "Returns the highest zoom level for a given resolution"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the scale at a given zoom level", "response": "def getScale(self, zoom):\n \"\"\"Returns the scale at a given zoom level\"\"\"\n if self.unit == 'degrees':\n resolution = self.getResolution(zoom) * EPSG4326_METERS_PER_UNIT\n else:\n resolution = self.getResolution(zoom)\n return resolution / STANDARD_PIXEL_SIZE"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getExtentAddress(self, zoom, extent=None, contained=False):\n if extent:\n bbox = extent\n else:\n bbox = self.extent\n minX = bbox[0]\n maxX = bbox[2]\n if self.originCorner == 'bottom-left':\n minY = bbox[3]\n maxY = bbox[1]\n elif self.originCorner == 'top-left':\n minY = bbox[1]\n maxY = bbox[3]\n [minCol, minRow] = self.tileAddress(zoom, [minX, maxY])\n [maxCol, maxRow] = self.tileAddress(zoom, [maxX, minY])\n\n if contained and minCol != maxCol or minRow != maxRow:\n parentBoundsMin = self.tileBounds(zoom, minCol, minRow)\n if self.originCorner == 'bottom-left':\n if parentBoundsMin[2] == maxX:\n maxCol -= 1\n if parentBoundsMin[3] == minY:\n maxRow -= 1\n elif self.originCorner == 
'top-left':\n if parentBoundsMin[2] == maxX:\n maxCol -= 1\n if parentBoundsMin[1] == minY:\n maxRow -= 1\n return [minRow, minCol, maxRow, maxCol]", "response": "Returns the bounding addresses of the current instance based on the extent."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getParentTiles(self, zoom, col, row, zoomParent):\n assert zoomParent <= zoom\n if zoomParent == zoom:\n return [[zoom, col, row]]\n extent = self.tileBounds(zoom, col, row)\n minRow, minCol, maxRow, maxCol = self.getExtentAddress(\n zoomParent, extent=extent, contained=True)\n addresses = []\n for c in range(minCol, maxCol + 1):\n for r in range(minRow, maxRow + 1):\n addresses.append([zoomParent, c, r])\n return addresses", "response": "Returns the parent tile addresses for an irregular tiling scheme."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nenabling loggly if it is configured and not debug or testing.", "response": "def enable_loggly(graph):\n \"\"\"\n Enable loggly if it is configured and not debug/testing.\n\n \"\"\"\n if graph.metadata.debug or graph.metadata.testing:\n return False\n\n try:\n if not graph.config.logging.loggly.token:\n return False\n\n if not graph.config.logging.loggly.environment:\n return False\n except AttributeError:\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_dict_config(graph):\n formatters = {}\n handlers = {}\n loggers = {}\n\n # create the console handler\n formatters[\"ExtraFormatter\"] = make_extra_console_formatter(graph)\n handlers[\"console\"] = make_stream_handler(graph, formatter=\"ExtraFormatter\")\n\n # maybe create the loggly handler\n if enable_loggly(graph):\n formatters[\"JSONFormatter\"] = make_json_formatter(graph)\n handlers[\"LogglyHTTPSHandler\"] = make_loggly_handler(graph, formatter=\"JSONFormatter\")\n\n # configure 
the root logger to output to all handlers\n loggers[\"\"] = {\n \"handlers\": handlers.keys(),\n \"level\": graph.config.logging.level,\n }\n\n # set log levels for libraries\n loggers.update(make_library_levels(graph))\n\n return dict(\n version=1,\n disable_existing_loggers=False,\n formatters=formatters,\n handlers=handlers,\n loggers=loggers,\n )", "response": "Build a dictionary configuration from conventions and configuration."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_json_formatter(graph):\n\n return {\n \"()\": graph.config.logging.json_formatter.formatter,\n \"fmt\": graph.config.logging.json_required_keys,\n }", "response": "Create the default json formatter."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_stream_handler(graph, formatter):\n return {\n \"class\": graph.config.logging.stream_handler.class_,\n \"formatter\": formatter,\n \"level\": graph.config.logging.level,\n \"stream\": graph.config.logging.stream_handler.stream,\n }", "response": "Create the stream handler. 
Used for console output."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_loggly_handler(graph, formatter):\n base_url = graph.config.logging.loggly.base_url\n loggly_url = \"{}/inputs/{}/tag/{}\".format(\n base_url,\n graph.config.logging.loggly.token,\n \",\".join([\n graph.metadata.name,\n graph.config.logging.loggly.environment,\n ]),\n )\n return {\n \"class\": graph.config.logging.https_handler.class_,\n \"formatter\": formatter,\n \"level\": graph.config.logging.level,\n \"url\": loggly_url,\n }", "response": "Create the loggly handler."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_library_levels(graph):\n # inject the default components; these can, but probably shouldn't, be overridden\n levels = {}\n for level in [\"DEBUG\", \"INFO\", \"WARN\", \"ERROR\"]:\n levels.update({\n component: {\n \"level\": level,\n } for component in graph.config.logging.levels.default[level.lower()]\n })\n # override components; these can be set per application\n for level in [\"DEBUG\", \"INFO\", \"WARN\", \"ERROR\"]:\n levels.update({\n component: {\n \"level\": level,\n } for component in graph.config.logging.levels.override[level.lower()]\n })\n return levels", "response": "Create third party library logging level configurations."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef process_request(self, request):\n site = request.site\n cache_key = '{prefix}-{site}'.format(prefix=settings.REDIRECT_CACHE_KEY_PREFIX, site=site.domain)\n redirects = cache.get(cache_key)\n if redirects is None:\n redirects = {redirect.old_path: redirect.new_path for redirect in Redirect.objects.filter(site=site)}\n cache.set(cache_key, redirects, settings.REDIRECT_CACHE_TIMEOUT)\n redirect_to = redirects.get(request.path)\n if redirect_to:\n return redirect(redirect_to, permanent=True)", 
"response": "Process the request and return a new URL if there is a matching Redirect model\n with the current request URL as the new_path field."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef patched_get_current(self, request=None):\n # Imported here to avoid circular import\n from django.conf import settings\n if request:\n try:\n return self._get_site_by_request(request) # pylint: disable=protected-access\n except Site.DoesNotExist:\n pass\n\n if getattr(settings, 'SITE_ID', ''):\n return self._get_site_by_id(settings.SITE_ID) # pylint: disable=protected-access\n\n raise ImproperlyConfigured(\n \"You're using the Django \\\"sites framework\\\" without having \"\n \"set the SITE_ID setting. Create a site in your database and \"\n \"set the SITE_ID setting or pass a request to \"\n \"Site.objects.get_current() to fix this error.\"\n )", "response": "Monkey patched version of Django s SiteManager. get_current function."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef patched_get_site_by_id(self, site_id):\n now = datetime.datetime.utcnow()\n site = models.SITE_CACHE.get(site_id)\n cache_timeout = SITE_CACHE_TIMEOUTS.get(site_id, now)\n if not site or cache_timeout <= now:\n site = self.get(pk=site_id)\n models.SITE_CACHE[site_id] = site\n SITE_CACHE_TIMEOUTS[site_id] = now + get_site_cache_ttl()\n return models.SITE_CACHE[site_id]", "response": "Monkey patched version of Django s SiteManager. _get_site_by_id method."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmonkeys patched version of Django s SiteManager. 
_get_site_by_request function.", "response": "def patched_get_site_by_request(self, request):\n \"\"\"\n Monkey patched version of Django's SiteManager._get_site_by_request() function.\n\n Adds a configurable timeout to the in-memory SITE_CACHE for each cached Site.\n This allows for the use of an in-memory cache for Site models, avoiding one\n or more DB hits on every request made to the Django application, but also allows\n for changes made to models associated with the Site model and accessed via the\n Site model's relationship accessors to take effect without having to manually\n recycle all Django worker processes active in an application environment.\n \"\"\"\n host = request.get_host()\n now = datetime.datetime.utcnow()\n site = models.SITE_CACHE.get(host)\n cache_timeout = SITE_CACHE_TIMEOUTS.get(host, now)\n if not site or cache_timeout <= now:\n site = self.get(domain__iexact=host)\n models.SITE_CACHE[host] = site\n SITE_CACHE_TIMEOUTS[host] = now + get_site_cache_ttl()\n return models.SITE_CACHE[host]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntruncates a string starting from lim chars.", "response": "def str_brief(obj, lim=20, dots='...', use_repr=True):\n \"\"\"Truncates a string, starting from 'lim' chars.
The given object\n can be a string, or something that can be cast to a string.\n\n >>> import string\n >>> str_brief(string.uppercase)\n 'ABCDEFGHIJKLMNOPQRST...'\n >>> str_brief(2 ** 50, lim=10, dots='0')\n '11258999060'\n \"\"\"\n if isinstance(obj, basestring) or not use_repr:\n full = str(obj)\n else:\n full = repr(obj)\n postfix = []\n CLOSERS = {'(': ')', '{': '}', '[': ']', '\"': '\"', \"'\": \"'\", '<': '>'}\n for i, c in enumerate(full):\n if i >= lim + len(postfix):\n return full[:i] + dots + ''.join(reversed(postfix))\n if postfix and postfix[-1] == c:\n postfix.pop(-1)\n continue\n closer = CLOSERS.get(c, None)\n if closer is not None:\n postfix.append(closer)\n return full"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_mirror_resources_by_name_map(self, scope=None):\r\n volumes_mirrors_by_name = dict()\r\n cgs_mirrors_by_name = dict()\r\n if ((scope is None) or (scope.lower() == 'volume')):\r\n mirror_list = self.xcli_client.cmd.mirror_list(scope='Volume')\r\n for xcli_mirror in mirror_list:\r\n name = MirroredEntities.get_mirrored_object_name(xcli_mirror)\r\n volumes_mirrors_by_name[name] = xcli_mirror\r\n if ((scope is None) or (scope.lower() == CG)):\r\n for xcli_mirror in self.xcli_client.cmd.mirror_list(scope='CG'):\r\n name = MirroredEntities.get_mirrored_object_name(xcli_mirror)\r\n cgs_mirrors_by_name[name] = xcli_mirror\r\n res = Bunch(volumes=volumes_mirrors_by_name, cgs=cgs_mirrors_by_name)\r\n return res", "response": "returns a Bunch with two maps: volume_name -> volume mirror and cg_name -> cg mirror"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a list of the port names of a XIV host", "response": "def get_host_port_names(self, host_name):\r\n \"\"\" return a list of the port names of XIV host \"\"\"\r\n port_names = list()\r\n host = self.get_hosts_by_name(host_name)\r\n fc_ports = host.fc_ports\r\n iscsi_ports = host.iscsi_ports\r\n
port_names.extend(fc_ports.split(',') if fc_ports != '' else [])\r\n port_names.extend(iscsi_ports.split(',') if iscsi_ports != '' else [])\r\n return port_names"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a list of the port names under a given cluster", "response": "def get_cluster_port_names(self, cluster_name):\r\n \"\"\" return a list of the port names under XIV CLuster \"\"\"\r\n port_names = list()\r\n for host_name in self.get_hosts_by_clusters()[cluster_name]:\r\n port_names.extend(self.get_hosts_by_name(host_name))\r\n return port_names"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove all stale clients from pool", "response": "def flush(self):\n \"\"\"remove all stale clients from pool\"\"\"\n now = time.time()\n to_remove = []\n for k, entry in self.pool.items():\n if entry.timestamp < now:\n entry.client.close()\n to_remove.append(k)\n for k in to_remove:\n del self.pool[k]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets an existing connection or opens a new one", "response": "def get(self, user, password, endpoints):\n \"\"\"Gets an existing connection or opens a new one\n \"\"\"\n now = time.time()\n # endpoints can either be str or list\n if isinstance(endpoints, str):\n endpoints = [endpoints]\n for ep in endpoints:\n if ep not in self.pool:\n continue\n entry = self.pool[ep]\n if (not entry.client.is_connected() or\n entry.timestamp + self.time_to_live < now):\n xlog.debug(\"XCLIClientPool: clearing stale client %s\",\n ep)\n del self.pool[ep]\n entry.client.close()\n continue\n user_client = entry.user_clients.get(user, None)\n if not user_client or not user_client.is_connected():\n user_client = entry.client.get_user_client(user, password)\n entry.user_clients[user] = user_client\n return user_client\n\n xlog.debug(\"XCLIClientPool: connecting to %s\", endpoints)\n client = self.connector(None, None, endpoints)\n 
user_client = {user: client.get_user_client(user, password)}\n for ep in endpoints:\n self.pool[ep] = PoolEntry(client, now, user_client)\n return user_client[user]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef size(value: int, suffixes: list = None) -> str:\n suffixes = suffixes or [\n 'bytes', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB']\n order = int(log2(value) / 10) if value else 0\n return '{:.4g} {}'.format(value / (1 << (order * 10)), suffixes[order])", "response": "Returns a size string for the given integer."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse the size of a single object.", "response": "def parse_size(value):\n \"\"\"\n >>> parse_size('1M')\n 1048576\n >>> parse_size('512K512K')\n 1048576\n >>> parse_size('512K 512K')\n 1048576\n >>> parse_size('512K 512K 4')\n 1048580\n \"\"\"\n if isinstance(value, (int, float)):\n return value\n elif not pattern_valid.fullmatch(value):\n raise ValueError(value)\n\n result = 0\n for m in pattern.finditer(value):\n v = m.groupdict()\n n = int(v['n'])\n unit = v['unit']\n if not unit:\n result += n\n elif unit in size_levels:\n result += n << size_levels[unit]\n else:\n raise ValueError(value)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nparsing duration from a string", "response": "def parse_duration(value):\n \"\"\"\n >>> parse_duration('1h')\n 3600\n >>> parse_duration('1m')\n 60\n >>> parse_duration('1m 2s')\n 62\n >>> parse_duration('1')\n 1\n\n \"\"\"\n if isinstance(value, (int, float)):\n return value\n elif not pattern_valid.fullmatch(value):\n raise ValueError(value)\n\n result = 0\n for m in pattern.finditer(value):\n v = m.groupdict()\n n = int(v['n'])\n unit = v['unit']\n if not unit:\n result += n\n elif unit in durations:\n result += n * durations[unit]\n else:\n raise ValueError(value)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate the documentation for the following Python 3 function\ndef send_event(self, action, properties, event_severity=EVENT_SEVERITY):\n # verify properties\n event_properties = dict() if (properties is None) else properties\n if type(event_properties) is not dict:\n raise TypeError('properties is not dict')\n\n # prepare event\n event_bunch = Bunch(\n Product=self.product_name,\n Version=self.product_version,\n Server=self.server_name,\n Platform=self.platform,\n Action=action,\n Properties=event_properties)\n event_description = self._get_description_prefix() + \\\n json.dumps(event_bunch)\n\n use_custom_event = True\n if CSS_PRODUCT_EVENT in dir(self.xcli.cmd):\n try:\n # send css product event\n log.debug(\"sending css_product_event \"\n \"description=%s severity=%s\",\n event_description, event_severity)\n self.xcli.cmd.css_product_event(severity=event_severity,\n product=self.product_name,\n version=self.product_version,\n server=self.server_name,\n platform=self.platform,\n action=action,\n properties=event_properties)\n use_custom_event = False\n except (UnrecognizedCommandError,\n OperationForbiddenForUserCategoryError):\n log.warning(\"failed css_product_event \"\n \"description=%s severity=%s\",\n event_description, event_severity)\n if use_custom_event:\n # send custom event\n log.debug(\"sending custom_event description=%s severity=%s\",\n event_description, event_severity)\n self.xcli.cmd.custom_event(\n description=event_description, severity=event_severity)", "response": "send an event to the available media modules"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a mirror and returns a new mirror object.", "response": "def _create_mirror(self, resource_type, resource_name, target_name,\r\n mirror_type, slave_resource_name, create_slave='no',\r\n remote_pool=None, rpo=None, remote_rpo=None,\r\n schedule=None, remote_schedule=None,\r\n activate_mirror='no'):\r\n '''creates a mirror and returns a mirror object.\r\n 
resource_type must be 'vol' or 'cg',\r\n target name must be a valid target from target_list,\r\n mirror type must be 'sync' or 'async',\r\n slave_resource_name would be the slave_vol or slave_cg name'''\r\n\r\n kwargs = {\r\n resource_type: resource_name,\r\n 'target': target_name,\r\n 'type': mirror_type,\r\n 'slave_' + resource_type: slave_resource_name,\r\n 'create_slave': create_slave,\r\n 'remote_pool': remote_pool,\r\n 'rpo': rpo,\r\n 'remote_rpo': remote_rpo,\r\n 'schedule': schedule,\r\n 'remote_schedule': remote_schedule\r\n }\r\n\r\n if mirror_type == 'sync':\r\n kwargs['type'] = 'sync_best_effort'\r\n kwargs['rpo'] = None\r\n else:\r\n kwargs['type'] = 'async_interval'\r\n if kwargs['remote_schedule'] is None:\r\n kwargs['remote_schedule'] = kwargs['schedule']\r\n\r\n # avoids a python3 issue of the dict changing\r\n # during iteration\r\n keys = set(kwargs.keys()).copy()\r\n for k in keys:\r\n if kwargs[k] is None:\r\n kwargs.pop(k)\r\n\r\n logger.info('creating mirror with arguments: %s' % kwargs)\r\n self.xcli_client.cmd.mirror_create(**kwargs)\r\n\r\n if activate_mirror == 'yes':\r\n logger.info('Activating mirror %s' % resource_name)\r\n self.activate_mirror(resource_name)\r\n\r\n return self.get_mirror_resources()[resource_name]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nembedding the current exception information into the given one (which will replace the current one). For example:: try: ... 
except OSError as ex: raise chained(MyError(\"database not found!\"))", "response": "def chained(wrapping_exc):\n # pylint: disable=W0212\n \"\"\"\n Embeds the current exception information into the given one (which\n will replace the current one).\n For example::\n\n try:\n ...\n except OSError as ex:\n raise chained(MyError(\"database not found!\"))\n \"\"\"\n t, v, tb = sys.exc_info()\n if not t:\n return wrapping_exc\n wrapping_exc._inner_exc = v\n lines = traceback.format_exception(t, v, tb)\n wrapping_exc._inner_tb = \"\".join(lines[1:])\n return wrapping_exc"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a user profile.", "response": "def get_user(self, username, *, mode=OsuMode.osu, event_days=31):\n \"\"\"Get a user profile.\n\n Parameters\n ----------\n username : str or int\n A `str` representing the user's username, or an `int` representing the user's id.\n mode : :class:`osuapi.enums.OsuMode`\n The osu! game mode for which to look up. Defaults to osu!standard.\n event_days : int\n The number of days in the past to look for events. 
Defaults to 31 (the maximum).\n \"\"\"\n return self._make_req(endpoints.USER, dict(\n k=self.key,\n u=username,\n type=_username_type(username),\n m=mode.value,\n event_days=event_days\n ), JsonList(User))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_user_best(self, username, *, mode=OsuMode.osu, limit=50):\n return self._make_req(endpoints.USER_BEST, dict(\n k=self.key,\n u=username,\n type=_username_type(username),\n m=mode.value,\n limit=limit\n ), JsonList(SoloScore))", "response": "Get a user's best scores."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a user's most recent scores within the last 24 hours.", "response": "def get_user_recent(self, username, *, mode=OsuMode.osu, limit=10):\n \"\"\"Get a user's most recent scores, within the last 24 hours.\n\n Parameters\n ----------\n username : str or int\n A `str` representing the user's username, or an `int` representing the user's id.\n mode : :class:`osuapi.enums.OsuMode`\n The osu! game mode for which to look up. Defaults to osu!standard.\n limit\n The maximum number of results to return.
Defaults to 10, maximum 50.\n \"\"\"\n return self._make_req(endpoints.USER_RECENT, dict(\n k=self.key,\n u=username,\n type=_username_type(username),\n m=mode.value,\n limit=limit\n ), JsonList(RecentScore))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_scores(self, beatmap_id, *, username=None, mode=OsuMode.osu, mods=None, limit=50):\n return self._make_req(endpoints.SCORES, dict(\n k=self.key,\n b=beatmap_id,\n u=username,\n type=_username_type(username),\n m=mode.value,\n mods=mods.value if mods else None,\n limit=limit), JsonList(BeatmapScore))", "response": "Get the top scores for a given beatmap."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_beatmaps(self, *, since=None, beatmapset_id=None, beatmap_id=None, username=None, mode=None,\n include_converted=False, beatmap_hash=None, limit=500):\n \"\"\"Get beatmaps.\n\n Parameters\n ----------\n since : datetime\n If specified, restrict results to beatmaps *ranked* after this date.\n beatmapset_id\n If specified, restrict results to a specific beatmap set.\n beatmap_id\n If specified, restrict results to a specific beatmap.\n username : str or int\n A `str` representing the user's username, or an `int` representing the user's id.\n If specified, restrict results to a specific user.\n mode : :class:`osuapi.enums.OsuMode`\n If specified, restrict results to a specific osu! game mode.\n include_converted : bool\n Whether or not to include autoconverts. Defaults to false.\n beatmap_hash\n If specified, restricts results to a specific beatmap hash.\n limit\n Number of results to return. 
Defaults to 500, maximum 500.\n \"\"\"\n return self._make_req(endpoints.BEATMAPS, dict(\n k=self.key,\n s=beatmapset_id,\n b=beatmap_id,\n u=username,\n since=\"{:%Y-%m-%d %H:%M:%S}\".format(since) if since is not None else None,\n type=_username_type(username),\n m=mode.value if mode else None,\n a=int(include_converted),\n h=beatmap_hash,\n limit=limit\n ), JsonList(Beatmap))", "response": "Get a specific set of beatmaps."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_match(self, match_id):\n return self._make_req(endpoints.MATCH, dict(\n k=self.key,\n mp=match_id), Match)", "response": "Get a multiplayer match."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncopies the data from key_source to key_dest.", "response": "async def copy(self, key_source, storage_dest, key_dest):\n \"\"\" Return True if data are copied\n * optimized for http->fs copy\n * not supported return_status\n \"\"\"\n from aioworkers.storage.filesystem import FileSystemStorage\n if not isinstance(storage_dest, FileSystemStorage):\n return super().copy(key_source, storage_dest, key_dest)\n url = self.raw_key(key_source)\n logger = self.context.logger\n async with self._semaphore:\n async with self.session.get(url) as response:\n if response.status == 404:\n return\n elif response.status >= 400:\n if logger.getEffectiveLevel() == logging.DEBUG:\n logger.debug(\n 'HttpStorage request to %s '\n 'returned code %s:\\n%s' % (\n url, response.status,\n (await response.read()).decode()))\n return\n async with storage_dest.raw_key(key_dest).open('wb') as f:\n async for chunk in response.content.iter_any():\n await f.write(chunk)\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nimports a name from sys. 
modules.", "response": "def import_name(stref: str):\n \"\"\"\n >>> import_name('datetime.datetime.utcnow') is not None\n True\n >>> import_name('aioworkers.utils.import_name') is not None\n True\n \"\"\"\n h = stref\n p = []\n m = None\n try:\n r = importlib.util.find_spec(stref)\n except (AttributeError, ImportError):\n r = None\n\n if r is not None:\n return importlib.import_module(stref)\n\n while '.' in h:\n h, t = h.rsplit('.', 1)\n p.append(t)\n if h in sys.modules:\n m = sys.modules[h]\n break\n\n if m is None:\n m = importlib.import_module(h)\n\n for i in reversed(p):\n if hasattr(m, i):\n m = getattr(m, i)\n else:\n h += '.' + i\n m = importlib.import_module(h)\n\n logger.debug('Imported \"%s\" as %r', stref, m)\n return m"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the value of the options keyword arguments.", "response": "def set_options(self, **options):\n \"\"\"Sets the value of the given options (as keyword arguments).\n Note that underscored in the option's name will be replaced with\n hyphens (i.e., ``c.set_options(gui_mode = True)``\n will set the option ``gui-mode``)\n \"\"\"\n opt2 = self._contexts[-1]\n for k, v in options.items():\n k2 = k.replace(\"_\", \"-\")\n if v is None:\n opt2.pop(k2, None)\n else:\n opt2[k2] = v"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef connect_ssl(cls, user, password, endpoints,\n ca_certs=None, validate=None):\n \"\"\"\n Creates an SSL transport to the first endpoint (aserver) to which\n we successfully connect\n \"\"\"\n if isinstance(endpoints, basestring):\n endpoints = [endpoints]\n transport = SingleEndpointTransport(\n SocketTransport.connect_ssl, endpoints, ca_certs=ca_certs,\n validate=validate)\n return cls(transport, user, password)", "response": "Creates an SSL transport to the first endpoint that we successfully connect to."} {"SOURCE": "codesearchnet", "instruction": "How would you 
explain what the following Python 3 function does\ndef connect_multiendpoint_ssl(cls, user, password, endpoints,\n auto_discover=True, ca_certs=None,\n validate=None):\n \"\"\"\n Creates a MultiEndpointTransport, so that if the current endpoint\n (aserver) fails, it would automatically move to the next available\n endpoint.\n\n If ``auto_discover`` is ``True``, we will execute ipinterface_list\n on the system to discover all management IP interfaces and add them\n to the list of endpoints\n \"\"\"\n if isinstance(endpoints, basestring):\n endpoints = [endpoints]\n client, transport = cls._initiate_client_for_multi_endpoint(user,\n password,\n endpoints,\n ca_certs,\n validate)\n if auto_discover and user:\n all_endpoints = [ipif.address for ipif in\n client.cmd.ipinterface_list()\n if ipif.type.lower() == \"management\"]\n transport.add_endpoints(all_endpoints)\n return client", "response": "Creates a MultiEndpointTransport for the given list of endpoints."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nexecutes the given command on the given remote target.", "response": "def execute_remote(self, remote_target, cmd, **kwargs):\n \"\"\"\n Executes the given command (with the given arguments)\n on the given remote target of the connected machine\n \"\"\"\n data = self._build_command(cmd, kwargs, self._contexts[-1],\n remote_target)\n with self._lock:\n rootelem = self.transport.send(data)\n try:\n return self._build_response(rootelem)\n except ElementNotFoundException:\n xlog.exception(\"XCLIClient.execute\")\n raise chained(CorruptResponse(rootelem))\n except Exception as e:\n xlog.exception(\"XCLIClient.execute\")\n raise e"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a new client for the given user.", "response": "def get_user_client(self, user, password, populate=True):\n \"\"\"\n Returns a new client for the given user. 
This is a lightweight\n client that only uses different credentials and shares the transport\n with the underlying client\n \"\"\"\n return XCLIClientForUser(weakproxy(self), user, password,\n populate=populate)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_remote_client(self, target_name, user=None, password=None):\n if user:\n base = self.get_user_client(user, password, populate=False)\n else:\n base = weakproxy(self)\n return RemoteXCLIClient(base, target_name, populate=True)", "response": "Returns a new RemoteXCLIClient for the remote target."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef accuracy(self, mode: OsuMode):\n if mode is OsuMode.osu:\n return (\n (6 * self.count300 + 2 * self.count100 + self.count50) /\n (6 * (self.count300 + self.count100 + self.count50 + self.countmiss)))\n if mode is OsuMode.taiko:\n return (\n (self.count300 + self.countgeki + (0.5*(self.count100 + self.countkatu))) /\n (self.count300 + self.countgeki + self.count100 + self.countkatu + self.countmiss))\n if mode is OsuMode.mania:\n return (\n (6 * (self.countgeki + self.count300) + 4 * self.countkatu + 2 * self.count100 + self.count50) /\n (6 * (self.countgeki + self.count300 + self.countkatu + self.count100 + self.count50 + self.countmiss)))\n if mode is OsuMode.ctb:\n return (\n (self.count50 + self.count100 + self.count300) /\n (self.count50 + self.count100 + self.count300 + self.countmiss + self.countkatu))", "response": "Calculates the accuracy of the current log entry."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_mirror(self, resource_name, target_name, mirror_type,\r\n slave_resource_name, rpo=None, remote_rpo=None,\r\n schedule=None, remote_schedule=None,\r\n activate_mirror='no'):\r\n '''creates a mirror and returns a mirror object.\r\n target name must be a valid 
target from target_list,\r\n mirror type must be 'sync' or 'async',\r\n slave_resource_name would be the slave_cg name'''\r\n\r\n return self._create_mirror('cg', resource_name, target_name,\r\n mirror_type, slave_resource_name, rpo=rpo,\r\n remote_rpo=remote_rpo, schedule=schedule,\r\n remote_schedule=remote_schedule,\r\n activate_mirror=activate_mirror)", "response": "creates a mirror and returns a mirror object"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_cg_volumes(self, group_id):\r\n for volume in self.xcli_client.cmd.vol_list(cg=group_id):\r\n if volume.snapshot_of == '':\r\n yield volume.name", "response": "Get all non snapshots volumes in a group"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_converter(cls, klass, conv, score=0):\n if isinstance(klass, str):\n klass = import_name(klass)\n item = klass, conv, score\n cls.converters.append(item)\n cls.converters.sort(key=lambda x: x[0])\n return cls", "response": "Add a converter to the list of converters."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef all(self, element_type=None, response_path=None):\n path = self.RETURN_PATH\n if response_path is not None:\n path += \"/\" + response_path\n response_element = self.response_etree.find(path)\n if response_element is None:\n return\n for subelement in self.response_etree.find(path).getchildren():\n if element_type is None or subelement.tag == element_type:\n yield _populate_bunch_with_element(subelement)", "response": "Generates Bunches each representing a single subelement of the response."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef as_single_element(self):\n if self.as_return_etree is None:\n return None\n if len(self.as_return_etree.getchildren()) == 1:\n return 
_populate_bunch_with_element(self.as_return_etree.\n getchildren()[0])\n return _populate_bunch_with_element(self.as_return_etree)", "response": "Processes the response as a single - element response"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the objects for each individual set flag.", "response": "def enabled_flags(self):\n \"\"\"Return the objects for each individual set flag.\"\"\"\n if not self.value:\n yield self.__flags_members__[0]\n return\n\n val = self.value\n while val:\n lowest_bit = val & -val\n val ^= lowest_bit\n yield self.__flags_members__[lowest_bit]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef contains_any(self, other):\n return self.value == other.value or self.value & other.value", "response": "Check if any of the attributes is set."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_percentage(self):\n cumulative_elapsed_time = self.get_cumulative_elapsed_time()\n sum_elapsed_time = sum(cumulative_elapsed_time, datetime.timedelta())\n if not sum_elapsed_time:\n raise ValueError(\"cannot get percentage if there is no any elapsed time\")\n return [div_timedelta(t, sum_elapsed_time) for t in cumulative_elapsed_time]", "response": "Get the cumulative time percentage of each stopwatch including the current split."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the mean elapsed time per split of each stopwatch.", "response": "def get_mean_per_split(self):\n \"\"\"Get the mean elapsed time per split of each stopwatch (excluding the current split).\n\n Returns\n -------\n mean_elapsed_time_per_split : List[datetime.timedelta]\n \"\"\"\n return [div_timedelta(sum(stopwatch.split_elapsed_time, datetime.timedelta()),\n len(stopwatch.split_elapsed_time))\n if stopwatch.split_elapsed_time else datetime.timedelta()\n for stopwatch in self]"} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 script to\nget all statistics as a dictionary.", "response": "def get_statistics(self):\n \"\"\"Get all statistics as a dictionary.\n\n Returns\n -------\n statistics : Dict[str, List]\n \"\"\"\n return {\n 'cumulative_elapsed_time': self.get_cumulative_elapsed_time(),\n 'percentage': self.get_percentage(),\n 'n_splits': self.get_n_splits(),\n 'mean_per_split': self.get_mean_per_split(),\n }"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef conform_query(cls, query):\n query = parse_qs(query, keep_blank_values=True)\n\n # Load yaml of passed values\n for key, vals in query.items():\n # Multiple values of the same name could be passed use first\n # Also params without strings will be treated as true values\n query[key] = yaml.load(vals[0] or 'true', Loader=yaml.FullLoader)\n\n # If expected, populate with defaults\n for key, val in cls.default_query.items():\n if key not in query:\n query[key] = val\n\n return query", "response": "Converts the query string from a target uri uses cls. 
default_query to populate default arguments."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\noverriding this method to use values from the parsed uri to initialize the expected target.", "response": "def load_target(cls, scheme, path, fragment, username,\n password, hostname, port, query,\n load_method, **kwargs):\n \"\"\"Override this method to use values from the parsed uri to initialize\n the expected target.\n\n \"\"\"\n raise NotImplementedError(\"load_target must be overridden\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef hog(concurrency, requests, limit, timeout,\n params, paramfile, headers, headerfile, method, url):\n '''Sending multiple `HTTP` requests `ON` `GREEN` thread'''\n\n params = parse_from_list_and_file(params, paramfile)\n headers = parse_from_list_and_file(headers, headerfile)\n\n # Running information\n click.echo(HR)\n click.echo(\"Hog is running with {} threads, \".format(concurrency) +\n \"{} requests \".format(requests) +\n \"and timeout in {} second(s).\".format(timeout))\n if limit != 0:\n click.echo(\">>> Limit: {} request(s) per second.\".format(limit))\n click.echo(HR)\n\n # Let's begin!\n result = Hog(callback).run(url, params, headers, method, timeout, concurrency, requests, limit)\n\n sys.stdout.write(\"\\n\")\n print_result(result)", "response": "Send multiple HTTP requests ON green thread"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _append_value(self, value, _file, _name):\n _tabs = '\\t' * self._tctr\n _keys = '{tabs}{name}\\n'.format(tabs=_tabs, name=_name)\n _file.seek(self._sptr, os.SEEK_SET)\n _file.write(_keys)\n\n self._append_dict(value, _file)", "response": "Append a value to the contents of the current dict."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nappending array contents to the file.", "response": "def _append_array(self, value, _file):\n 
\"\"\"Call this function to write array contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n\n \"\"\"\n _tabs = '\\t' * self._tctr\n _labs = '{tabs}\\n'.format(tabs=_tabs)\n _file.write(_labs)\n self._tctr += 1\n\n for _item in value:\n if _item is None:\n continue\n _item = self.object_hook(_item)\n _type = type(_item).__name__\n _MAGIC_TYPES[_type](self, _item, _file)\n\n self._tctr -= 1\n _tabs = '\\t' * self._tctr\n _labs = '{tabs}\\n'.format(tabs=_tabs)\n _file.write(_labs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nappend dict contents to the file.", "response": "def _append_dict(self, value, _file):\n \"\"\"Call this function to write dict contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n\n \"\"\"\n _tabs = '\\t' * self._tctr\n _labs = '{tabs}\\n'.format(tabs=_tabs)\n _file.write(_labs)\n self._tctr += 1\n\n for (_item, _text) in value.items():\n if _text is None:\n continue\n\n _tabs = '\\t' * self._tctr\n _keys = '{tabs}{item}\\n'.format(tabs=_tabs, item=_item)\n _file.write(_keys)\n\n _text = self.object_hook(_text)\n _type = type(_text).__name__\n _MAGIC_TYPES[_type](self, _text, _file)\n\n self._tctr -= 1\n _tabs = '\\t' * self._tctr\n _labs = '{tabs}\\n'.format(tabs=_tabs)\n _file.write(_labs)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nappends data to the end of the list.", "response": "def _append_data(self, value, _file):\n \"\"\"Call this function to write data contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n\n \"\"\"\n # binascii.b2a_base64(value) -> plistlib.Data\n # binascii.a2b_base64(Data) -> value(bytes)\n\n _tabs = '\\t' * self._tctr\n _text = base64.b64encode(value).decode() # value.hex() # str(value)[2:-1]\n _labs = '{tabs}{text}\\n'.format(tabs=_tabs, text=_text)\n # _labs = 
'{tabs}\\n'.format(tabs=_tabs)\n\n # _list = []\n # for _item in textwrap.wrap(value.hex(), 32):\n # _text = ' '.join(textwrap.wrap(_item, 2))\n # _item = '{tabs}\\t{text}'.format(tabs=_tabs, text=_text)\n # _list.append(_item)\n # _labs += '\\n'.join(_list)\n\n # _data = [H for H in iter(\n # functools.partial(io.StringIO(value.hex()).read, 2), '')\n # ] # to split bytes string into length-2 hex string list\n # _labs += '\\n{tabs}\\n'.format(tabs=_tabs)\n _file.write(_labs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _append_date(self, value, _file):\n _tabs = '\\t' * self._tctr\n _text = value.strftime('%Y-%m-%dT%H:%M:%S.%fZ')\n _labs = '{tabs}{text}\\n'.format(tabs=_tabs, text=_text)\n _file.write(_labs)", "response": "Write the date contents of the object to the file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nwrite integer contents to file.", "response": "def _append_integer(self, value, _file):\n \"\"\"Call this function to write integer contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n\n \"\"\"\n _tabs = '\\t' * self._tctr\n _text = value\n _labs = '{tabs}{text}\\n'.format(tabs=_tabs, text=_text)\n _file.write(_labs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nregister proper and improper file mappings.", "response": "def register_output_name(self, input_folder, rel_name, rel_output_name):\n \"\"\"Register proper and improper file mappings.\"\"\"\n self.improper_input_file_mapping[rel_name].add(rel_output_name)\n self.proper_input_file_mapping[os.path.join(input_folder, rel_name)] = rel_output_name\n self.proper_input_file_mapping[rel_output_name] = rel_output_name"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _discover(self):\n for ep in pkg_resources.iter_entry_points('yamlsettings10'):\n ext = 
ep.load()\n if callable(ext):\n ext = ext()\n self.add(ext)", "response": "Find and install all extensions"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_extension(self, protocol):\n if protocol not in self.registry:\n raise NoProtocolError(\"No protocol for %s\" % protocol)\n index = self.registry[protocol]\n return self.extensions[index]", "response": "Retrieve the extension for the given protocol"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd an extension to the registry.", "response": "def add(self, extension):\n \"\"\"Adds an extension to the registry\n\n :param extension: Extension object\n :type extension: yamlsettings.extensions.base.YamlSettingsExtension\n\n \"\"\"\n index = len(self.extensions)\n self.extensions[index] = extension\n for protocol in extension.protocols:\n self.registry[protocol] = index"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _load_first(self, target_uris, load_method, **kwargs):\n if isinstance(target_uris, string_types):\n target_uris = [target_uris]\n\n # TODO: Move the list logic into the extension, otherwise a\n # load will always try all missing files first.\n # TODO: How would multiple protocols work, should the registry hold\n # persist copies?\n for target_uri in target_uris:\n target = urlsplit(target_uri, scheme=self.default_protocol)\n\n extension = self.get_extension(target.scheme)\n query = extension.conform_query(target.query)\n try:\n yaml_dict = extension.load_target(\n target.scheme,\n target.path,\n target.fragment,\n target.username,\n target.password,\n target.hostname,\n target.port,\n query,\n load_method,\n **kwargs\n )\n return yaml_dict\n except extension.not_found_exception:\n pass\n\n raise IOError(\"unable to load: {0}\".format(target_uris))", "response": "Load first yamldict target found in uri list."} {"SOURCE": 
"codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load(self, target_uris, fields=None, **kwargs):\n yaml_dict = self._load_first(\n target_uris, yamlsettings.yamldict.load, **kwargs\n )\n # TODO: Move this into the extension, otherwise every load from\n # a persistant location will refilter fields.\n if fields:\n yaml_dict.limit(fields)\n\n return yaml_dict", "response": "Load first yamldict target found in uri."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load_all(self, target_uris, **kwargs):\n '''\n Load *all* YAML settings from a list of file paths given.\n\n - File paths in the list gets the priority by their orders\n of the list.\n '''\n yaml_series = self._load_first(\n target_uris, yamlsettings.yamldict.load_all, **kwargs\n )\n yaml_dicts = []\n for yaml_dict in yaml_series:\n yaml_dicts.append(yaml_dict)\n # return YAMLDict objects\n return yaml_dicts", "response": "Load all YAML settings from a list of file paths given."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_all(stream):\n loader = YAMLDictLoader(stream)\n try:\n while loader.check_data():\n yield loader.get_data()\n finally:\n loader.dispose()", "response": "Parse all YAML documents in a stream and produce corresponding YAMLDict objects."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nserializes a dictionary into a YAML stream.", "response": "def dump(data, stream=None, **kwargs):\n \"\"\"\n Serialize YAMLDict into a YAML stream.\n If stream is None, return the produced string instead.\n \"\"\"\n return yaml.dump_all(\n [data],\n stream=stream,\n Dumper=YAMLDictDumper,\n **kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nserializing a list of data into a YAML stream.", "response": "def dump_all(data_list, 
stream=None, **kwargs):\n \"\"\"\n Serialize YAMLDict into a YAML stream.\n If stream is None, return the produced string instead.\n \"\"\"\n return yaml.dump_all(\n data_list,\n stream=stream,\n Dumper=YAMLDictDumper,\n **kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ntraversing through all keys and values of the object and replace keys and values with the return values from the callback function.", "response": "def traverse(self, callback):\n ''' Traverse through all keys and values (in-order)\n and replace keys and values with the return values\n from the callback function.\n '''\n def _traverse_node(path, node, callback):\n ret_val = callback(path, node)\n if ret_val is not None:\n # replace node with the return value\n node = ret_val\n else:\n # traverse deep into the hierarchy\n if isinstance(node, YAMLDict):\n for k, v in node.items():\n node[k] = _traverse_node(path + [k], v,\n callback)\n elif isinstance(node, list):\n for i, v in enumerate(node):\n node[i] = _traverse_node(path + ['[{0}]'.format(i)], v,\n callback)\n else:\n pass\n return node\n _traverse_node([], self, callback)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update(self, yaml_dict):\n ''' Update the content (i.e. 
keys and values) with yaml_dict.\n '''\n def _update_node(base_node, update_node):\n if isinstance(update_node, YAMLDict) or \\\n isinstance(update_node, dict):\n if not (isinstance(base_node, YAMLDict)):\n # NOTE: A regular dictionary is replaced by a new\n # YAMLDict object.\n new_node = YAMLDict()\n else:\n new_node = base_node\n for k, v in update_node.items():\n new_node[k] = _update_node(new_node.get(k), v)\n elif isinstance(update_node, list) or isinstance(\n update_node, tuple\n ):\n # NOTE: A list/tuple is replaced by a new list/tuple.\n new_node = []\n for v in update_node:\n new_node.append(_update_node(None, v))\n if isinstance(update_node, tuple):\n new_node = tuple(new_node)\n else:\n new_node = update_node\n return new_node\n # Convert non-YAMLDict objects to a YAMLDict\n if not (isinstance(yaml_dict, YAMLDict) or\n isinstance(yaml_dict, dict)):\n yaml_dict = YAMLDict(yaml_dict)\n _update_node(self, yaml_dict)", "response": "Update the content of the object with the given YAMLDict."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rebase(self, yaml_dict):\n ''' Use yaml_dict as self's new base and update with existing\n reverse of update.\n '''\n base = yaml_dict.clone()\n base.update(self)\n self.clear()\n self.update(base)", "response": "Use yaml_dict as self s new base and update with existing\n reverse of update."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef limit(self, keys):\n ''' Remove all keys other than the keys specified.\n '''\n if not isinstance(keys, list) and not isinstance(keys, tuple):\n keys = [keys]\n remove_keys = [k for k in self.keys() if k not in keys]\n for k in remove_keys:\n self.pop(k)", "response": "Remove all keys other than the keys specified."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef save(yaml_dict, filepath):\n '''\n Save YAML settings to the specified 
file path.\n '''\n yamldict.dump(yaml_dict, open(filepath, 'w'), default_flow_style=False)", "response": "Save YAML settings to the specified file path."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsaving all YAML settings to a file.", "response": "def save_all(yaml_dicts, filepath):\n '''\n Save *all* YAML settings to the specified file path.\n '''\n yamldict.dump_all(yaml_dicts, open(filepath, 'w'),\n default_flow_style=False)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating the YAML settings with loaded values from filepaths.", "response": "def update_from_file(yaml_dict, filepaths):\n '''\n Override YAML settings with loaded values from filepaths.\n\n - File paths in the list gets the priority by their orders of the list.\n '''\n # load YAML settings with only fields in yaml_dict\n yaml_dict.update(registry.load(filepaths, list(yaml_dict)))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_from_env(yaml_dict, prefix=None):\n '''\n Override YAML settings with values from the environment variables.\n\n - The letter '_' is delimit the hierarchy of the YAML settings such\n that the value of 'config.databases.local' will be overridden\n by CONFIG_DATABASES_LOCAL.\n '''\n prefix = prefix or \"\"\n\n def _set_env_var(path, node):\n env_path = \"{0}{1}{2}\".format(\n prefix.upper(),\n '_' if prefix else '',\n '_'.join([str(key).upper() for key in path])\n )\n env_val = os.environ.get(env_path, None)\n if env_val is not None:\n # convert the value to a YAML-defined type\n env_dict = yamldict.load('val: {0}'.format(env_val))\n return env_dict.val\n else:\n return None\n\n # traverse yaml_dict with the callback function\n yaml_dict.traverse(_set_env_var)", "response": "Update the YAML settings with values from the environment variables."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following 
Python 3 code\ndef line(x, y, width=0.1):\n ctx = _state[\"ctx\"]\n ctx.move_to(0, 0)\n ctx.line_to(x, y)\n ctx.close_path()\n # _ctx.set_source_rgb (0.3, 0.2, 0.5)\n ctx.set_line_width(width)\n ctx.stroke()", "response": "Draw a line on the current canvas."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find_version(\n package_name: str, version_module_name: str = '_version',\n version_variable_name: str = 'VERSION') -> str:\n \"\"\"Simulate behaviour of \"from package_name._version import VERSION\", and return VERSION.\"\"\"\n version_module = importlib.import_module(\n '{}.{}'.format(package_name.replace('-', '_'), version_module_name))\n return getattr(version_module, version_variable_name)", "response": "Simulate behaviour of from package_name. _version import VERSION and return VERSION."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef find_packages(root_directory: str = '.') -> t.List[str]:\n exclude = ['test*', 'test.*'] if ('bdist_wheel' in sys.argv or 'bdist' in sys.argv) else []\n packages_list = setuptools.find_packages(root_directory, exclude=exclude)\n return packages_list", "response": "Find packages to pack."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_requirements(\n requirements_path: str = 'requirements.txt') -> t.List[str]:\n \"\"\"Read contents of requirements.txt file and return data from its relevant lines.\n\n Only non-empty and non-comment lines are relevant.\n \"\"\"\n requirements = []\n with HERE.joinpath(requirements_path).open() as reqs_file:\n for requirement in [line.strip() for line in reqs_file.read().splitlines()]:\n if not requirement or requirement.startswith('#'):\n continue\n requirements.append(requirement)\n return requirements", "response": "Read contents of requirements. 
txt file and return data from relevant lines."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef partition_version_classifiers(\n classifiers: t.Sequence[str], version_prefix: str = 'Programming Language :: Python :: ',\n only_suffix: str = ' :: Only') -> t.Tuple[t.List[str], t.List[str]]:\n \"\"\"Find version number classifiers in given list and partition them into 2 groups.\"\"\"\n versions_min, versions_only = [], []\n for classifier in classifiers:\n version = classifier.replace(version_prefix, '')\n versions = versions_min\n if version.endswith(only_suffix):\n version = version.replace(only_suffix, '')\n versions = versions_only\n try:\n versions.append(tuple([int(_) for _ in version.split('.')]))\n except ValueError:\n pass\n return versions_min, versions_only", "response": "Given a list of version number classifiers find version number classifiers in given list and partition them into 2 groups."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndetermining the minimum required Python version.", "response": "def find_required_python_version(\n classifiers: t.Sequence[str], version_prefix: str = 'Programming Language :: Python :: ',\n only_suffix: str = ' :: Only') -> t.Optional[str]:\n \"\"\"Determine the minimum required Python version.\"\"\"\n versions_min, versions_only = partition_version_classifiers(\n classifiers, version_prefix, only_suffix)\n if len(versions_only) > 1:\n raise ValueError(\n 'more than one \"{}\" version encountered in {}'.format(only_suffix, versions_only))\n only_version = None\n if len(versions_only) == 1:\n only_version = versions_only[0]\n for version in versions_min:\n if version[:len(only_version)] != only_version:\n raise ValueError(\n 'the \"{}\" version {} is inconsistent with version {}'\n .format(only_suffix, only_version, version))\n min_supported_version = None\n for version in versions_min:\n if min_supported_version is None 
or \\\n (len(version) >= len(min_supported_version) and version < min_supported_version):\n min_supported_version = version\n if min_supported_version is None:\n if only_version is not None:\n return '.'.join([str(_) for _ in only_version])\n else:\n return '>=' + '.'.join([str(_) for _ in min_supported_version])\n return None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse_rst(text: str) -> docutils.nodes.document:\n parser = docutils.parsers.rst.Parser()\n components = (docutils.parsers.rst.Parser,)\n settings = docutils.frontend.OptionParser(components=components).get_default_values()\n document = docutils.utils.new_document('', settings=settings)\n parser.parse(text, document)\n return document", "response": "Parse text assuming it's RST markup."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef resolve_relative_rst_links(text: str, base_link: str):\n document = parse_rst(text)\n visitor = SimpleRefCounter(document)\n document.walk(visitor)\n for target in visitor.references:\n name = target.attributes['name']\n uri = target.attributes['refuri']\n new_link = '`{} <{}{}>`_'.format(name, base_link, uri)\n if name == uri:\n text = text.replace('`<{}>`_'.format(uri), new_link)\n else:\n text = text.replace('`{} <{}>`_'.format(name, uri), new_link)\n return text", "response": "Resolve all relative links in a given RST document."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncall for reference nodes.", "response": "def visit_reference(self, node: docutils.nodes.reference) -> None:\n \"\"\"Called for \"reference\" nodes.\"\"\"\n # if len(node.children) != 1 or not isinstance(node.children[0], docutils.nodes.Text) \\\n # or not all(_ in node.attributes for _ in ('name', 'refuri')):\n # return\n path = pathlib.Path(node.attributes['refuri'])\n try:\n if path.is_absolute():\n return\n resolved_path = path.resolve()\n except 
FileNotFoundError: # in resolve(), prior to Python 3.6\n return\n # except OSError: # in is_absolute() and resolve(), on URLs in Windows\n # return\n try:\n resolved_path.relative_to(HERE)\n except ValueError:\n return\n if not path.is_file():\n return\n assert node.attributes['name'] == node.children[0].astext()\n self.references.append(node)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef try_fields(cls, *names) -> t.Optional[t.Any]:\n for name in names:\n if hasattr(cls, name):\n return getattr(cls, name)\n raise AttributeError((cls, names))", "response": "Try to find first existing of given class field names."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_readme(cls, readme_path: str = 'README.rst', encoding: str = 'utf-8') -> str:\n with HERE.joinpath(readme_path).open(encoding=encoding) as readme_file:\n long_description = readme_file.read() # type: str\n\n if readme_path.endswith('.rst') and cls.download_url.startswith('https://github.com/'):\n base_url = '{}/blob/v{}/'.format(cls.download_url, cls.version)\n long_description = resolve_relative_rst_links(long_description, base_url)\n\n return long_description", "response": "Parse a README file and return its contents as a string."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfills in possibly missing package metadata.", "response": "def prepare(cls) -> None:\n \"\"\"Fill in possibly missing package metadata.\"\"\"\n if cls.version is None:\n cls.version = find_version(cls.name)\n if cls.long_description is None:\n cls.long_description = cls.parse_readme()\n if cls.packages is None:\n cls.packages = find_packages(cls.root_directory)\n if cls.install_requires is None:\n cls.install_requires = parse_requirements()\n if cls.python_requires is None:\n cls.python_requires = find_required_python_version(cls.classifiers)"} {"SOURCE": 
"codesearchnet", "instruction": "How would you implement a function in Python 3 that\nruns setuptools. setup with correct arguments.", "response": "def setup(cls) -> None:\n \"\"\"Run setuptools.setup() with correct arguments.\"\"\"\n cls.prepare()\n setuptools.setup(\n name=cls.name, version=cls.version, description=cls.description,\n long_description=cls.long_description, url=cls.url, download_url=cls.download_url,\n author=cls.author, author_email=cls.author_email,\n maintainer=cls.try_fields('maintainer', 'author'),\n maintainer_email=cls.try_fields('maintainer_email', 'author_email'),\n license=cls.license_str, classifiers=cls.classifiers, keywords=cls.keywords,\n packages=cls.packages, package_dir={'': cls.root_directory},\n include_package_data=True,\n package_data=cls.package_data, exclude_package_data=cls.exclude_package_data,\n install_requires=cls.install_requires, extras_require=cls.extras_require,\n python_requires=cls.python_requires,\n entry_points=cls.entry_points, test_suite=cls.test_suite)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef surface_to_image(surface):\n from IPython.display import Image\n buf = BytesIO()\n surface.write_to_png(buf)\n data = buf.getvalue()\n buf.close()\n return Image(data=data)", "response": "Renders current buffer surface to IPython image"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_npimage(transparent=False, y_origin=\"top\"):\n image_surface = render_record_surface()\n img = 0 + np.frombuffer(image_surface.get_data(), np.uint8)\n img.shape = (HEIGHT, WIDTH, 4)\n img = img[:, :, [2, 1, 0, 3]]\n if y_origin == \"bottom\":\n img = img[::-1]\n return img if transparent else img[:, :, : 3]", "response": "Returns a numpy array representing the RGB picture."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstopping recursion if resolution is too low on number of 
components is too high", "response": "def check_limits(user_rule):\n \"\"\"Stop recursion if resolution is too low or number of components is too high.\"\"\"\n\n def wrapper(*args, **kwargs):\n \"\"\"The body of the decorator.\"\"\"\n global _state\n _state[\"cnt_elements\"] += 1\n _state[\"depth\"] += 1\n matrix = _state[\"ctx\"].get_matrix()\n # TODO: add check of transparency = 0\n if _state[\"depth\"] >= MAX_DEPTH:\n logger.info(\"stop recursion by reaching max depth {}\".format(MAX_DEPTH))\n else:\n min_size_scaled = SIZE_MIN_FEATURE / min(WIDTH, HEIGHT)\n current_scale = max([abs(matrix[i]) for i in range(2)])\n if (current_scale < min_size_scaled):\n logger.info(\"stop recursion by reaching min feature size\")\n # TODO: check feature size with respect to current ink size\n else:\n if _state[\"cnt_elements\"] > MAX_ELEMENTS:\n logger.info(\"stop recursion by reaching max elements\")\n else:\n user_rule(*args, **kwargs)\n _state[\"depth\"] -= 1\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ninitialize the global state.", "response": "def init(canvas_size=(512, 512), max_depth=12, face_color=None, background_color=None):\n \"\"\"Initializes global state\"\"\"\n global _background_color\n _background_color = background_color\n global _ctx\n global cnt_elements\n global MAX_DEPTH\n global WIDTH\n global HEIGHT\n\n _init_state()\n sys.setrecursionlimit(20000)\n MAX_DEPTH = max_depth\n WIDTH, HEIGHT = canvas_size\n\n if face_color is not None:\n r, g, b = htmlcolor_to_rgb(face_color)\n _state[\"ctx\"].set_source_rgb(r, g, b)\n hue, saturation, brightness = colorsys.rgb_to_hsv(r, g, b)\n _state[\"color\"] = (hue, saturation, brightness, 1)\n logger.debug(\"Init done\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef htmlcolor_to_rgb(str_color):\n if not (str_color.startswith('#') and 
len(str_color) == 7):\n raise ValueError(\"Bad html color format. Expected: '#RRGGBB' \")\n result = [1.0 * int(n, 16) / 255 for n in (str_color[1:3], str_color[3:5], str_color[5:])]\n return result", "response": "Convert an HTML-style color string to RGB values."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncompile docstrings into HTML strings with shortcode support.", "response": "def compile_string(self, data, source_path=None, is_two_file=True, post=None, lang=None):\n \"\"\"Compile docstrings into HTML strings, with shortcode support.\"\"\"\n if not is_two_file:\n _, data = self.split_metadata(data, None, lang)\n new_data, shortcodes = sc.extract_shortcodes(data)\n # The way pdoc generates output is a bit inflexible\n path_templates = os.path.join(self.plugin_path, \"tempaltes\")\n LOGGER.info(f\"set path tempaltes to {path_templates}\")\n with tempfile.TemporaryDirectory() as tmpdir:\n subprocess.check_call(['pdoc', '--html', '--html-no-source', '--html-dir', tmpdir, \"--template-dir\", path_templates] + shlex.split(new_data.strip()))\n fname = os.listdir(tmpdir)[0]\n tmd_subdir = os.path.join(tmpdir, fname)\n fname = os.listdir(tmd_subdir)[0]\n LOGGER.info(f\"tmpdir = {tmd_subdir}, fname = {fname}\")\n with open(os.path.join(tmd_subdir, fname), 'r', encoding='utf8') as inf:\n output = inf.read()\n return self.site.apply_shortcodes_uuid(output, shortcodes, filename=source_path, extra_context={'post': post})"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncompile the docstring into HTML and save as dest.", "response": "def compile(self, source, dest, is_two_file=True, post=None, lang=None):\n \"\"\"Compile the docstring into HTML and save as dest.\"\"\"\n makedirs(os.path.dirname(dest))\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n data = in_file.read()\n data, shortcode_deps = self.compile_string(data, 
source, is_two_file, post, lang)\n out_file.write(data)\n if post is None:\n if shortcode_deps:\n self.logger.error(\n \"Cannot save dependencies for post {0} (post unknown)\",\n source)\n else:\n post._depfile[dest] += shortcode_deps\n return True"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_post(self, path, **kw):\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not used by create_post as of now.\n kw.pop('is_page', False)\n metadata = {}\n metadata.update(self.default_metadata)\n metadata.update(kw)\n makedirs(os.path.dirname(path))\n if not content.endswith('\\n'):\n content += '\\n'\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if onefile:\n fd.write(write_metadata(metadata, comment_wrap=False, site=self.site, compiler=self))\n fd.write(content)", "response": "Create a new post."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nappends a value to the contents of the current content dict.", "response": "def _append_value(self, value, _file, _name):\n \"\"\"Call this function to write contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n * _name - str, name of current content dict\n\n \"\"\"\n if self._flag:\n _keys = _name + '\\n'\n else:\n _keys = '\\n' + _name + '\\n'\n _file.seek(self._sptr, os.SEEK_SET)\n _file.write(_keys)\n\n self._bctr = collections.defaultdict(int) # blank branch counter dict\n self._append_branch(value, _file)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nappend array contents to the file.", "response": "def _append_array(self, value, _file):\n \"\"\"Call this function to write array contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n\n \"\"\"\n if not value:\n self._append_none(None, _file)\n return\n\n _bptr = ''\n _tabs = ''\n 
_tlen = len(value) - 1\n if _tlen:\n _bptr = ' |-->'\n for _ in range(self._tctr + 1):\n _tabs += _TEMP_SPACES if self._bctr[_] else _TEMP_BRANCH\n else:\n _tabs = ''\n\n for (_nctr, _item) in enumerate(value):\n _text = '{tabs}{bptr}'.format(tabs=_tabs, bptr=_bptr)\n _file.write(_text)\n\n _item = self.object_hook(_item)\n _type = type(_item).__name__\n _MAGIC_TYPES[_type](self, _item, _file)\n\n _suff = '\\n' if _nctr < _tlen else ''\n _file.write(_suff)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _append_branch(self, value, _file):\n if not value:\n return\n # return self._append_none(None, _file)\n\n self._tctr += 1\n _vlen = len(value)\n for (_vctr, (_item, _text)) in enumerate(value.items()):\n _text = self.object_hook(_text)\n _type = type(_text).__name__\n\n flag_dict = (_type == 'dict')\n flag_list = (_type == 'list' and (len(_text) > 1 or (len(_text) == 1 and type(_text[0]).__name__ == 'dict'))) # noqa pylint: disable=line-too-long\n flag_tuple = (_type == 'tuple' and (len(_text) > 1 or (len(_text) == 1 and type(_text[0]).__name__ == 'dict'))) # noqa pylint: disable=line-too-long\n flag_bytes = (_type == 'bytes' and len(_text) > 16)\n if any((flag_dict, flag_list, flag_tuple, flag_bytes)):\n _pref = '\\n'\n else:\n _pref = ' ->'\n\n _labs = ''\n for _ in range(self._tctr):\n _labs += _TEMP_SPACES if self._bctr[_] else _TEMP_BRANCH\n\n _keys = '{labs} |-- {item}{pref}'.format(labs=_labs, item=_item, pref=_pref)\n _file.write(_keys)\n\n if _vctr == _vlen - 1:\n self._bctr[self._tctr] = 1\n\n _MAGIC_TYPES[_type](self, _text, _file)\n\n _suff = '' if _type == 'dict' else '\\n'\n _file.write(_suff)\n\n self._bctr[self._tctr] = 0\n self._tctr -= 1", "response": "Append branch contents to the file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nappending bytes to the plistlib. 
Data object.", "response": "def _append_bytes(self, value, _file):\n \"\"\"Call this function to write bytes contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n\n \"\"\"\n # binascii.b2a_base64(value) -> plistlib.Data\n # binascii.a2b_base64(Data) -> value(bytes)\n if not value:\n self._append_none(None, _file)\n return\n\n if len(value) > 16:\n _tabs = ''\n for _ in range(self._tctr + 1):\n _tabs += _TEMP_SPACES if self._bctr[_] else _TEMP_BRANCH\n\n _list = []\n for (_ictr, _item) in enumerate(textwrap.wrap(value.hex(), 32)):\n _bptr = ' ' if _ictr else ' |--> '\n _text = ' '.join(textwrap.wrap(_item, 2))\n _item = '{tabs}{bptr}{text}'.format(tabs=_tabs, bptr=_bptr, text=_text)\n _list.append(_item)\n _labs = '\\n'.join(_list)\n else:\n _text = ' '.join(textwrap.wrap(value.hex(), 2))\n _labs = ' {text}'.format(text=_text)\n _file.write(_labs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nappending number contents to the file.", "response": "def _append_number(self, value, _file): # pylint: disable=no-self-use\n \"\"\"Call this function to write number contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n\n \"\"\"\n _text = value\n _labs = ' {text}'.format(text=_text)\n _file.write(_labs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nappend a value to the contents of the current object.", "response": "def _append_value(self, value, _file, _name):\n \"\"\"Call this function to write contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n * _name - str, name of current content dict\n\n \"\"\"\n _tabs = '\\t' * self._tctr\n _cmma = ',\\n' if self._vctr[self._tctr] else ''\n _keys = '{cmma}{tabs}\"{name}\" :'.format(cmma=_cmma, tabs=_tabs, name=_name)\n\n _file.seek(self._sptr, os.SEEK_SET)\n _file.write(_keys)\n\n self._vctr[self._tctr] += 1\n 
self._append_object(value, _file)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nappends array contents to the file.", "response": "def _append_array(self, value, _file):\n \"\"\"Call this function to write array contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n\n \"\"\"\n _labs = ' ['\n _file.write(_labs)\n\n self._tctr += 1\n\n for _item in value:\n _cmma = ',' if self._vctr[self._tctr] else ''\n _file.write(_cmma)\n\n self._vctr[self._tctr] += 1\n\n _item = self.object_hook(_item)\n _type = type(_item).__name__\n _MAGIC_TYPES[_type](self, _item, _file)\n\n self._vctr[self._tctr] = 0\n self._tctr -= 1\n\n _labs = ' ]'\n _file.write(_labs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _append_object(self, value, _file):\n _labs = ' {'\n _file.write(_labs)\n self._tctr += 1\n\n for (_item, _text) in value.items():\n _tabs = '\\t' * self._tctr\n _cmma = ',' if self._vctr[self._tctr] else ''\n _keys = '{cmma}\\n{tabs}\"{item}\" :'.format(cmma=_cmma, tabs=_tabs, item=_item)\n _file.write(_keys)\n\n self._vctr[self._tctr] += 1\n\n _text = self.object_hook(_text)\n _type = type(_text).__name__\n _MAGIC_TYPES[_type](self, _text, _file)\n\n self._vctr[self._tctr] = 0\n self._tctr -= 1\n _tabs = '\\t' * self._tctr\n _labs = '\\n{tabs}{}'.format('}', tabs=_tabs)\n _file.write(_labs)", "response": "Append object contents to the file."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nappending string contents to file.", "response": "def _append_string(self, value, _file): # pylint: disable=no-self-use\n \"\"\"Call this function to write string contents.\n\n Keyword arguments:\n * value - dict, content to be dumped\n * _file - FileIO, output file\n\n \"\"\"\n _text = str(value).replace('\"', '\\\\\"')\n _labs = ' \"{text}\"'.format(text=_text)\n _file.write(_labs)"} 
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _append_bytes(self, value, _file): # pylint: disable=no-self-use\n # binascii.b2a_base64(value) -> plistlib.Data\n # binascii.a2b_base64(Data) -> value(bytes)\n\n _text = ' '.join(textwrap.wrap(value.hex(), 2))\n # _data = [H for H in iter(\n # functools.partial(io.StringIO(value.hex()).read, 2), '')\n # ] # to split bytes string into length-2 hex string list\n _labs = ' \"{text}\"'.format(text=_text)\n _file.write(_labs)", "response": "Append bytes contents to the file as a hex string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _append_date(self, value, _file): # pylint: disable=no-self-use\n _text = value.strftime('%Y-%m-%dT%H:%M:%S.%fZ')\n _labs = ' \"{text}\"'.format(text=_text)\n _file.write(_labs)", "response": "Append date contents to the file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef start(self, verbose=None, end_in_new_line=None):\n if self._start_time is not None and self._end_time is None:\n # the stopwatch is already running\n return self\n if verbose is None:\n verbose = self.verbose_start\n if verbose:\n if end_in_new_line is None:\n end_in_new_line = self.end_in_new_line\n if end_in_new_line:\n self.log(self.description)\n else:\n self.log(self.description, end=\"\", flush=True)\n self._end_time = None\n self._start_time = datetime.datetime.now()\n return self", "response": "Start the stopwatch if it is paused."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\npausing the stopwatch. 
If the stopwatch is already paused, nothing will happen.", "response": "def pause(self):\n \"\"\"Pause the stopwatch.\n\n If the stopwatch is already paused, nothing will happen.\n \"\"\"\n if self._end_time is not None:\n # the stopwatch is already paused\n return\n self._end_time = datetime.datetime.now()\n self._elapsed_time += self._end_time - self._start_time"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the elapsed time of the current split.", "response": "def get_elapsed_time(self):\n \"\"\"Get the elapsed time of the current split.\n \"\"\"\n if self._start_time is None or self._end_time is not None:\n # the stopwatch is paused\n return self._elapsed_time\n return self._elapsed_time + (datetime.datetime.now() - self._start_time)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef split(self, verbose=None, end_in_new_line=None):\n elapsed_time = self.get_elapsed_time()\n self.split_elapsed_time.append(elapsed_time)\n self._cumulative_elapsed_time += elapsed_time\n self._elapsed_time = datetime.timedelta()\n if verbose is None:\n verbose = self.verbose_end\n if verbose:\n if end_in_new_line is None:\n end_in_new_line = self.end_in_new_line\n if end_in_new_line:\n self.log(\"{} done in {}\".format(self.description, elapsed_time))\n else:\n self.log(\" done in {}\".format(elapsed_time))\n self._start_time = datetime.datetime.now()", "response": "Save the elapsed time of the current split and restart the stopwatch."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef bbduk_trim(forward_in, forward_out, reverse_in='NA', reverse_out='NA', returncmd=False, **kwargs):\n options = kwargs_to_string(kwargs)\n cmd = 'which bbduk.sh'\n try:\n subprocess.check_output(cmd.split()).decode('utf-8')\n except subprocess.CalledProcessError:\n print('ERROR: Could not find bbduk. 
Please check that the bbtools package is installed and on your $PATH.\\n\\n')\n raise FileNotFoundError\n if os.path.isfile(forward_in.replace('_R1', '_R2')) and reverse_in == 'NA' and '_R1' in forward_in:\n reverse_in = forward_in.replace('_R1', '_R2')\n if reverse_out == 'NA':\n if '_R1' in forward_out:\n reverse_out = forward_out.replace('_R1', '_R2')\n else:\n raise ValueError('If you do not specify reverse_out, forward_out must contain R1.\\n\\n')\n cmd = 'bbduk.sh in1={f_in} in2={r_in} out1={f_out} out2={r_out} qtrim=w trimq=20 k=25 minlength=50 ' \\\n 'forcetrimleft=15 ref=adapters overwrite hdist=1 tpe tbo{optn}'\\\n .format(f_in=forward_in,\n r_in=reverse_in,\n f_out=forward_out,\n r_out=reverse_out,\n optn=options)\n elif reverse_in == 'NA':\n cmd = 'bbduk.sh in={f_in} out={f_out} qtrim=w trimq=20 k=25 minlength=50 forcetrimleft=15' \\\n ' ref=adapters overwrite hdist=1 tpe tbo{optn}'\\\n .format(f_in=forward_in,\n f_out=forward_out,\n optn=options)\n else:\n if reverse_out == 'NA':\n raise ValueError('Reverse output reads must be specified.')\n cmd = 'bbduk.sh in1={f_in} in2={r_in} out1={f_out} out2={r_out} qtrim=w trimq=20 k=25 minlength=50 ' \\\n 'forcetrimleft=15 ref=adapters overwrite hdist=1 tpe tbo{optn}'\\\n .format(f_in=forward_in,\n r_in=reverse_in,\n f_out=forward_out,\n r_out=reverse_out,\n optn=options)\n out, err = run_subprocess(cmd)\n if returncmd:\n return out, err, cmd\n else:\n return out, err", "response": "Wrapper for using bbduk to quality trim reads."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef genome_size(peaks_file, haploid=True):\n size = 0\n with open(peaks_file) as peaks:\n lines = peaks.readlines()\n for line in lines:\n if haploid:\n if '#haploid_genome_size' in line:\n size = int(line.split()[1])\n else:\n if '#genome_size' in line:\n size = int(line.split()[1])\n return size", "response": "Find the genome size of an organism based on the peaks file."} 
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef DownloadAccount(self, next_page_token=None, max_results=None):\n param = {}\n if next_page_token:\n param['nextPageToken'] = next_page_token\n if max_results:\n param['maxResults'] = max_results\n response = self._InvokeGitkitApi('downloadAccount', param)\n # pylint does not recognize the return type of simplejson.loads\n # pylint: disable=maybe-no-member\n return response.get('nextPageToken', None), response.get('users', {})", "response": "Downloads multiple accounts from Gitkit server."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef UploadAccount(self, hash_algorithm, hash_key, accounts):\n param = {\n 'hashAlgorithm': hash_algorithm,\n 'signerKey': hash_key,\n 'users': accounts\n }\n # pylint does not recognize the return type of simplejson.loads\n # pylint: disable=maybe-no-member\n return self._InvokeGitkitApi('uploadAccount', param)", "response": "Uploads multiple accounts to Gitkit server."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndownloads Gitkit public cert.", "response": "def GetPublicCert(self):\n \"\"\"Download Gitkit public cert.\n\n Returns:\n dict of public certs.\n \"\"\"\n\n cert_url = self.google_api_url + 'publicKeys'\n\n resp, content = self.http.request(cert_url)\n if resp.status == 200:\n return simplejson.loads(content)\n else:\n raise errors.GitkitServerError('Error response for cert url: %s' %\n content)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _InvokeGitkitApi(self, method, params=None, need_service_account=True):\n body = simplejson.dumps(params) if params else None\n req = urllib_request.Request(self.google_api_url + method)\n req.add_header('Content-type', 'application/json')\n if need_service_account:\n if self.credentials:\n access_token = 
self.credentials.get_access_token().access_token\n elif self.service_account_email and self.service_account_key:\n access_token = self._GetAccessToken()\n else:\n raise errors.GitkitClientError('Missing service account credentials')\n req.add_header('Authorization', 'Bearer ' + access_token)\n try:\n binary_body = body.encode('utf-8') if body else None\n raw_response = urllib_request.urlopen(req, binary_body).read()\n except urllib_request.HTTPError as err:\n if err.code == 400:\n raw_response = err.read()\n else:\n raise\n return self._CheckGitkitError(raw_response)", "response": "Invokes Gitkit API, with optional access token for service account.\n\n Args:\n method: string, the api method name.\n params: dict of optional parameters for the API.\n need_service_account: false if service account is not needed.\n\n Raises:\n GitkitClientError: if the request is bad.\n GitkitServerError: if Gitkit can not handle the request.\n\n Returns:\n API response as dict."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _GetAccessToken(self):\n d = {\n 'assertion': self._GenerateAssertion(),\n 'grant_type': 'urn:ietf:params:oauth:grant-type:jwt-bearer',\n }\n try:\n body = parse.urlencode(d)\n except AttributeError:\n body = urllib.urlencode(d)\n req = urllib_request.Request(RpcHelper.TOKEN_ENDPOINT)\n req.add_header('Content-type', 'application/x-www-form-urlencoded')\n binary_body = body.encode('utf-8')\n raw_response = urllib_request.urlopen(req, binary_body)\n return simplejson.loads(raw_response.read())['access_token']", "response": "Gets oauth2 access token for Gitkit API using service account."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _GenerateAssertion(self):\n now = int(time.time())\n payload = {\n 'aud': RpcHelper.TOKEN_ENDPOINT,\n 'scope': 'https://www.googleapis.com/auth/identitytoolkit',\n 'iat': now,\n 'exp': now + 
RpcHelper.MAX_TOKEN_LIFETIME_SECS,\n 'iss': self.service_account_email\n }\n return crypt.make_signed_jwt(\n crypt.Signer.from_string(self.service_account_key),\n payload)", "response": "Generates the signed Json Web Token that will be used in the request."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _CheckGitkitError(self, raw_response):\n try:\n response = simplejson.loads(raw_response)\n if 'error' not in response:\n return response\n else:\n error = response['error']\n if 'code' in error:\n code = error['code']\n if str(code).startswith('4'):\n raise errors.GitkitClientError(error['message'])\n else:\n raise errors.GitkitServerError(error['message'])\n except simplejson.JSONDecodeError:\n pass\n raise errors.GitkitServerError('null error code from Gitkit server')", "response": "Raises error if API invocation failed."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ninitializes from user specified dictionary.", "response": "def FromDictionary(cls, dictionary):\n \"\"\"Initializes from user specified dictionary.\n\n Args:\n dictionary: dict of user specified attributes\n Returns:\n GitkitUser object\n \"\"\"\n if 'user_id' in dictionary:\n raise errors.GitkitClientError('use localId instead')\n if 'localId' not in dictionary:\n raise errors.GitkitClientError('must specify localId')\n if 'email' not in dictionary:\n raise errors.GitkitClientError('must specify email')\n\n return cls(decode=False, **dictionary)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ToRequest(self):\n param = {}\n if self.email:\n param['email'] = self.email\n if self.user_id:\n param['localId'] = self.user_id\n if self.name:\n param['displayName'] = self.name\n if self.photo_url:\n param['photoUrl'] = self.photo_url\n if self.email_verified is not None:\n param['emailVerified'] = self.email_verified\n if 
self.password_hash:\n param['passwordHash'] = base64.urlsafe_b64encode(self.password_hash)\n if self.salt:\n param['salt'] = base64.urlsafe_b64encode(self.salt)\n if self.provider_info:\n param['providerUserInfo'] = self.provider_info\n return param", "response": "Converts the object to a gitkit api request parameter dict."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef VerifyGitkitToken(self, jwt):\n certs = self.rpc_helper.GetPublicCert()\n crypt.MAX_TOKEN_LIFETIME_SECS = 30 * 86400 # 30 days\n parsed = None\n for aud in filter(lambda x: x is not None, [self.project_id, self.client_id]):\n try:\n parsed = crypt.verify_signed_jwt_with_certs(jwt, certs, aud)\n except crypt.AppIdentityError as e:\n if \"Wrong recipient\" not in e.message:\n return None\n if parsed:\n return GitkitUser.FromToken(parsed)\n return None", "response": "Verifies a Gitkit token string."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef GetUserByEmail(self, email):\n user = self.rpc_helper.GetAccountInfoByEmail(email)\n return GitkitUser.FromApiResponse(user)", "response": "Gets user info by email."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef GetUserById(self, local_id):\n user = self.rpc_helper.GetAccountInfoById(local_id)\n return GitkitUser.FromApiResponse(user)", "response": "Gets a user info by id."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef UploadUsers(self, hash_algorithm, hash_key, accounts):\n return self.rpc_helper.UploadAccount(hash_algorithm,\n base64.urlsafe_b64encode(hash_key),\n [GitkitUser.ToRequest(i) for i in accounts])", "response": "Uploads multiple users to Gitkit server."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting all users from Gitkit server.", "response": "def GetAllUsers(self, pagination_size=10):\n 
\"\"\"Gets all user info from Gitkit server.\n\n Args:\n pagination_size: int, how many users should be returned per request.\n The account info are retrieved in pagination.\n\n Yields:\n A generator to iterate all users.\n \"\"\"\n next_page_token, accounts = self.rpc_helper.DownloadAccount(\n None, pagination_size)\n while accounts:\n for account in accounts:\n yield GitkitUser.FromApiResponse(account)\n next_page_token, accounts = self.rpc_helper.DownloadAccount(\n next_page_token, pagination_size)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef GetOobResult(self, param, user_ip, gitkit_token=None):\n if 'action' in param:\n try:\n if param['action'] == GitkitClient.RESET_PASSWORD_ACTION:\n request = self._PasswordResetRequest(param, user_ip)\n oob_code, oob_link = self._BuildOobLink(request,\n param['action'])\n return {\n 'action': GitkitClient.RESET_PASSWORD_ACTION,\n 'email': param['email'],\n 'oob_link': oob_link,\n 'oob_code': oob_code,\n 'response_body': simplejson.dumps({'success': True})\n }\n elif param['action'] == GitkitClient.CHANGE_EMAIL_ACTION:\n if not gitkit_token:\n return self._FailureOobResponse('login is required')\n request = self._ChangeEmailRequest(param, user_ip, gitkit_token)\n oob_code, oob_link = self._BuildOobLink(request,\n param['action'])\n return {\n 'action': GitkitClient.CHANGE_EMAIL_ACTION,\n 'email': param['oldEmail'],\n 'new_email': param['newEmail'],\n 'oob_link': oob_link,\n 'oob_code': oob_code,\n 'response_body': simplejson.dumps({'success': True})\n }\n except errors.GitkitClientError as error:\n return self._FailureOobResponse(error.value)\n return self._FailureOobResponse('unknown request type')", "response": "Returns the out - of - band code for the specified action."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbuilding out - of - band url.", "response": "def _BuildOobLink(self, param, mode):\n \"\"\"Builds 
out-of-band URL.\n\n Gitkit API GetOobCode() is called and the returning code is combined\n with Gitkit widget URL to building the out-of-band url.\n\n Args:\n param: dict of request.\n mode: string, Gitkit widget mode to handle the oob action after user\n clicks the oob url in the email.\n\n Raises:\n GitkitClientError: if oob code is not returned.\n\n Returns:\n A string of oob url.\n \"\"\"\n code = self.rpc_helper.GetOobCode(param)\n if code:\n parsed = list(parse.urlparse(self.widget_url))\n\n query = dict(parse.parse_qsl(parsed[4]))\n query.update({'mode': mode, 'oobCode': code})\n\n try:\n parsed[4] = parse.urlencode(query)\n except AttributeError:\n parsed[4] = urllib.urlencode(query)\n\n return code, parse.urlunparse(parsed)\n raise errors.GitkitClientError('invalid request')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nruns a subprocess and returns stdout and stderr as strings.", "response": "def run_subprocess(command):\n \"\"\"\n command is the command to run, as a string.\n runs a subprocess, returns stdout and stderr from the subprocess as strings.\n \"\"\"\n x = Popen(command, shell=True, stdout=PIPE, stderr=PIPE)\n out, err = x.communicate()\n out = out.decode('utf-8')\n err = err.decode('utf-8')\n return out, err"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntaking a set of kwargs and turns them into a string which can then be passed to a command.", "response": "def kwargs_to_string(kwargs):\n \"\"\"\n Given a set of kwargs, turns them into a string which can then be passed to a command.\n :param kwargs: kwargs from a function call.\n :return: outstr: A string, which is '' if no kwargs were given, and the kwargs in string format otherwise.\n \"\"\"\n outstr = ''\n for arg in kwargs:\n outstr += ' -{} {}'.format(arg, kwargs[arg])\n return outstr"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread the output of mash dist into a 
list of MashResult objects.", "response": "def read_mash_output(result_file):\n \"\"\"\n :param result_file: Tab-delimited result file generated by mash dist.\n :return: mash_results: A list with each entry in the result file as an entry, with attributes reference, query,\n distance, pvalue, and matching_hash\n \"\"\"\n with open(result_file) as handle:\n lines = handle.readlines()\n mash_results = list()\n for line in lines:\n result = MashResult(line)\n mash_results.append(result)\n return mash_results"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef read_mash_screen(screen_result):\n with open(screen_result) as handle:\n lines = handle.readlines()\n results = list()\n for line in lines:\n result = ScreenResult(line)\n results.append(result)\n return results", "response": "Reads the mash screen result file and returns a list of ScreenResult objects."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrunning a command using subprocess and returns both the stdout and stderr from that command.", "response": "def run_cmd(cmd):\n \"\"\"\n Runs a command using subprocess, and returns both the stdout and stderr from that command.\n If the exit code from the command is non-zero, raises subprocess.CalledProcessError\n :param cmd: command to run as a string, as it would be called on the command line\n :return: out, err: Strings that are the stdout and stderr from the command called.\n \"\"\"\n p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n out, err = p.communicate()\n out = out.decode('utf-8')\n err = err.decode('utf-8')\n if p.returncode != 0:\n raise subprocess.CalledProcessError(p.returncode, cmd=cmd)\n return out, err"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_to_logfile(logfile, out, err, cmd):\n with open(logfile, 'a+') as outfile:\n 
outfile.write('Command used: {}\\n\\n'.format(cmd))\n outfile.write('STDOUT: {}\\n\\n'.format(out))\n outfile.write('STDERR: {}\\n\\n'.format(err))", "response": "Writes stdout stderr and a command to a logfile"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef find_paired_reads(fastq_directory, forward_id='_R1', reverse_id='_R2'):\n pair_list = list()\n\n fastq_files = glob.glob(os.path.join(fastq_directory, '*.f*q*'))\n for name in sorted(fastq_files):\n if forward_id in name and os.path.isfile(name.replace(forward_id, reverse_id)):\n pair_list.append([name, name.replace(forward_id, reverse_id)])\n return pair_list", "response": "Returns a list of pairs of fastq files that are paired."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_unpaired_reads(fastq_directory, forward_id='_R1', reverse_id='_R2', find_fasta=False):\n read_list = list()\n if find_fasta is False:\n fastq_files = glob.glob(os.path.join(fastq_directory, '*.f*q*'))\n else:\n # Very misnamed!\n fastq_files = glob.glob(os.path.join(fastq_directory, '*.f*a*'))\n for name in sorted(fastq_files):\n # Iterate through files, adding them to our list of unpaired reads if:\n # 1) They don't have the forward identifier or the reverse identifier in their name.\n # 2) They have forward but the reverse isn't there.\n # 3) They have reverse but the forward isn't there.\n if forward_id not in name and reverse_id not in name:\n read_list.append([name])\n elif forward_id in name and not os.path.isfile(name.replace(forward_id, reverse_id)):\n read_list.append([name])\n elif reverse_id in name and not os.path.isfile(name.replace(reverse_id, forward_id)):\n read_list.append([name])\n return read_list", "response": "Find unpaired reads in a directory."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
find_genusspecific_allele_list(profiles_file, target_genus):\n alleles = list()\n with open(profiles_file) as f:\n lines = f.readlines()\n for line in lines:\n line = line.rstrip()\n genus = line.split(':')[0]\n if genus == target_genus:\n alleles = line.split(':')[1].split(',')[:-1]\n return alleles", "response": "Finds the list of genus-specific alleles for a given genus."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngiving a pair of reads and an rMLST database will extract reads that contain sequence from the database.", "response": "def extract_rmlst_genes(pair, database, forward_out, reverse_out, threads=12, logfile=None):\n \"\"\"\n Given a pair of reads and an rMLST database, will extract reads that contain sequence from the database.\n :param pair: List containing path to forward reads at index 0 and path to reverse reads at index 1.\n :param database: Path to rMLST database, in FASTA format.\n :param forward_out: Path to write forward reads that contain rMLST sequence.\n :param reverse_out: Path to write reverse reads that contain rMLST sequence.\n :param threads: Number of threads to run the baiting with.\n \"\"\"\n out, err, cmd = bbtools.bbduk_bait(database, pair[0], forward_out, reverse_in=pair[1],\n reverse_out=reverse_out, threads=str(threads), returncmd=True)\n if logfile:\n write_to_logfile(logfile, out, err, cmd)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_cross_contamination(databases, pair, tmpdir='tmp', log='log.txt', threads=1):\n genera_present = list()\n out, err, cmd = mash.screen('{}/refseq.msh'.format(databases), pair[0],\n pair[1], threads=threads, w='', i='0.95',\n output_file=os.path.join(tmpdir, 'screen.tab'), returncmd=True)\n write_to_logfile(log, out, err, cmd)\n screen_output = mash.read_mash_screen(os.path.join(tmpdir, 'screen.tab'))\n for item in screen_output:\n mash_genus = item.query_id.split('/')[-3]\n if mash_genus == 'Shigella':\n mash_genus = 'Escherichia'\n if mash_genus not in genera_present:\n genera_present.append(mash_genus)\n if len(genera_present) == 1:\n genera_present = 
genera_present[0]\n elif len(genera_present) == 0:\n genera_present = 'NA'\n else:\n tmpstr = ''\n for mash_genus in genera_present:\n tmpstr += mash_genus + ':'\n genera_present = tmpstr[:-1]\n return genera_present", "response": "Uses mash to determine whether more than one genus is present in a sample, indicating cross-contamination."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef number_of_bases_above_threshold(high_quality_base_count, base_count_cutoff=2, base_fraction_cutoff=None):\n\n # make a dict by dictionary comprehension where values are True or False for each base depending on whether the count meets the threshold.\n # Method differs depending on whether absolute or fraction cutoff is specified\n if base_fraction_cutoff:\n total_hq_base_count = sum(high_quality_base_count.values())\n bases_above_threshold = {base: float(count)/total_hq_base_count >= base_fraction_cutoff and count >= base_count_cutoff for (base,count) in high_quality_base_count.items()}\n else:\n bases_above_threshold = {base: count >= base_count_cutoff for (base, count) in high_quality_base_count.items()}\n\n # True is equal to 1, so the sum of the Trues in the bases_above_threshold dict is the number of bases passing the threshold\n return sum(bases_above_threshold.values())", "response": "Returns the number of bases at a site whose high-quality counts meet the specified cutoffs."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_if_multibase(column, quality_cutoff, base_cutoff, base_fraction_cutoff):\n # Sometimes the qualities come out to ridiculously high (>70) values. Looks to be because sometimes reads\n # are overlapping and the qualities get summed for overlapping bases. 
Issue opened on pysam.\n unfiltered_base_qualities = dict()\n for read in column.pileups:\n if read.query_position is not None: # Not entirely sure why this is sometimes None, but it causes bad stuff\n reference_sequence = read.alignment.get_reference_sequence()\n previous_position = read.query_position - 1 if read.query_position > 1 else 0\n next_position = read.query_position + 1 # This causes index errors. Fix at some point soon.\n # Another stringency check - to make sure that we're actually looking at a point mutation, check that the\n # base before and after the one we're looking at match the reference. With Nanopore data, lots of indels and\n # the like cause false positives, so this filters those out.\n try: # Need to actually handle this at some point. For now, be lazy\n previous_reference_base = reference_sequence[previous_position]\n next_reference_base = reference_sequence[next_position]\n previous_base = read.alignment.query_sequence[previous_position]\n next_base = read.alignment.query_sequence[next_position]\n base = read.alignment.query_sequence[read.query_position]\n quality = read.alignment.query_qualities[read.query_position]\n if previous_reference_base == previous_base and next_reference_base == next_base:\n if base not in unfiltered_base_qualities:\n unfiltered_base_qualities[base] = [quality]\n else:\n unfiltered_base_qualities[base].append(quality)\n except IndexError:\n pass\n\n # Now check that at least two bases for each of the bases present high quality.\n # first remove all low quality bases\n # Use dictionary comprehension to make a new dictionary where only scores above threshold are kept.\n # Internally list comprehension is used to filter the list\n filtered_base_qualities = {base:[score for score in scores if score >= quality_cutoff] for (base,scores) in unfiltered_base_qualities.items()}\n\n # Now remove bases that have no high quality scores\n # Use dictionary comprehension to make a new dictionary where bases that have a 
non-empty scores list are kept\n filtered_base_qualities = {base:scores for (base,scores) in filtered_base_qualities.items() if scores}\n \n # If there are fewer than two bases with high quality scores, ignore this position.\n if len(filtered_base_qualities) < 2:\n return dict()\n\n # Now that filtered_base_qualities only contains bases with at least one HQ score, make a dict of base counts with dict comprehension\n high_quality_base_count = {base: len(scores) for (base, scores) in filtered_base_qualities.items()}\n \n if number_of_bases_above_threshold(high_quality_base_count, base_count_cutoff=base_cutoff, base_fraction_cutoff=base_fraction_cutoff) > 1:\n logging.debug('base qualities before filtering: {0}'.format(unfiltered_base_qualities))\n logging.debug('base qualities after filtering: {0}'.format(filtered_base_qualities))\n logging.debug('SNVs found at position {0}: {1}\\n'.format(column.pos, high_quality_base_count))\n return high_quality_base_count\n else:\n # logging.debug('No SNVs\\n')\n return dict()", "response": "Finds if a position in a pileup column has more than one base present."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_contig_names(fasta_file):\n contig_names = list()\n for contig in SeqIO.parse(fasta_file, 'fasta'):\n contig_names.append(contig.id)\n return contig_names", "response": "Gets contig names from a fasta file using SeqIO."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nread a contig from a BAM file and returns a dictionary of positions where more than one base is present.", "response": "def read_contig(contig_name, bamfile_name, reference_fasta, quality_cutoff=20, base_cutoff=2, base_fraction_cutoff=None):\n \"\"\"\n Examines a contig to find if there are positions where more than one base is present.\n :param contig_name: Name of contig as a string.\n :param bamfile_name: Full path to bamfile. 
Must be sorted/indexed\n :param reference_fasta: Full path to fasta file that was used to generate the bamfile.\n :param report_file: File where information about each position found to have multiple bases\n will be written.\n :return: Dictionary of positions where more than one base is present. Keys are contig name, values are positions\n \"\"\"\n bamfile = pysam.AlignmentFile(bamfile_name, 'rb')\n multibase_position_dict = dict()\n to_write = list()\n # These parameters seem to be fairly undocumented with pysam, but I think that they should make the output\n # that I'm getting to match up with what I'm seeing in Tablet.\n for column in bamfile.pileup(contig_name,\n stepper='samtools', ignore_orphans=False, fastafile=pysam.FastaFile(reference_fasta),\n min_base_quality=0):\n base_dict = find_if_multibase(column, quality_cutoff=quality_cutoff, base_cutoff=base_cutoff, base_fraction_cutoff=base_fraction_cutoff)\n\n if base_dict:\n # Pysam starts counting at 0, whereas we actually want to start counting at 1.\n actual_position = column.pos + 1\n if column.reference_name in multibase_position_dict:\n multibase_position_dict[column.reference_name].append(actual_position)\n else:\n multibase_position_dict[column.reference_name] = [actual_position]\n to_write.append('{reference},{position},{bases},{coverage}\\n'.format(reference=column.reference_name,\n position=actual_position,\n bases=base_dict_to_string(base_dict),\n coverage=sum(base_dict.values())))\n bamfile.close()\n return multibase_position_dict, to_write"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfind the loci present in the rMLST report file.", "response": "def find_rmlst_type(kma_report, rmlst_report):\n \"\"\"\n Uses a report generated by KMA to determine what allele is present for each rMLST gene.\n :param kma_report: The .res report generated by KMA.\n :param rmlst_report: rMLST report file to write information to.\n :return: a sorted list of loci present, in format 
gene_allele\n \"\"\"\n genes_to_use = dict()\n score_dict = dict()\n gene_alleles = list()\n with open(kma_report) as tsvfile:\n reader = csv.DictReader(tsvfile, delimiter='\\t')\n for row in reader:\n gene_allele = row['#Template']\n score = int(row['Score'])\n gene = gene_allele.split('_')[0]\n allele = gene_allele.split('_')[1]\n if gene not in score_dict:\n score_dict[gene] = score\n genes_to_use[gene] = allele\n else:\n if score > score_dict[gene]:\n score_dict[gene] = score\n genes_to_use[gene] = allele\n for gene in genes_to_use:\n gene_alleles.append(gene + '_' + genes_to_use[gene].replace(' ', ''))\n gene_alleles = sorted(gene_alleles)\n with open(rmlst_report, 'w') as f:\n f.write('Gene,Allele\\n')\n for gene_allele in gene_alleles:\n gene = gene_allele.split('_')[0]\n allele = gene_allele.split('_')[1]\n f.write('{},{}\\n'.format(gene, allele))\n return gene_alleles"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a dictionary of bases and counts created by find_if_multibase to a string.", "response": "def base_dict_to_string(base_dict):\n \"\"\"\n Converts a dictionary to a string. 
{'C': 12, 'A':4} gets converted to C:12;A:4\n :param base_dict: Dictionary of bases and counts created by find_if_multibase\n :return: String representing that dictionary.\n \"\"\"\n outstr = ''\n # First, sort base_dict so that major allele always comes first - makes output report nicer to look at.\n base_list = sorted(base_dict.items(), key=lambda kv: kv[1], reverse=True)\n for base in base_list:\n outstr += '{}:{};'.format(base[0], base[1])\n return outstr[:-1]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfind the total length of the sequence in a file.", "response": "def find_total_sequence_length(fasta_file):\n \"\"\"\n Totals up number of bases in a fasta file.\n :param fasta_file: Path to an uncompressed, fasta-formatted file.\n :return: Number of total bases in file, as an int.\n \"\"\"\n total_length = 0\n for sequence in SeqIO.parse(fasta_file, 'fasta'):\n total_length += len(sequence.seq)\n return total_length"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef estimate_percent_contamination(contamination_report_file):\n contam_levels = list()\n with open(contamination_report_file) as csvfile:\n reader = csv.DictReader(csvfile)\n for row in reader:\n lowest_count = 99999\n base_counts = row['Bases'].split(';')\n for count in base_counts:\n num_bases = int(count.split(':')[1])\n if num_bases < lowest_count:\n lowest_count = num_bases\n total_coverage = int(row['Coverage'])\n contam_levels.append(lowest_count*100/total_coverage)\n return '%.2f' % (np.mean(contam_levels)), '%.2f' % np.std(contam_levels)", "response": "Estimate the percent contamination of a sample."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find_contamination(pair, output_folder, databases_folder, forward_id='_R1', threads=1, keep_files=False,\n quality_cutoff=20, base_cutoff=2, base_fraction_cutoff=0.05, 
cgmlst_db=None, Xmx=None, tmpdir=None,\n data_type='Illumina', use_rmlst=False):\n \"\"\"\n This needs some documentation fairly badly, so here we go.\n :param pair: This has become a misnomer. If the input reads are actually paired, needs to be a list\n with the full filepath to forward reads at index 0 and full path to reverse reads at index 1.\n If reads are unpaired, should be a list of length 1 with the only entry being the full filepath to read set.\n :param output_folder: Folder where outputs (confindr log and report, and other stuff) will be stored.\n This will be created if it does not exist. (I think - should write a test that double checks this).\n :param databases_folder: Full path to folder where ConFindr's databases live. These files can be\n downloaded from figshare in .tar.gz format (https://ndownloader.figshare.com/files/11864267), and\n will be automatically downloaded if the script is run from the command line.\n :param forward_id: Identifier that marks reads as being in the forward direction for paired reads.\n Defaults to _R1\n :param threads: Number of threads to run analyses with. All parts of this pipeline scale pretty well,\n so more is better.\n :param keep_files: Boolean that says whether or not to keep temporary files.\n :param quality_cutoff: Integer of the phred score required to have a base count towards a multiallelic site.\n :param base_cutoff: Integer of number of bases needed to have a base be part of a multiallelic site.\n :param base_fraction_cutoff: Float of fraction of bases needed to have a base be part of a multiallelic site.\n If specified will be used in parallel with base_cutoff\n :param cgmlst_db: if None, we're using rMLST, if True, using some sort of custom cgMLST database. This requires some\n custom parameters.\n :param Xmx: if None, BBTools will use auto memory detection. 
If string, BBTools will use what's specified as their\n memory request.\n :param tmpdir: if None, any genus-specifc databases that need to be created will be written to ConFindr DB location.\n Otherwise, genus-specific databases will be written here.\n \"\"\"\n if os.path.isfile(os.path.join(databases_folder, 'download_date.txt')):\n with open(os.path.join(databases_folder, 'download_date.txt')) as f:\n database_download_date = f.readline().rstrip()\n else:\n database_download_date = 'NA'\n log = os.path.join(output_folder, 'confindr_log.txt')\n if len(pair) == 2:\n sample_name = os.path.split(pair[0])[-1].split(forward_id)[0]\n paired = True\n logging.debug('Sample is paired. Sample name is {}'.format(sample_name))\n else:\n sample_name = os.path.split(pair[0])[-1].split('.')[0]\n paired = False\n logging.debug('Sample is unpaired. Sample name is {}'.format(sample_name))\n sample_tmp_dir = os.path.join(output_folder, sample_name)\n if not os.path.isdir(sample_tmp_dir):\n os.makedirs(sample_tmp_dir)\n\n logging.info('Checking for cross-species contamination...')\n if paired:\n genus = find_cross_contamination(databases_folder, pair, tmpdir=sample_tmp_dir, log=log, threads=threads)\n else:\n genus = find_cross_contamination_unpaired(databases_folder, reads=pair[0], tmpdir=sample_tmp_dir, log=log, threads=threads)\n if len(genus.split(':')) > 1:\n write_output(output_report=os.path.join(output_folder, 'confindr_report.csv'),\n sample_name=sample_name,\n multi_positions=0,\n genus=genus,\n percent_contam='NA',\n contam_stddev='NA',\n total_gene_length=0,\n database_download_date=database_download_date)\n logging.info('Found cross-contamination! 
Skipping rest of analysis...\\n')\n if keep_files is False:\n shutil.rmtree(sample_tmp_dir)\n return\n # Setup genus-specific databases, if necessary.\n if cgmlst_db is not None:\n # Sanity check that the DB specified is actually a file, otherwise, quit with appropriate error message.\n if not os.path.isfile(cgmlst_db):\n logging.error('ERROR: Specified cgMLST file ({}) does not exist. Please check the path and try again.'.format(cgmlst_db))\n quit(code=1)\n sample_database = cgmlst_db\n else:\n db_folder = databases_folder if tmpdir is None else tmpdir\n if not os.path.isdir(db_folder):\n os.makedirs(db_folder)\n if genus != 'NA':\n # Logic here is as follows: users can either have both rMLST databases, which cover all of bacteria,\n # cgmlst-derived databases, which cover only Escherichia, Salmonella, and Listeria (may add more at some\n # point), or they can have both. They can also set priority to either always use rMLST, or to use my\n # core-genome derived stuff and fall back on rMLST if they're trying to look at a genus I haven't created\n # a scheme for.\n\n # In the event rmlst databases have priority, always use them.\n if use_rmlst is True:\n sample_database = os.path.join(db_folder, '{}_db.fasta'.format(genus))\n if not os.path.isfile(sample_database):\n if os.path.isfile(os.path.join(db_folder, 'gene_allele.txt')) and os.path.isfile(os.path.join(db_folder, 'rMLST_combined.fasta')):\n logging.info('Setting up genus-specific database for genus {}...'.format(genus))\n allele_list = find_genusspecific_allele_list(os.path.join(db_folder, 'gene_allele.txt'), genus)\n setup_allelespecific_database(fasta_file=sample_database,\n database_folder=db_folder,\n allele_list=allele_list)\n else:\n # Check if a cgderived database is available. 
If not, try to use rMLST database.\n sample_database = os.path.join(db_folder, '{}_db_cgderived.fasta'.format(genus))\n if not os.path.isfile(sample_database):\n sample_database = os.path.join(db_folder, '{}_db.fasta'.format(genus))\n # Create genus specific database if it doesn't already exist and we have the necessary rMLST files.\n if os.path.isfile(os.path.join(db_folder, 'rMLST_combined.fasta')) and os.path.isfile(os.path.join(db_folder, 'gene_allele.txt')) and not os.path.isfile(sample_database):\n logging.info('Setting up genus-specific database for genus {}...'.format(genus))\n allele_list = find_genusspecific_allele_list(os.path.join(db_folder, 'gene_allele.txt'), genus)\n setup_allelespecific_database(fasta_file=sample_database,\n database_folder=db_folder,\n allele_list=allele_list)\n\n else:\n sample_database = os.path.join(db_folder, 'rMLST_combined.fasta')\n\n # If a user has gotten to this point and they don't have any database available to do analysis because\n # they don't have rMLST downloaded and we don't have a cg-derived database available, boot them with a helpful\n # message.\n if not os.path.isfile(sample_database):\n write_output(output_report=os.path.join(output_folder, 'confindr_report.csv'),\n sample_name=sample_name,\n multi_positions=0,\n genus=genus,\n percent_contam='NA',\n contam_stddev='NA',\n total_gene_length=0,\n database_download_date=database_download_date)\n logging.info('Did not find databases for genus {genus}. You can download the rMLST database to get access to all '\n 'genera (see https://olc-bioinformatics.github.io/ConFindr/install/). 
Alternatively, if you have a '\n 'high-quality core-genome derived database for your genome of interest, we would be happy to '\n 'add it - open an issue at https://github.com/OLC-Bioinformatics/ConFindr/issues with the '\n 'title \"Add genus-specific database: {genus}\"\\n'.format(genus=genus))\n if keep_files is False:\n shutil.rmtree(sample_tmp_dir)\n return\n\n # Extract rMLST reads and quality trim.\n logging.info('Extracting conserved core genes...')\n if paired:\n if Xmx is None:\n out, err, cmd = bbtools.bbduk_bait(reference=sample_database,\n forward_in=pair[0],\n reverse_in=pair[1],\n forward_out=os.path.join(sample_tmp_dir, 'rmlst_R1.fastq.gz'),\n reverse_out=os.path.join(sample_tmp_dir, 'rmlst_R2.fastq.gz'),\n threads=threads,\n returncmd=True)\n else:\n out, err, cmd = bbtools.bbduk_bait(reference=sample_database,\n forward_in=pair[0],\n reverse_in=pair[1],\n forward_out=os.path.join(sample_tmp_dir, 'rmlst_R1.fastq.gz'),\n reverse_out=os.path.join(sample_tmp_dir, 'rmlst_R2.fastq.gz'),\n threads=threads,\n Xmx=Xmx,\n returncmd=True)\n else:\n if data_type == 'Nanopore':\n forward_out = os.path.join(sample_tmp_dir, 'trimmed.fastq.gz')\n else:\n forward_out = os.path.join(sample_tmp_dir, 'rmlst.fastq.gz')\n if Xmx is None:\n out, err, cmd = bbtools.bbduk_bait(reference=sample_database, forward_in=pair[0],\n forward_out=forward_out,\n returncmd=True, threads=threads)\n else:\n out, err, cmd = bbtools.bbduk_bait(reference=sample_database, forward_in=pair[0],\n forward_out=forward_out, Xmx=Xmx,\n returncmd=True, threads=threads)\n\n write_to_logfile(log, out, err, cmd)\n logging.info('Quality trimming...')\n if data_type == 'Illumina':\n if paired:\n if Xmx is None:\n out, err, cmd = bbtools.bbduk_trim(forward_in=os.path.join(sample_tmp_dir, 'rmlst_R1.fastq.gz'),\n reverse_in=os.path.join(sample_tmp_dir, 'rmlst_R2.fastq.gz'),\n forward_out=os.path.join(sample_tmp_dir, 'trimmed_R1.fastq.gz'),\n reverse_out=os.path.join(sample_tmp_dir, 
'trimmed_R2.fastq.gz'),\n threads=str(threads), returncmd=True)\n else:\n out, err, cmd = bbtools.bbduk_trim(forward_in=os.path.join(sample_tmp_dir, 'rmlst_R1.fastq.gz'),\n reverse_in=os.path.join(sample_tmp_dir, 'rmlst_R2.fastq.gz'),\n forward_out=os.path.join(sample_tmp_dir, 'trimmed_R1.fastq.gz'),\n reverse_out=os.path.join(sample_tmp_dir, 'trimmed_R2.fastq.gz'), Xmx=Xmx,\n threads=str(threads), returncmd=True)\n\n else:\n if Xmx is None:\n out, err, cmd = bbtools.bbduk_trim(forward_in=os.path.join(sample_tmp_dir, 'rmlst.fastq.gz'),\n forward_out=os.path.join(sample_tmp_dir, 'trimmed.fastq.gz'),\n returncmd=True, threads=threads)\n else:\n out, err, cmd = bbtools.bbduk_trim(forward_in=os.path.join(sample_tmp_dir, 'rmlst.fastq.gz'),\n forward_out=os.path.join(sample_tmp_dir, 'trimmed.fastq.gz'),\n returncmd=True, threads=threads, Xmx=Xmx)\n write_to_logfile(log, out, err, cmd)\n\n logging.info('Detecting contamination...')\n # Now do mapping in two steps - first, map reads back to database with ambiguous reads matching all - this\n # will be used to get a count of number of reads aligned to each gene/allele so we can create a custom rmlst file\n # with only the most likely allele for each gene.\n if not os.path.isfile(sample_database + '.fai'): # Don't bother re-indexing, this only needs to happen once.\n pysam.faidx(sample_database)\n kma_database = sample_database.replace('.fasta', '') + '_kma'\n kma_report = os.path.join(sample_tmp_dir, 'kma_rmlst')\n if not os.path.isfile(kma_database + '.name'): # The .name is one of the files KMA creates when making a database.\n cmd = 'kma index -i {} -o {}'.format(sample_database, kma_database) # NOTE: Need KMA >=1.2.0 for this to work.\n out, err = run_cmd(cmd)\n write_to_logfile(log, out, err, cmd)\n\n # Run KMA.\n if paired:\n cmd = 'kma -ipe {forward_in} {reverse_in} -t_db {kma_database} -o {kma_report} ' \\\n '-t {threads}'.format(forward_in=os.path.join(sample_tmp_dir, 'trimmed_R1.fastq.gz'),\n 
reverse_in=os.path.join(sample_tmp_dir, 'trimmed_R2.fastq.gz'),\n kma_database=kma_database,\n kma_report=kma_report,\n threads=threads)\n out, err = run_cmd(cmd)\n write_to_logfile(log, out, err, cmd)\n else:\n if data_type == 'Illumina':\n cmd = 'kma -i {input_reads} -t_db {kma_database} -o {kma_report} ' \\\n '-t {threads}'.format(input_reads=os.path.join(sample_tmp_dir, 'trimmed.fastq.gz'),\n kma_database=kma_database,\n kma_report=kma_report,\n threads=threads)\n else:\n # Recommended Nanopore settings from KMA repo: https://bitbucket.org/genomicepidemiology/kma\n cmd = 'kma -i {input_reads} -t_db {kma_database} -o {kma_report} -mem_mode -mp 20 -mrs 0.0 -bcNano ' \\\n '-t {threads}'.format(input_reads=os.path.join(sample_tmp_dir, 'trimmed.fastq.gz'),\n kma_database=kma_database,\n kma_report=kma_report,\n threads=threads)\n out, err = run_cmd(cmd)\n write_to_logfile(log, out, err, cmd)\n\n rmlst_report = os.path.join(output_folder, sample_name + '_rmlst.csv')\n gene_alleles = find_rmlst_type(kma_report=kma_report + '.res',\n rmlst_report=rmlst_report)\n\n with open(os.path.join(sample_tmp_dir, 'rmlst.fasta'), 'w') as f:\n for contig in SeqIO.parse(sample_database, 'fasta'):\n if contig.id in gene_alleles:\n f.write('>{}\\n'.format(contig.id))\n f.write(str(contig.seq) + '\\n')\n\n rmlst_gene_length = find_total_sequence_length(os.path.join(sample_tmp_dir, 'rmlst.fasta'))\n logging.debug('Total gene length is {}'.format(rmlst_gene_length))\n # Second step of mapping - Do a mapping of our baited reads against a fasta file that has only one allele per\n # rMLST gene.\n pysam.faidx(os.path.join(sample_tmp_dir, 'rmlst.fasta'))\n if paired:\n cmd = 'bbmap.sh ref={ref} in={forward_in} in2={reverse_in} out={outbam} threads={threads} mdtag ' \\\n 'nodisk'.format(ref=os.path.join(sample_tmp_dir, 'rmlst.fasta'),\n forward_in=os.path.join(sample_tmp_dir, 'trimmed_R1.fastq.gz'),\n reverse_in=os.path.join(sample_tmp_dir, 'trimmed_R2.fastq.gz'),\n 
outbam=os.path.join(sample_tmp_dir, 'out_2.bam'),\n threads=threads)\n if cgmlst_db is not None:\n # Lots of core genes seem to have relatives within a genome that are at ~70 percent identity - this means\n # that reads that shouldn't really map do, and cause false positives. Adding in this subfilter means that\n # reads can only have one mismatch, so they actually have to be from the right gene for this to work.\n cmd += ' subfilter=1'\n if Xmx:\n cmd += ' -Xmx{}'.format(Xmx)\n out, err = run_cmd(cmd)\n write_to_logfile(log, out, err, cmd)\n else:\n if data_type == 'Illumina':\n cmd = 'bbmap.sh ref={ref} in={forward_in} out={outbam} threads={threads} mdtag ' \\\n 'nodisk'.format(ref=os.path.join(sample_tmp_dir, 'rmlst.fasta'),\n forward_in=os.path.join(sample_tmp_dir, 'trimmed.fastq.gz'),\n outbam=os.path.join(sample_tmp_dir, 'out_2.bam'),\n threads=threads)\n if cgmlst_db is not None:\n # Lots of core genes seem to have relatives within a genome that are at ~70 percent identity - this means\n # that reads that shouldn't really map do, and cause false positives. 
Adding in this subfilter means that\n # reads can only have one mismatch, so they actually have to be from the right gene for this to work.\n cmd += ' subfilter=1'\n if Xmx:\n cmd += ' -Xmx{}'.format(Xmx)\n out, err = run_cmd(cmd)\n write_to_logfile(log, out, err, cmd)\n else:\n cmd = 'minimap2 --MD -t {threads} -ax map-ont {ref} {reads} ' \\\n '> {outsam}'.format(ref=os.path.join(sample_tmp_dir, 'rmlst.fasta'),\n reads=os.path.join(sample_tmp_dir, 'trimmed.fastq.gz'),\n outsam=os.path.join(sample_tmp_dir, 'out_2.sam'),\n threads=threads)\n out, err = run_cmd(cmd)\n write_to_logfile(log, out, err, cmd)\n outbam = os.path.join(sample_tmp_dir, 'out_2.bam')\n # Apparently have to perform equivalent of a touch on this file for this to work.\n fh = open(outbam, 'w')\n fh.close()\n pysam.view('-b', '-o', outbam, os.path.join(sample_tmp_dir, 'out_2.sam'), save_stdout=outbam)\n pysam.sort('-o', os.path.join(sample_tmp_dir, 'contamination.bam'), os.path.join(sample_tmp_dir, 'out_2.bam'))\n pysam.index(os.path.join(sample_tmp_dir, 'contamination.bam'))\n # Now find number of multi-positions for each rMLST gene/allele combination\n multi_positions = 0\n\n # Run the BAM parsing in parallel! 
Some refactoring of the code would likely be a good idea so this\n # isn't quite so ugly, but it works.\n p = multiprocessing.Pool(processes=threads)\n bamfile_list = [os.path.join(sample_tmp_dir, 'contamination.bam')] * len(gene_alleles)\n # bamfile_list = [os.path.join(sample_tmp_dir, 'rmlst.bam')] * len(gene_alleles)\n reference_fasta_list = [os.path.join(sample_tmp_dir, 'rmlst.fasta')] * len(gene_alleles)\n quality_cutoff_list = [quality_cutoff] * len(gene_alleles)\n base_cutoff_list = [base_cutoff] * len(gene_alleles)\n base_fraction_list = [base_fraction_cutoff] * len(gene_alleles)\n multibase_dict_list = list()\n report_write_list = list()\n for multibase_dict, report_write in p.starmap(read_contig, zip(gene_alleles, bamfile_list, reference_fasta_list, quality_cutoff_list, base_cutoff_list, base_fraction_list), chunksize=1):\n multibase_dict_list.append(multibase_dict)\n report_write_list.append(report_write)\n p.close()\n p.join()\n\n # Write out report info.\n report_file = os.path.join(output_folder, sample_name + '_contamination.csv')\n with open(report_file, 'w') as r:\n r.write('{reference},{position},{bases},{coverage}\\n'.format(reference='Gene',\n position='Position',\n bases='Bases',\n coverage='Coverage'))\n for item in report_write_list:\n for contamination_info in item:\n r.write(contamination_info)\n\n # Total up the number of multibase positions.\n for multibase_position_dict in multibase_dict_list:\n multi_positions += sum([len(snp_positions) for snp_positions in multibase_position_dict.values()])\n if cgmlst_db is None:\n snp_cutoff = int(rmlst_gene_length/10000) + 1\n else:\n snp_cutoff = 10\n if multi_positions >= snp_cutoff:\n percent_contam, contam_stddev = estimate_percent_contamination(contamination_report_file=report_file)\n else:\n percent_contam = 0\n contam_stddev = 0\n logging.info('Done! 
Number of contaminating SNVs found: {}\\n'.format(multi_positions))\n write_output(output_report=os.path.join(output_folder, 'confindr_report.csv'),\n sample_name=sample_name,\n multi_positions=multi_positions,\n genus=genus,\n percent_contam=percent_contam,\n contam_stddev=contam_stddev,\n total_gene_length=rmlst_gene_length,\n snp_cutoff=snp_cutoff,\n cgmlst=cgmlst_db,\n database_download_date=database_download_date)\n if keep_files is False:\n shutil.rmtree(sample_tmp_dir)", "response": "This function finds the contamination of a pair of reads."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfunctioning that writes the output of ConFindr to a CSV file.", "response": "def write_output(output_report, sample_name, multi_positions, genus, percent_contam, contam_stddev, total_gene_length,\n database_download_date, snp_cutoff=3, cgmlst=None):\n \"\"\"\n Function that writes the output generated by ConFindr to a report file. Appends to a file that already exists,\n or creates the file if it doesn't already exist.\n :param output_report: Path to CSV output report file. Should have headers SampleName,Genus,NumContamSNVs,\n ContamStatus,PercentContam, and PercentContamStandardDeviation, in that order.\n :param sample_name: string - name of sample\n :param multi_positions: integer - number of positions that were found to have more than one base present.\n :param genus: string - The genus of your sample\n :param percent_contam: float - Estimated percentage contamination\n :param contam_stddev: float - Standard deviation of percentage contamination\n :param total_gene_length: integer - number of bases examined to make a contamination call.\n :param cgmlst: If None, means that rMLST database was used, so use rMLST snp cutoff. 
Otherwise, some sort of cgMLST\n database was used, so use a different cutoff.\n \"\"\"\n # If the report file hasn't been created, make it, with appropriate header.\n if not os.path.isfile(output_report):\n with open(os.path.join(output_report), 'w') as f:\n f.write('Sample,Genus,NumContamSNVs,ContamStatus,PercentContam,PercentContamStandardDeviation,BasesExamined,DatabaseDownloadDate\\n')\n\n if multi_positions >= snp_cutoff or len(genus.split(':')) > 1:\n contaminated = True\n else:\n contaminated = False\n with open(output_report, 'a+') as f:\n f.write('{samplename},{genus},{numcontamsnvs},'\n '{contamstatus},{percent_contam},{contam_stddev},'\n '{gene_length},{database_download_date}\\n'.format(samplename=sample_name,\n genus=genus,\n numcontamsnvs=multi_positions,\n contamstatus=contaminated,\n percent_contam=percent_contam,\n contam_stddev=contam_stddev,\n gene_length=total_gene_length,\n database_download_date=database_download_date))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_acceptable_xmx(xmx_string):\n acceptable_xmx = True\n acceptable_suffixes = ['K', 'M', 'G']\n if xmx_string[-1].upper() not in acceptable_suffixes:\n acceptable_xmx = False\n logging.error('ERROR: Memory must be specified as K (kilobytes), M (megabytes), or G (gigabytes). Your specified '\n 'suffix was {}.'.format(xmx_string[-1]))\n if '.' 
in xmx_string:\n acceptable_xmx = False\n logging.error('ERROR: Xmx strings must be integers, floating point numbers are not accepted.')\n if not str.isdigit(xmx_string[:-1]):\n acceptable_xmx = False\n logging.error('ERROR: The amount of memory requested was not an integer.')\n return acceptable_xmx", "response": "Checks that the XMX string is acceptable by BBTools."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef char_code(columns, name=None):\n if name is None:\n name = 'Char Code Field (' + str(columns) + ' columns)'\n\n if columns <= 0:\n raise BaseException()\n\n char_sets = None\n for char_set in _tables.get_data('character_set'):\n regex = '[ ]{' + str(15 - len(char_set)) + '}' + char_set\n if char_sets is None:\n char_sets = regex\n else:\n char_sets += '|' + regex\n\n # Accepted sets\n _character_sets = pp.Regex(char_sets)\n _unicode_1_16b = pp.Regex('U\\+0[0-8,A-F]{3}[ ]{' + str(columns - 6) + '}')\n _unicode_2_21b = pp.Regex('U\\+0[0-8,A-F]{4}[ ]{' + str(columns - 7) + '}')\n\n # Basic field\n char_code_field = (_character_sets | _unicode_1_16b | _unicode_2_21b)\n\n # Parse action\n char_code_field = char_code_field.setParseAction(lambda s: s[0].strip())\n\n # Name\n char_code_field.setName(name)\n\n return char_code_field", "response": "Returns an instance of the Character set code field."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef make_choice_validator(\n choices, default_key=None, normalizer=None):\n \"\"\"\n Returns a callable that accepts the choices provided.\n\n Choices should be provided as a list of 2-tuples, where the first\n element is a string that should match user input (the key); the\n second being the value associated with the key.\n\n The callable by default will match, upon complete match the first\n value associated with the result will be returned. 
Partial matches\n are supported.\n\n If a default is provided, that value will be returned if the user\n provided input is empty, i.e. the value that is mapped to the empty\n string.\n\n Finally, a normalizer function can be passed. This normalizes all\n keys and validation value.\n \"\"\"\n\n def normalize_all(_choices):\n # normalize all the keys for easier comparison\n if normalizer:\n _choices = [(normalizer(key), value) for key, value in choices]\n return _choices\n\n choices = normalize_all(choices)\n\n def choice_validator(value):\n if normalizer:\n value = normalizer(value)\n if not value and default_key:\n value = choices[default_key][0]\n results = []\n for choice, mapped in choices:\n if value == choice:\n return mapped\n if choice.startswith(value):\n results.append((choice, mapped))\n if len(results) == 1:\n return results[0][1]\n elif not results:\n raise ValueError('Invalid choice.')\n else:\n raise ValueError(\n 'Choice ambiguous between (%s)' % ', '.join(\n k for k, v in normalize_all(results))\n )\n\n return choice_validator", "response": "Creates a callable that accepts the user input."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nprompt user for question maybe choices and get answer.", "response": "def prompt(question, validator=None,\n choices=None, default_key=NotImplemented,\n normalizer=str.lower,\n _stdin=None, _stdout=None):\n \"\"\"\n Prompt user for question, maybe choices, and get answer.\n\n Arguments:\n\n question\n The question to prompt. It will only be prompted once.\n validator\n Defaults to None. Must be a callable that takes in a value.\n The callable should raise ValueError when the value leads to an\n error, otherwise return a converted value.\n choices\n If choices are provided instead, a validator will be constructed\n using make_choice_validator along with the next default_value\n argument. 
Please refer to documentation for that function.\n default_value\n See above.\n normalizer\n Defaults to str.lower. See above.\n \"\"\"\n\n def write_choices(choice_keys, default_key):\n _stdout.write('(')\n _stdout.write('/'.join(choice_keys))\n _stdout.write(') ')\n if default_key is not NotImplemented:\n _stdout.write('[')\n _stdout.write(choice_keys[default_key])\n _stdout.write('] ')\n\n if _stdin is None:\n _stdin = sys.stdin\n\n if _stdout is None:\n _stdout = sys.stdout\n\n _stdout.write(question)\n _stdout.write(' ')\n\n if not check_interactive():\n if choices and default_key is not NotImplemented:\n choice_keys = [choice for choice, mapped in choices]\n write_choices(choice_keys, default_key)\n display, answer = choices[default_key]\n _stdout.write(display)\n _stdout.write('\\n')\n\n logger.warning(\n 'non-interactive mode; auto-selected default option [%s]',\n display)\n return answer\n logger.warning(\n 'interactive code triggered within non-interactive session')\n _stdout.write('Aborted.\\n')\n return None\n\n choice_keys = []\n\n if validator is None:\n if choices:\n validator = make_choice_validator(\n choices, default_key, normalizer)\n choice_keys = [choice for choice, mapped in choices]\n else:\n validator = null_validator\n\n answer = NotImplemented\n while answer is NotImplemented:\n if choice_keys:\n write_choices(choice_keys, default_key)\n _stdout.flush()\n try:\n answer = validator(\n _stdin.readline().strip().encode(locale).decode(locale))\n except ValueError as e:\n _stdout.write('%s\\n' % e)\n _stdout.write(question.splitlines()[-1])\n _stdout.write(' ')\n except KeyboardInterrupt:\n _stdout.write('Aborted.\\n')\n answer = None\n\n return answer"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprompting end user with a diff of original and new json that may overwrite the file at the target_path.", "response": "def prompt_overwrite_json(original, new, target_path, dumps=json_dumps):\n \"\"\"\n Prompt end user 
with a diff of original and new json that may\n overwrite the file at the target_path. This function only displays\n a confirmation prompt and it is up to the caller to implement the\n actual functionality. Optionally, a custom json.dumps method can\n also be passed in for output generation.\n \"\"\"\n\n # generate compacted ndiff output.\n diff = '\\n'.join(l for l in (\n line.rstrip() for line in difflib.ndiff(\n json_dumps(original).splitlines(),\n json_dumps(new).splitlines(),\n ))\n if l[:1] in '?+-' or l[-1:] in '{}' or l[-2:] == '},')\n basename_target = basename(target_path)\n return prompt(\n \"Generated '%(basename_target)s' differs with '%(target_path)s'.\\n\\n\"\n \"The following is a compacted list of changes required:\\n\"\n \"%(diff)s\\n\\n\"\n \"Overwrite '%(target_path)s'?\" % locals(),\n choices=(\n ('Yes', True),\n ('No', False),\n ),\n default_key=1,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlocates a single npm package to return its browser or main entry.", "response": "def locate_package_entry_file(working_dir, package_name):\n \"\"\"\n Locate a single npm package to return its browser or main entry.\n \"\"\"\n\n basedir = join(working_dir, 'node_modules', package_name)\n package_json = join(basedir, 'package.json')\n if not exists(package_json):\n logger.debug(\n \"could not locate package.json for the npm package '%s' in the \"\n \"current working directory '%s'; the package may have been \"\n \"not installed, the build process may fail\",\n package_name, working_dir,\n )\n return\n\n with open(package_json) as fd:\n package_info = json.load(fd)\n\n if ('browser' in package_info or 'main' in package_info):\n # assume the target file exists because configuration files\n # never lie /s\n return join(\n basedir,\n *(package_info.get('browser') or package_info['main']).split('/')\n )\n\n index_js = join(basedir, 'index.js')\n if exists(index_js):\n return index_js\n\n logger.debug(\n 
\"package.json for the npm package '%s' does not contain a main \"\n \"entry point\", package_name,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef render_pictures(context, selection='recent', amount=3):\n pictures = Image.objects.filter(\n folder__id__in=Gallery.objects.filter(is_published=True).values_list(\n 'folder__pk', flat=True))\n if selection == 'recent':\n context.update({\n 'pictures': pictures.order_by('-uploaded_at')[:amount]\n })\n elif selection == 'random':\n context.update({\n 'pictures': pictures.order_by('?')[:amount]\n })\n else:\n return None\n return context", "response": "Template tag to render a list of pictures."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_header(self, entry):\n info = entry.split('\\t')\n self.n_individuals = len(info)-9\n for i,v in enumerate(info[9:]):\n self.individuals[v] = i\n return self.n_individuals > 0", "response": "Parses the VCF Header field and returns the number of samples in the VCF file"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing an individual VCF entry and return a VCFEntry which contains information about the call.", "response": "def parse_entry(self, row):\n \"\"\"Parse an individual VCF entry and return a VCFEntry which contains information about\n the call (such as alternative allele, zygosity, etc.)\n\n \"\"\"\n var_call = VCFEntry(self.individuals)\n var_call.parse_entry(row)\n return var_call"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_header(self, individual=-1):\n type_map = dict([(val,key) for key,val in self.meta.type_map.iteritems()])\n extra = '\\n'.join(['##{0}'.format(i) for i in self.meta.extra])\n info = '\\n'.join(['##INFO=<ID={},Number={},Type={},Description=\"{}\">'.format(key, val.get('num_entries','.'), type_map.get(val.get('type', '')), val.get('description')) for key,val in 
self.meta.info.iteritems()])\n filter = '\\n'.join(['##FILTER=<ID={},Description=\"{}\">'.format(key, val.get('description','.')) for key,val in self.meta.filter.iteritems()])\n format = '\\n'.join(['##FORMAT=<ID={},Number={},Type={},Description=\"{}\">'.format(key, val.get('num_entries','.'), type_map.get(val.get('type', '')), val.get('description')) for key,val in self.meta.format.iteritems()])\n alt = '\\n'.join(['##ALT=<ID={},Description=\"{}\">'.format(key, val.get('description','.')) for key,val in self.meta.alt.iteritems()])\n header = '\\t'.join(['#CHROM','POS','ID','REF','ALT','QUAL','FILTER','INFO','FORMAT'])\n if individual is not None:\n if individual == -1:\n individual = '\\t'.join(self.individuals.keys())\n else:\n if isinstance(individual, int):\n for i, v in self.individuals.iteritems():\n if v == individual:\n individual = i\n break\n header += '\\t'+individual\n return '\\n'.join([extra, info, filter, format, alt, header])", "response": "Returns the vcf header"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_info(self, entry):\n entry = entry[8:-1]\n info = entry.split(',')\n if len(info) < 4:\n return False\n for v in info:\n key, value = v.split('=', 1)\n if key == 'ID':\n self.info[value] = {}\n id_ = value\n elif key == 'Number':\n if value == 'A' or value == 'G':\n value = -1\n self.info[id_]['num_entries'] = value\n elif key == 'Type':\n self.info[id_]['type'] = self.type_map[value]\n elif key == 'Description':\n self.info[id_]['description'] = value\n if len(info) > 4:\n self.info[id_]['description'] += '; '.join(info[4:])\n break\n return True", "response": "Parse and store the info field"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_filter(self, entry):\n entry = entry[10:-1]\n info = entry.split(',')\n if len(info) < 2:\n return False\n for v in info:\n key, value = v.split('=', 1)\n if key == 'ID':\n self.filter[value] = {}\n id_ = value\n elif key == 'Description':\n 
self.filter[id_]['description'] = value\n if len(info) > 2:\n self.filter[id_]['description'] += '; '.join(info[2:])\n return True", "response": "Parse and store the filter field"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses and store the alternative allele field", "response": "def add_alt(self, entry):\n \"\"\"Parse and store the alternative allele field\"\"\"\n entry = entry[7:-1]\n info = entry.split(',')\n if len(info) < 2:\n return False\n for v in info:\n key, value = v.split('=', 1)\n if key == 'ID':\n self.alt[value] = {}\n id_ = value\n elif key == 'Description':\n self.alt[id_]['description'] = value\n if len(info) > 4:\n self.alt[id_]['description'] += '; '.join(info[4:])\n break\n return True"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sample_string(self, individual=-1):\n base = str(self)\n extra = self.get_sample_info(individual=individual)\n extra = [':'.join([str(j) for j in i]) for i in zip(*extra.values())]\n return '\\t'.join([base, '\\t'.join(extra)])", "response": "Returns the VCF entry as it appears in the vcf file"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the sample info of a given sample or all by default", "response": "def get_sample_info(self, individual=-1):\n \"\"\"Returns the sample info of a given sample or all by default\n\n \"\"\"\n if isinstance(individual, str):\n individual = self.individuals[individual]\n extra = OrderedDict()\n for format_ in self.format:\n index = getattr(self, format_)\n if index != -1:\n if format_ == 'GT':\n d = self.genotype\n elif format_ == 'GQ':\n d = self.genome_quality\n elif format_ == 'DP':\n d = self.depth\n if individual == -1:\n if len(d) != len(self.samples):\n [self.parse_sample(i) for i in six.moves.range(len(self.samples))]\n extra[format_] = [d[i] for i in six.moves.range(len(d))]\n else:\n if individual not in d:\n 
self.parse_sample(individual)\n extra[format_] = [d[individual]]\n return extra"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the alternative alleles of the individual as a list", "response": "def get_alt(self, individual=0, nucleotides_only=True):\n \"\"\"Returns the alternative alleles of the individual as a list\"\"\"\n #not i.startswith(',') is put in to handle cases like where we have no alternate allele\n #but some reference\n if isinstance(individual, str):\n individual = self.individuals[individual]\n if nucleotides_only:\n return [self.alt[i-1].replace('.', '') for i in self.genotype[individual] if i > 0 and not self.alt[i-1].startswith('<')]\n else:\n return [self.alt[i-1].replace('.', '') for i in self.genotype[individual] if i > 0]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_alt_length(self, individual=0):\n if isinstance(individual, str):\n individual = self.individuals[individual]\n return [len(self.alt[i-1].replace('.','')) for i in self.genotype[individual] if i > 0 and not self.alt[i-1].startswith('<')]", "response": "Returns the number of basepairs of each alternative allele"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_alt_lengths(self):\n #this is a hack to store the # of individuals without having to actually store it\n out = []\n for i in six.moves.range(len(self.genotype)):\n valid_alt = self.get_alt_length(individual=i)\n if not valid_alt:\n out.append(None)\n else:\n out.append(max(valid_alt)-len(self.ref))\n return out", "response": "Returns the longest length of the variant. 
For deletions returns is negative and insertions are +."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef has_snp(self, individual=0):\n if isinstance(individual, str):\n individual = self.individuals[individual]\n alts = self.get_alt(individual=individual)\n if alts:\n return [i != self.ref and len(i) == len(self.ref) for i in alts]\n return [False]", "response": "Returns a boolean list of SNP status ordered by samples"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_entry(self, entry):\n entry = entry.split('\\t')\n self.chrom, self.pos, self.id, self.ref, alt_, self.qual, filter_, info, self.format = entry[:9]\n self.samples = entry[9:]\n self.alt = alt_.split(',')\n if filter_ == 'PASS' or filter_ == '.':\n self.passed = True\n else:\n self.passed = filter_.split(';')\n self.info = info\n # currently unused\n #if info != '.':\n #info_l = info.split(';')\n #self.info = [v.split('=') if '=' in v else (v,1) for v in info_l]\n self.format = self.format.split(':')\n if 'GT' in self.format:\n self.GT = self.format.index('GT')\n if 'GQ' in self.format:\n self.GQ = self.format.index('GQ')\n if 'DP' in self.format:\n self.DP = self.format.index('DP')\n if 'FT' in self.format:\n self.FT = self.format.index('FT')", "response": "This parses a VCF row and stores the relevant information"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_child(self, child):\n child_id = getattr(child, 'id', None)\n if child_id:\n if not hasattr(self, 'children'):\n self.children = {}\n if child_id not in self.children:\n self.children[child_id] = child", "response": "Add a child to the entry."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nhandles the ISO formatting of an object.", "response": "def _iso_handler(obj):\n \"\"\"\n Transforms an object into it's ISO 
format, if possible.\n\n If the object can't be transformed, then an error is raised for the JSON\n parser.\n\n This is meant to be used on datetime instances, but will work with any\n object having a method called isoformat.\n\n :param obj: object to transform into it's ISO format\n :return: the ISO format of the object\n \"\"\"\n if hasattr(obj, 'isoformat'):\n result = obj.isoformat()\n else:\n raise TypeError(\"Unserializable object {} of type {}\".format(obj,\n type(obj)))\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngenerating string of cwr format for all possible combinations of fields and entity.", "response": "def encode(self, entity):\n \"\"\"\n Generate string of cwr format for all possible combinations of fields,\n accumulate and then elect the best. The best string it is who used most of all fields\n :param entity:\n :return:\n \"\"\"\n possible_results = []\n entity_dict = self.get_entity_dict(entity)\n record_field_encoders = self.get_record_fields_encoders()\n for field_encoders in record_field_encoders:\n result = self.try_encode(field_encoders, entity_dict)\n if result:\n possible_results.append({'result': result, 'len': len(field_encoders)})\n cwr = self.head(entity) + self._get_best_result(possible_results) + \"\\r\\n\"\n return cwr"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the argparser for this instance.", "response": "def argparser(self):\n \"\"\"\n For setting up the argparser for this instance.\n \"\"\"\n\n if self.__argparser is None:\n self.__argparser = self.argparser_factory()\n self.init_argparser(self.__argparser)\n return self.__argparser"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nproduces an argparser for this type of Runtime.", "response": "def argparser_factory(self):\n \"\"\"\n Produces argparser for this type of Runtime.\n \"\"\"\n\n return ArgumentParser(\n 
prog=self.prog, description=self.__doc__, add_help=False,\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef init_argparser(self, argparser):\n\n def prepare_argparser():\n if argparser in self.argparser_details:\n return False\n result = self.argparser_details[argparser] = ArgumentParserDetails(\n {}, {}, {})\n return result\n\n def to_module_attr(ep):\n return '%s:%s' % (ep.module_name, '.'.join(ep.attrs))\n\n def register(name, runtime, entry_point):\n subparser = commands.add_parser(\n name, help=inst.description,\n )\n # Have to specify this separately because otherwise the\n # subparser will not have a proper description when it is\n # invoked as the root.\n subparser.description = inst.description\n\n # Assign values for version reporting system\n setattr(subparser, ATTR_ROOT_PKG, getattr(\n argparser, ATTR_ROOT_PKG, self.package_name))\n subp_info = []\n subp_info.extend(getattr(argparser, ATTR_INFO, []))\n subp_info.append((subparser.prog, entry_point.dist))\n setattr(subparser, ATTR_INFO, subp_info)\n\n try:\n try:\n runtime.init_argparser(subparser)\n except RuntimeError as e:\n # first attempt to filter out recursion errors; also if\n # the stack frame isn't available the complaint about\n # bad validation doesn't apply anyway.\n frame = currentframe()\n if (not frame or 'maximum recursion depth' not in str(\n e.args)):\n raise\n\n if (not isinstance(runtime, Runtime) or (type(\n runtime).entry_point_load_validated.__code__ is\n Runtime.entry_point_load_validated.__code__)):\n # welp, guess some other thing blew up then, or\n # that the problem is definitely not caused by\n # this runtime implementation.\n # TODO figure out how to log this nicer via the\n # self.log_debug_error without exploding the\n # console like Megumin would have done.\n raise\n\n # assume the overridden method didn't do everything\n # correctly then; would be great if there is a way\n # to ensure that our thing would 
have been called.\n cls = type(runtime)\n logger.critical(\n \"Runtime subclass at entry_point '%s' has override \"\n \"'entry_point_load_validated' without filtering out \"\n \"its parent classes; this can be addressed by calling \"\n \"super(%s.%s, self).entry_point_load_validated(\"\n \"entry_point) in its implementation, or simply don't \"\n \"override that method to avoid infinite recursion.\",\n entry_point, cls.__module__, cls.__name__,\n )\n exc = RuntimeError(\n \"%r has an invalid 'entry_point_load_validated' \"\n \"implementation: insufficient protection against \"\n \"infinite recursion into self not provided\" % runtime\n )\n # for Python 3 to not blow it up.\n exc.__suppress_context__ = True\n raise exc\n except Exception as e:\n self.log_debug_error(\n \"cannot register entry_point '%s' from '%s' as a \"\n \"subcommand to '%s': %s: %s\",\n entry_point, entry_point.dist, argparser.prog,\n e.__class__.__name__, e\n )\n # this is where naughty things happen: will be poking at\n # the parser internals to undo the damage that was done\n # first, pop the choices_actions as a help was provided\n commands._choices_actions.pop()\n # then pop the name that was mapped.\n commands._name_parser_map.pop(name)\n else:\n # finally record the completely initialized subparser\n # into the structure here if successful.\n subparsers[name] = subparser\n runtimes[name] = runtime\n entry_points[name] = entry_point\n\n details = prepare_argparser()\n if not details:\n logger.debug(\n 'argparser %r has already been initialized against runner %r',\n argparser, self,\n )\n return\n subparsers, runtimes, entry_points = details\n\n super(Runtime, self).init_argparser(argparser)\n\n commands = argparser.add_subparsers(\n dest=self.action_key, metavar='')\n # Python 3.7 has required set to True, which is correct in most\n # cases but this disables the manual handling for cases where a\n # command was not provided; also this generates a useless error\n # message that simply states 
\" is required\" and forces\n # the program to exit. As the goal of this suite of classes is\n # to act as a helpful CLI front end, force required to be False\n # to keep our manual handling and management of subcommands.\n # Setting this as a property for compatibility with Python<3.7,\n # as only in Python>=3.7 the add_subparsers can accept required\n # as an argument.\n commands.required = False\n\n for entry_point in self.iter_entry_points():\n inst = self.entry_point_load_validated(entry_point)\n if not inst:\n continue\n\n if entry_point.name in runtimes:\n reg_ep = entry_points[entry_point.name]\n reg_rt = runtimes[entry_point.name]\n\n if reg_rt is inst:\n # this is fine, multiple packages declared the same\n # thing with the same name.\n logger.debug(\n \"duplicated registration of command '%s' via entry \"\n \"point '%s' ignored; registered '%s', confict '%s'\",\n entry_point.name, entry_point, reg_ep.dist,\n entry_point.dist,\n )\n continue\n\n logger.error(\n \"a calmjs runtime command named '%s' already registered.\",\n entry_point.name\n )\n logger.info(\"conflicting entry points are:\")\n logger.info(\n \"'%s' from '%s' (registered)\", reg_ep, reg_ep.dist)\n logger.info(\n \"'%s' from '%s' (conflict)\", entry_point, entry_point.dist)\n # Fall back name should work if the class/instances are\n # stable.\n name = to_module_attr(entry_point)\n\n if name in runtimes:\n # Maybe this is the third time this module is\n # registered. 
Test for its identity.\n if runtimes[name] is not inst:\n # Okay someone is having a fun time here mucking\n # with data structures internal to here, likely\n # (read hopefully) due to testing or random\n # monkey patching (or module level reload).\n logger.critical(\n \"'%s' is already registered but points to a \"\n \"completely different instance; please try again \"\n \"with verbose logging and note which packages are \"\n \"reported as conflicted; alternatively this is a \"\n \"forced situation where this Runtime instance has \"\n \"been used or initialized improperly.\",\n name\n )\n else:\n logger.debug(\n \"fallback command '%s' is already registered.\",\n name\n )\n continue\n\n logger.error(\n \"falling back to using full instance path '%s' as command \"\n \"name, also registering alias for registered command\", name\n )\n register(to_module_attr(reg_ep), reg_rt, reg_ep)\n else:\n name = entry_point.name\n\n register(name, inst, entry_point)", "response": "Initialize the argparser for the internal version of the system."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef unrecognized_arguments_error(self, args, parsed, extras):\n\n # loop variants\n kwargs = vars(parsed)\n failed = list(extras)\n # initial values\n runtime, subparser, idx = (self, self.argparser, 0)\n # recursion not actually needed when it can be flattened.\n while isinstance(runtime, Runtime):\n cmd = kwargs.pop(runtime.action_key)\n # can happen if it wasn't set, or is set but from a default\n # value (thus not provided by args)\n action_idx = None if cmd not in args else args.index(cmd)\n if cmd not in args and cmd is not None:\n # this normally shouldn't happen, and the test case\n # showed that the parsing will not flip down to the\n # forced default subparser - this can remain a debug\n # message until otherwise.\n logger.debug(\n \"command for prog=%r is set to %r without being specified \"\n \"as part of the input 
arguments - the following error \"\n \"message may contain misleading references\",\n subparser.prog, cmd\n )\n subargs = args[idx:action_idx]\n subparsed, subextras = subparser.parse_known_args(subargs)\n if subextras:\n subparser.unrecognized_arguments_error(subextras)\n # since the failed arguments are in order\n failed = failed[len(subextras):]\n if not failed:\n # have taken everything, quit now.\n # also note that if cmd was really None it would\n # cause KeyError below, but fortunately it also\n # forced action_idx to be None which took all\n # remaining tokens from failed, so definitely get\n # out of here.\n break\n\n # advance the values\n # note that any internal consistency will almost certainly\n # result in KeyError being raised.\n details = runtime.get_argparser_details(subparser)\n runtime = details.runtimes[cmd]\n subparser = details.subparsers[cmd]\n idx = action_idx + 1\n\n if failed:\n subparser.unrecognized_arguments_error(failed)\n sys.exit(2)", "response": "This function is called by the argparse module to check if unrecognized arguments are present in the arguments list."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef error(self, argparser, target, message):\n\n warnings.warn(\n 'Runtime.error is deprecated and will be removed by calmjs-4.0.0',\n DeprecationWarning)\n details = self.get_argparser_details(argparser)\n argparser = details.subparsers[target] if details else self.argparser\n argparser.error(message)", "response": "This method is used to log an error message for the specified target parser."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef init_argparser_export_target(\n self, argparser,\n default=None,\n help='the export target',\n ):\n \"\"\"\n Subclass could override this by providing alternative keyword\n arguments and call this as its super. It should not reimplement\n this completely. 
Example:\n\n def init_argparser_export_target(self, argparser):\n super(MyToolchainRuntime, self).init_argparser_export_target(\n argparser, default='my_default.js',\n help=\"the export target, default is 'my_default.js'\",\n )\n\n Note that the above example will prevent its subclasses from\n directly using the definition of that class, but they _can_\n simply call the exact same super, or invoke ToolchainRuntime's\n init_argparser_* method directly.\n\n Arguments\n\n default\n The default export target.\n help\n The help text.\n \"\"\"\n\n argparser.add_argument(\n '-w', '--overwrite', dest=EXPORT_TARGET_OVERWRITE,\n action='store_true',\n help='overwrite the export target without any confirmation',\n )\n\n argparser.add_argument(\n '--export-target', dest=EXPORT_TARGET,\n metavar=metavar(EXPORT_TARGET),\n default=default,\n help=help,\n )", "response": "This method is used to initialize the argument parser for the export target."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ninitializes the argument parser for the working directory.", "response": "def init_argparser_working_dir(\n self, argparser,\n explanation='',\n help_template=(\n 'the working directory; %(explanation)s'\n 'default is current working directory (%(cwd)s)'),\n ):\n \"\"\"\n Subclasses could add an extra explanation on how this is used.\n\n Arguments\n\n explanation\n Explanation text for the default help template\n help_template\n A standard help message for this option.\n \"\"\"\n\n cwd = self.toolchain.join_cwd()\n argparser.add_argument(\n '--working-dir', dest=WORKING_DIR,\n metavar=metavar(WORKING_DIR),\n default=cwd,\n help=help_template % {'explanation': explanation, 'cwd': cwd},\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ninitializes the argument parser for the build directory.", "response": "def init_argparser_build_dir(\n self, argparser, help=(\n 'the build directory, where all 
sources will be copied to '\n 'as part of the build process; if left unspecified, the '\n 'default behavior is to create a new temporary directory '\n 'that will be removed upon conclusion of the build; if '\n 'specified, it must be an existing directory and all files '\n 'for the build will be copied there instead, overwriting any '\n 'existing file, with no cleanup done after.'\n )):\n \"\"\"\n For setting up build directory\n \"\"\"\n\n argparser.add_argument(\n '--build-dir', default=None, dest=BUILD_DIR,\n metavar=metavar(BUILD_DIR), help=help,\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef init_argparser_optional_advice(\n self, argparser, default=[], help=(\n 'a comma separated list of packages to retrieve optional '\n 'advice from; the provided packages should have registered '\n 'the appropriate entry points for setting up the advices for '\n 'the toolchain; refer to documentation for the specified '\n 'packages for details'\n )):\n \"\"\"\n For setting up optional advice.\n \"\"\"\n\n argparser.add_argument(\n '--optional-advice', default=default, required=False,\n dest=ADVICE_PACKAGES, action=StoreRequirementList,\n metavar='[,[...]]',\n help=help\n )", "response": "Initializes the argument parser for optional advice."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninitialize the argument parser for the current version of the toolchain.", "response": "def init_argparser(self, argparser):\n \"\"\"\n Other runtimes (or users of ArgumentParser) can pass their\n subparser into here to collect the arguments here for a\n subcommand.\n \"\"\"\n\n super(ToolchainRuntime, self).init_argparser(argparser)\n\n # it is possible for subclasses to fully override this, but if\n # they are using this as the runtime to drive the toolchain they\n # should be prepared to follow the layout, but if they omit them\n # it should only result in the spec omitting these arguments.\n 
self.init_argparser_export_target(argparser)\n self.init_argparser_working_dir(argparser)\n self.init_argparser_build_dir(argparser)\n self.init_argparser_optional_advice(argparser)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npreparing a spec for usage with the generic ToolchainRuntime.", "response": "def prepare_spec(self, spec, **kwargs):\n \"\"\"\n Prepare a spec for usage with the generic ToolchainRuntime.\n\n Subclasses should avoid overriding this; override create_spec\n instead.\n \"\"\"\n\n self.prepare_spec_debug_flag(spec, **kwargs)\n self.prepare_spec_export_target_checks(spec, **kwargs)\n # defer the setup till the actual toolchain invocation\n spec.advise(SETUP, self.prepare_spec_advice_packages, spec, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nturns the provided kwargs into a spec ready for toolchain.", "response": "def kwargs_to_spec(self, **kwargs):\n \"\"\"\n Turn the provided kwargs into arguments ready for toolchain.\n \"\"\"\n\n spec = self.create_spec(**kwargs)\n self.prepare_spec(spec, **kwargs)\n return spec"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef init_argparser_package_names(self, argparser, help=(\n 'names of the python package to generate artifacts for; '\n 'note that the metadata directory for the specified '\n 'packages must be writable')):\n \"\"\"\n Default helper for setting up the package_names option.\n\n This is separate so that subclasses are not assumed for the\n purposes of artifact creation; they should consider modifying\n the default help message to reflect the fact.\n \"\"\"\n\n argparser.add_argument(\n 'package_names', metavar=metavar('package'), nargs='+', help=help)", "response": "Initialize the argument parser for the package_names option."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninitialize the argument parser 
for the source registry flag.", "response": "def init_argparser_source_registry(\n self, argparser, default=None, help=(\n 'comma separated list of registries to use for gathering '\n 'JavaScript sources from the given Python packages'\n )):\n \"\"\"\n For setting up the source registry flag.\n \"\"\"\n\n argparser.add_argument(\n '--source-registry', default=default,\n dest=CALMJS_MODULE_REGISTRY_NAMES, action=StoreDelimitedList,\n metavar='[,[...]]',\n help=help,\n )\n\n argparser.add_argument(\n '--source-registries', default=default,\n dest=CALMJS_MODULE_REGISTRY_NAMES, action=StoreDelimitedList,\n help=SUPPRESS,\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef init_argparser_loaderplugin_registry(\n self, argparser, default=None, help=(\n 'the name of the registry to use for the handling of loader '\n 'plugins that may be loaded from the given Python packages'\n )):\n \"\"\"\n Default helper for setting up the loaderplugin registries flags.\n\n Note that this is NOT part of the init_argparser due to\n implementation specific requirements. 
Subclasses should\n consider modifying the default value help message to cater to the\n toolchain it encapsulates.\n \"\"\"\n\n argparser.add_argument(\n '--loaderplugin-registry', default=default,\n dest=CALMJS_LOADERPLUGIN_REGISTRY_NAME, action='store',\n metavar=metavar('registry'),\n help=help,\n )", "response": "Initialize the given argument parser for loaderplugin registries."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninitializing the argument parser.", "response": "def init_argparser(self, argparser):\n \"\"\"\n Other runtimes (or users of ArgumentParser) can pass their\n subparser into here to collect the arguments here for a\n subcommand.\n \"\"\"\n\n super(SourcePackageToolchainRuntime, self).init_argparser(argparser)\n\n self.init_argparser_source_registry(argparser)\n self.init_argparser_package_names(argparser)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a grammar for an Alphanumeric field.", "response": "def alphanum(columns, name=None, extended=False, isLast=False):\n \"\"\"\n Creates the grammar for an Alphanumeric (A) field, accepting only the\n specified number of characters.\n\n By default Alphanumeric fields accept only ASCII characters, excluding\n lowercases. 
If the extended flag is set to True, then non-ASCII characters\n are allowed, but the no-lowercase-ASCII constraint is kept.\n\n This can be a compulsory field, in which case the empty string is\n disallowed.\n\n The text will be stripped of leading and trailing whitespaces.\n\n :param columns: number of columns for this field\n :param name: name for the field\n :param extended: indicates if this is the exceptional case where non-ASCII\n are allowed\n :param isLast: indicates if this is the last field, in which case it may\n span from one character up to the full number of columns\n :return: grammar for this Alphanumeric field\n \"\"\"\n\n if name is None:\n name = 'Alphanumeric Field'\n\n if columns < 0:\n # Can't be empty or have negative size\n raise BaseException()\n\n if isLast:\n columns = str('1,' + str(columns))\n\n # Checks if non-ASCII characters are allowed\n if not extended:\n # The regular expression just forbids lowercase characters\n field = pp.Regex('([\x00-\x60]|[\x7B-\x7F]){' + str(columns) + '}')\n else:\n # The regular expression forbids lowercase characters but allows\n # non-ASCII characters\n field = pp.Regex('([\x00-\x09]|[\x0E-\x60]|[\x7B-\x7F]|[^\x00-\x7F]){' +\n str(columns) + '}')\n\n # Parse action\n field.setParseAction(lambda s: s[0].strip())\n\n # Compulsory field validation action\n if columns:\n field.addParseAction(lambda s: _check_not_empty(s[0]))\n\n # White spaces are not removed\n field.leaveWhitespace()\n\n # Name\n field.setName(name)\n\n return field"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _check_not_empty(string):\n string = string.strip()\n\n if len(string) == 0:\n message = 'The string should not be empty'\n raise pp.ParseException(message)", "response": "Checks that the string is not empty."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a grammar for a numeric field.", "response": "def numeric(columns, name=None):\n \"\"\"\n Creates the grammar for a Numeric (N) field, accepting only the specified\n number of 
characters.\n\n This version only allows integers.\n\n :param columns: number of columns for this field\n :param name: name for the field\n :return: grammar for the integer numeric field\n \"\"\"\n\n if name is None:\n name = 'Numeric Field'\n\n if columns <= 0:\n # Can't be empty or have negative size\n raise BaseException()\n\n # Only numbers are accepted\n field = pp.Regex('[0-9]{' + str(columns) + '}')\n\n # Parse action\n field.setParseAction(_to_int)\n field.leaveWhitespace()\n\n # Name\n field.setName(name)\n\n return field"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef numeric_float(columns, nums_int, name=None):\n\n if name is None:\n name = 'Numeric Field'\n\n if columns <= 0:\n # Can't be empty or have negative size\n raise BaseException('Number of columns should be positive')\n\n if nums_int < 0:\n # Integer columns can't have negative size\n raise BaseException('Number of integer values should be positive or '\n 'zero')\n\n if columns < nums_int:\n # There are more integer numbers than columns\n message = 'The number of columns is %s and should be higher or ' \\\n 'equal than the integers: %s' % (\n columns, nums_int)\n raise BaseException(message)\n\n # Basic field\n field = pp.Word(pp.nums, exact=columns)\n\n # Parse action\n field.setParseAction(lambda n: _to_numeric_float(n[0], nums_int))\n\n # Compulsory field validation action\n field.addParseAction(lambda s: _check_above_value_float(s[0], 0))\n\n # Name\n field.setName(name)\n\n return field", "response": "Returns a grammar for a numeric field with the specified number of characters."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _to_numeric_float(number, nums_int):\n index_end = len(number) - nums_int\n return float(number[:nums_int] + '.' 
+ number[-index_end:])", "response": "Transforms a string into a float."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks that the value parsed from the string is above a minimum value.", "response": "def _check_above_value_float(string, minimum):\n \"\"\"\n Checks that the number parsed from the string is above a minimum.\n\n This is used on compulsory numeric fields.\n\n If the value is not above the minimum an exception is thrown.\n\n :param string: the field value\n :param minimum: minimum value\n \"\"\"\n value = float(string)\n\n if value < minimum:\n message = 'The Numeric Field value should be above %s' % minimum\n raise pp.ParseException(message)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the grammar for a Boolean field", "response": "def boolean(name=None):\n \"\"\"\n Creates the grammar for a Boolean (B) field, accepting only 'Y' or 'N'\n\n :param name: name for the field\n :return: grammar for the flag field\n \"\"\"\n\n if name is None:\n name = 'Boolean Field'\n\n # Basic field\n field = pp.Regex('[YN]')\n\n # Parse action\n field.setParseAction(lambda b: _to_boolean(b[0]))\n\n # Name\n field.setName(name)\n\n return field"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _to_boolean(string):\n\n if string == 'Y':\n result = True\n elif string == 'N':\n result = False\n else:\n raise pp.ParseException(string, msg='Is not a valid boolean value')\n\n return result", "response": "Transforms a string into a boolean value."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef flag(name=None):\n\n if name is None:\n name = 'Flag Field'\n\n # Basic field\n field = pp.Regex('[YNU]')\n\n # Name\n field.setName(name)\n\n field.leaveWhitespace()\n\n return field", "response": "Returns the grammar for a Flag field."} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef date(name=None):\n\n if name is None:\n name = 'Date Field'\n\n # Basic field\n # This regex allows values from 00000101 to 99991231\n field = pp.Regex('[0-9][0-9][0-9][0-9](0[1-9]|1[0-2])'\n '(0[1-9]|[1-2][0-9]|3[0-1])')\n\n # Parse action\n field.setParseAction(lambda d: datetime.datetime.strptime(d[0], '%Y%m%d')\n .date())\n\n # Name\n field.setName(name)\n\n # White spaces are not removed\n field.leaveWhitespace()\n\n return field", "response": "Returns a grammar for a Date field."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a grammar for a Time or Duration field.", "response": "def time(name=None):\n \"\"\"\n Creates the grammar for a Time or Duration (T) field, accepting only\n numbers in a certain pattern.\n\n :param name: name for the field\n :return: grammar for the time field\n \"\"\"\n\n if name is None:\n name = 'Time Field'\n\n # Basic field\n # This regex allows values from 000000 to 235959\n field = pp.Regex('(0[0-9]|1[0-9]|2[0-3])[0-5][0-9][0-5][0-9]')\n\n # Parse action\n field.setParseAction(lambda t: datetime.datetime.strptime(t[0], '%H%M%S')\n .time())\n\n # White spaces are not removed\n field.leaveWhitespace()\n\n # Name\n field.setName(name)\n\n return field"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a grammar for a Lookup field.", "response": "def lookup(values, name=None):\n \"\"\"\n Creates the grammar for a Lookup (L) field, accepting only values from a\n list.\n\n Like in the Alphanumeric field, the result will be stripped of all leading\n and trailing whitespaces.\n\n :param values: values allowed\n :param name: name for the field\n :return: grammar for the lookup field\n \"\"\"\n if name is None:\n name = 'Lookup Field'\n\n if values is None:\n raise ValueError('The values cannot be None')\n\n # TODO: This should not be 
needed, it is just a patch. Fix this.\n try:\n v = values.asList()\n values = v\n except AttributeError:\n values = values\n\n # Only the specified values are allowed\n lookup_field = pp.oneOf(values)\n\n lookup_field.setName(name)\n\n lookup_field.setParseAction(lambda s: s[0].strip())\n\n lookup_field.leaveWhitespace()\n\n return lookup_field"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a grammar for a blank field.", "response": "def blank(columns=1, name=None):\n \"\"\"\n Creates the grammar for a blank field.\n\n These are for constant empty strings which should be ignored, as they are\n used just as fillers.\n\n :param columns: number of columns, which is the required number of\n whitespaces\n :param name: name for the field\n :return: grammar for the blank field\n \"\"\"\n if name is None:\n name = 'Blank Field'\n\n field = pp.Regex('[ ]{' + str(columns) + '}')\n field.leaveWhitespace()\n field.suppress()\n\n field.setName(name)\n\n return field"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_attribute(self, attribute, value=None, features=False):\n if attribute in self.filters:\n valid_gff_objects = self.fast_attributes[attribute] if not value else\\\n [i for i in self.fast_attributes[attribute] if i.attributes.get(attribute, False) == value]\n if features:\n valid_ids = [gff_object.attributes.get(self.id_tag, None) for gff_object in valid_gff_objects]\n return [self.feature_map[gff_id] for gff_id in valid_ids if gff_id]\n else:\n return valid_gff_objects\n else:\n valid_gff_objects = [gff_object for gff_feature in self.feature_map.values()\n for gff_object in gff_feature.features\n if gff_object.attributes.get(attribute, False)]\n valid_gff_objects = valid_gff_objects if not value else [gff_object for gff_object in valid_gff_objects\n if gff_object.attributes[attribute] == value]\n if features:\n valid_ids = [gff_object.attributes.get(self.id_tag, 
None) for gff_object in valid_gff_objects]\n return [self.feature_map[gff_id] for gff_id in valid_ids if gff_id]\n else:\n return valid_gff_objects", "response": "Returns the GFF objects carrying the given attribute, restricted to those whose attribute equals the specified value when one is supplied; if features is True, the matching parent features are returned instead of the objects themselves."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef contains(self, chrom, start, end, overlap=True):\n d = self.positions.get(chrom,[])\n if overlap:\n return [vcf_entry for vcf_start, vcf_end in d\n for vcf_entry in d[(vcf_start, vcf_end)]\n if not (end < vcf_start or start > vcf_end)]\n else:\n return [vcf_entry for vcf_start, vcf_end in d\n for vcf_entry in d[(vcf_start, vcf_end)]\n if (vcf_start <= start and vcf_end >= end)]", "response": "This method returns a list of VCFEntry objects which cover a specified location."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef remove_variants(self, variants):\n chroms = set([i.chrom for i in variants])\n for chrom in chroms:\n if self.append_chromosome:\n chrom = 'chr%s' % chrom\n to_delete = [pos for pos in self.positions[chrom] if pos in variants]\n for pos in to_delete:\n del self.positions[chrom][pos]", "response": "Remove a list of variants from the positions we are scanning."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef generate_handler_sourcepath(\n self, toolchain, spec, loaderplugin_sourcepath):\n \"\"\"\n Attempt to locate the plugin source; returns a mapping of\n modnames to the absolute path of the located sources.\n \"\"\"\n\n # TODO calmjs-4.0.0 consider formalizing to the method instead\n npm_pkg_name = (\n self.node_module_pkg_name\n if self.node_module_pkg_name else\n self.find_node_module_pkg_name(toolchain, spec)\n )\n\n if not npm_pkg_name:\n cls = type(self)\n 
registry_name = getattr(\n self.registry, 'registry_name', '')\n if cls is NPMLoaderPluginHandler:\n logger.error(\n \"no npm package name specified or could be resolved for \"\n \"loaderplugin '%s' of registry '%s'; please subclass \"\n \"%s:%s such that the npm package name become specified\",\n self.name, registry_name, cls.__module__, cls.__name__,\n )\n else:\n logger.error(\n \"no npm package name specified or could be resolved for \"\n \"loaderplugin '%s' of registry '%s'; implementation of \"\n \"%s:%s may be at fault\",\n self.name, registry_name, cls.__module__, cls.__name__,\n )\n return {}\n\n working_dir = spec.get(WORKING_DIR, None)\n if working_dir is None:\n logger.info(\n \"attempting to derive working directory using %s, as the \"\n \"provided spec is missing working_dir\", toolchain\n )\n working_dir = toolchain.join_cwd()\n\n logger.debug(\"deriving npm loader plugin from '%s'\", working_dir)\n\n target = locate_package_entry_file(working_dir, npm_pkg_name)\n if target:\n logger.debug('picked %r for loader plugin %r', target, self.name)\n # use the parent recursive lookup.\n result = super(\n NPMLoaderPluginHandler, self).generate_handler_sourcepath(\n toolchain, spec, loaderplugin_sourcepath)\n result.update({self.name: target})\n return result\n\n # the expected package file is not found, use the logger to show\n # why.\n # Also note that any inner/chained loaders will be dropped.\n if exists(join(\n working_dir, 'node_modules', npm_pkg_name,\n 'package.json')):\n logger.warning(\n \"'package.json' for the npm package '%s' does not contain a \"\n \"valid entry point: sources required for loader plugin '%s' \"\n \"cannot be included automatically; the build process may fail\",\n npm_pkg_name, self.name,\n )\n else:\n logger.warning(\n \"could not locate 'package.json' for the npm package '%s' \"\n \"which was specified to contain the loader plugin '%s' in the \"\n \"current working directory '%s'; the missing package may \"\n \"be installed by 
running 'npm install %s' for the mean time \"\n \"as a workaround, though the package that owns that source \"\n \"file that has this requirement should declare an explicit \"\n \"dependency; the build process may fail\",\n npm_pkg_name, self.name, working_dir,\n npm_pkg_name,\n )\n\n return {}", "response": "Generate the path to the source file for the given loader plugin."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef store_records_for_package(self, entry_point, records):\n\n pkg_records_entry = self._dist_to_package_module_map(entry_point)\n pkg_records_entry.extend(\n rec for rec in records if rec not in pkg_records_entry)\n # TODO figure out a more efficient way to do this with a bit\n # more reuse.\n if entry_point.dist is not None:\n if entry_point.dist.project_name not in self.package_loader_map:\n self.package_loader_map[entry_point.dist.project_name] = []\n self.package_loader_map[entry_point.dist.project_name].append(\n entry_point.name)", "response": "Stores the records for the given entry_point in the package_loader_map."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef default_file_encoder():\n config = CWRConfiguration()\n field_configs = config.load_field_config('table')\n field_configs.update(config.load_field_config('common'))\n\n field_values = CWRTables()\n\n for entry in field_configs.values():\n if 'source' in entry:\n values_id = entry['source']\n entry['values'] = field_values.get_data(values_id)\n\n record_configs = config.load_record_config('common')\n return CwrFileEncoder(record_configs, field_configs)", "response": "Get default encoder cwr file\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nencodes a FileTag object into a CWR file name.", "response": "def encode(self, tag):\n \"\"\"\n Parses a CWR file name from a FileTag object.\n\n The result will be a string 
following the format CWyynnnnsss_rrr.Vxx,\n where the numeric sequence will have the length set on the encoder's\n constructor.\n\n :param tag: FileTag to parse\n :return: a string file name parsed from the FileTag\n \"\"\"\n # Acquires sequence number\n sequence = str(tag.sequence_n)\n\n # If the sequence is bigger the max, it is cut\n if len(sequence) > self._sequence_l:\n sequence = sequence[:self._sequence_l]\n\n # If the sequence is smaller the max, it is padded with zeroes\n while len(sequence) < self._sequence_l:\n sequence = '0' + sequence\n\n # Acquires version\n version = str(tag.version)\n\n # If the version is too long only the first and last number are taken,\n # to remove decimal separator\n if len(version) > 2:\n version = version[:1] + version[-1:]\n\n # If the version is too short, it is padded with zeroes\n while len(version) < 2:\n version = '0' + version\n\n # Acquires year\n # Only the two last digits of the year are used\n year = str(tag.year)[-2:]\n\n # Acquires sender and receiver\n sender = tag.sender[:3]\n receiver = tag.receiver[:3]\n\n rule = self._header + year + sequence + sender\n rule = rule + self._ip_delimiter + receiver + \".V\" + version\n\n return rule"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef encode(self, transmission):\n data = ''\n data += self._record_encode(transmission.header)\n for group in transmission.groups:\n data += self._record_encode(group.group_header)\n for transaction in group.transactions:\n for record in transaction:\n data += self._record_encode(record)\n data += self._record_encode(group.group_trailer)\n data += self._record_encode(transmission.trailer)\n return data", "response": "Encodes the data for a single object in the domain model."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a random scan for all the tables", "response": "def getScans(self, modifications=True, 
fdr=True):\n \"\"\"\n get a random scan\n \"\"\"\n if not self.scans:\n for i in self:\n yield i\n else:\n for i in self.scans.values():\n yield i\n yield None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a scan object for a given title", "response": "def getScan(self, title, peptide=None):\n \"\"\"\n allows random lookup\n \"\"\"\n if self.ra.has_key(title):\n self.filename.seek(self.ra[title][0],0)\n toRead = self.ra[title][1]-self.ra[title][0]\n info = self.filename.read(toRead)\n scan = self.parseScan(info)\n else:\n return None\n return scan"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parseScan(self, scan):\n setupScan = True\n foundCharge = False\n foundMass = False\n foundTitle = False\n scanObj = ScanObject()\n scanObj.ms_level = 2\n for row in scan.split('\\n'):\n if not row:\n continue\n entry = row.strip().split('=')\n if len(entry) >= 2:\n if entry[0] == 'PEPMASS':\n scanObj.mass = float(entry[1])\n foundMass = True\n elif entry[0] == 'CHARGE':\n scanObj.charge = entry[1]\n foundCharge = True\n elif entry[0] == 'TITLE':\n# if self.titleMap:\n# pos = entry[1].find(',')\n# title = self.titleMap[int(entry[1][:entry[1].find(',')])]\n# else:\n title = '='.join(entry[1:])\n foundTitle = True\n scanObj.title = title\n scanObj.id = title\n elif entry[0] == 'RTINSECONDS':\n scanObj.rt = float(entry[1])\n else:\n mz,intensity = self.scanSplit.split(row.strip())\n scanObj.scans.append((float(mz),float(intensity)))\n if foundCharge and foundMass and foundTitle:\n return scanObj\n return None", "response": "Parses a scan string and returns a ScanObject object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a random scan from the database", "response": "def getScan(self, specId, peptide=None):\n \"\"\"\n get a random scan\n \"\"\"\n sql = self.base_sql + \" where sh.SpectrumID = %d and p.Sequence = 
'%s'\"%(int(specId),peptide)\n self.cur.execute(sql)\n i = self.cur.fetchone()\n if not i:\n return None\n scan = self.parseFullScan(i)\n scan.spectrumId = specId\n return scan"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getScans(self, modifications=False, fdr=True):\n if fdr:\n sql = self.base_sql+\"WHERE p.ConfidenceLevel >= {} and p.SearchEngineRank <= {} {}\".format(self.clvl, self.srank, self.extra)\n try:\n self.cur.execute(sql)\n except sqlite3.OperationalError:\n sql = self.base_sql+\"WHERE p.ConfidenceLevel >= {} {}\".format(self.clvl, self.extra)\n self.cur.execute(sql)\n else:\n sql = self.base_sql\n self.cur.execute(sql)\n while True:\n # results = self.cur.fetchmany(1000)\n # if not results:\n # break\n try:\n tup = self.cur.fetchone()\n except:\n sys.stderr.write('Error fetching scan:\\n{}\\n'.format(traceback.format_exc()))\n else:\n while tup is not None:\n if tup is None:\n break\n if tup[1] is not None:\n scan = self.parseFullScan(tup, modifications=modifications)\n scan.spectrumId = tup[3]\n yield scan\n try:\n tup = self.cur.fetchone()\n except:\n sys.stderr.write('Error fetching scan:\\n{}\\n'.format(traceback.format_exc()))\n if tup is None:\n break\n yield None", "response": "get a random set of scans"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parseFullScan(self, i, modifications=False):\n scanObj = PeptideObject()\n peptide = str(i[1])\n pid=i[2]\n scanObj.acc = self.protein_map.get(i[4], i[4])\n if pid is None:\n return None\n if modifications:\n sql = 'select aam.ModificationName,pam.Position,aam.DeltaMass from peptidesaminoacidmodifications pam left join aminoacidmodifications aam on (aam.AminoAcidModificationID=pam.AminoAcidModificationID) where pam.PeptideID=%s'%pid\n for row in self.conn.execute(sql):\n scanObj.addModification(peptide[row[1]], str(row[1]), str(row[2]), row[0])\n else:\n mods = 
self.mods.get(int(pid))\n if mods is not None:\n for modId, modPosition in zip(mods[0].split(','),mods[1].split(',')):\n modEntry = self.modTable[str(modId)]\n scanObj.addModification(peptide[int(modPosition)], modPosition, modEntry[1], modEntry[0])\n tmods = self.tmods.get(int(pid))\n if tmods is not None:\n for modIds in tmods:\n for modId in modIds.split(','):\n modEntry = self.modTable[str(modId)]\n scanObj.addModification('[', 0, modEntry[1], modEntry[0])\n scanObj.peptide = peptide\n if self.decompressScanInfo(scanObj, i[0]):\n return scanObj\n return None", "response": "Parses a full scan info for giving a Spectrum Obj for plotting. takes significantly longer since it has to unzip/parse xml\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getScans(self, modifications=True, fdr=True):\n if fdr:\n sql = \"select sp.Spectrum, p.Sequence, p.PeptideID, p.SpectrumID from spectrumheaders sh left join spectra sp on (sp.UniqueSpectrumID=sh.UniqueSpectrumID) left join peptides p on (sh.SpectrumID=p.SpectrumID) WHERE p.ConfidenceLevel >= %d and p.SearchEngineRank <= %d\" % (self.clvl, self.srank)\n try:\n self.cur.execute(sql)\n except sqlite3.OperationalError:\n sql = \"select sp.Spectrum, p.Sequence, p.PeptideID, p.SpectrumID from spectrumheaders sh left join spectra sp on (sp.UniqueSpectrumID=sh.UniqueSpectrumID) left join peptides p on (sh.SpectrumID=p.SpectrumID) WHERE p.ConfidenceLevel >= %d\" % self.clvl\n self.cur.execute(sql)\n else:\n sql = \"select sp.Spectrum, p.Sequence, p.PeptideID, p.SpectrumID from spectrumheaders sh left join spectra sp on (sp.UniqueSpectrumID=sh.UniqueSpectrumID) left join peptides p on (sh.SpectrumID=p.SpectrumID)\"\n self.cur.execute(sql)\n while True:\n results = self.cur.fetchmany(1000)\n if not results:\n break\n for tup in results:\n scan = self.parseFullScan(tup, modifications=modifications)\n scan.spectrumId = tup[3]\n yield scan\n yield None", "response": "get a 
set of scans from the database, filtered by confidence level and search-engine rank when fdr is enabled"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse a full scan info for giving a Spectrum Obj for plotting. takes significantly longer since it has to unzip and parse xml", "response": "def parseFullScan(self, i, modifications=True):\n \"\"\"\n parses scan info for giving a Spectrum Obj for plotting. takes significantly longer since it has to unzip/parse xml\n \"\"\"\n scanObj = PeptideObject()\n peptide = str(i[1])\n pid=i[2]\n if modifications:\n sql = 'select aam.ModificationName,pam.Position,aam.DeltaMass from peptidesaminoacidmodifications pam left join aminoacidmodifications aam on (aam.AminoAcidModificationID=pam.AminoAcidModificationID) where pam.PeptideID=%s'%pid\n for row in self.conn.execute(sql):\n scanObj.addModification(peptide[row[1]], str(row[1]), str(row[2]), row[0])\n scanObj.peptide = peptide\n if self.decompressScanInfo(scanObj, i[0]):\n return scanObj\n return None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nattempting to resolve the distribution from the provided class in the most naive way - this assumes that the module path to the class contains the name of the package that provided the class and the class itself.", "response": "def _cls_lookup_dist(cls):\n \"\"\"\n Attempt to resolve the distribution from the provided class in the\n most naive way - this assumes the Python module path to the class\n contains the name of the package that provided the module and class.\n \"\"\"\n\n frags = cls.__module__.split('.')\n for name in ('.'.join(frags[:x]) for x in range(len(frags), 0, -1)):\n dist = find_pkg_dist(name)\n if dist:\n return dist"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef verify_builder(builder):\n\n try:\n d = getcallargs(builder, package_names=[], export_target='some_path')\n except TypeError:\n return False\n return d == {'package_names': [], 
'export_target': 'some_path'}", "response": "Verify that the provided builder has a signature that is at least\n compatible."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef extract_builder_result(builder_result, toolchain_cls=Toolchain):\n\n try:\n toolchain, spec = builder_result\n except Exception:\n return None, None\n if not isinstance(toolchain, toolchain_cls) or not isinstance(spec, Spec):\n return None, None\n return toolchain, spec", "response": "Extract the builder result to produce a Toolchain and Spec instance."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntrace the versions of the involved packages for the provided toolchain instance.", "response": "def trace_toolchain(toolchain):\n \"\"\"\n Trace the versions of the involved packages for the provided\n toolchain instance.\n \"\"\"\n\n pkgs = []\n for cls in getmro(type(toolchain)):\n if not issubclass(cls, Toolchain):\n continue\n dist = _cls_lookup_dist(cls)\n value = {\n 'project_name': dist.project_name,\n 'version': dist.version,\n } if dist else {}\n key = '%s:%s' % (cls.__module__, cls.__name__)\n pkgs.append({key: value})\n return pkgs"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_artifact_filename(self, package_name, artifact_name):\n\n project_name = self.packages.normalize(package_name)\n return self.records.get((project_name, artifact_name))", "response": "Returns the path to the artifact file that should be used for the given artifact name."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef resolve_artifacts_by_builder_compat(\n self, package_names, builder_name, dependencies=False):\n \"\"\"\n Yield the list of paths to the artifacts in the order of the\n dependency resolution\n\n Arguments:\n\n package_names\n The names of the packages to probe the dependency graph, to\n be 
provided as a list of strings.\n artifact_name\n The exact name of the artifact.\n dependencies\n Trace dependencies. Default is off.\n\n Returns the path of where the artifact should be if it has been\n declared, otherwise None.\n \"\"\"\n\n paths = self.compat_builders.get(builder_name)\n if not paths:\n # perhaps warn, but just return\n return\n\n resolver = (\n # traces dependencies for distribution.\n find_packages_requirements_dists\n if dependencies else\n # just get grabs the distribution.\n pkg_names_to_dists\n )\n for distribution in resolver(package_names):\n path = paths.get(distribution.project_name)\n if path:\n yield path", "response": "Yields the list of paths to the artifacts that are needed for the given packages."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_artifact_metadata(self, package_name):\n\n filename = self.metadata.get(package_name)\n if not filename or not exists(filename):\n return {}\n with open(filename, encoding='utf8') as fd:\n contents = fd.read()\n\n try:\n is_json_compat(contents)\n except ValueError:\n logger.info(\"artifact metadata file '%s' is invalid\", filename)\n return {}\n\n return json.loads(contents)", "response": "Get the artifact metadata for the given package name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef generate_metadata_entry(self, entry_point, toolchain, spec):\n\n export_target = spec['export_target']\n toolchain_bases = trace_toolchain(toolchain)\n toolchain_bin_path = spec.get(TOOLCHAIN_BIN_PATH)\n toolchain_bin = ([\n basename(toolchain_bin_path), # bin_name\n get_bin_version_str(toolchain_bin_path), # bin_version\n ] if toolchain_bin_path else [])\n\n return {basename(export_target): {\n 'toolchain_bases': toolchain_bases,\n 'toolchain_bin': toolchain_bin,\n 'builder': '%s:%s' % (\n entry_point.module_name, '.'.join(entry_point.attrs)),\n }}", 
"response": "Generate the metadata entry for the entry_point."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\niterating records for a specific package.", "response": "def iter_records_for(self, package_name):\n \"\"\"\n Iterate records for a specific package.\n \"\"\"\n\n entry_points = self.packages.get(package_name, NotImplemented)\n if entry_points is NotImplemented:\n logger.debug(\n \"package '%s' has not declared any entry points for the '%s' \"\n \"registry for artifact construction\",\n package_name, self.registry_name,\n )\n return iter([])\n\n logger.debug(\n \"package '%s' has declared %d entry points for the '%s' \"\n \"registry for artifact construction\",\n package_name, len(entry_points), self.registry_name,\n )\n return iter(entry_points.values())"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef generate_builder(self, entry_point, export_target):\n\n try:\n builder = entry_point.resolve()\n except ImportError:\n logger.error(\n \"unable to import the target builder for the entry point \"\n \"'%s' from package '%s' to generate artifact '%s'\",\n entry_point, entry_point.dist, export_target,\n )\n return\n\n if not self.verify_builder(builder):\n logger.error(\n \"the builder referenced by the entry point '%s' \"\n \"from package '%s' has an incompatible signature\",\n entry_point, entry_point.dist,\n )\n return\n\n # CLEANUP see deprecation notice below\n verifier = self.verify_export_target(export_target)\n if not verifier:\n logger.error(\n \"the export target '%s' has been rejected\", export_target)\n return\n\n toolchain, spec = self.extract_builder_result(builder(\n [entry_point.dist.project_name], export_target=export_target))\n if not toolchain:\n logger.error(\n \"the builder referenced by the entry point '%s' \"\n \"from package '%s' failed to produce a valid \"\n \"toolchain\",\n entry_point, entry_point.dist,\n )\n return\n\n if 
spec.get(EXPORT_TARGET) != export_target:\n logger.error(\n \"the builder referenced by the entry point '%s' \"\n \"from package '%s' failed to produce a spec with the \"\n \"expected export_target\",\n entry_point, entry_point.dist,\n )\n return\n\n if callable(verifier):\n warnings.warn(\n \"%s:%s.verify_export_target returned a callable, which \"\n \"will no longer be passed to spec.advise by calmjs-4.0.0; \"\n \"please instead override 'setup_export_location' or \"\n \"'prepare_export_location' in that class\" % (\n self.__class__.__module__, self.__class__.__name__),\n DeprecationWarning\n )\n spec.advise(BEFORE_PREPARE, verifier, export_target)\n else:\n spec.advise(\n BEFORE_PREPARE,\n self.prepare_export_location, export_target)\n yield entry_point, toolchain, spec", "response": "Yields exactly one builder for the provided entry point and export target."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef execute_builder(self, entry_point, toolchain, spec):\n\n toolchain(spec)\n if not exists(spec['export_target']):\n logger.error(\n \"the entry point '%s' from package '%s' failed to \"\n \"generate an artifact at '%s'\",\n entry_point, entry_point.dist, spec['export_target']\n )\n return {}\n return self.generate_metadata_entry(entry_point, toolchain, spec)", "response": "Executes the builder and returns the result."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef process_package(self, package_name):\n\n metadata = super(ArtifactRegistry, self).process_package(package_name)\n if metadata:\n self.update_artifact_metadata(package_name, metadata)", "response": "Process the given package and update the artifact metadata."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a grammar for an alphanumeric variable size.", "response": "def alphanum_variable(min_size, max_size, name=None):\n \"\"\"\n Creates the 
grammar for an alphanumeric code where the size ranges between\n two values.\n\n :param min_size: minimum size\n :param max_size: maximum size\n :param name: name for the field\n :return: grammar for an alphanumeric field of a variable size\n \"\"\"\n\n if name is None:\n name = 'Alphanumeric Field'\n\n if min_size < 0:\n # Can't have negative min\n raise BaseException()\n if max_size < min_size:\n # Max can't be lower than min\n raise BaseException()\n\n field = pp.Word(pp.alphanums, min=min_size, max=max_size)\n\n # Parse action\n field.setParseAction(lambda s: s[0].strip())\n\n # White spaces are not removed\n field.leaveWhitespace()\n\n # Name\n field.setName(name)\n\n return field"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef year(columns, name=None):\n\n if columns < 0:\n # Can't have negative size\n raise BaseException()\n\n field = numeric(columns, name)\n\n # Parse action\n field.addParseAction(_to_year)\n\n return field", "response": "Returns a grammar for a field containing a year."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck that the value is a JSON decodable string or a dict that can be serialized into a JSON.", "response": "def is_json_compat(value):\n \"\"\"\n Check that the value is either a JSON decodable string or a dict\n that can be encoded into a JSON.\n\n Raises ValueError when validation fails.\n \"\"\"\n\n try:\n value = json.loads(value)\n except ValueError as e:\n raise ValueError('JSON decoding error: ' + str(e))\n except TypeError:\n # Check that the value can be serialized back into json.\n try:\n json.dumps(value)\n except TypeError as e:\n raise ValueError(\n 'must be a JSON serializable object: ' + str(e))\n\n if not isinstance(value, dict):\n raise ValueError(\n 'must be specified as a JSON serializable dict or a '\n 'JSON deserializable string'\n )\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Here you have a 
function in Python 3, explain what it does\ndef validate_json_field(dist, attr, value):\n\n try:\n is_json_compat(value)\n except ValueError as e:\n raise DistutilsSetupError(\"%r %s\" % (attr, e))\n\n return True", "response": "Check if json field is valid."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nvalidating that the value is a list of valid identifiers", "response": "def validate_line_list(dist, attr, value):\n \"\"\"\n Validate that the value is compatible\n \"\"\"\n\n # does not work as reliably in Python 2.\n if isinstance(value, str):\n value = value.split()\n value = list(value)\n\n try:\n check = (' '.join(value)).split()\n if check == value:\n return True\n except Exception:\n pass\n raise DistutilsSetupError(\"%r must be a list of valid identifiers\" % attr)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_json_file(argname, cmd, basename, filename):\n\n value = getattr(cmd.distribution, argname, None)\n\n if isinstance(value, dict):\n value = json.dumps(\n value, indent=4, sort_keys=True, separators=(',', ': '))\n\n cmd.write_or_delete_file(argname, filename, value, force=True)", "response": "Writes a JSON file from the package s distribution directory using the specified filename."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef write_line_list(argname, cmd, basename, filename):\n\n values = getattr(cmd.distribution, argname, None)\n if isinstance(values, list):\n values = '\\n'.join(values)\n cmd.write_or_delete_file(argname, filename, values, force=True)", "response": "Write out the retrieved value as list of lines."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlocates a package s distribution by its name.", "response": "def find_pkg_dist(pkg_name, working_set=None):\n \"\"\"\n Locate a package's distribution by its name.\n \"\"\"\n\n working_set = working_set or 
default_working_set\n req = Requirement.parse(pkg_name)\n return working_set.find(req)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts package names which can be a string of a number of package names or requirements separated by spaces.", "response": "def convert_package_names(package_names):\n \"\"\"\n Convert package names, which can be a string of a number of package\n names or requirements separated by spaces.\n \"\"\"\n\n results = []\n errors = []\n\n for name in (\n package_names.split()\n if hasattr(package_names, 'split') else package_names):\n try:\n Requirement.parse(name)\n except ValueError:\n errors.append(name)\n else:\n results.append(name)\n\n return results, errors"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_packages_requirements_dists(pkg_names, working_set=None):\n\n working_set = working_set or default_working_set\n requirements = [\n r for r in (Requirement.parse(req) for req in pkg_names)\n if working_set.find(r)\n ]\n return list(reversed(working_set.resolve(requirements)))", "response": "Return the entire list of dependency requirements for the given packages."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nleverage the find_packages_requirements_dists but strip out the packages that are not in pkg_names.", "response": "def find_packages_parents_requirements_dists(pkg_names, working_set=None):\n \"\"\"\n Leverages the `find_packages_requirements_dists` but strip out the\n distributions that matches pkg_names.\n \"\"\"\n\n dists = []\n # opting for a naive implementation\n targets = set(pkg_names)\n for dist in find_packages_requirements_dists(pkg_names, working_set):\n if dist.project_name in targets:\n continue\n dists.append(dist)\n return dists"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read_dist_egginfo_json(dist, 
filename=DEFAULT_JSON):\n\n # use the given package's distribution to acquire the json file.\n if not dist.has_metadata(filename):\n logger.debug(\"no '%s' for '%s'\", filename, dist)\n return\n\n try:\n result = dist.get_metadata(filename)\n except IOError:\n logger.error(\"I/O error on reading of '%s' for '%s'.\", filename, dist)\n return\n\n try:\n obj = json.loads(result)\n except (TypeError, ValueError):\n logger.error(\n \"the '%s' found in '%s' is not a valid json.\", filename, dist)\n return\n\n logger.debug(\"found '%s' for '%s'.\", filename, dist)\n return obj", "response": "Read a json file within an egginfo distribution."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads json from egginfo of a package identified by pkg_name.", "response": "def read_egginfo_json(pkg_name, filename=DEFAULT_JSON, working_set=None):\n \"\"\"\n Read json from egginfo of a package identified by `pkg_name` that's\n already installed within the current Python environment.\n \"\"\"\n\n working_set = working_set or default_working_set\n dist = find_pkg_dist(pkg_name, working_set=working_set)\n return read_dist_egginfo_json(dist, filename)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nflattening a distribution s egginfo json with the depended keys to be flattened.", "response": "def flatten_dist_egginfo_json(\n source_dists, filename=DEFAULT_JSON, dep_keys=DEP_KEYS,\n working_set=None):\n \"\"\"\n Flatten a distribution's egginfo json, with the depended keys to be\n flattened.\n\n Originally this was done for this:\n\n Resolve a distribution's (dev)dependencies through the working set\n and generate a flattened version package.json, returned as a dict,\n from the resolved distributions.\n\n Default working set is the one from pkg_resources.\n\n The generated package.json dict is done by grabbing all package.json\n metadata from all parent Python packages, starting from the highest\n level and down to 
the lowest. The current distribution's\n dependencies will be layered on top along with its other package\n information. This has the effect of child packages overriding\n node/npm dependencies which is by the design of this function. If\n nested dependencies are desired, just rely on npm only for all\n dependency management.\n\n Flat is better than nested.\n \"\"\"\n\n working_set = working_set or default_working_set\n obj = {}\n\n # TODO figure out the best way to explicitly report back to caller\n # how the keys came to be (from which dist). Perhaps create a\n # detailed function based on this, retain this one to return the\n # distilled results.\n\n depends = {dep: {} for dep in dep_keys}\n\n # Go from the earliest package down to the latest one, as we will\n # flatten children's d(evD)ependencies on top of parent's.\n for dist in source_dists:\n obj = read_dist_egginfo_json(dist, filename)\n if not obj:\n continue\n\n logger.debug(\"merging '%s' for required '%s'\", filename, dist)\n for dep in dep_keys:\n depends[dep].update(obj.get(dep, {}))\n\n if obj is None:\n # top level object does not have egg-info defined\n return depends\n\n for dep in dep_keys:\n # filtering out all the nulls.\n obj[dep] = {k: v for k, v in depends[dep].items() if v is not None}\n\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef flatten_egginfo_json(\n pkg_names, filename=DEFAULT_JSON, dep_keys=DEP_KEYS, working_set=None):\n \"\"\"\n A shorthand calling convention where the package name is supplied\n instead of a distribution.\n\n Originally written for this:\n\n Generate a flattened package.json with packages `pkg_names` that are\n already installed within the current Python environment (defaults\n to the current global working_set which should have been set up\n correctly by pkg_resources).\n \"\"\"\n\n working_set = working_set or default_working_set\n # Ensure only grabbing packages that exists 
in working_set\n dists = find_packages_requirements_dists(\n pkg_names, working_set=working_set)\n return flatten_dist_egginfo_json(\n dists, filename=filename, dep_keys=dep_keys, working_set=working_set)", "response": "Generates a flattened package. json with packages pkg_names and dep_keys."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build_helpers_egginfo_json(\n json_field, json_key_registry, json_filename=None):\n \"\"\"\n Return a tuple of functions that will provide the usage of the\n JSON egginfo based around the provided field.\n \"\"\"\n\n json_filename = (\n json_field + '.json' if json_filename is None else json_filename)\n\n # Default calmjs core implementation specific functions, to be used by\n # integrators intended to use this as a distribution.\n\n def get_extras_json(pkg_names, working_set=None):\n \"\"\"\n Only extract the extras_json information for the given packages\n 'pkg_names'.\n \"\"\"\n\n working_set = working_set or default_working_set\n dep_keys = set(get(json_key_registry).iter_records())\n dists = pkg_names_to_dists(pkg_names, working_set=working_set)\n return flatten_dist_egginfo_json(\n dists, filename=json_filename,\n dep_keys=dep_keys, working_set=working_set\n )\n\n def _flatten_extras_json(pkg_names, find_dists, working_set):\n # registry key must be explicit here as it was designed for this.\n dep_keys = set(get(json_key_registry).iter_records())\n dists = find_dists(pkg_names, working_set=working_set)\n return flatten_dist_egginfo_json(\n dists, filename=json_filename,\n dep_keys=dep_keys, working_set=working_set\n )\n\n def flatten_extras_json(pkg_names, working_set=None):\n \"\"\"\n Traverses through the dependency graph of packages 'pkg_names'\n and flattens all the egg_info json information\n \"\"\"\n\n working_set = working_set or default_working_set\n return _flatten_extras_json(\n pkg_names, find_packages_requirements_dists, working_set)\n\n def 
flatten_parents_extras_json(pkg_names, working_set=None):\n \"\"\"\n Traverses through the dependency graph of packages 'pkg_names'\n and flattens all the egg_info json information for parents of\n the specified packages.\n \"\"\"\n\n working_set = working_set or default_working_set\n return _flatten_extras_json(\n pkg_names, find_packages_parents_requirements_dists, working_set)\n\n write_extras_json = partial(write_json_file, json_field)\n\n return (\n get_extras_json,\n flatten_extras_json,\n flatten_parents_extras_json,\n write_extras_json,\n )", "response": "Build a tuple of functions that will provide the usage of the JSON egginfo based around the provided field."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a tuple of functions that will provide the functions that will provide the relevant sets of module registry records based on the given packages.", "response": "def build_helpers_module_registry_dependencies(registry_name='calmjs.module'):\n \"\"\"\n Return a tuple of funtions that will provide the functions that\n return the relevant sets of module registry records based on the\n dependencies defined for the provided packages.\n \"\"\"\n\n def get_module_registry_dependencies(\n pkg_names, registry_name=registry_name, working_set=None):\n \"\"\"\n Get dependencies for the given package names from module\n registry identified by registry name.\n\n For the given packages 'pkg_names' and the registry identified\n by 'registry_name', resolve the exported location for just the\n package.\n \"\"\"\n\n working_set = working_set or default_working_set\n registry = get(registry_name)\n if not isinstance(registry, BaseModuleRegistry):\n return {}\n result = {}\n for pkg_name in pkg_names:\n result.update(registry.get_records_for_package(pkg_name))\n return result\n\n def _flatten_module_registry_dependencies(\n pkg_names, registry_name, find_dists, working_set):\n \"\"\"\n Flatten dependencies for the given package names from 
module\n registry identified by registry name using the find_dists\n function on the given working_set.\n\n For the given packages 'pkg_names' and the registry identified\n by 'registry_name', resolve and flatten all the exported\n locations.\n \"\"\"\n\n result = {}\n registry = get(registry_name)\n if not isinstance(registry, BaseModuleRegistry):\n return result\n\n dists = find_dists(pkg_names, working_set=working_set)\n for dist in dists:\n result.update(registry.get_records_for_package(dist.project_name))\n\n return result\n\n def flatten_module_registry_dependencies(\n pkg_names, registry_name=registry_name, working_set=None):\n \"\"\"\n Flatten dependencies for the specified packages from the module\n registry identified by registry name.\n\n For the given packages 'pkg_names' and the registry identified\n by 'registry_name', resolve and flatten all the exported\n locations.\n \"\"\"\n\n working_set = working_set or default_working_set\n return _flatten_module_registry_dependencies(\n pkg_names, registry_name, find_packages_requirements_dists,\n working_set)\n\n def flatten_parents_module_registry_dependencies(\n pkg_names, registry_name=registry_name, working_set=None):\n \"\"\"\n Flatten dependencies for the parents of the specified packages\n from the module registry identified by registry name.\n\n For the given packages 'pkg_names' and the registry identified\n by 'registry_name', resolve and flatten all the exported\n locations.\n \"\"\"\n\n working_set = working_set or default_working_set\n return _flatten_module_registry_dependencies(\n pkg_names, registry_name, find_packages_parents_requirements_dists,\n working_set)\n\n return (\n get_module_registry_dependencies,\n flatten_module_registry_dependencies,\n flatten_parents_module_registry_dependencies,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef has_calmjs_artifact_declarations(cmd, registry_name='calmjs.artifacts'):\n\n return 
any(get(registry_name).iter_records_for(\n cmd.distribution.get_name()))", "response": "Check that the artifact build step is there."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build_calmjs_artifacts(dist, key, value, cmdclass=BuildCommand):\n\n if value is not True:\n return\n\n build_cmd = dist.get_command_obj('build')\n if not isinstance(build_cmd, cmdclass):\n logger.error(\n \"'build' command in Distribution is not an instance of \"\n \"'%s:%s' (got %r instead)\",\n cmdclass.__module__, cmdclass.__name__, build_cmd)\n return\n\n build_cmd.sub_commands.append((key, has_calmjs_artifact_declarations))", "response": "Trigger the artifact build process through the setuptools."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_rule(self, field_id):\n\n if field_id in self._fields:\n # Field already exists\n field = self._fields[field_id]\n else:\n # Field does not exist\n # It is created\n field = self._create_field(field_id)\n\n # Field is saved\n self._fields[field_id] = field\n\n return field", "response": "Returns the rule for the given field_id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _create_field(self, field_id):\n # Field configuration info\n config = self._field_configs[field_id]\n\n adapter = self._adapters[config['type']]\n\n if 'name' in config:\n name = config['name']\n else:\n name = None\n\n if 'size' in config:\n columns = config['size']\n else:\n columns = None\n\n if 'values' in config:\n values = config['values']\n else:\n values = None\n\n field = adapter.get_field(name, columns, values)\n\n if 'results_name' in config:\n field = field.setResultsName(config['results_name'])\n else:\n field = field.setResultsName(field_id)\n\n return field", "response": "Creates the basic rule for the specified field."} {"SOURCE": "codesearchnet", "instruction": "Here 
you have a function in Python 3, explain what it does\ndef read_csv_file(self, file_name):\n result = []\n with open(os.path.join(self.__path(), os.path.basename(file_name)),\n 'rt') as csvfile:\n headers_reader = csv.reader(csvfile, delimiter=',', quotechar='|')\n for type_row in headers_reader:\n for t in type_row:\n result.append(t)\n return result", "response": "Reads a CSV file into a list of the contents."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads a YAML file into a matrix.", "response": "def read_yaml_file(self, file_name):\n \"\"\"\n Parses a YAML file into a matrix.\n\n :param file_name: name of the YAML file\n :return: a matrix with the file's contents\n \"\"\"\n with open(os.path.join(self.__path(), os.path.basename(file_name)),\n 'rt') as yamlfile:\n return yaml.load(yamlfile)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading the data from the table identified by the id.", "response": "def get_data(self, file_id):\n \"\"\"\n Acquires the data from the table identified by the id.\n\n The file is read only once, consecutive calls to this method will\n return the sale collection.\n\n :param file_id: identifier for the table\n :return: all the values from the table\n \"\"\"\n if file_id not in self._file_values:\n file_contents = 'cwr_%s.csv' % file_id\n self._file_values[file_id] = self._reader.read_csv_file(\n file_contents)\n\n return self._file_values[file_id]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef record_type(values):\n field = basic.lookup(values, name='Record Type (one of %s)' % values)\n\n return field.setResultsName('record_type')", "response": "Returns a grammar for the record type field."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef record_prefix(required_type, factory):\n field = 
record_type(required_type)\n field += factory.get_rule('transaction_sequence_n')\n field += factory.get_rule('record_sequence_n')\n\n # field.leaveWhitespace()\n\n return field", "response": "Creates a record prefix for the specified record type."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsearches and return entity or sub entity that contain value of this field.", "response": "def expand_entity(self, entity):\n \"\"\"\n Search and return entity or sub entity that contain value of this field.\n :param entity:\n :return: entity\n :raise KeyError\n \"\"\"\n if self.name in entity:\n return entity\n for key, value in entity.items():\n if isinstance(value, dict):\n if self.name in value:\n return value\n raise KeyError(\"The field %s (%s) not found in %s\" % (self.name, self._rule['type'], entity))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef encode(self, entity):\n entity = self.expand_entity(entity)\n value = entity[self.name]\n result = self.format(value)\n return result", "response": "Encode this\n :param entity:\n :return: cwr string"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nread a CWR grammar config file.", "response": "def read_config_file(self, file_name):\n \"\"\"\n Reads a CWR grammar config file.\n\n :param file_name: name of the text file\n :return: the file's contents\n \"\"\"\n with open(os.path.join(self.__path(), os.path.basename(file_name)),\n 'rt') as file_config:\n return self._parser.parseString(file_config.read())"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _load_cwr_defaults(self):\n if self._cwr_defaults is None:\n self._cwr_defaults = self._reader.read_yaml_file(\n self._file_defaults)\n\n return self._cwr_defaults", "response": "Loads the CWR default values file and creates a matrix from it and then returns this data."} 
{"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads the configuration fields file for the given id.", "response": "def load_field_config(self, file_id):\n \"\"\"\n Loads the configuration fields file for the id.\n\n :param file_id: the id for the field\n :return: the fields configuration\n \"\"\"\n if file_id not in self._field_configs:\n self._field_configs[file_id] = self._reader.read_yaml_file(\n 'field_config_%s.yml' % file_id)\n\n return self._field_configs[file_id]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load_group_config(self, file_id):\n if file_id not in self._group_configs:\n self._group_configs[file_id] = self._reader.read_config_file(\n 'group_config_%s.cml' % file_id)\n\n return self._group_configs[file_id]", "response": "Loads the configuration fields file for the group."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload the configuration fields file for the record.", "response": "def load_record_config(self, file_id):\n \"\"\"\n Loads the configuration fields file for the id.\n\n :param file_id: the id for the field\n :return: the fields configuration\n \"\"\"\n if file_id not in self._record_configs:\n self._record_configs[file_id] = self._reader.read_config_file(\n 'record_config_%s.cml' % file_id)\n\n return self._record_configs[file_id]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload the configuration fields file for the id.", "response": "def load_transaction_config(self, file_id):\n \"\"\"\n Loads the configuration fields file for the id.\n\n :param file_id: the id for the field\n :return: the fields configuration\n \"\"\"\n if file_id not in self._transaction_configs:\n self._transaction_configs[file_id] = self._reader.read_config_file(\n 'transaction_config_%s.cml' % file_id)\n\n return self._transaction_configs[file_id]"} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function to\nload the CWR acknowledge config file and returns the values matrix", "response": "def load_acknowledge_config(self, file_id):\n \"\"\"\n Loads the CWR acknowledge config\n :return: the values matrix\n \"\"\"\n if self._cwr_defaults is None:\n self._cwr_defaults = self._reader.read_yaml_file(\n 'acknowledge_config_%s.yml' % file_id)\n\n return self._cwr_defaults"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprint a soft error message to stderr.", "response": "def soft_error(self, message):\n \"\"\"\n Same as error, without the dying in a fire part.\n \"\"\"\n\n self.print_usage(sys.stderr)\n args = {'prog': self.prog, 'message': message}\n self._print_message(\n _('%(prog)s: error: %(message)s\\n') % args, sys.stderr)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef default_filename_decoder():\n factory = default_filename_grammar_factory()\n\n grammar_old = factory.get_rule('filename_old')\n grammar_new = factory.get_rule('filename_new')\n\n return FileNameDecoder(grammar_old, grammar_new)", "response": "Creates a decoder which parses CWR filenames following the old or the new\n convention."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef decode(self, data):\n file_name = self._filename_decoder.decode(data['filename'])\n\n file_data = data['contents']\n i = 0\n max_size = len(file_data)\n while file_data[i:i + 1] != 'H' and i < max_size:\n i += 1\n if i > 0:\n data['contents'] = file_data[i:]\n\n transmission = self._file_decoder.decode(data['contents'])[0]\n\n return CWRFile(file_name, transmission)", "response": "Parses the file and creates a CWRFile instance from it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nparse the filename and returns a FileTag instance.", "response": "def decode(self, 
file_name):\n \"\"\"\n Parses the filename, creating a FileTag from it.\n\n It will try both the old and the new conventions, if the filename does\n not conform any of them, then an empty FileTag will be returned.\n\n :param file_name: filename to parse\n :return: a FileTag instance\n \"\"\"\n try:\n file_tag = self._filename_decoder_new.decode(file_name)\n except:\n try:\n file_tag = self._filename_decoder_old.decode(file_name)\n except:\n file_tag = FileTag(0, 0, '', '', '')\n\n return file_tag"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef finalize_env(env):\n\n keys = _PLATFORM_ENV_KEYS.get(sys.platform, [])\n if 'PATH' not in keys:\n # this MUST be available due to Node.js (and others really)\n # needing something to look for binary locations when it shells\n # out to other binaries.\n keys.append('PATH')\n results = {\n key: os.environ.get(key, '') for key in keys\n }\n results.update(env)\n return results", "response": "Finalize the environment for passing into subprocess. Popen\n family of external process calling methods."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndoes a fork - exec through the subprocess. 
Popen abstraction in a way that takes a stdin and return stdout and stderr.", "response": "def fork_exec(args, stdin='', **kwargs):\n \"\"\"\n Do a fork-exec through the subprocess.Popen abstraction in a way\n that takes a stdin and returns stdout and stderr.\n \"\"\"\n\n as_bytes = isinstance(stdin, bytes)\n source = stdin if as_bytes else stdin.encode(locale)\n p = Popen(args, stdin=PIPE, stdout=PIPE, stderr=PIPE, **kwargs)\n stdout, stderr = p.communicate(source)\n if as_bytes:\n return stdout, stderr\n return (stdout.decode(locale), stderr.decode(locale))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef raise_os_error(_errno, path=None):\n\n msg = \"%s: '%s'\" % (strerror(_errno), path) if path else strerror(_errno)\n raise OSError(_errno, msg)", "response": "Raises an OSError constructed from the given errno, including the path in the message when one is provided."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngive a command check where it is on PATH.", "response": "def which(cmd, mode=os.F_OK | os.X_OK, path=None):\n \"\"\"\n Given cmd, check where it is on PATH.\n\n Loosely based on the version in python 3.3.\n \"\"\"\n\n if os.path.dirname(cmd):\n if os.path.isfile(cmd) and os.access(cmd, mode):\n return cmd\n\n if path is None:\n path = os.environ.get('PATH', defpath)\n if not path:\n return None\n\n paths = path.split(pathsep)\n\n if sys.platform == 'win32':\n # oh boy\n if curdir not in paths:\n paths = [curdir] + paths\n\n # also need to check the fileexts...\n pathext = os.environ.get('PATHEXT', '').split(pathsep)\n\n if any(cmd.lower().endswith(ext.lower()) for ext in pathext):\n files = [cmd]\n else:\n files = [cmd + ext for ext in pathext]\n else:\n # sanity\n files = [cmd]\n\n seen = set()\n for p in paths:\n normpath = normcase(p)\n if normpath in seen:\n continue\n seen.add(normpath)\n for f in files:\n fn = os.path.join(p, f)\n if os.path.isfile(fn) and os.access(fn, mode):\n return fn\n\n return None"} {"SOURCE":
"codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _init(self):\n\n self._entry_points = {}\n for entry_point in self.raw_entry_points:\n if entry_point.dist.project_name != self.reserved.get(\n entry_point.name, entry_point.dist.project_name):\n logger.error(\n \"registry '%s' for '%s' is reserved for package '%s'\",\n entry_point.name, self.registry_name,\n self.reserved[entry_point.name],\n )\n continue\n\n if self.get_record(entry_point.name):\n logger.warning(\n \"registry '%s' for '%s' is already registered.\",\n entry_point.name, self.registry_name,\n )\n existing = self._entry_points[entry_point.name]\n logger.debug(\n \"registered '%s' from '%s'\", existing, existing.dist)\n logger.debug(\n \"discarded '%s' from '%s'\", entry_point, entry_point.dist)\n continue\n\n logger.debug(\n \"recording '%s' from '%s'\", entry_point, entry_point.dist)\n self._entry_points[entry_point.name] = entry_point", "response": "Initialize the internal state of the internal state of the internal state of the internal state."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef dict_update_overwrite_check(base, fresh):\n\n result = [\n (key, base[key], fresh[key])\n for key in set(base.keys()) & set(fresh.keys())\n if base[key] != fresh[key]\n ]\n base.update(fresh)\n return result", "response": "For updating a base dict with a fresh one returns a list of tuple containing the key previous value and the fresh value."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates the loaderplugin registry in a Spec object.", "response": "def spec_update_loaderplugin_registry(spec, default=None):\n \"\"\"\n Resolve a BasePluginLoaderRegistry instance from spec, and update\n spec[CALMJS_LOADERPLUGIN_REGISTRY] with that value before returning\n it.\n \"\"\"\n\n registry = spec.get(CALMJS_LOADERPLUGIN_REGISTRY)\n if isinstance(registry, 
BaseLoaderPluginRegistry):\n logger.debug(\n \"loaderplugin registry '%s' already assigned to spec\",\n registry.registry_name)\n return registry\n elif not registry:\n # resolving registry\n registry = get_registry(spec.get(CALMJS_LOADERPLUGIN_REGISTRY_NAME))\n if isinstance(registry, BaseLoaderPluginRegistry):\n logger.info(\n \"using loaderplugin registry '%s'\", registry.registry_name)\n spec[CALMJS_LOADERPLUGIN_REGISTRY] = registry\n return registry\n\n # acquire the real default instance, if possible.\n if not isinstance(default, BaseLoaderPluginRegistry):\n default = get_registry(default)\n if not isinstance(default, BaseLoaderPluginRegistry):\n logger.info(\n \"provided default is not a valid loaderplugin registry\")\n default = None\n\n if default is None:\n default = BaseLoaderPluginRegistry('')\n\n # TODO determine the best way to optionally warn about this for\n # toolchains that require this.\n if registry:\n logger.info(\n \"object referenced in spec is not a valid loaderplugin registry; \"\n \"using default loaderplugin registry '%s'\", default.registry_name)\n else:\n logger.info(\n \"no loaderplugin registry referenced in spec; \"\n \"using default loaderplugin registry '%s'\", default.registry_name)\n spec[CALMJS_LOADERPLUGIN_REGISTRY] = registry = default\n\n return registry"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntaking an existing spec and a sourcepath mapping (that could be produced via calmjs.dist.*_module_registry_dependencies functions) and split out the keys that does not contain loaderplugin syntax and assign it to the spec under sourcepath_key. For the parts with loader plugin syntax (i.e. modnames (keys) that contain a '!' character), they are instead stored under a different mapping under its own mapping identified by the plugin_name. The mapping under loaderplugin_sourcepath_map_key will contain all mappings of this type. 
The resolution for the handlers will be done through the loader plugin registry provided via spec[CALMJS_LOADERPLUGIN_REGISTRY] if available, otherwise the registry instance will be acquired through the main registry using spec[CALMJS_LOADERPLUGIN_REGISTRY_NAME]. For the example sourcepath_map input: sourcepath = { 'module': 'something', 'plugin!inner': 'inner', 'plugin!other': 'other', 'plugin?query!question': 'question', 'plugin!plugin2!target': 'target', } The following will be stored under the following keys in spec: spec[sourcepath_key] = { 'module': 'something', } spec[loaderplugin_sourcepath_map_key] = { 'plugin': { 'plugin!inner': 'inner', 'plugin!other': 'other', 'plugin?query!question': 'question', 'plugin!plugin2!target': 'target', }, } The goal of this function is to aid in processing each of the plugin types by batch, one level at a time. It is up to the handler itself to trigger further lookups as there are implementations of loader plugins that do not respect the chaining mechanism, thus a generic lookup done at once may not be suitable. Note that nested/chained loaderplugins are not immediately grouped as they must be individually handled given that the internal syntax are generally proprietary to the outer plugin. The handling will be dealt with at the Toolchain.compile_loaderplugin_entry method through the associated handler call method. 
Toolchain implementations may either invoke this directly as part of the prepare step on the required sourcepaths values stored in the spec, or implement this at a higher level before invocating the toolchain instance with the spec.", "response": "def spec_update_sourcepath_filter_loaderplugins(\n spec, sourcepath_map, sourcepath_map_key,\n loaderplugin_sourcepath_map_key=LOADERPLUGIN_SOURCEPATH_MAPS):\n \"\"\"\n Take an existing spec and a sourcepath mapping (that could be\n produced via calmjs.dist.*_module_registry_dependencies functions)\n and split out the keys that does not contain loaderplugin syntax and\n assign it to the spec under sourcepath_key.\n\n For the parts with loader plugin syntax (i.e. modnames (keys) that\n contain a '!' character), they are instead stored under a different\n mapping under its own mapping identified by the plugin_name. The\n mapping under loaderplugin_sourcepath_map_key will contain all\n mappings of this type.\n\n The resolution for the handlers will be done through the loader\n plugin registry provided via spec[CALMJS_LOADERPLUGIN_REGISTRY] if\n available, otherwise the registry instance will be acquired through\n the main registry using spec[CALMJS_LOADERPLUGIN_REGISTRY_NAME].\n\n For the example sourcepath_map input:\n\n sourcepath = {\n 'module': 'something',\n 'plugin!inner': 'inner',\n 'plugin!other': 'other',\n 'plugin?query!question': 'question',\n 'plugin!plugin2!target': 'target',\n }\n\n The following will be stored under the following keys in spec:\n\n spec[sourcepath_key] = {\n 'module': 'something',\n }\n\n spec[loaderplugin_sourcepath_map_key] = {\n 'plugin': {\n 'plugin!inner': 'inner',\n 'plugin!other': 'other',\n 'plugin?query!question': 'question',\n 'plugin!plugin2!target': 'target',\n },\n }\n\n The goal of this function is to aid in processing each of the plugin\n types by batch, one level at a time. 
It is up to the handler itself\n to trigger further lookups as there are implementations of loader\n plugins that do not respect the chaining mechanism, thus a generic\n lookup done at once may not be suitable.\n\n Note that nested/chained loaderplugins are not immediately grouped\n as they must be individually handled given that the internal syntax\n are generally proprietary to the outer plugin. The handling will be\n dealt with at the Toolchain.compile_loaderplugin_entry method\n through the associated handler call method.\n\n Toolchain implementations may either invoke this directly as part\n of the prepare step on the required sourcepaths values stored in the\n spec, or implement this at a higher level before invocating the\n toolchain instance with the spec.\n \"\"\"\n\n default = dict_setget_dict(spec, sourcepath_map_key)\n registry = spec_update_loaderplugin_registry(spec)\n\n # it is more loaderplugin_sourcepath_maps\n plugins = dict_setget_dict(spec, loaderplugin_sourcepath_map_key)\n\n for modname, sourcepath in sourcepath_map.items():\n parts = modname.split('!', 1)\n if len(parts) == 1:\n # default\n default[modname] = sourcepath\n continue\n\n # don't actually do any processing yet.\n plugin_name = registry.to_plugin_name(modname)\n plugin = dict_setget_dict(plugins, plugin_name)\n plugin[modname] = sourcepath"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef toolchain_spec_prepare_loaderplugins(\n toolchain, spec,\n loaderplugin_read_key,\n handler_sourcepath_key,\n loaderplugin_sourcepath_map_key=LOADERPLUGIN_SOURCEPATH_MAPS):\n \"\"\"\n A standard helper function for combining the filtered (e.g. 
using\n ``spec_update_sourcepath_filter_loaderplugins``) loaderplugin\n sourcepath mappings back into one that is usable with the standard\n ``toolchain_spec_compile_entries`` function.\n\n Arguments:\n\n toolchain\n The toolchain\n spec\n The spec\n\n loaderplugin_read_key\n The read_key associated with the loaderplugin process as set up\n for the Toolchain that implemented this. If the toolchain has\n this in its compile_entries:\n\n ToolchainSpecCompileEntry('loaderplugin', 'plugsrc', 'plugsink')\n\n The loaderplugin_read_key it must use will be 'plugsrc'.\n\n handler_sourcepath_key\n All found handlers will have their handler_sourcepath method be\n invoked, and the combined results will be a dict stored in the\n spec under that key.\n\n loaderplugin_sourcepath_map_key\n It must be the same key to the value produced by\n ``spec_update_sourcepath_filter_loaderplugins``\n \"\"\"\n\n # ensure the registry is applied to the spec\n registry = spec_update_loaderplugin_registry(\n spec, default=toolchain.loaderplugin_registry)\n\n # this one is named like so for the compile entry method\n plugin_sourcepath = dict_setget_dict(\n spec, loaderplugin_read_key + '_sourcepath')\n # the key is supplied by the toolchain that might make use of this\n if handler_sourcepath_key:\n handler_sourcepath = dict_setget_dict(spec, handler_sourcepath_key)\n else:\n # provide a null value for this.\n handler_sourcepath = {}\n\n for key, value in spec.get(loaderplugin_sourcepath_map_key, {}).items():\n handler = registry.get(key)\n if handler:\n # assume handler will do the job.\n logger.debug(\"found handler for '%s' loader plugin\", key)\n plugin_sourcepath.update(value)\n logger.debug(\n \"plugin_sourcepath updated with %d keys\", len(value))\n # TODO figure out how to address the case where the actual\n # JavaScript module for the handling wasn't found.\n handler_sourcepath.update(\n handler.generate_handler_sourcepath(toolchain, spec, value))\n else:\n logger.warning(\n \"loaderplugin 
handler for '%s' not found in loaderplugin \"\n \"registry '%s'; as arguments associated with loader plugins \"\n \"are specific, processing is disabled for this group; the \"\n \"sources referenced by the following names will not be \"\n \"compiled into the build target: %s\",\n key, registry.registry_name, sorted(value.keys()),\n )", "response": "This function is used to prepare the loaderplugins for a given set of sources."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprocessing the compile entries.", "response": "def process_compile_entries(\n processor, spec, entries, modpath_logger=None, targetpath_logger=None):\n \"\"\"\n The generalized raw spec entry process invocation loop.\n \"\"\"\n\n # Contains a mapping of the module name to the compiled file's\n # relative path starting from the base build_dir.\n all_modpaths = {}\n all_targets = {}\n # List of exported module names, should be equal to all keys of\n # the compiled and bundled sources.\n all_export_module_names = []\n\n def update(base, fresh, logger):\n if callable(logger):\n for dupes in dict_update_overwrite_check(base, fresh):\n logger(*dupes)\n else:\n base.update(fresh)\n\n for entry in entries:\n modpaths, targetpaths, export_module_names = processor(spec, entry)\n update(all_modpaths, modpaths, modpath_logger)\n update(all_targets, targetpaths, targetpath_logger)\n all_export_module_names.extend(export_module_names)\n\n return all_modpaths, all_targets, all_export_module_names"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nlike update but a list of selected keys must be provided.", "response": "def update_selected(self, other, selected):\n \"\"\"\n Like update, however a list of selected keys must be provided.\n \"\"\"\n\n self.update({k: other[k] for k in selected})"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __advice_stack_frame_protection(self, frame):\n\n if frame 
is None:\n logger.debug(\n 'currentframe() returned None; frame protection disabled')\n return\n\n f_back = frame.f_back\n while f_back:\n if f_back.f_code is self.handle.__code__:\n raise RuntimeError(\n \"indirect invocation of '%s' by 'handle' is forbidden\" %\n frame.f_code.co_name,\n )\n f_back = f_back.f_back", "response": "Walks the stack frames and raises a RuntimeError if the caller was invoked indirectly through the handle method."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding an advice to the stack of the advice group.", "response": "def advise(self, name, f, *a, **kw):\n \"\"\"\n Add an advice that will be handled later by the handle method.\n\n Arguments:\n\n name\n The name of the advice group\n f\n A callable method or function.\n\n The rest of the arguments will be passed as arguments and\n keyword arguments to f when it's invoked.\n \"\"\"\n\n if name is None:\n return\n\n advice = (f, a, kw)\n debug = self.get(DEBUG)\n\n frame = currentframe()\n if frame is None:\n logger.debug('currentframe() failed to return frame')\n else:\n if name in self._called:\n self.__advice_stack_frame_protection(frame)\n if debug:\n logger.debug(\n \"advise '%s' invoked by %s:%d\",\n name,\n frame.f_back.f_code.co_filename, frame.f_back.f_lineno,\n )\n if debug > 1:\n # use the memory address of the tuple which should\n # be stable\n self._frames[id(advice)] = ''.join(\n format_stack(frame.f_back))\n\n self._advices[name] = self._advices.get(name, [])\n self._advices[name].append(advice)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncall all advices at the provided name. This has an analogue in the join point in aspected oriented programming, but the analogy is a weak one as we don't have the proper metaobject protocol to support this.
Implementation that make use of this system should make it clear that they will call this method with name associated with its group before and after its execution, or that the method at hand that want this invoked be called by this other conductor method. For the Toolchain standard steps (prepare, compile, assemble, link and finalize), this handle method will only be called by invoking the toolchain as a callable. Calling those methods piecemeal will not trigger the invocation, even though it probably should. Modules, classes and methods that desire to call their own handler should instead follow the convention where the handle be called before and after with the appropriate names. For instance: def test(self, spec): spec.handle(BEFORE_TEST) # do the things spec.handle(AFTER_TEST) This arrangement will need to be revisited when a proper system is written at the metaclass level. Arguments: name The name of the advices group. All the callables registered to this group will be invoked, last-in-first-out.", "response": "def handle(self, name):\n \"\"\"\n Call all advices at the provided name.\n\n This has an analogue in the join point in aspected oriented\n programming, but the analogy is a weak one as we don't have the\n proper metaobject protocol to support this. Implementation that\n make use of this system should make it clear that they will call\n this method with name associated with its group before and after\n its execution, or that the method at hand that want this invoked\n be called by this other conductor method.\n\n For the Toolchain standard steps (prepare, compile, assemble,\n link and finalize), this handle method will only be called by\n invoking the toolchain as a callable. Calling those methods\n piecemeal will not trigger the invocation, even though it\n probably should. 
Modules, classes and methods that desire to\n call their own handler should instead follow the convention\n where the handle be called before and after with the appropriate\n names. For instance:\n\n def test(self, spec):\n spec.handle(BEFORE_TEST)\n # do the things\n spec.handle(AFTER_TEST)\n\n This arrangement will need to be revisited when a proper system\n is written at the metaclass level.\n\n Arguments:\n\n name\n The name of the advices group. All the callables\n registered to this group will be invoked, last-in-first-out.\n \"\"\"\n\n if name in self._called:\n logger.warning(\n \"advice group '%s' has been called for this spec %r\",\n name, self,\n )\n # only now ensure checking\n self.__advice_stack_frame_protection(currentframe())\n else:\n self._called.add(name)\n\n # Get a complete clone, so indirect manipulation done to the\n # reference that others have access to will not have an effect\n # within the scope of this execution. Please refer to the\n # test_toolchain, test_spec_advice_no_infinite_pop test case.\n advices = []\n advices.extend(self._advices.get(name, []))\n\n if advices and self.get('debug'):\n logger.debug(\n \"handling %d advices in group '%s' \", len(advices), name)\n\n while advices:\n try:\n # cleanup basically done lifo (last in first out)\n values = advices.pop()\n advice, a, kw = values\n if not ((callable(advice)) and\n isinstance(a, tuple) and\n isinstance(kw, dict)):\n raise TypeError\n except ValueError:\n logger.info('Spec advice extraction error: got %s', values)\n except TypeError:\n logger.info('Spec advice malformed: got %s', values)\n else:\n try:\n try:\n advice(*a, **kw)\n except Exception as e:\n # get that back by the id.\n frame = self._frames.get(id(values))\n if frame:\n logger.info('Spec advice exception: %r', e)\n logger.info(\n 'Traceback for original advice:\\n%s', frame)\n # continue on for the normal exception\n raise\n except AdviceCancel as e:\n logger.info(\n \"advice %s in group '%s' signaled its 
cancellation \"\n \"during its execution: %s\", advice, name, e\n )\n if self.get(DEBUG):\n logger.debug(\n 'showing traceback for cancellation', exc_info=1,\n )\n except AdviceAbort as e:\n # this is a signaled error with a planned abortion\n logger.warning(\n \"advice %s in group '%s' encountered a known error \"\n \"during its execution: %s; continuing with toolchain \"\n \"execution\", advice, name, e\n )\n if self.get(DEBUG):\n logger.warning(\n 'showing traceback for error', exc_info=1,\n )\n except ToolchainCancel:\n # this is the safe cancel\n raise\n except ToolchainAbort as e:\n logger.critical(\n \"an advice in group '%s' triggered an abort: %s\",\n name, str(e)\n )\n raise\n except KeyboardInterrupt:\n raise ToolchainCancel('interrupted')\n except Exception as e:\n # a completely unplanned failure\n logger.critical(\n \"advice %s in group '%s' terminated due to an \"\n \"unexpected exception: %s\", advice, name, e\n )\n if self.get(DEBUG):\n logger.critical(\n 'showing traceback for error', exc_info=1,\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nresolving and update the path key in the spec with its realpath based on the working directory.", "response": "def realpath(self, spec, key):\n \"\"\"\n Resolve and update the path key in the spec with its realpath,\n based on the working directory.\n \"\"\"\n\n if key not in spec:\n # do nothing for now\n return\n\n if not spec[key]:\n logger.warning(\n \"cannot resolve realpath of '%s' as it is not defined\", key)\n return\n\n check = realpath(join(spec.get(WORKING_DIR, ''), spec[key]))\n if check != spec[key]:\n spec[key] = check\n logger.warning(\n \"realpath of '%s' resolved to '%s', spec is updated\",\n key, check\n )\n return check"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef setup_prefix_suffix(self):\n\n self.compile_prefix = 'compile_'\n self.sourcepath_suffix = '_sourcepath'\n self.modpath_suffix = '_modpaths'\n 
self.targetpath_suffix = '_targetpaths'", "response": "Set up the compile prefix sourcepath and targetpath suffix attributes which are used to retrieve the values from the generator\n function."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _validate_build_target(self, spec, target):\n\n if not realpath(target).startswith(spec[BUILD_DIR]):\n raise ValueError('build_target %s is outside build_dir' % target)", "response": "Validate that the target is inside the build_dir."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef transpile_modname_source_target(self, spec, modname, source, target):\n\n if not isinstance(self.transpiler, BaseUnparser):\n _deprecation_warning(\n 'transpiler callable assigned to %r must be an instance of '\n 'calmjs.parse.unparsers.base.BaseUnparser by calmjs-4.0.0; '\n 'if the original transpile behavior is to be retained, the '\n 'subclass may instead override this method to call '\n '`simple_transpile_modname_source_target` directly, as '\n 'this fallback behavior will be removed by calmjs-4.0.0' % (\n self,\n )\n )\n return self.simple_transpile_modname_source_target(\n spec, modname, source, target)\n\n # do the new thing here.\n return self._transpile_modname_source_target(\n spec, modname, source, target)", "response": "This method is called by compile_transpile_entry to process the JavaScript source file provided by some\n Python package through the transpiler instance."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef compile_loaderplugin_entry(self, spec, entry):\n\n modname, source, target, modpath = entry\n handler = spec[CALMJS_LOADERPLUGIN_REGISTRY].get(modname)\n if handler:\n return handler(self, spec, modname, source, target, modpath)\n logger.warning(\n \"no loaderplugin handler found for plugin entry '%s'\", modname)\n return {}, {}, []", 
"response": "Compile a loader plugin entry into a dictionary."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a target file name from the input module name and its source file name. The result should be a path relative to the build_dir, and this is derived directly from the modname with NO implicit convers of path separators (i.e. '/' or any other) into a system or OS specific form (e.g. '\\\\'). The rationale for this choice is that there exists Node.js/JavaScript tools that handle this internally and/or these paths and values are directly exposed on the web and thus these separators must be preserved. If the specific implementation requires this to be done, implementations may override by wrapping the result of this using os.path.normpath. For the generation of transpile write targets, this will be done in _generate_transpile_target. Default is to append the module name with the filename_suffix assigned to this instance (setup by setup_filename_suffix), iff the provided source also end with this filename suffix. However, certain tools have issues dealing with loader plugin syntaxes showing up on the filesystem (and certain filesystems definitely do not like some of the characters), so the usage of the loaderplugin registry assigned to the spec may be used for lookup if available. Called by generator method `_gen_modname_source_target_modpath`.", "response": "def modname_source_to_target(self, spec, modname, source):\n \"\"\"\n Create a target file name from the input module name and its\n source file name. The result should be a path relative to the\n build_dir, and this is derived directly from the modname with NO\n implicit convers of path separators (i.e. '/' or any other) into\n a system or OS specific form (e.g. '\\\\'). 
The rationale for\n this choice is that there exists Node.js/JavaScript tools that\n handle this internally and/or these paths and values are\n directly exposed on the web and thus these separators must be\n preserved.\n\n If the specific implementation requires this to be done,\n implementations may override by wrapping the result of this\n using os.path.normpath. For the generation of transpile write\n targets, this will be done in _generate_transpile_target.\n\n Default is to append the module name with the filename_suffix\n assigned to this instance (setup by setup_filename_suffix), iff\n the provided source also end with this filename suffix.\n\n However, certain tools have issues dealing with loader plugin\n syntaxes showing up on the filesystem (and certain filesystems\n definitely do not like some of the characters), so the usage of\n the loaderplugin registry assigned to the spec may be used for\n lookup if available.\n\n Called by generator method `_gen_modname_source_target_modpath`.\n \"\"\"\n\n loaderplugin_registry = spec.get(CALMJS_LOADERPLUGIN_REGISTRY)\n if '!' in modname and loaderplugin_registry:\n handler = loaderplugin_registry.get(modname)\n if handler:\n return handler.modname_source_to_target(\n self, spec, modname, source)\n\n if (source.endswith(self.filename_suffix) and\n not modname.endswith(self.filename_suffix)):\n return modname + self.filename_suffix\n else:\n # assume that modname IS the filename\n return modname"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef modname_source_target_modnamesource_to_modpath(\n self, spec, modname, source, target, modname_source):\n \"\"\"\n Typical JavaScript tools will get confused if '.js' is added, so\n by default the same modname is returned as path rather than the\n target file for the module path to be written to the output file\n for linkage by tools. 
Some other tools may desire the target to\n be returned instead, or construct some other string that is more\n suitable for the tool that will do the assemble and link step.\n\n The modname and source arguments are provided to aid pedantic tools,\n though really this provides more consistency to method\n signatures.\n\n Same as `self.modname_source_target_to_modpath`, but includes\n the original raw key-value as a 2-tuple.\n\n Called by generator method `_gen_modname_source_target_modpath`.\n \"\"\"\n\n return self.modname_source_target_to_modpath(\n spec, modname, source, target)", "response": "Returns the module path to be used for linkage, which by default is the modname itself rather than the target file path."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _gen_modname_source_target_modpath(self, spec, d):\n\n for modname_source in d.items():\n try:\n modname = self.modname_source_to_modname(spec, *modname_source)\n source = self.modname_source_to_source(spec, *modname_source)\n target = self.modname_source_to_target(spec, *modname_source)\n modpath = self.modname_source_target_modnamesource_to_modpath(\n spec, modname, source, target, modname_source)\n except ValueError as e:\n # figure out which of the above 3 functions failed by\n # acquiring the name from one frame down.\n f_name = sys.exc_info()[2].tb_next.tb_frame.f_code.co_name\n\n if isinstance(e, ValueSkip):\n # a purposely benign failure.\n log = partial(\n logger.info,\n \"toolchain purposely skipping on '%s', \"\n \"reason: %s, where modname='%s', source='%s'\",\n )\n else:\n log = partial(\n logger.warning,\n \"toolchain failed to acquire name with '%s', \"\n \"reason: %s, where modname='%s', source='%s'; \"\n \"skipping\",\n )\n\n log(f_name, e, *modname_source)\n continue\n yield modname, source, target, modpath", "response": "Private generator that yields the modname, source, target and modpath values used by the toolchain."} {"SOURCE": "codesearchnet", "instruction": "Given the 
following Python 3 function, write the documentation\ndef compile(self, spec):\n\n spec[EXPORT_MODULE_NAMES] = export_module_names = spec.get(\n EXPORT_MODULE_NAMES, [])\n if not isinstance(export_module_names, list):\n raise TypeError(\n \"spec provided a '%s' but it is not of type list \"\n \"(got %r instead)\" % (EXPORT_MODULE_NAMES, export_module_names)\n )\n\n def compile_entry(method, read_key, store_key):\n spec_read_key = read_key + self.sourcepath_suffix\n spec_modpath_key = store_key + self.modpath_suffix\n spec_target_key = store_key + self.targetpath_suffix\n\n if _check_key_exists(spec, [spec_modpath_key, spec_target_key]):\n logger.error(\n \"aborting compile step %r due to existing key\", entry,\n )\n return\n\n sourcepath_dict = spec.get(spec_read_key, {})\n entries = self._gen_modname_source_target_modpath(\n spec, sourcepath_dict)\n (spec[spec_modpath_key], spec[spec_target_key],\n new_module_names) = method(spec, entries)\n logger.debug(\n \"entry %r \"\n \"wrote %d entries to spec[%r], \"\n \"wrote %d entries to spec[%r], \"\n \"added %d export_module_names\",\n entry,\n len(spec[spec_modpath_key]), spec_modpath_key,\n len(spec[spec_target_key]), spec_target_key,\n len(new_module_names),\n )\n export_module_names.extend(new_module_names)\n\n for entry in self.compile_entries:\n if isinstance(entry, ToolchainSpecCompileEntry):\n log = partial(\n logging.getLogger(entry.logger).log,\n entry.log_level,\n (\n entry.store_key + \"%s['%s'] is being rewritten from \"\n \"'%s' to '%s'; configuration may now be invalid\"\n ),\n ) if entry.logger else None\n compile_entry(partial(\n toolchain_spec_compile_entries, self,\n process_name=entry.process_name,\n overwrite_log=log,\n ), entry.read_key, entry.store_key)\n continue\n\n m, read_key, store_key = entry\n if callable(m):\n method = m\n else:\n method = getattr(self, self.compile_prefix + m, None)\n if not callable(method):\n logger.error(\n \"'%s' not a callable attribute for %r from \"\n 
\"compile_entries entry %r; skipping\", m, self, entry\n )\n continue\n\n compile_entry(method, read_key, store_key)", "response": "Generic compile step that processes the spec's compile entries into the build directory."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef calf(self, spec):\n\n if not isinstance(spec, Spec):\n raise TypeError('spec must be of type Spec')\n\n if not spec.get(BUILD_DIR):\n tempdir = realpath(mkdtemp())\n spec.advise(CLEANUP, shutil.rmtree, tempdir)\n build_dir = join(tempdir, 'build')\n mkdir(build_dir)\n spec[BUILD_DIR] = build_dir\n else:\n build_dir = self.realpath(spec, BUILD_DIR)\n if not isdir(build_dir):\n logger.error(\"build_dir '%s' is not a directory\", build_dir)\n raise_os_error(errno.ENOTDIR, build_dir)\n\n self.realpath(spec, EXPORT_TARGET)\n\n # Finally, handle setup which may set up the deferred advices,\n # as all the toolchain (and its runtime and/or its parent\n # runtime and related toolchains) spec advises should have been\n # done.\n spec.handle(SETUP)\n\n try:\n process = ('prepare', 'compile', 'assemble', 'link', 'finalize')\n for p in process:\n spec.handle('before_' + p)\n getattr(self, p)(spec)\n spec.handle('after_' + p)\n spec.handle(SUCCESS)\n except ToolchainCancel:\n # quietly handle the issue and move on out of here.\n pass\n finally:\n spec.handle(CLEANUP)", "response": "Sets up the build directory and spec advices, then drives the full toolchain process (prepare, compile, assemble, link, finalize) with success and cleanup handling."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef transpile_modname_source_target(self, spec, modname, source, target):\n\n return self.simple_transpile_modname_source_target(\n spec, modname, source, target)", "response": "Transpile the source file for a module name to the given target."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_bin_version_str(bin_path, version_flag='-v', kw={}):\n\n try:\n 
prog = _get_exec_binary(bin_path, kw)\n version_str = version_expr.search(\n check_output([prog, version_flag], **kw).decode(locale)\n ).groups()[0]\n except OSError:\n logger.warning(\"failed to execute '%s'\", bin_path)\n return None\n except Exception:\n logger.exception(\n \"encountered unexpected error while trying to find version of \"\n \"'%s':\", bin_path\n )\n return None\n logger.info(\"'%s' is version '%s'\", bin_path, version_str)\n return version_str", "response": "Get the version string through the binary."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_bin_version(bin_path, version_flag='-v', kw={}):\n\n version_str = get_bin_version_str(bin_path, version_flag, kw)\n if version_str:\n return tuple(int(i) for i in version_str.split('.'))", "response": "Get the version string through the binary and return a tuple of integers."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef node(self, source, args=(), env={}):\n\n return self._exec(self.node_bin, source, args=args, env=env)", "response": "Calls node with an inline source."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pkg_manager_view(\n self, package_names, stream=None, explicit=False, **kw):\n \"\"\"\n Returns the manifest JSON for the Python package name. Default\n npm implementation calls for package.json.\n\n If this class is initiated using standard procedures, this will\n mimic the functionality of ``npm view`` but mostly for showing\n the dependencies. 
This is done as a default action.\n\n Arguments:\n\n package_names\n The names of the python packages with their requirements to\n source the package.json from.\n stream\n If specified, the generated package.json will be written to\n there.\n explicit\n If True, the package names specified are the explicit list\n to search for - no dependency resolution will then be done.\n\n Returns the manifest json as a dict.\n \"\"\"\n\n # For looking up the pkg_name to dist converter for explicit\n to_dists = {\n False: find_packages_requirements_dists,\n True: pkg_names_to_dists,\n }\n\n # assuming string, and assume whitespaces are invalid.\n pkg_names, malformed = convert_package_names(package_names)\n if malformed:\n msg = 'malformed package name(s) specified: %s' % ', '.join(\n malformed)\n raise ValueError(msg)\n\n if len(pkg_names) == 1:\n logger.info(\n \"generating a flattened '%s' for '%s'\",\n self.pkgdef_filename, pkg_names[0],\n )\n else:\n logger.info(\n \"generating a flattened '%s' for packages {%s}\",\n self.pkgdef_filename, ', '.join(pkg_names),\n )\n\n # remember the filename is in the context of the distribution,\n # not the filesystem.\n dists = to_dists[explicit](pkg_names)\n pkgdef_json = flatten_dist_egginfo_json(\n dists, filename=self.pkgdef_filename,\n dep_keys=self.dep_keys,\n )\n\n if pkgdef_json.get(\n self.pkg_name_field, NotImplemented) is NotImplemented:\n # use the last item.\n pkg_name = Requirement.parse(pkg_names[-1]).project_name\n pkgdef_json[self.pkg_name_field] = pkg_name\n\n if stream:\n self.dump(pkgdef_json, stream)\n stream.write('\\n')\n\n return pkgdef_json", "response": "This function generates a manifest json for the Python package name."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nnotes default implementation calls for npm and package.json, please note that it may not be the case for this instance of Driver. 
If this class is initiated using standard procedures, this will emulate the functionality of ``npm init`` for the generation of a working ``package.json``, but without asking users for input but instead uses information available through the distribution packages within ``setuptools``. Arguments: package_names The names of the python packages with their requirements to source the package.json from. overwrite Boolean flag; if set, overwrite package.json with the newly generated ``package.json``; merge Boolean flag; if set, implies overwrite, but does not ignore interactive setting. However this will keep details defined in existing ``package.json`` and only merge dependencies / devDependencies defined by the specified Python package. callback A callable. If this is passed, the value for overwrite will be derived from its result; it will be called with arguments (original_json, pkgdef_json, pkgdef_path, dumps=self.dumps). Typically the calmjs.ui.prompt_overwrite_json is passed into this argument; refer to its documentation on details. Returns generated definition file if successful; can be achieved by writing a new file or that the existing one matches with the expected version. 
Returns False otherwise.", "response": "def pkg_manager_init(\n self, package_names, overwrite=False, merge=False,\n callback=None, **kw):\n \"\"\"\n Note: default implementation calls for npm and package.json,\n please note that it may not be the case for this instance of\n Driver.\n\n If this class is initiated using standard procedures, this will\n emulate the functionality of ``npm init`` for the generation of\n a working ``package.json``, but without asking users for input\n but instead uses information available through the distribution\n packages within ``setuptools``.\n\n Arguments:\n\n package_names\n The names of the python packages with their requirements to\n source the package.json from.\n\n overwrite\n Boolean flag; if set, overwrite package.json with the newly\n generated ``package.json``;\n\n merge\n Boolean flag; if set, implies overwrite, but does not ignore\n interactive setting. However this will keep details defined\n in existing ``package.json`` and only merge dependencies /\n devDependencies defined by the specified Python package.\n\n callback\n A callable. If this is passed, the value for overwrite will\n be derived from its result; it will be called with arguments\n (original_json, pkgdef_json, pkgdef_path, dumps=self.dumps).\n Typically the calmjs.ui.prompt_overwrite_json is passed into\n this argument; refer to its documentation on details.\n\n Returns generated definition file if successful; can be achieved\n by writing a new file or that the existing one matches with the\n expected version. 
Returns False otherwise.\n \"\"\"\n\n # this will be modified in place\n original_json = {}\n\n pkgdef_json = self.pkg_manager_view(package_names, **kw)\n\n # Now we figure out the actual file we want to work with.\n pkgdef_path = self.join_cwd(self.pkgdef_filename)\n existed = exists(pkgdef_path)\n\n if existed:\n try:\n with open(pkgdef_path, 'r') as fd:\n original_json = json.load(fd)\n except ValueError:\n logger.warning(\n \"ignoring existing malformed '%s'\", pkgdef_path)\n except (IOError, OSError):\n logger.error(\n \"reading of existing '%s' failed; \"\n \"please confirm that it is a file and/or permissions to \"\n \"read and write is permitted before retrying.\",\n pkgdef_path\n )\n # Cowardly giving up.\n raise\n\n if merge:\n # Merge the generated on top of the original.\n updates = generate_merge_dict(\n self.dep_keys, original_json, pkgdef_json,\n )\n final = {}\n final.update(original_json)\n final.update(pkgdef_json)\n final.update(updates)\n pkgdef_json = final\n\n if original_json == pkgdef_json:\n # Well, if original existing one is identical with the\n # generated version, we have reached our target.\n return pkgdef_json\n\n if not overwrite and callable(callback):\n overwrite = callback(\n original_json, pkgdef_json, pkgdef_path, dumps=self.dumps)\n else:\n # here the implied settings due to non-interactive mode\n # are finally set\n if merge:\n overwrite = True\n\n if not overwrite:\n logger.warning(\"not overwriting existing '%s'\", pkgdef_path)\n return False\n\n with open(pkgdef_path, 'w') as fd:\n self.dump(pkgdef_json, fd)\n logger.info(\"wrote '%s'\", pkgdef_path)\n\n return pkgdef_json"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pkg_manager_install(\n self, package_names=None,\n production=None, development=None,\n args=(), env={}, **kw):\n \"\"\"\n This will install all dependencies into the current working\n directory for the specific Python package from the 
selected\n JavaScript package manager; this requires that this package\n manager's package definition file to be properly generated\n first, otherwise the process will be aborted.\n\n If the production argument is supplied, it will be passed to the\n underlying package manager binary as a true or false value with\n the --production flag, otherwise it will not be set.\n\n Likewise for development. However, the production flag has\n priority.\n\n If the argument 'args' is supplied as a tuple, those will be\n passed through to the package manager install command as its\n arguments. This will be very specific to the underlying\n program; use with care as misuse can result in an environment\n that is not expected by the other parts of the framework.\n\n If the argument 'env' is supplied, they will be additional\n environment variables that are not already defined by the\n framework, which are 'NODE_PATH' and 'PATH'. Values set for\n those will have highest precedence, then the ones passed in\n through env, then finally whatever was already defined before\n the execution of this program.\n\n All other arguments to this method will be passed forward to the\n pkg_manager_init method, if the package_name is supplied for the\n Python package.\n\n If no package_name was supplied then just continue with the\n process anyway, to still enable the shorthand calling.\n\n If the package manager could not be invoked, it will simply not\n be.\n\n Arguments:\n\n package_names\n The names of the Python package to generate the manifest\n for.\n args\n The arguments to pass into the command line install.\n \"\"\"\n\n if not package_names:\n logger.warning(\n \"no package name supplied, not continuing with '%s %s'\",\n self.pkg_manager_bin, self.install_cmd,\n )\n return\n\n result = self.pkg_manager_init(package_names, **kw)\n if result is False:\n logger.warning(\n \"not continuing with '%s %s' as the generation of \"\n \"'%s' failed\", self.pkg_manager_bin, self.install_cmd,\n 
self.pkgdef_filename\n )\n return\n\n call_kw = self._gen_call_kws(**env)\n logger.debug(\n \"invoking '%s %s'\", self.pkg_manager_bin, self.install_cmd)\n if self.env_path:\n logger.debug(\n \"invoked with env_path '%s'\", self.env_path)\n if self.working_dir:\n logger.debug(\n \"invoked from working directory '%s'\", self.working_dir)\n try:\n cmd = [self._get_exec_binary(call_kw), self.install_cmd]\n cmd.extend(self._prodev_flag(\n production, development, result.get(self.devkey)))\n cmd.extend(args)\n logger.info('invoking %s', ' '.join(cmd))\n call(cmd, **call_kw)\n except (IOError, OSError):\n logger.error(\n \"invocation of the '%s' binary failed; please ensure it and \"\n \"its dependencies are installed and available.\", self.binary\n )\n # Still raise the exception as this is a lower level API.\n raise\n\n return True", "response": "This method installs all dependencies into the current working directory for the specified Python package."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef run(self, args=(), env={}):\n\n # the following will call self._get_exec_binary\n return self._exec(self.binary, args=args, env=env)", "response": "Calls the package manager with the arguments."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_exec_binary(binary, kw):\n\n binary = which(binary, path=kw.get('env', {}).get('PATH'))\n if binary is None:\n raise_os_error(errno.ENOENT)\n return binary", "response": "Get the executable path for the given binary, raising an OSError if it cannot be found."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes the entry points for the specified registry.", "response": "def _init_entry_points(self, entry_points):\n \"\"\"\n Default initialization loop.\n \"\"\"\n\n logger.debug(\n \"registering %d entry points for registry '%s'\",\n len(entry_points), self.registry_name,\n )\n for entry_point in 
entry_points:\n try:\n logger.debug(\n \"registering entry point '%s' from '%s'\",\n entry_point, entry_point.dist,\n )\n self._init_entry_point(entry_point)\n except ImportError:\n logger.warning(\n 'ImportError: %s not found; skipping registration',\n entry_point.module_name)\n except Exception:\n logger.exception(\n \"registration of entry point '%s' from '%s' to registry \"\n \"'%s' failed with the following exception\",\n entry_point, entry_point.dist, self.registry_name,\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nstoring the records for the given entry_point in a way that permits lookup by package.", "response": "def store_records_for_package(self, entry_point, records):\n \"\"\"\n Store the records in a way that permits lookup by package\n \"\"\"\n\n # If provided records already exist in the module mapping list,\n # it likely means that a package declared multiple keys for the\n # same package namespace; while normally this does not happen,\n # this default implementation makes no assumptions as to whether\n # or not this is permitted.\n pkg_module_records = self._dist_to_package_module_map(entry_point)\n pkg_module_records.extend(records)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef register_entry_point(self, entry_point):\n\n module = _import_module(entry_point.module_name)\n self._register_entry_point_module(entry_point, module)", "response": "Register a lone entry_point"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a record by name", "response": "def get_record(self, name):\n \"\"\"\n Get a record by name\n \"\"\"\n\n result = {}\n result.update(self.records.get(name, {}))\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets all records identified by package.", "response": "def get_records_for_package(self, package_name):\n \"\"\"\n Get all records identified by package.\n 
\"\"\"\n\n names = self.package_module_map.get(package_name, [])\n result = {}\n for name in names:\n result.update(self.get_record(name))\n return result"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef resolve_parent_registry_name(self, registry_name, suffix):\n\n if not registry_name.endswith(suffix):\n raise ValueError(\n \"child module registry name defined with invalid suffix \"\n \"('%s' does not end with '%s')\" % (registry_name, suffix))\n return registry_name[:-len(suffix)]", "response": "Resolve the parent module registry name."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a record for the given name which will be a set of all records matching the desired module names.", "response": "def get_record(self, name):\n \"\"\"\n Get a record for the registered name, which will be a set of\n matching desired \"module names\" for the given path.\n \"\"\"\n\n return set().union(self.records.get(name, set()))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets all records identified by package.", "response": "def get_records_for_package(self, package_name):\n \"\"\"\n Get all records identified by package.\n \"\"\"\n\n result = []\n result.extend(self.package_module_map.get(package_name))\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfigure out which binary this will execute.", "response": "def which(self):\n \"\"\"\n Figure out which binary this will execute.\n\n Returns None if the binary is not found.\n \"\"\"\n\n if self.binary is None:\n return None\n\n return which(self.binary, path=self.env_path)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfinding all node_modules directories configured to be accessible through this driver instance.", "response": "def find_node_modules_basedir(self):\n \"\"\"\n Find all 
node_modules directories configured to be accessible\n through this driver instance.\n\n This is typically used for adding the direct instance, and does\n not traverse the parent directories like Node.js does.\n\n Returns a list of directories that contain a 'node_modules'\n directory.\n \"\"\"\n\n paths = []\n\n # First do the working dir.\n local_node_path = self.join_cwd(NODE_MODULES)\n if isdir(local_node_path):\n paths.append(local_node_path)\n\n # do the NODE_PATH environment variable last, as Node.js seems to\n # have these resolving just before the global.\n if self.node_path:\n paths.extend(self.node_path.split(pathsep))\n\n return paths"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef which_with_node_modules(self):\n\n if self.binary is None:\n return None\n\n # first, log down the pedantic things...\n if isdir(self.join_cwd(NODE_MODULES)):\n logger.debug(\n \"'%s' instance will attempt to locate '%s' binary from \"\n \"%s%s%s%s%s, located through the working directory\",\n self.__class__.__name__, self.binary,\n self.join_cwd(), sep, NODE_MODULES, sep, NODE_MODULES_BIN,\n )\n if self.node_path:\n logger.debug(\n \"'%s' instance will attempt to locate '%s' binary from \"\n \"its %s of %s\",\n self.__class__.__name__, self.binary,\n NODE_PATH, self.node_path,\n )\n\n paths = self.find_node_modules_basedir()\n whichpaths = pathsep.join(join(p, NODE_MODULES_BIN) for p in paths)\n\n if paths:\n logger.debug(\n \"'%s' instance located %d possible paths to the '%s' binary, \"\n \"which are %s\",\n self.__class__.__name__, len(paths), self.binary, whichpaths,\n )\n\n return which(self.binary, path=whichpaths)", "response": "Return the path to the binary, located through the node_path and node_modules directories."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _set_env_path_with_node_modules(self):\n\n modcls_name = ':'.join((\n self.__class__.__module__, self.__class__.__name__))\n\n 
if self.binary is None:\n raise ValueError(\n \"binary undefined for '%s' instance\" % modcls_name)\n\n logger.debug(\n \"locating '%s' node binary for %s instance...\",\n self.binary, modcls_name,\n )\n\n default = self.which()\n if default is not None:\n logger.debug(\n \"found '%s'; \"\n \"not modifying PATH environment variable in instance of '%s'.\",\n realpath(default), modcls_name)\n return True\n\n target = self.which_with_node_modules()\n\n if target:\n # Only setting the path specific for the binary; side effect\n # will be whoever else borrowing the _exec in here might not\n # get the binary they want. That's why it's private.\n self.env_path = dirname(target)\n logger.debug(\n \"located '%s' binary at '%s'; setting PATH environment \"\n \"variable for '%s' instance.\",\n self.binary, self.env_path, modcls_name\n )\n return True\n else:\n logger.debug(\n \"Unable to locate '%s'; not modifying PATH environment \"\n \"variable for instance of '%s'.\",\n self.binary, modcls_name\n )\n return False", "response": "Attempts to locate and set the PATH environment variable for the instance with the node modules defined for this instance."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _exec(self, binary, stdin='', args=(), env={}):\n\n call_kw = self._gen_call_kws(**env)\n call_args = [self._get_exec_binary(call_kw)]\n call_args.extend(args)\n return fork_exec(call_args, stdin, **call_kw)", "response": "Executes the given binary using stdin and args with environment\n variables."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndumps the contents of the object to a file - like object stream.", "response": "def dump(self, blob, stream):\n \"\"\"\n Call json.dump with the attributes of this instance as\n arguments.\n \"\"\"\n\n json.dump(\n blob, stream, indent=self.indent, sort_keys=True,\n separators=self.separators,\n )"} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate the documentation for the following Python 3 function\ndef dumps(self, blob):\n\n return json.dumps(\n blob, indent=self.indent, sort_keys=True,\n separators=self.separators,\n )", "response": "Dump the contents of the object as a JSON string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\njoin the given path with the current working directory.", "response": "def join_cwd(self, path=None):\n \"\"\"\n Join the path with the current working directory. If it is\n specified for this instance of the object it will be used,\n otherwise rely on the global value.\n \"\"\"\n\n if self.working_dir:\n logger.debug(\n \"'%s' instance 'working_dir' set to '%s' for join_cwd\",\n type(self).__name__, self.working_dir,\n )\n cwd = self.working_dir\n else:\n cwd = getcwd()\n logger.debug(\n \"'%s' instance 'working_dir' unset; \"\n \"default to process '%s' for join_cwd\",\n type(self).__name__, cwd,\n )\n\n if path:\n return join(cwd, path)\n return cwd"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _unicode_handler(obj):\n try:\n result = obj.isoformat()\n except AttributeError:\n raise TypeError(\"Unserializable object {} of type {}\".format(obj,\n type(obj)))\n\n return result", "response": "Default JSON serialization handler that converts an object through its isoformat method, raising TypeError for unserializable objects."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nencoding the data into a JSON structure from an instance from the current domain model.", "response": "def encode(self, entity):\n \"\"\"\n Encodes the data, creating a JSON structure from an instance from the\n domain model.\n\n :param entity: the instance to encode\n :return: a JSON structure created from the received data\n \"\"\"\n encoded = self._dict_encoder.encode(entity)\n\n if sys.version_info[0] == 2:\n result = json.dumps(encoded, ensure_ascii=False,\n default=_iso_handler, encoding='latin1')\n else:\n # For Python 3\n 
result = json.dumps(encoded, ensure_ascii=False,\n default=_iso_handler)\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef ipi_base_number(name=None):\n\n if name is None:\n name = 'IPI Base Number Field'\n\n field = pp.Regex('I-[0-9]{9}-[0-9]')\n\n # Name\n field.setName(name)\n\n field_num = basic.numeric(13)\n field_num.setName(name)\n\n field = field | field_num\n\n # White spaces are not removed\n field.leaveWhitespace()\n\n return field.setResultsName('ipi_base_n')", "response": "returns a parser for the IPI Base Number field"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ipi_name_number(name=None):\n\n if name is None:\n name = 'IPI Name Number Field'\n\n field = basic.numeric(11)\n\n field.setName(name)\n\n return field.setResultsName('ipi_name_n')", "response": "A function to create a parser for the IPI Name Number field."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef iswc(name=None):\n\n if name is None:\n name = 'ISWC Field'\n\n # T followed by 10 numbers\n field = pp.Regex('T[0-9]{10}')\n\n # Name\n field.setName(name)\n\n # White spaces are not removed\n field.leaveWhitespace()\n\n return field.setResultsName('iswc')", "response": "A function to create a parser for the ISWC code field."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef percentage(columns, maximum=100, name=None):\n\n if name is None:\n name = 'Percentage Field'\n\n if columns < 3:\n message = 'The values can not be lower than 3'\n raise pp.ParseException(message)\n\n field = basic.numeric_float(columns, 3)\n\n field.addParseAction(lambda v: _assert_is_percentage(v[0], maximum))\n\n field.setName(name)\n\n return field", "response": "Returns a grammar for a numeric field storing a percentage field."} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function for\nensuring that the received value is a percentage.", "response": "def _assert_is_percentage(value, maximum=100):\n \"\"\"\n Makes sure the received value is a percentage. Otherwise an exception is\n thrown.\n\n :param value: the value to check\n \"\"\"\n\n if value < 0 or value > maximum:\n message = 'The value on a percentage field should be between 0 and %s' \\\n % maximum\n raise pp.ParseException(message)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the grammar for an EAN 13 code.", "response": "def ean_13(name=None):\n \"\"\"\n Creates the grammar for an EAN 13 code.\n\n These are the codes on thirteen digits barcodes.\n\n :param name: name for the field\n :return: grammar for an EAN 13 field\n \"\"\"\n\n if name is None:\n name = 'EAN 13 Field'\n\n field = basic.numeric(13)\n\n field = field.setName(name)\n\n return field.setResultsName('ean_13')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a grammar for an ISRC code.", "response": "def isrc(name=None):\n \"\"\"\n Creates the grammar for an ISRC code.\n\n ISRC stands for International Standard Recording Code, which is the\n standard ISO 3901. 
This stores information identifying a particular\n recording.\n\n :param name: name for the field\n :return: grammar for an ISRC field\n \"\"\"\n\n if name is None:\n name = 'ISRC Field'\n\n field = _isrc_short(name) | _isrc_long(name)\n\n field.setName(name)\n\n return field.setResultsName('isrc')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _isrc_long(name=None):\n\n config = CWRTables()\n\n if name is None:\n name = 'ISRC Field'\n\n country = config.get_data('isrc_country_code')\n # registrant = basic.alphanum(3)\n # year = pp.Regex('[0-9]{2}')\n # work_id = pp.Regex('[0-9]{5}')\n\n country_regex = ''\n for c in country:\n if len(country_regex) > 0:\n country_regex += '|'\n country_regex += c\n country_regex = '(' + country_regex + ')'\n\n field = pp.Regex(country_regex + '.{3}[0-9]{2}[0-9]{5}')\n\n # country.setName('ISO-2 Country Code')\n # registrant.setName('Registrant')\n # year.setName('Year')\n # work_id.setName('Work ID')\n\n field.setName(name)\n\n return field.setResultsName('isrc')", "response": "Returns a grammar for a short ISRC code."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a grammar for a V - ISAN code.", "response": "def visan(name=None):\n \"\"\"\n Creates the grammar for a V-ISAN code.\n\n This is a variation on the ISAN (International Standard Audiovisual Number)\n\n :param name: name for the field\n :return: grammar for an ISRC field\n \"\"\"\n\n if name is None:\n name = 'V-ISAN Field'\n\n field = pp.Regex('[0-9]{25}')\n\n field.setName(name)\n\n return field.setResultsName('visan')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef audio_visual_key(name=None):\n\n if name is None:\n name = 'AVI Field'\n\n society_code = basic.numeric(3)\n society_code = society_code.setName('Society Code') \\\n .setResultsName('society_code')\n\n av_number = basic.alphanum(15, 
extended=True, isLast=True)\n field_empty = pp.Regex('[ ]{15}')\n field_empty.setParseAction(pp.replaceWith(''))\n av_number = av_number | field_empty\n av_number = av_number.setName('Audio-Visual Number') \\\n .setResultsName('av_number')\n\n field = pp.Group(society_code + pp.Optional(av_number))\n\n field.setParseAction(lambda v: _to_avi(v[0]))\n\n field = field.setName(name)\n\n return field.setResultsName('audio_visual_key')", "response": "Creates the grammar for an audio visual key code."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a grammar for a date and time field.", "response": "def date_time(name=None):\n \"\"\"\n Creates the grammar for a date and time field, which is a combination of\n the Date (D) and Time or Duration field (T).\n\n This field requires first a Date, and then a Time, without any space in\n between.\n\n :param name: name for the field\n :return: grammar for a Date and Time field\n \"\"\"\n if name is None:\n name = 'Date and Time Field'\n\n date = basic.date('Date')\n time = basic.time('Time')\n\n date = date.setResultsName('date')\n time = time.setResultsName('time')\n\n field = pp.Group(date + time)\n\n field.setParseAction(lambda d: _combine_date_time(d[0]))\n\n field.setName(name)\n\n return field.setResultsName('date_time')"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a Lookup field which transforms the result into an integer.", "response": "def lookup_int(values, name=None):\n \"\"\"\n Lookup field which transforms the result into an integer.\n\n :param values: values allowed\n :param name: name for the field\n :return: grammar for the lookup field\n \"\"\"\n field = basic.lookup(values, name)\n\n field.addParseAction(lambda l: int(l[0]))\n\n return field"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef extract_function_argument(text, f_name, f_argn, 
f_argt=asttypes.String):\n\n tree = parse(text)\n return list(filter_function_argument(tree, f_name, f_argn, f_argt))", "response": "Extract a specific argument from a specific function name."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef yield_amd_require_string_arguments(\n node, pos,\n reserved_module=reserved_module, wrapped=define_wrapped):\n \"\"\"\n This yields only strings within the lists provided in the argument\n list at the specified position from a function call.\n\n Originally, this was implemented for yield a list of module names to\n be imported as represented by this given node, which must be of the\n FunctionCall type.\n \"\"\"\n\n for i, child in enumerate(node.args.items[pos]):\n if isinstance(child, asttypes.String):\n result = to_str(child)\n if ((result not in reserved_module) and (\n result != define_wrapped.get(i))):\n yield result", "response": "Yields only strings within the list of module names that are not defined in the function call."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef yield_string_argument(node, pos):\n\n if not isinstance(node.args.items[pos], asttypes.String):\n return\n yield to_str(node.args.items[pos])", "response": "Yield just a string argument from the given position."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef yield_module_imports(root, checks=string_imports()):\n\n if not isinstance(root, asttypes.Node):\n raise TypeError('provided root must be a node')\n\n for child in yield_function(root, deep_filter):\n for f, condition in checks:\n if condition(child):\n for name in f(child):\n yield name\n continue", "response": "Yields all module names from the source tree root."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef yield_module_imports_nodes(root, checks=import_nodes()):\n\n if not 
isinstance(root, asttypes.Node):\n raise TypeError('provided root must be a node')\n\n for child in yield_function(root, deep_filter):\n for f, condition in checks:\n if condition(child):\n for name in f(child):\n yield name\n continue", "response": "Yield all nodes that provide an import\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef open_fasta_index(self):\n index = self.fasta_index\n try:\n handle = open(index, 'rb')\n except (IOError, TypeError):\n sys.stderr.write('index not found, creating it\\n')\n try:\n self.build_fasta_index()\n return\n except IOError:\n raise IOError(\"Index File \"+self.fasta_index+\"can't be found nor created, check file permissions\")\n self.sequence_index = {}\n _seq_dict = self.sequence_index\n for row in handle:\n entry = row.decode('utf-8').strip().split('\\t')\n #stored as: {header: length, # of chars to end of this header, length of fasta lines, length of each line including breakchar}\n _seq_dict[entry[0]] = (entry[1], entry[2], entry[3], entry[4])", "response": "open the index file and return the object"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_sequence(self, chrom, start, end, strand='+', indexing=(-1, 0)):\n try:\n divisor = int(self.sequence_index[chrom][2])\n except KeyError:\n self.open_fasta_index()\n try:\n divisor = int(self.sequence_index[chrom][2])\n except KeyError:\n sys.stderr.write(\"%s cannot be found within the fasta index file.\\n\" % chrom)\n return \"\"\n start+=indexing[0]\n end+=indexing[1]\n #is it a valid position?\n if ( start < 0 or end > int(self.sequence_index[chrom][0] )):\n raise ValueError(\"The range %d-%d is invalid. 
Valid range for this feature is 1-%d.\" % (start-indexing[0], end-indexing[1], \n int(self.sequence_index[chrom][0])))\n #go to start of chromosome\n seekpos = int(self.sequence_index[chrom][1])\n #find how many newlines we have\n seekpos += start+start/divisor\n slen = end-start\n endpos = int(slen + (slen/divisor) + 1) #a hack of sorts but it works and is easy\n self.fasta_file.seek(seekpos, 0)\n output = self.fasta_file.read(endpos)\n output = output.replace('\\n', '')\n out = output[:slen]\n if strand == '+' or strand == 1:\n return out\n if strand == '-' or strand == -1:\n return _reverse_complement(out)", "response": "Get the sequence from the file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef resolve_child_module_registries_lineage(registry):\n\n children = [registry]\n while isinstance(registry, BaseChildModuleRegistry):\n if registry.parent in children:\n # this should never normally occur under normal usage where\n # classes have been properly subclassed with methods defined\n # to specificiation and with standard entry point usage, but\n # non-standard definitions/usage can definitely trigger this\n # self-referential loop.\n raise TypeError(\n \"registry '%s' was already recorded in the lineage, \"\n \"indicating that it may be some (grand)child of itself, which \"\n \"is an illegal reference in the registry system; previously \"\n \"resolved lineage is: %r\" % (registry.parent.registry_name, [\n r.registry_name for r in reversed(children)\n ])\n )\n\n pl = len(registry.parent.registry_name)\n if len(registry.parent.registry_name) > len(registry.registry_name):\n logger.warning(\n \"the parent registry '%s' somehow has a longer name than its \"\n \"child registry '%s'; the underlying registry class may be \"\n \"constructed in an invalid manner\",\n registry.parent.registry_name,\n registry.registry_name,\n )\n elif registry.registry_name[:pl] != registry.parent.registry_name:\n logger.warning(\n 
\"child registry '%s' does not share the same common prefix as \"\n \"its parent registry '%s'; there may be errors with how the \"\n \"related registries are set up or constructed\",\n registry.registry_name,\n registry.parent.registry_name,\n )\n children.append(registry.parent)\n registry = registry.parent\n # the lineage down from parent to child.\n return iter(reversed(children))", "response": "Given a child module registry attempt to resolve the lineage."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef addModification(self, aa,position, modMass, modType):\n #clean up xtandem\n if not modType:\n #try to figure out what it is\n tmass = abs(modMass)\n smass = str(tmass)\n prec = len(str(tmass-int(tmass)))-2\n precFormat = '%'+'0.%df'%prec\n # modType = \"\"\n # masses = config.MODIFICATION_MASSES\n # for i in masses:\n # if tmass in masses[i] or smass == precFormat%masses[i][0]:\n # #found it\n # modType = i\n # if not modType:\n # sys.stderr.write('mod not found %s\\n'%modMass)\n self.mods.add((aa,str(position),str(modMass),str(modType)))", "response": "Add a modification to the list of modifications."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nattempting to resolve the ArcGIS module name and a distribution.", "response": "def resource_filename_mod_dist(module_name, dist):\n \"\"\"\n Given a module name and a distribution, attempt to resolve the\n actual path to the module.\n \"\"\"\n\n try:\n return pkg_resources.resource_filename(\n dist.as_requirement(), join(*module_name.split('.')))\n except pkg_resources.DistributionNotFound:\n logger.warning(\n \"distribution '%s' not found, falling back to resolution using \"\n \"module_name '%s'\", dist, module_name,\n )\n return pkg_resources.resource_filename(module_name, '')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the full path to the resource file for the 
given module and entry point.", "response": "def resource_filename_mod_entry_point(module_name, entry_point):\n \"\"\"\n If a given package declares a namespace and also provide submodules\n nested at that namespace level, and for whatever reason that module\n is needed, Python's import mechanism will not have a path associated\n with that module. However, if given an entry_point, this path can\n be resolved through its distribution. That said, the default\n resource_filename function does not accept an entry_point, and so we\n have to chain that back together manually.\n \"\"\"\n\n if entry_point.dist is None:\n # distribution missing is typically caused by mocked entry\n # points from tests; silently falling back to basic lookup\n result = pkg_resources.resource_filename(module_name, '')\n else:\n result = resource_filename_mod_dist(module_name, entry_point.dist)\n\n if not result:\n logger.warning(\n \"resource path cannot be found for module '%s' and entry_point \"\n \"'%s'\", module_name, entry_point\n )\n return None\n if not exists(result):\n logger.warning(\n \"resource path found at '%s' for module '%s' and entry_point \"\n \"'%s', but it does not exist\", result, module_name, entry_point,\n )\n return None\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngenerate a JavaScript styled module location listing for the given entry point.", "response": "def modgen(\n module, entry_point,\n modpath='pkg_resources', globber='root', fext=JS_EXT,\n registry=_utils):\n \"\"\"\n JavaScript styled module location listing generator.\n\n Arguments:\n\n module\n The Python module to start fetching from.\n\n entry_point\n This is the original entry point that has a distribution\n reference such that the resource_filename API call may be used\n to locate the actual resources.\n\n Optional Arguments:\n\n modpath\n The name to the registered modpath function that will fetch the\n paths belonging to the module. 
Defaults to 'pkg_resources'.\n\n globber\n The name to the registered file globbing function. Defaults to\n one that will only glob the local path.\n\n fext\n The filename extension to match. Defaults to `.js`.\n\n registry\n The \"registry\" to extract the functions from\n\n Yields 3-tuples of\n\n - raw list of module name fragments\n - the source base path to the python module (equivalent to module)\n - the relative path to the actual module\n\n For each of the module basepath and source files the globber finds.\n \"\"\"\n\n globber_f = globber if callable(globber) else registry['globber'][globber]\n modpath_f = modpath if callable(modpath) else registry['modpath'][modpath]\n\n logger.debug(\n 'modgen generating file listing for module %s',\n module.__name__,\n )\n\n module_frags = module.__name__.split('.')\n module_base_paths = modpath_f(module, entry_point)\n\n for module_base_path in module_base_paths:\n logger.debug('searching for *%s files in %s', fext, module_base_path)\n for path in globber_f(module_base_path, '*' + fext):\n mod_path = (relpath(path, module_base_path))\n yield (\n module_frags + mod_path[:-len(fext)].split(sep),\n module_base_path,\n mod_path,\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef modpath_all(module, entry_point):\n\n module_paths = getattr(module, '__path__', [])\n if not module_paths:\n logger.warning(\n \"module '%s' does not appear to be a namespace module or does not \"\n \"export available paths onto the filesystem; JavaScript source \"\n \"files cannot be extracted from this module.\", module.__name__\n )\n return module_paths", "response": "Provides the raw __path__ of the module."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the raw path of the last module in the list.", "response": "def modpath_last(module, entry_point):\n \"\"\"\n Provides the raw __path__. 
Incompatible with PEP 302-based import\n hooks and incompatible with zip_safe packages.\n\n Deprecated. Will be removed by calmjs-4.0.\n \"\"\"\n\n module_paths = modpath_all(module, entry_point)\n if len(module_paths) > 1:\n logger.info(\n \"module '%s' has multiple paths, default selecting '%s' as base.\",\n module.__name__, module_paths[-1],\n )\n return module_paths[-1:]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngoing through pkg_resources for compliance with various PEPs.", "response": "def modpath_pkg_resources(module, entry_point):\n \"\"\"\n Goes through pkg_resources for compliance with various PEPs.\n\n This one accepts a module as argument.\n \"\"\"\n\n result = []\n try:\n path = resource_filename_mod_entry_point(module.__name__, entry_point)\n except ImportError:\n logger.warning(\"module '%s' could not be imported\", module.__name__)\n except Exception:\n logger.warning(\"%r does not appear to be a valid module\", module)\n else:\n if path:\n result.append(path)\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef mapper(module, entry_point,\n modpath='pkg_resources', globber='root', modname='es6',\n fext=JS_EXT, registry=_utils):\n \"\"\"\n General mapper\n\n Loads components from the micro registry.\n \"\"\"\n\n modname_f = modname if callable(modname) else _utils['modname'][modname]\n\n return {\n modname_f(modname_fragments): join(base, subpath)\n for modname_fragments, base, subpath in modgen(\n module, entry_point=entry_point,\n modpath=modpath, globber=globber,\n fext=fext, registry=_utils)\n }", "response": "General mapper for the micro registry."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef mapper_python(module, entry_point, globber='root', fext=JS_EXT):\n\n return mapper(\n module, entry_point=entry_point, modpath='pkg_resources',\n globber=globber, 
modname='python', fext=fext)", "response": "Default mapper using python style globber"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the ID code in a printable form filling with zeros if needed.", "response": "def _printable_id_code(self):\n \"\"\"\n Returns the code in a printable form, filling with zeros if needed.\n\n :return: the ID code in a printable form\n \"\"\"\n code = str(self.id_code)\n while len(code) < self._code_size:\n code = '0' + code\n\n return code"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the code in a printable form separating it into groups of three characters using a point between them.", "response": "def _printable_id_code(self):\n \"\"\"\n Returns the code in a printable form, separating it into groups of\n three characters using a point between them.\n\n :return: the ID code in a printable form\n \"\"\"\n code = super(ISWCCode, self)._printable_id_code()\n\n code1 = code[:3]\n code2 = code[3:6]\n code3 = code[-3:]\n\n return '%s.%s.%s' % (code1, code2, code3)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef discard(self, s):\n\n lines = s.splitlines(True)\n for line in lines:\n if line[-1] not in '\\r\\n':\n if not self.warn:\n logger.warning(\n 'partial line discard UNSUPPORTED; source map '\n 'generated will not match at the column level'\n )\n self.warn = True\n else:\n # simply increment row\n self.row += 1", "response": "Discard from original file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nwrite a string that is not part of the original file.", "response": "def write_padding(self, s):\n \"\"\"\n Write string that are not part of the original file.\n \"\"\"\n\n lines = s.splitlines(True)\n for line in lines:\n self.stream.write(line)\n if line[-1] in '\\r\\n':\n self._newline()\n else:\n # this is the last line\n self.generated_col += len(line)"} 
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef format_currency(number, currency, format, locale=babel.numbers.LC_NUMERIC,\n force_frac=None, format_type='standard'):\n \"\"\"Same as ``babel.numbers.format_currency``, but has ``force_frac``\n argument instead of ``currency_digits``.\n\n If the ``force_frac`` argument is given, the argument is passed down to\n ``pattern.apply``.\n \"\"\"\n locale = babel.core.Locale.parse(locale)\n if format:\n pattern = babel.numbers.parse_pattern(format)\n else:\n try:\n pattern = locale.currency_formats[format_type]\n except KeyError:\n raise babel.numbers.UnknownCurrencyFormatError(\n \"%r is not a known currency format type\" % format_type)\n if force_frac is None:\n fractions = babel.core.get_global('currency_fractions')\n try:\n digits = fractions[currency][0]\n except KeyError:\n digits = fractions['DEFAULT'][0]\n frac = (digits, digits)\n else:\n frac = force_frac\n return pattern.apply(number, locale, currency=currency, force_frac=frac)", "response": "Same as babel. numbers. 
format_currency but has a force_frac argument instead of currency_digits."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef run(unihan_options={}):\n print(\"This example prints variant character data.\")\n\n c = Cihai()\n if not c.unihan.is_bootstrapped: # download and install Unihan to db\n c.unihan.bootstrap(unihan_options)\n\n c.unihan.add_plugin(\n 'cihai.data.unihan.dataset.UnihanVariants', namespace='variants'\n )\n\n print(\"## ZVariants\")\n variant_list(c.unihan, \"kZVariant\")\n\n print(\"## kSemanticVariant\")\n variant_list(c.unihan, \"kSemanticVariant\")\n\n print(\"## kSpecializedSemanticVariant\")\n variant_list(c.unihan, \"kSpecializedSemanticVariant\")", "response": "This example prints the variant character data."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreflect the database info.", "response": "def reflect_db(self):\n \"\"\"\n No-op to reflect db info.\n\n This is available as a method so the database can be reflected\n outside initialization (such bootstrapping unihan during CLI usage).\n \"\"\"\n self.metadata.reflect(views=True, extend_existing=True)\n self.base = automap_base(metadata=self.metadata)\n self.base.prepare()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse(s, element=Element, atomicstring=lambda s: s):\n\n global Element_, AtomicString_\n\n Element_ = element\n AtomicString_ = atomicstring\n\n s, nodes = parse_exprs(s)\n remove_invisible(nodes)\n nodes = map(remove_private, nodes)\n\n return El('math', El('mstyle', *nodes))", "response": "Parses a string representation of a single element into a tree of elements."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef trace_parser(p):\n\n def nodes_to_string(n):\n if isinstance(n, list):\n result = '[ '\n for m in map(nodes_to_string, n):\n result += m\n result += ' '\n 
result += ']'\n\n return result\n else:\n try:\n return tostring(remove_private(copy(n)))\n except Exception as e:\n return n\n\n def print_trace(*args):\n import sys\n\n sys.stderr.write(\" \" * tracing_level)\n for arg in args:\n sys.stderr.write(str(arg))\n sys.stderr.write(' ')\n sys.stderr.write('\\n')\n sys.stderr.flush()\n\n def wrapped(s, *args, **kwargs):\n global tracing_level\n\n print_trace(p.__name__, repr(s))\n\n tracing_level += 1\n s, n = p(s, *args, **kwargs)\n tracing_level -= 1\n\n print_trace(\"-> \", repr(s), nodes_to_string(n))\n\n return s, n\n\n return wrapped", "response": "Decorator for tracing the parser."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef remove_symbol_from_dist(dist, index):\n '''\n prob is a ndarray representing a probability distribution.\n index is a number between 0 and and the number of symbols ( len(prob)-1 )\n return the probability distribution if the element at 'index' was no longer available\n '''\n if type(dist) is not Distribution:\n raise TypeError(\"remove_symbol_from_dist got an object ot type {0}\".format(type(dist)))\n\n new_prob = dist.prob.copy()\n new_prob[index]=0\n new_prob /= sum(new_prob)\n return Distribution(new_prob)", "response": "Removes a symbol from a probability distribution."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchanges every response in x that matches index by randomly sampling from prob", "response": "def change_response(x, prob, index):\n '''\n change every response in x that matches 'index' by randomly sampling from prob\n '''\n #pdb.set_trace()\n N = (x==index).sum()\n #x[x==index]=9\n x[x==index] = dist.sample(N)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmake a toy example where x is uniformly distributed with N bits and y is uniformly distributed with N bits and y follows x and with symbol dependent noise.", 
"response": "def toy_example():\n \"\"\"\n Make a toy example where x is uniformly distributed with N bits and y\n follows x but with symbol dependent noise.\n x=0 -> y=0\n x=1 -> y=1 + e\n x=2 -> y=2 + 2*e\n ...\n x=n -> y=n + n*e\n where by n*e I am saying that the noise grows\n \"\"\"\n #pdb.set_trace()\n N=4\n m = 100\n x = np.zeros(m*(2**N))\n y = np.zeros(m*(2**N))\n\n for i in range(1, 2**N):\n x[i*m:(i+1)*m] = i\n y[i*m:(i+1)*m] = i + np.random.randint(0, 2*i, m)\n\n diff = differentiate_mi(x,y)\n return x, y, diff"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncomputes the mi of the given x and y", "response": "def differentiate_mi(x, y):\n '''\n for each symbol in x, change x such that there are no more of such symbols\n (replacing by a random distribution with the same proba of all other symbols)\n and compute mi(new_x, y)\n '''\n #pdb.set_trace()\n dist = Distribution(discrete.symbols_to_prob(x))\n\n diff = np.zeros(len(dist.prob))\n\n for i in range(len(dist.prob)):\n i = int(i)\n dist = Distribution(remove_symbol_from_dist(dist, i).prob)\n\n new_x = change_response(x, dist, i)\n\n diff[i] = discrete.mi(x,y)\n\n return diff"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating a random number in [ 0 1 ) and return the index into self. prob where the random number is greater than random_number.", "response": "def sample(self, *args):\n '''\n generate a random number in [0,1) and return the index into self.prob\n such that self.prob[index] <= random_number but self.prob[index+1] > random_number\n\n implementation note: the problem is identical to finding the index into self.cumsum\n where the random number should be inserted to keep the array sorted. This is exactly\n what searchsorted does. 
\n\n usage:\n myDist = Distribution(array(0.5, .25, .25))\n x = myDist.sample() # generates 1 sample\n x = myDist.sample(100) # generates 100 samples\n x = myDist.sample(10,10) # generates a 10x10 ndarray\n '''\n return self.cumsum.searchsorted(np.random.rand(*args))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the default log template for the given log record.", "response": "def default_log_template(self, record):\n \"\"\"Return the prefix for the log message. Template for Formatter.\n\n :param: record: :py:class:`logging.LogRecord` object. this is passed in\n from inside the :py:meth:`logging.Formatter.format` record.\n\n \"\"\"\n\n reset = Style.RESET_ALL\n levelname = [\n LEVEL_COLORS.get(record.levelname),\n Style.BRIGHT,\n '(%(levelname)s)',\n Style.RESET_ALL,\n ' ',\n ]\n asctime = [\n '[',\n Fore.BLACK,\n Style.DIM,\n Style.BRIGHT,\n '%(asctime)s',\n Fore.RESET,\n Style.RESET_ALL,\n ']',\n ]\n name = [\n ' ',\n Fore.WHITE,\n Style.DIM,\n Style.BRIGHT,\n '%(name)s',\n Fore.RESET,\n Style.RESET_ALL,\n ' ',\n ]\n\n tpl = \"\".join(reset + levelname + asctime + name + reset)\n\n return tpl"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef merge_dict(base, additional):\n if base is None:\n return additional\n\n if additional is None:\n return base\n\n if not (\n isinstance(base, collections.Mapping)\n and isinstance(additional, collections.Mapping)\n ):\n return additional\n\n merged = base\n for key, value in additional.items():\n if isinstance(value, collections.Mapping):\n merged[key] = merge_dict(merged.get(key), value)\n else:\n merged[key] = value\n\n return merged", "response": "Combine two dictionary - like objects."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nexpand configuration XDG variables environmental variables and tildes.", "response": "def expand_config(d, dirs):\n \"\"\"\n Expand configuration XDG 
variables, environmental variables, and tildes.\n\n Parameters\n ----------\n d : dict\n config information\n dirs : appdirs.AppDirs\n XDG application mapping\n\n Notes\n -----\n *Environmentable variables* are expanded via :py:func:`os.path.expandvars`.\n So ``${PWD}`` would be replaced by the current PWD in the shell,\n ``${USER}`` would be the user running the app.\n\n *XDG variables* are expanded via :py:meth:`str.format`. These do not have a\n dollar sign. They are:\n\n - ``{user_cache_dir}``\n - ``{user_config_dir}``\n - ``{user_data_dir}``\n - ``{user_log_dir}``\n - ``{site_config_dir}``\n - ``{site_data_dir}``\n\n See Also\n --------\n os.path.expanduser, os.path.expandvars :\n Standard library functions for expanding variables. Same concept, used inside.\n \"\"\"\n context = {\n 'user_cache_dir': dirs.user_cache_dir,\n 'user_config_dir': dirs.user_config_dir,\n 'user_data_dir': dirs.user_data_dir,\n 'user_log_dir': dirs.user_log_dir,\n 'site_config_dir': dirs.site_config_dir,\n 'site_data_dir': dirs.site_data_dir,\n }\n\n for k, v in d.items():\n if isinstance(v, dict):\n expand_config(v, dirs)\n if isinstance(v, string_types):\n d[k] = os.path.expanduser(os.path.expandvars(d[k]))\n d[k] = d[k].format(**context)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef bootstrap_unihan(metadata, options={}):\n options = merge_dict(UNIHAN_ETL_DEFAULT_OPTIONS.copy(), options)\n\n p = unihan.Packager(options)\n p.download()\n data = p.export()\n table = create_unihan_table(UNIHAN_FIELDS, metadata)\n metadata.create_all()\n metadata.bind.execute(table.insert(), data)", "response": "Download extract and import unihan to database."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_bootstrapped(metadata):\n fields = UNIHAN_FIELDS + DEFAULT_COLUMNS\n if TABLE_NAME in metadata.tables.keys():\n table = metadata.tables[TABLE_NAME]\n\n if 
set(fields) == set(c.name for c in table.columns):\n return True\n else:\n return False\n else:\n return False", "response": "Return True if cihai is correctly bootstrapped."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new table with columns and index.", "response": "def create_unihan_table(columns, metadata):\n \"\"\"Create table and return :class:`sqlalchemy.Table`.\n\n Parameters\n ----------\n columns : list\n columns for table, e.g. ``['kDefinition', 'kCantonese']``\n metadata : :class:`sqlalchemy.schema.MetaData`\n Instance of sqlalchemy metadata\n\n Returns\n -------\n :class:`sqlalchemy.schema.Table` :\n Newly created table with columns and index.\n \"\"\"\n\n if TABLE_NAME not in metadata.tables:\n table = Table(TABLE_NAME, metadata)\n\n table.append_column(Column('char', String(12), primary_key=True))\n table.append_column(Column('ucn', String(12), primary_key=True))\n\n for column_name in columns:\n col = Column(column_name, String(256), nullable=True)\n table.append_column(col)\n\n return table\n else:\n return Table(TABLE_NAME, metadata)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_address(pk, main_net=True, prefix=None):\n if isinstance(pk, str):\n pk = unhexlify(pk.encode())\n assert len(pk) == 32, 'PK is 32bytes {}'.format(len(pk))\n k = keccak_256(pk).digest()\n ripe = RIPEMD160.new(k).digest()\n if prefix is None:\n body = (b\"\\x68\" if main_net else b\"\\x98\") + ripe\n else:\n assert isinstance(prefix, bytes), 'Set prefix 1 bytes'\n body = prefix + ripe\n checksum = keccak_256(body).digest()[0:4]\n return b32encode(body + checksum).decode()", "response": "Compute the NEM address from the public key."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef lookup_char(self, char):\n Unihan = self.sql.base.classes.Unihan\n return 
self.sql.session.query(Unihan).filter_by(char=char)", "response": "Return character information from datasets."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn QuerySet of objects from SQLAlchemy of results.", "response": "def reverse_char(self, hints):\n \"\"\"Return QuerySet of objects from SQLAlchemy of results.\n\n Parameters\n ----------\n hints: list of str\n strings to lookup\n\n Returns\n -------\n :class:`sqlalchemy.orm.query.Query` :\n reverse matches\n \"\"\"\n if isinstance(hints, string_types):\n hints = [hints]\n\n Unihan = self.sql.base.classes.Unihan\n columns = Unihan.__table__.columns\n return self.sql.session.query(Unihan).filter(\n or_(*[column.contains(hint) for column in columns for hint in hints])\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef with_fields(self, *fields):\n Unihan = self.sql.base.classes.Unihan\n query = self.sql.session.query(Unihan)\n for field in fields:\n query = query.filter(Column(field).isnot(None))\n return query", "response": "Returns list of characters with information for certain fields."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef entropy(data=None, prob=None, method='nearest-neighbors', bins=None, errorVal=1e-5, units='bits'):\n '''\n given a probability distribution (prob) or an interable of symbols (data) compute and\n return its continuous entropy.\n\n inputs:\n ------\n data: samples by dimensions ndarray\n\n prob: iterable with probabilities\n\n method: 'nearest-neighbors', 'gaussian', or 'bin'\n\n bins: either a list of num_bins, or a list of lists containing\n the bin edges\n\n errorVal: if prob is given, 'entropy' checks that the sum is about 1.\n It raises an error if abs(sum(prob)-1) >= errorVal\n\n units: either 'bits' or 'nats'\n\n Different Methods:\n\n 'nearest-neighbors' computes the binless entropy (bits) of a random 
vector\n using average nearest neighbors distance (Kozachenko and Leonenko, 1987).\n For a review see Beirlant et al., 2001 or Chandler & Field, 2007.\n\n 'gaussian' computes the binless entropy based on estimating the covariance\n matrix and assuming the data is normally distributed.\n\n 'bin' discretizes the data and computes the discrete entropy.\n\n '''\n\n if prob is None and data is None:\n raise ValueError(\"%s.entropy requires either 'prob' or 'data' to be defined\" % __name__)\n\n if prob is not None and data is not None:\n raise ValueError(\"%s.entropy requires only 'prob' or 'data' to be given but not both\" % __name__)\n\n if prob is not None and not isinstance(prob, np.ndarray):\n raise TypeError(\"'entropy' in '%s' needs 'prob' to be an ndarray\" % __name__)\n\n if prob is not None and abs(prob.sum()-1) > errorVal:\n raise ValueError(\"parameter 'prob' in '%s.entropy' should sum to 1\" % __name__)\n\n if data is not None:\n num_samples = data.shape[0]\n if len(data.shape) == 1:\n num_dimensions = 1\n else:\n num_dimensions = data.shape[1]\n\n if method == 'nearest-neighbors':\n from sklearn.neighbors import NearestNeighbors\n from scipy.special import gamma\n\n if data is None:\n raise ValueError('Nearest neighbors entropy requires original data')\n\n if len(data.shape) > 1:\n k = num_dimensions\n else:\n k = 1\n\n nbrs = NearestNeighbors(n_neighbors=2, algorithm='auto').fit(data)\n distances, indices = nbrs.kneighbors(data)\n rho = distances[:,1] # take nearest-neighbor distance (first column is always zero)\n Ak = (k*np.pi**(float(k)/float(2)))/gamma(float(k)/float(2)+1)\n\n if units == 'bits':\n # 0.577215... is the Euler-Mascheroni constant (np.euler_gamma)\n return k*np.mean(np.log2(rho)) + np.log2(num_samples*Ak/k) + np.log2(np.exp(1))*np.euler_gamma\n elif units == 'nats':\n # 0.577215... 
is the Euler-Mascheroni constant (np.euler_gamma)\n return k*np.mean(np.log(rho)) + np.log(num_samples*Ak/k) + np.log(np.exp(1))*np.euler_gamma\n else:\n print('Units not recognized: {}'.format(units))\n\n\n elif method == 'gaussian':\n from numpy.linalg import det\n\n if data is None:\n raise ValueError('Gaussian entropy requires original data')\n\n detCov = det(np.dot(data.transpose(), data)/num_samples)\n normalization = (2*np.pi*np.exp(1))**num_dimensions\n\n if detCov == 0:\n return -np.inf\n else:\n if units == 'bits':\n return 0.5*np.log2(normalization*detCov)\n elif units == 'nats':\n return 0.5*np.log(normalization*detCov)\n else:\n print('Units not recognized: {}'.format(units))\n\n elif method == 'bin':\n if prob is None and bins is None:\n raise ValueError('Either prob or bins must be specified.')\n\n if data is not None:\n prob = symbols_to_prob(data, bins=bins)\n\n if units == 'bits':\n # compute the log2 of the probability and change any -inf by 0s\n logProb = np.log2(prob)\n logProb[logProb == -np.inf] = 0\n elif units == 'nats':\n # compute the natural log of the probability and change any -inf by 0s\n logProb = np.log(prob)\n logProb[logProb == -np.inf] = 0\n else:\n print('Units not recognized: {}'.format(units))\n\n # return sum of product of logProb and prob\n # (not using np.dot here because prob, logprob are nd arrays)\n return -float(np.sum(prob * logProb))", "response": "Compute and return continuous entropy of a random vector."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef mi(x, y, bins_x=None, bins_y=None, bins_xy=None, method='nearest-neighbors', units='bits'):\n '''\n compute and return the mutual information between x and y\n\n inputs:\n -------\n x, y: numpy arrays of shape samples x dimension\n method: 'nearest-neighbors', 'gaussian', or 'bin'\n units: 'bits' or 'nats'\n\n output:\n -------\n mi: float\n\n Notes:\n ------\n if you are trying to mix several symbols 
together as in mi(x, (y0,y1,...)), try\n\n info[p] = _info.mi(x, info.combine_symbols(y0, y1, ...) )\n '''\n # dict.values() returns a view object that has to be converted to a list before being\n # converted to an array\n # the following lines will execute properly in python3, but not python2 because there\n # is no zip object\n try:\n if isinstance(x, zip):\n x = list(x)\n if isinstance(y, zip):\n y = list(y)\n except:\n pass\n\n # wrapped in try bracket because x, y might have no .shape attribute\n try:\n # handling for 1d np arrays\n if len(x.shape) == 1:\n x = np.expand_dims(x, 1)\n if len(y.shape) == 1:\n y = np.expand_dims(y, 1)\n except:\n pass\n\n HX = entropy(data=x, bins=bins_x, method=method, units=units)\n HY = entropy(data=y, bins=bins_y, method=method, units=units)\n HXY = entropy(data=np.concatenate([x, y], axis=1), bins=bins_xy, method=method, units=units)\n\n return HX + HY - HXY", "response": "compute the mutual information between x and y"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cond_entropy(x, y, bins_y=None, bins_xy=None, method='nearest-neighbors', units='bits'):\n '''\n compute the conditional entropy H(X|Y).\n\n method: 'nearest-neighbors', 'gaussian', or 'bin'\n if 'bin' need to provide bins_y, and bins_xy\n units: 'bits' or 'nats'\n '''\n HXY = entropy(data=np.concatenate([x, y], axis=1), bins=bins_xy, method=method, units=units)\n HY = entropy(data=y, bins=bins_y, method=method, units=units)\n\n return HXY - HY", "response": "compute the conditional entropy of two arrays x and y"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncompute and return the entropy of a single object", "response": "def entropy(data=None, prob=None, tol=1e-5):\n '''\n given a probability distribution (prob) or an interable of symbols (data) compute and\n return its entropy\n\n inputs:\n ------\n data: iterable of symbols\n\n prob: iterable with probabilities\n\n tol: if prob 
is given, 'entropy' checks that the sum is about 1.\n It raises an error if abs(sum(prob)-1) >= tol\n '''\n\n if prob is None and data is None:\n raise ValueError(\"%s.entropy requires either 'prob' or 'data' to be defined\" % __name__)\n\n if prob is not None and data is not None:\n raise ValueError(\"%s.entropy requires only 'prob' or 'data' to be given but not both\" % __name__)\n\n if prob is not None and not isinstance(prob, np.ndarray):\n raise TypeError(\"'entropy' in '%s' needs 'prob' to be an ndarray\" % __name__)\n\n if prob is not None and abs(prob.sum()-1) > tol:\n raise ValueError(\"parameter 'prob' in '%s.entropy' should sum to 1\" % __name__)\n\n if data is not None:\n prob = symbols_to_prob(data).prob()\n\n # compute the log2 of the probability and change any -inf by 0s\n logProb = np.log2(prob)\n logProb[logProb == -np.inf] = 0\n\n # return dot product of logProb and prob\n return -float(np.dot(prob, logProb))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef symbols_to_prob(symbols):\n '''\n Return a dict mapping symbols to probability.\n\n input:\n -----\n symbols: iterable of hashable items\n works well if symbols is a zip of iterables\n '''\n myCounter = Counter(symbols)\n\n # symbols might be a zip object in python 3, already consumed by Counter,\n # so count from the Counter rather than from len(list(symbols))\n N = float(sum(myCounter.values()))\n\n for k in myCounter:\n myCounter[k] /= N\n\n return myCounter", "response": "Returns a dict mapping symbols to probability."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncombine different symbols into a super - symbol.", "response": "def combine_symbols(*args):\n '''\n Combine different symbols into a 'super'-symbol\n\n args can be an iterable of iterables that support hashing\n\n see example for 2D ndarray input\n\n usage:\n 1) combine two symbols, each a number into just one symbol\n x = numpy.random.randint(0,4,1000)\n y = numpy.random.randint(0,2,1000)\n z = combine_symbols(x,y)\n\n 2) combine a 
letter and a number\n s = 'abcd'\n x = numpy.random.randint(0,4,1000)\n y = [s[randint(4)] for i in range(1000)]\n z = combine_symbols(x,y)\n\n 3) suppose you are running an experiment and for each sample, you measure 3 different\n properties and you put the data into a 2d ndarray such that:\n samples_N, properties_N = data.shape\n\n and you want to combine all 3 different properties into just 1 symbol\n In this case you have to find a way to impute each property as an independent array\n\n combined_symbol = combine_symbols(*data.T)\n\n\n 4) if data from 3) is such that:\n properties_N, samples_N = data.shape\n\n then run:\n\n combined_symbol = combine_symbols(*data)\n\n '''\n for arg in args:\n if len(arg)!=len(args[0]):\n raise ValueError(\"combine_symbols got inputs with different sizes\")\n\n return tuple(zip(*args))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef mi(x, y):\n '''\n compute and return the mutual information between x and y\n\n inputs:\n -------\n x, y: iterables of hashable items\n\n output:\n -------\n mi: float\n\n Notes:\n ------\n if you are trying to mix several symbols together as in mi(x, (y0,y1,...)), try\n\n info[p] = _info.mi(x, info.combine_symbols(y0, y1, ...) 
)\n '''\n # dict.values() returns a view object that has to be converted to a list before being\n # converted to an array\n # the following lines will execute properly in python3, but not python2 because there\n # is no zip object\n try:\n if isinstance(x, zip):\n x = list(x)\n if isinstance(y, zip):\n y = list(y)\n except:\n pass\n\n probX = symbols_to_prob(x).prob()\n probY = symbols_to_prob(y).prob()\n probXY = symbols_to_prob(combine_symbols(x, y)).prob()\n\n return entropy(prob=probX) + entropy(prob=probY) - entropy(prob=probXY)", "response": "compute and return the mutual information between x and y"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef cond_mi(x, y, z):\n '''\n compute and return the mutual information between x and y given z, I(x, y | z)\n\n inputs:\n -------\n x, y, z: iterables with discrete symbols\n\n output:\n -------\n mi: float\n\n implementation notes:\n ---------------------\n I(x, y | z) = H(x | z) - H(x | y, z)\n = H(x, z) - H(z) - ( H(x, y, z) - H(y,z) )\n = H(x, z) + H(y, z) - H(z) - H(x, y, z)\n '''\n # dict.values() returns a view object that has to be converted to a list before being converted to an array\n probXZ = symbols_to_prob(combine_symbols(x, z)).prob()\n probYZ = symbols_to_prob(combine_symbols(y, z)).prob()\n probXYZ = symbols_to_prob(combine_symbols(x, y, z)).prob()\n probZ = symbols_to_prob(z).prob()\n\n return entropy(prob=probXZ) + entropy(prob=probYZ) - entropy(prob=probXYZ) - entropy(prob=probZ)", "response": "compute and return the mutual information between x and y given z"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef mi_chain_rule(X, y):\n '''\n Decompose the information between all X and y according to the chain rule and return all the terms in the chain rule.\n\n Inputs:\n -------\n X: iterable of iterables. 
You should be able to compute [mi(x, y) for x in X]\n\n y: iterable of symbols\n\n output:\n -------\n ndarray: terms of chaing rule\n\n Implemenation notes:\n I(X; y) = I(x0, x1, ..., xn; y)\n = I(x0; y) + I(x1;y | x0) + I(x2; y | x0, x1) + ... + I(xn; y | x0, x1, ..., xn-1)\n '''\n\n # allocate ndarray output\n chain = np.zeros(len(X))\n\n # first term in the expansion is not a conditional information, but the information between the first x and y\n chain[0] = mi(X[0], y)\n\n for i in range(1, len(X)):\n chain[i] = cond_mi(X[i], y, X[:i])\n\n return chain", "response": "Decompose the information between all X and y according to the chain rule and return all the terms in the chain rule."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompute the KL divergence between distributions P and Q where P and Q should be dictionaries linking symbols to probabilities.", "response": "def KL_divergence(P,Q):\n '''\n Compute the KL divergence between distributions P and Q\n \n P and Q should be dictionaries linking symbols to probabilities.\n the keys to P and Q should be the same.\n '''\n assert(P.keys()==Q.keys())\n \n distance = 0\n for k in P.keys():\n distance += P[k] * log(P[k]/Q[k])\n\n return distance"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef bin(x, bins, maxX=None, minX=None):\n '''\n bin signal x using 'binsN' bin. If minX, maxX are None, they default to the full\n range of the signal. 
If they are not None, everything above maxX gets assigned to\n binsN-1 and everything below minX gets assigned to 0, this is effectively the same\n as clipping x before passing it to 'bin'\n\n input:\n -----\n x: signal to be binned, some sort of iterable\n\n bins: int, number of bins\n iterable, bin edges\n\n maxX: clips data above maxX\n\n minX: clips data below minX\n\n output:\n ------\n binnedX: x after being binned\n\n bins: bins used for binning.\n if input 'bins' is already an iterable it just returns the\n same iterable\n\n example:\n # make 10 bins of equal length spanning from x.min() to x.max()\n bin(x, 10)\n\n # use predefined bins such that each bin has the same number of points (maximize\n entropy)\n binsN = 10\n percentiles = list(np.arange(0, 100.1, 100/binsN))\n bins = np.percentile(x, percentiles)\n bin(x, bins)\n '''\n if maxX is None:\n maxX = x.max()\n\n if minX is None:\n minX = x.min()\n\n if not np.iterable(bins):\n bins = np.linspace(minX, maxX+1e-5, bins+1)\n\n # digitize works on 1d array but not nd arrays.\n # So I pass the flattened version of x and then reshape back into x's original shape\n return np.digitize(x.ravel(), bins).reshape(x.shape), bins", "response": "bin signal x using binsN bins"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetermining the URL corresponding to the Python object in the specified domain.", "response": "def linkcode_resolve(domain, info): # NOQA: C901\n \"\"\"\n Determine the URL corresponding to Python object\n\n Notes\n -----\n From https://github.com/numpy/numpy/blob/v1.15.1/doc/source/conf.py, 7c49cfa\n on Jul 31. License BSD-3. 
https://github.com/numpy/numpy/blob/v1.15.1/LICENSE.txt\n \"\"\"\n if domain != 'py':\n return None\n\n modname = info['module']\n fullname = info['fullname']\n\n submod = sys.modules.get(modname)\n if submod is None:\n return None\n\n obj = submod\n for part in fullname.split('.'):\n try:\n obj = getattr(obj, part)\n except Exception:\n return None\n\n # strip decorators, which would resolve to the source of the decorator\n # possibly an upstream bug in getsourcefile, bpo-1764286\n try:\n unwrap = inspect.unwrap\n except AttributeError:\n pass\n else:\n obj = unwrap(obj)\n\n try:\n fn = inspect.getsourcefile(obj)\n except Exception:\n fn = None\n if not fn:\n return None\n\n try:\n source, lineno = inspect.getsourcelines(obj)\n except Exception:\n lineno = None\n\n if lineno:\n linespec = \"#L%d-L%d\" % (lineno, lineno + len(source) - 1)\n else:\n linespec = \"\"\n\n fn = relpath(fn, start=dirname(cihai.__file__))\n\n if 'dev' in about['__version__']:\n return \"%s/blob/master/%s/%s%s\" % (\n about['__github__'],\n about['__package_name__'],\n fn,\n linespec,\n )\n else:\n return \"%s/blob/v%s/%s/%s%s\" % (\n about['__github__'],\n about['__version__'],\n about['__package_name__'],\n fn,\n linespec,\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef inv(z):\n # Adapted from curve25519_athlon.c in djb's Curve25519.\n z = qdiv(z)\n z2 = z * z % PRIME # 2\n z9 = pow2(z2, 2) * z % PRIME # 9\n z11 = z9 * z2 % PRIME # 11\n z2_5_0 = (z11*z11) % PRIME * z9 % PRIME # 31 == 2^5 - 2^0\n z2_10_0 = pow2(z2_5_0, 5) * z2_5_0 % PRIME # 2^10 - 2^0\n z2_20_0 = pow2(z2_10_0, 10) * z2_10_0 % PRIME # ...\n z2_40_0 = pow2(z2_20_0, 20) * z2_20_0 % PRIME\n z2_50_0 = pow2(z2_40_0, 10) * z2_10_0 % PRIME\n z2_100_0 = pow2(z2_50_0, 50) * z2_50_0 % PRIME\n z2_200_0 = pow2(z2_100_0, 100) * z2_100_0 % PRIME\n z2_250_0 = pow2(z2_200_0, 50) * z2_50_0 % PRIME # 2^250 - 2^0\n return pow2(z2_250_0, 5) * z11 % PRIME", "response": "Inverse of the 
curve25519 curve."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef scalarmult_B(e):\n # scalarmult(B, l) is the identity\n e %= L\n P = IDENT\n for i in range(253):\n if e & 1:\n P = edwards_add(P=P, Q=Bpow[i])\n e //= 2\n assert e == 0, e\n return P", "response": "Implements scalarmult B more efficiently."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef recover(y):\n p = (y*y - 1) * inverse(D*y*y + 1)\n x = powmod(p, (PRIME+3) // 8, PRIME)\n if (x*x - p) % PRIME != 0:\n i = powmod(2, (PRIME-1) // 4, PRIME)\n x = (x*i) % PRIME\n if x % 2 != 0:\n x = PRIME - x\n return x", "response": "given a value y recover the preimage x"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef __rubberband(y, sr, **kwargs):\n '''Execute rubberband\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,) or (n, c)]\n Audio time series, either single or multichannel\n\n sr : int > 0\n sampling rate of y\n\n **kwargs\n keyword arguments to rubberband\n\n Returns\n -------\n y_mod : np.ndarray [shape=(n,) or (n, c)]\n `y` after rubberband transformation\n\n '''\n\n assert sr > 0\n\n # Get the input and output tempfile\n fd, infile = tempfile.mkstemp(suffix='.wav')\n os.close(fd)\n fd, outfile = tempfile.mkstemp(suffix='.wav')\n os.close(fd)\n\n # dump the audio\n sf.write(infile, y, sr)\n\n try:\n # Execute rubberband\n arguments = [__RUBBERBAND_UTIL, '-q']\n\n for key, value in six.iteritems(kwargs):\n arguments.append(str(key))\n arguments.append(str(value))\n\n arguments.extend([infile, outfile])\n\n subprocess.check_call(arguments, stdout=DEVNULL, stderr=DEVNULL)\n\n # Load the processed audio.\n y_out, _ = sf.read(outfile, always_2d=True)\n\n # make sure that output dimensions matches input\n if y.ndim == 1:\n y_out = np.squeeze(y_out)\n\n except OSError as exc:\n 
six.raise_from(RuntimeError('Failed to execute rubberband. '\n 'Please verify that rubberband-cli '\n 'is installed.'),\n exc)\n\n finally:\n # Remove temp files\n os.unlink(infile)\n os.unlink(outfile)\n\n return y_out", "response": "Execute rubberband on the input time series y and return the output time series y after rubberband transformation"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\napplying a time stretch of rate to an audio time series.", "response": "def time_stretch(y, sr, rate, rbargs=None):\n '''Apply a time stretch of `rate` to an audio time series.\n\n This uses the `tempo` form for rubberband, so the\n higher the rate, the faster the playback.\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,) or (n, c)]\n Audio time series, either single or multichannel\n\n sr : int > 0\n Sampling rate of `y`\n\n rate : float > 0\n Desired playback rate.\n\n rbargs\n Additional keyword parameters for rubberband\n\n See `rubberband -h` for details.\n\n Returns\n -------\n y_stretch : np.ndarray\n Time-stretched audio\n\n Raises\n ------\n ValueError\n if `rate <= 0`\n '''\n\n if rate <= 0:\n raise ValueError('rate must be strictly positive')\n\n if rate == 1.0:\n return y\n\n if rbargs is None:\n rbargs = dict()\n\n rbargs.setdefault('--tempo', rate)\n\n return __rubberband(y, sr, **rbargs)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef timemap_stretch(y, sr, time_map, rbargs=None):\n '''Apply a timemap stretch to an audio time series.\n\n A timemap stretch allows non-linear time-stretching by mapping source to\n target sample frame numbers for fixed time points within the audio data.\n\n This uses the `time` and `timemap` form for rubberband.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,) or (n, c)]\n Audio time series, either single or multichannel\n\n sr : int > 0\n Sampling rate of `y`\n\n time_map : list\n Each element is a tuple `t` of length 2 which 
corresponds to the\n source sample position and target sample position.\n\n If `t[1] < t[0]` the track will be sped up in this area.\n\n `time_map[-1]` must correspond to the lengths of the source audio and\n target audio.\n\n rbargs\n Additional keyword parameters for rubberband\n\n See `rubberband -h` for details.\n\n Returns\n -------\n y_stretch : np.ndarray\n Time-stretched audio\n\n Raises\n ------\n ValueError\n if `time_map` is not monotonic\n if `time_map` is not non-negative\n if `time_map[-1][0]` is not the input audio length\n '''\n\n if rbargs is None:\n rbargs = dict()\n\n is_positive = all(time_map[i][0] >= 0 and time_map[i][1] >= 0\n for i in range(len(time_map)))\n is_monotonic = all(time_map[i][0] <= time_map[i+1][0] and\n time_map[i][1] <= time_map[i+1][1]\n for i in range(len(time_map)-1))\n if not is_positive:\n raise ValueError('time_map should be non-negative')\n\n if not is_monotonic:\n raise ValueError('time_map is not monotonic')\n\n if time_map[-1][0] != len(y):\n raise ValueError('time_map[-1] should correspond to the last sample')\n\n time_stretch = time_map[-1][1] * 1.0 / time_map[-1][0]\n rbargs.setdefault('--time', time_stretch)\n\n stretch_file = tempfile.NamedTemporaryFile(mode='w', suffix='.txt',\n delete=False)\n try:\n for t in time_map:\n stretch_file.write('{:0} {:1}\\n'.format(t[0], t[1]))\n stretch_file.close()\n\n rbargs.setdefault('--timemap', stretch_file.name)\n y_stretch = __rubberband(y, sr, **rbargs)\n finally:\n # Remove temp file\n os.unlink(stretch_file.name)\n\n return y_stretch", "response": "Apply a timemap stretch to an audio time series."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\napply a pitch shift to an audio time series.", "response": "def pitch_shift(y, sr, n_steps, rbargs=None):\n '''Apply a pitch shift to an audio time series.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,) or (n, c)]\n Audio time series, either single or multichannel\n\n sr : int > 0\n 
Sampling rate of `y`\n\n n_steps : float\n Shift by `n_steps` semitones.\n\n rbargs\n Additional keyword parameters for rubberband\n\n See `rubberband -h` for details.\n\n Returns\n -------\n y_shift : np.ndarray\n Pitch-shifted audio\n '''\n\n if n_steps == 0:\n return y\n\n if rbargs is None:\n rbargs = dict()\n\n rbargs.setdefault('--pitch', n_steps)\n\n return __rubberband(y, sr, **rbargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert a kuten string to GB2312 - 1980 hex", "response": "def kuten_to_gb2312(kuten):\n \"\"\"\n Convert GB kuten / quwei form (94 zones * 94 points) to GB2312-1980 /\n ISO-2022-CN hex (internal representation)\n \"\"\"\n zone, point = int(kuten[:2]), int(kuten[2:])\n hi, lo = hexd(zone + 0x20), hexd(point + 0x20)\n\n gb2312 = \"%s%s\" % (hi, lo)\n\n assert isinstance(gb2312, bytes)\n return gb2312"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef gb2312_to_euc(gb2312hex):\n hi, lo = int(gb2312hex[:2], 16), int(gb2312hex[2:], 16)\n hi, lo = hexd(hi + 0x80), hexd(lo + 0x80)\n\n euc = \"%s%s\" % (hi, lo)\n assert isinstance(euc, bytes)\n return euc", "response": "Convert a GB2312 - 1980 hex string to EUC - CN hex string"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef euc_to_python(hexstr):\n hi = hexstr[0:2]\n lo = hexstr[2:4]\n gb_enc = b'\\\\x' + hi + b'\\\\x' + lo\n return gb_enc.decode(\"gb2312\")", "response": "Convert a EUC - CN hex string to a Python unicode string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef euc_to_utf8(euchex):\n utf8 = euc_to_python(euchex).encode(\"utf-8\")\n uf8 = utf8.decode('unicode_escape')\n\n uf8 = uf8.encode('latin1')\n\n uf8 = uf8.decode('euc-jp')\n return uf8", "response": "Convert EUC hex to UTF8 hex."} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate the documentation for the following Python 3 function\ndef ucn_to_unicode(ucn):\n if isinstance(ucn, string_types):\n ucn = ucn.strip(\"U+\")\n if len(ucn) > int(4):\n char = b'\\U' + format(int(ucn, 16), '08x').encode('latin1')\n char = char.decode('unicode_escape')\n else:\n char = unichr(int(ucn, 16))\n else:\n char = unichr(ucn)\n\n assert isinstance(char, text_type)\n\n return char", "response": "Convert a Unicode Universal Character Number to a Python unicode character."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef euc_to_unicode(hexstr):\n hi = hexstr[0:2]\n lo = hexstr[2:4]\n # hi and lo are only 2 characters long, no risk with eval-ing them\n\n gb_enc = b'\\\\x' + hi + b'\\\\x' + lo\n assert isinstance(gb_enc, bytes)\n\n # Requires coercing back to text_type in 2.7\n gb_enc = gb_enc.decode('unicode_escape')\n\n gb_enc = gb_enc.encode('latin1')\n\n gb_enc = gb_enc.decode('gb2312')\n\n assert isinstance(gb_enc, text_type)\n return gb_enc", "response": "Convert an EUC - CN hex string to a Python unicode string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef python_to_ucn(uni_char, as_bytes=False):\n ucn = uni_char.encode('unicode_escape').decode('latin1')\n ucn = text_type(ucn).replace('\\\\', '').upper().lstrip('U')\n if len(ucn) > int(4):\n # get rid of the zeroes that Python uses to pad 32 byte UCNs\n ucn = ucn.lstrip(\"0\")\n ucn = \"U+\" + ucn.upper()\n\n if as_bytes:\n ucn = ucn.encode('latin1')\n\n return ucn", "response": "Converts a Python Unicode character to a UCN character."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef python_to_euc(uni_char, as_bytes=False):\n euc = repr(uni_char.encode(\"gb2312\"))[1:-1].replace(\"\\\\x\", \"\").strip(\"'\")\n\n if as_bytes:\n euc = euc.encode('utf-8')\n assert isinstance(euc, bytes)\n\n return euc", 
"response": "Converts a Python Unicode character to an EUC character."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning ucnstring as Unicode.", "response": "def ucnstring_to_unicode(ucn_string):\n \"\"\"Return ucnstring as Unicode.\"\"\"\n ucn_string = ucnstring_to_python(ucn_string).decode('utf-8')\n\n assert isinstance(ucn_string, text_type)\n return ucn_string"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ucnstring_to_python(ucn_string):\n res = re.findall(\"U\\+[0-9a-fA-F]*\", ucn_string)\n for r in res:\n ucn_string = ucn_string.replace(text_type(r), text_type(ucn_to_unicode(r)))\n\n ucn_string = ucn_string.encode('utf-8')\n\n assert isinstance(ucn_string, bytes)\n return ucn_string", "response": "Convert Unicode UCN to native Python Unicode Unicode\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_var(var):\n bits = var.split(\"<\", 1)\n if len(bits) < 2:\n tag = None\n else:\n tag = bits[1]\n return ucn_to_unicode(bits[0]), tag", "response": "Parse a UCN - style variable into a tuple consisting of a string and a tag."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a Cihai instance from a JSON or YAML config file.", "response": "def from_file(cls, config_path=None, *args, **kwargs):\n \"\"\"\n Create a Cihai instance from a JSON or YAML config.\n\n Parameters\n ----------\n config_path : str, optional\n path to custom config file\n\n Returns\n -------\n :class:`Cihai` :\n application object\n \"\"\"\n\n config_reader = kaptan.Kaptan()\n\n config = {}\n\n if config_path:\n if not os.path.exists(config_path):\n raise exc.CihaiException(\n '{0} does not exist.'.format(os.path.abspath(config_path))\n )\n if not any(\n config_path.endswith(ext) for ext in ('json', 'yml', 'yaml', 'ini')\n ):\n raise exc.CihaiException(\n '{0} 
does not have a yaml,yml,json,ini extend.'.format(\n os.path.abspath(config_path)\n )\n )\n else:\n custom_config = config_reader.import_config(config_path).get()\n config = merge_dict(config, custom_config)\n\n return cls(config)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _process_locale(self, locale):\n\n if locale.lower().startswith('en'):\n return False\n\n return (locale in self.enabled_locales or\n\n self.reverse_locale_map.get(locale.lower(), None)\n in self.enabled_locales or\n\n locale in self.lower_locales or\n\n self.reverse_locale_map.get(locale.lower(), None)\n in self.lower_locales\n )", "response": "Return True if this locale should be processed."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the Desk - style locale for locale.", "response": "def desk_locale(self, locale):\n \"\"\"Return the Desk-style locale for locale.\"\"\"\n\n locale = locale.lower().replace('-', '_')\n return self.vendor_locale_map.get(locale, locale)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef push(self):\n\n tx = Tx(self.tx_project_slug)\n\n # asssemble the template catalog\n template = babel.messages.catalog.Catalog()\n for topic in self.desk.topics():\n if topic.show_in_portal:\n template.add(topic.name)\n\n # serialize the catalog as a PO file\n template_po = StringIO()\n babel.messages.pofile.write_po(template_po, template)\n\n # upload/update the catalog resource\n tx.create_or_update_resource(\n self.TOPIC_STRINGS_SLUG,\n DEFAULT_SOURCE_LANGUAGE,\n \"Help Center Topics\",\n template_po.getvalue(),\n i18n_type='PO',\n project_slug=self.tx_project_slug,\n )", "response": "Push topics to Transifex."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pull(self):\n\n topic_stats = txlib.api.statistics.Statistics.get(\n 
project_slug=self.tx_project_slug,\n resource_slug=self.TOPIC_STRINGS_SLUG,\n )\n\n translated = {}\n\n # for each language\n for locale in self.enabled_locales:\n\n if not self._process_locale(locale):\n continue\n\n locale_stats = getattr(topic_stats, locale, None)\n if locale_stats is None:\n self.log.debug('Locale %s not present when pulling topics.' %\n (locale,))\n continue\n\n if locale_stats['completed'] == '100%':\n # get the resource from Tx\n translation = txlib.api.translations.Translation.get(\n project_slug=self.tx_project_slug,\n slug=self.TOPIC_STRINGS_SLUG,\n lang=locale,\n )\n\n translated[locale] = babel.messages.pofile.read_po(\n StringIO(translation.content.encode('utf-8'))\n )\n\n # now that we've pulled everything from Tx, upload to Desk\n for topic in self.desk.topics():\n\n for locale in translated:\n\n if topic.name in translated[locale]:\n\n self.log.debug(\n 'Updating topic (%s) for locale (%s)' %\n (topic.name, locale),\n )\n\n if locale in topic.translations:\n topic.translations[locale].update(\n name=translated[locale][topic.name].string,\n )\n else:\n topic.translations.create(\n locale=locale,\n name=translated[locale][topic.name].string,\n )\n else:\n\n self.log.error(\n 'Topic name (%s) does not exist in locale (%s)' %\n (topic['name'], locale),\n )", "response": "Pulls topics from Transifex and uploads them to Desk."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_resource_document(self, title, content, tags=[],):\n\n assert \"\" not in content\n assert \"\" not in content\n\n return \"\"\"\n \n %(title)s\n \n %(content)s\n \n \"\"\" % dict(\n title=title,\n content=content,\n )", "response": "Return a single HTML document containing the title and content."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing the resource document.", "response": "def parse_resource_document(self, content):\n \"\"\"Return a dict with the keys title, content, tags 
for content.\"\"\"\n\n content = content.strip()\n\n if not content.startswith('<html>'):\n # this is not a full HTML doc, probably content w/o title, tags, etc\n return dict(body=content)\n\n result = {}\n if '<title>' in content and '</title>' in content:\n result['subject'] = content[content.find('<title>') + 7:content.find('</title>')].strip()\n result['body'] = content[content.find('<body>') + 6:content.find('</body>')].strip()\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npushing tutorials to Transifex.", "response": "def push(self):\n \"\"\"Push tutorials to Transifex.\"\"\"\n\n tx = Tx(self.tx_project_slug)\n\n if self.options.resources:\n articles = [\n self.desk.articles().by_id(r.strip())\n for r in self.options.resources.split(',')\n ]\n else:\n articles = self.desk.articles()\n\n for a in articles:\n\n self.log.debug(\n 'Inspecting Desk resource %s', a.api_href\n )\n\n for translation in a.translations.items().values():\n our_locale = self.desk_to_our_locale(translation.locale)\n\n self.log.debug('Checking locale %s', translation.locale)\n\n if not self._process_locale(translation.locale):\n self.log.debug('Skipping locale.')\n continue\n\n # make sure the project exists in Tx\n tx.get_project(our_locale)\n\n a_id = a.api_href.rsplit('/', 1)[1]\n if (self.options.force or\n not tx.resource_exists(a_id, our_locale) or\n translation.outdated\n ):\n self.log.info('Resource %(id)s out of date in %(locale)s; updating.' 
%\n {'id': a_id,\n 'locale': our_locale,\n },\n )\n\n tx.create_or_update_resource(\n a_id,\n our_locale,\n self.make_resource_title(a),\n self.make_resource_document(a.subject, a.body),\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget or create a Transifex project for the current project prefix and locale", "response": "def get_project(self, locale, source_language_code=DEFAULT_SOURCE_LANGUAGE, **kwargs):\n \"\"\"\n Gets or creates the Transifex project for the current project prefix and locale\n\n :param locale: A locale to which content is to be translated\n :type locale: string\n :param source_language_code: The language of the original untranslated content (e.g. Spanish),\n defaults to DEFAULT_SOURCE_LANGUAGE, which is English\n :type source_language_code: string, optional\n :return: The Transifex project to which resources can be pushed or pulled\n :rtype: project.Project\n \"\"\"\n\n try:\n locale_project = project.Project.get(slug=self.get_project_slug(locale))\n\n except NotFoundError:\n\n locale_project = project.Project(\n slug=self.get_project_slug(locale),\n )\n defaults = {\n 'name': 'Help Center (%s)' % (locale, ),\n 'description': 'Help Center pages to translate to %s' % (\n locale,\n ),\n 'source_language_code': source_language_code,\n 'private': True,\n }\n\n valid_keys = ('name', 'description')\n defaults.update(\n dict((k, v) for k, v in kwargs.items() if k in valid_keys)\n )\n\n for k, v in defaults.items():\n setattr(locale_project, k, v)\n\n locale_project.save()\n\n return locale_project"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns True if the translation exists for this slug.", "response": "def translation_exists(self, slug, lang):\n \"\"\"Return True if the translation exists for this slug.\"\"\"\n\n try:\n return translations.Translation.get(\n project_slug=self.get_project_slug(lang),\n slug=slug,\n lang=lang,\n )\n\n except (NotFoundError, 
RemoteServerError):\n pass\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a sequence of resources for a given lang.", "response": "def list_resources(self, lang):\n \"\"\"Return a sequence of resources for a given lang.\n\n Each Resource is a dict containing the slug, name, i18n_type,\n source_language_code and the category.\n \"\"\"\n\n return registry.registry.http_handler.get(\n '/api/2/project/%s/resources/' % (\n self.get_project_slug(lang),)\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef resources(self, lang, slug):\n\n resource = resources.Resource.get(\n project_slug=self.get_project_slug(lang),\n slug=slug,\n )\n\n return resource", "response": "Generate a list of Resources in the Project."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning True if a Resource with the given slug exists in locale.", "response": "def resource_exists(self, slug, locale, project_slug=None):\n \"\"\"Return True if a Resource with the given slug exists in locale.\"\"\"\n\n try:\n resource = resources.Resource.get(\n project_slug=project_slug or self.get_project_slug(locale),\n slug=slug,\n )\n\n return resource\n\n except NotFoundError:\n pass\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndoing HTTP request and return response as a string.", "response": "def session_request(session, url, **kwargs):\n \"\"\"Do HTTP/S request and return response as a string.\"\"\"\n try:\n response = session(url, **kwargs)\n\n response.raise_for_status()\n\n return response.text\n\n except requests.exceptions.HTTPError as errh:\n _LOGGER.debug(\"%s, %s\", response, errh)\n raise_error(response.status_code)\n\n except requests.exceptions.ConnectionError as errc:\n _LOGGER.debug(\"%s\", errc)\n raise RequestError(\"Connection error: {}\".format(errc))\n\n except 
requests.exceptions.Timeout as errt:\n _LOGGER.debug(\"%s\", errt)\n raise RequestError(\"Timeout: {}\".format(errt))\n\n except requests.exceptions.RequestException as err:\n _LOGGER.debug(\"%s\", err)\n raise RequestError(\"Unknown error: {}\".format(err))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_event_list(config):\n eventinstances = session_request(\n config.session.post, device_event_url.format(\n proto=config.web_proto, host=config.host, port=config.port),\n auth=config.session.auth, headers=headers, data=request_xml)\n\n raw_event_list = _prepare_event(eventinstances)\n\n event_list = {}\n for entry in MAP + METAMAP:\n instance = raw_event_list\n try:\n for item in sum(entry[MAP_BASE].values(), []):\n instance = instance[item]\n except KeyError:\n continue\n event_list[entry[MAP_TYPE]] = instance\n\n return event_list", "response": "Get a dict of supported events from device."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert an event instance list to a relevant dictionary.", "response": "def _prepare_event(eventinstances):\n \"\"\"Converts event instances to a relevant dictionary.\"\"\"\n import xml.etree.ElementTree as ET\n\n def parse_event(events):\n \"\"\"Find all events inside of an topicset list.\n\n MessageInstance signals that subsequent children will\n contain source and data descriptions.\n \"\"\"\n\n def clean_attrib(attrib={}):\n \"\"\"Clean up child attributes by removing XML namespace.\"\"\"\n attributes = {}\n for key, value in attrib.items():\n attributes[key.split('}')[-1]] = value\n return attributes\n\n description = {}\n for child in events:\n child_tag = child.tag.split('}')[-1]\n child_attrib = clean_attrib(child.attrib)\n if child_tag != 'MessageInstance':\n description[child_tag] = {\n **child_attrib, **parse_event(child)}\n elif child_tag == 'MessageInstance':\n description = {}\n for item in child:\n tag = item.tag.split('}')[-1]\n 
description[tag] = clean_attrib(item[0].attrib)\n return description\n\n root = ET.fromstring(eventinstances)\n return parse_event(root[0][0][0])"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef url(self):\n return URL.format(http=self.web_proto, host=self.host, port=self.port)", "response": "Represent device base url."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprocessing the raw dict.", "response": "def process_raw(self, raw: dict) -> None:\n \"\"\"Pre-process raw dict.\n\n Prepare parameters to work with APIItems.\n \"\"\"\n raw_ports = {}\n\n for param in raw:\n port_index = REGEX_PORT_INDEX.search(param).group(0)\n\n if port_index not in raw_ports:\n raw_ports[port_index] = {}\n\n name = param.replace(IOPORT + '.I' + port_index + '.', '')\n raw_ports[port_index][name] = raw[param]\n\n super().process_raw(raw_ports)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef name(self) -> str:\n if self.direction == DIRECTION_IN:\n return self.raw.get('Input.Name', '')\n return self.raw.get('Output.Name', '')", "response": "Return name relevant to direction."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef initialize_params(self, preload_data=True) -> None:\n params = ''\n if preload_data:\n params = self.request('get', param_url)\n\n self.params = Params(params, self.request)", "response": "Load device parameters and initialize parameter management."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef initialize_ports(self) -> None:\n if not self.params:\n self.initialize_params(preload_data=False)\n self.params.update_ports()\n\n self.ports = Ports(self.params, self.request)", "response": "Load IO port parameters for device."} {"SOURCE": "codesearchnet", "instruction": "Given the 
following Python 3 function, write the documentation\ndef initialize_users(self) -> None:\n users = self.request('get', pwdgrp_url)\n self.users = Users(users, self.request)", "response": "Load device user data and initialize user management."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef new_event(self, event_data: str) -> None:\n event = self.parse_event_xml(event_data)\n\n if EVENT_OPERATION in event:\n self.manage_event(event)", "response": "Process a new event."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreceive new metadata. Operation initialized means new event, also happens if reconnecting. Operation changed updates existing events state.", "response": "def manage_event(self, event) -> None:\n \"\"\"Received new metadata.\n\n Operation initialized means new event, also happens if reconnecting.\n Operation changed updates existing events state.\n \"\"\"\n name = EVENT_NAME.format(\n topic=event[EVENT_TOPIC], source=event.get(EVENT_SOURCE_IDX))\n\n if event[EVENT_OPERATION] == 'Initialized' and name not in self.events:\n\n for event_class in EVENT_CLASSES:\n if event_class.TOPIC in event[EVENT_TOPIC]:\n self.events[name] = event_class(event)\n self.signal('add', name)\n return\n\n _LOGGER.debug('Unsupported event %s', event[EVENT_TOPIC])\n\n elif event[EVENT_OPERATION] == 'Changed' and name in self.events:\n self.events[name].state = event[EVENT_VALUE]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating state of event.", "response": "def state(self, state: str) -> None:\n \"\"\"Update state of event.\"\"\"\n self._state = state\n for callback in self._callbacks:\n callback()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef remove_callback(self, callback) -> None:\n if callback in self._callbacks:\n self._callbacks.remove(callback)", "response": "Remove callback from the list 
of callbacks."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef enable_events(self, event_callback=None) -> None:\n self.event = EventManager(event_callback)\n self.stream.event = self.event", "response": "Enable events for stream."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_brand(self) -> None:\n self.update(path=URL_GET + GROUP.format(group=BRAND))", "response": "Update brand group of parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_ports(self) -> None:\n self.update(path=URL_GET + GROUP.format(group=INPUT))\n self.update(path=URL_GET + GROUP.format(group=IOPORT))\n self.update(path=URL_GET + GROUP.format(group=OUTPUT))", "response": "Update port groups of parameters."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a smaller dictionary containing all ports.", "response": "def ports(self) -> dict:\n \"\"\"Create a smaller dictionary containing all ports.\"\"\"\n return {\n param: self[param].raw\n for param in self\n if param.startswith(IOPORT)\n }"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_properties(self) -> None:\n self.update(path=URL_GET + GROUP.format(group=PROPERTIES))", "response": "Update properties group of parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef process_raw(self, raw: str) -> None:\n raw_params = dict(group.split('=', 1) for group in raw.splitlines())\n\n super().process_raw(raw_params)", "response": "Pre - process raw string.\n\n Prepare parameters to work with APIItems.\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves user from the cache.", "response": "def delete(self, user: str) -> None:\n 
\"\"\"Remove user.\"\"\"\n data = {\n 'action': 'remove',\n 'user': user\n }\n\n self._request('post', URL, data=data)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef process_raw(self, raw: str) -> None:\n raw_dict = dict(group.split('=') for group in raw.splitlines())\n\n raw_users = {\n user: {\n group: user in REGEX_STRING.findall(raw_dict[group])\n for group in [ADMIN, OPERATOR, VIEWER, PTZ]\n }\n for user in REGEX_STRING.findall(raw_dict['users'])\n }\n\n super().process_raw(raw_users)", "response": "Pre-process raw string.\n\n Prepare users to work with APIItems.\n Create booleans with user levels."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef init_done(self, fut):\n try:\n if fut.exception():\n fut.result()\n except OSError as err:\n _LOGGER.debug('RTSP got exception %s', err)\n self.stop()\n self.callback(SIGNAL_FAILED)", "response": "Callback when the init process is complete."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef connection_made(self, transport):\n self.transport = transport\n self.transport.write(self.method.message.encode())\n self.time_out_handle = self.loop.call_later(\n TIME_OUT_LIMIT, self.time_out)", "response": "Connect to device is successful."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef data_received(self, data):\n self.time_out_handle.cancel()\n self.session.update(data.decode())\n\n if self.session.state == STATE_STARTING:\n self.transport.write(self.method.message.encode())\n self.time_out_handle = self.loop.call_later(\n TIME_OUT_LIMIT, self.time_out)\n\n elif self.session.state == STATE_PLAYING:\n self.callback(SIGNAL_PLAYING)\n\n if self.session.session_timeout != 0:\n interval = self.session.session_timeout - 5\n self.loop.call_later(interval, self.keep_alive)\n\n else:\n self.stop()", "response": 
"Called when data is received from the server."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstops the RTSP session and sends a SIGNAL_FAILED signal.", "response": "def time_out(self):\n \"\"\"If we don't get a response in time, the RTSP request timed out.\n\n This usually happens if device isn't available on specified IP.\n \"\"\"\n _LOGGER.warning('Response timed out %s', self.session.host)\n self.stop()\n self.callback(SIGNAL_FAILED)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns RTSP method based on sequence number from session.", "response": "def message(self):\n \"\"\"Return RTSP method based on sequence number from session.\"\"\"\n message = self.message_methods[self.session.method]()\n _LOGGER.debug(message)\n return message"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nrequests options device supports.", "response": "def OPTIONS(self, authenticate=True):\n \"\"\"Request options device supports.\"\"\"\n message = \"OPTIONS \" + self.session.url + \" RTSP/1.0\\r\\n\"\n message += self.sequence\n message += self.authentication if authenticate else ''\n message += self.user_agent\n message += self.session_id\n message += '\\r\\n'\n return message"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef DESCRIBE(self):\n message = \"DESCRIBE \" + self.session.url + \" RTSP/1.0\\r\\n\"\n message += self.sequence\n message += self.authentication\n message += self.user_agent\n message += \"Accept: application/sdp\\r\\n\"\n message += '\\r\\n'\n return message", "response": "Request description of what services RTSP server makes available."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef SETUP(self):\n message = \"SETUP \" + self.session.control_url + \" RTSP/1.0\\r\\n\"\n message += 
self.sequence\n message += self.authentication\n message += self.user_agent\n message += self.transport\n message += '\\r\\n'\n return message", "response": "Set up stream transport."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef transport(self):\n transport = \"Transport: RTP/AVP;unicast;client_port={}-{}\\r\\n\"\n return transport.format(\n str(self.session.rtp_port), str(self.session.rtcp_port))", "response": "Generate the transport string."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the internal state of the object from the device response.", "response": "def update(self, response):\n \"\"\"Update session information from device response.\n\n Increment sequence number when starting stream, not when playing.\n If device requires authentication resend previous message with auth.\n \"\"\"\n data = response.splitlines()\n _LOGGER.debug('Received data %s from %s', data, self.host)\n while data:\n line = data.pop(0)\n if 'RTSP/1.0' in line:\n self.rtsp_version = int(line.split(' ')[0][5])\n self.status_code = int(line.split(' ')[1])\n self.status_text = line.split(' ')[2]\n elif 'CSeq' in line:\n self.sequence_ack = int(line.split(': ')[1])\n elif 'Date' in line:\n self.date = line.split(': ')[1]\n elif 'Public' in line:\n self.methods_ack = line.split(': ')[1].split(', ')\n elif \"WWW-Authenticate: Basic\" in line:\n self.basic = True\n self.realm = line.split('\"')[1]\n elif \"WWW-Authenticate: Digest\" in line:\n self.digest = True\n self.realm = line.split('\"')[1]\n self.nonce = line.split('\"')[3]\n self.stale = (line.split('stale=')[1] == 'TRUE')\n elif 'Content-Type' in line:\n self.content_type = line.split(': ')[1]\n elif 'Content-Base' in line:\n self.content_base = line.split(': ')[1]\n elif 'Content-Length' in line:\n self.content_length = int(line.split(': ')[1])\n elif 'Session' in line:\n self.session_id = line.split(': 
')[1].split(\";\")[0]\n if '=' in line:\n self.session_timeout = int(line.split(': ')[1].split('=')[1])\n elif 'Transport' in line:\n self.transport_ack = line.split(': ')[1]\n elif 'Range' in line:\n self.range = line.split(': ')[1]\n elif 'RTP-Info' in line:\n self.rtp_info = line.split(': ')[1]\n elif not line:\n if data:\n self.sdp = data\n break\n if self.sdp:\n stream_found = False\n for param in self.sdp:\n if not stream_found and 'm=application' in param:\n stream_found = True\n elif stream_found and 'a=control:rtsp' in param:\n self.control_url = param.split(':', 1)[1]\n break\n\n if self.status_code == 200:\n if self.state == STATE_STARTING:\n self.sequence += 1\n elif self.status_code == 401:\n # Device requires authorization, do not increment to next method\n pass\n else:\n # If device configuration is correct we should never get here\n _LOGGER.debug(\n \"%s RTSP %s %s\", self.host, self.status_code, self.status_text)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef generate_digest(self):\n from hashlib import md5\n ha1 = self.username + ':' + self.realm + ':' + self.password\n HA1 = md5(ha1.encode('UTF-8')).hexdigest()\n ha2 = self.method + ':' + self.url\n HA2 = md5(ha2.encode('UTF-8')).hexdigest()\n encrypt_response = HA1 + ':' + self.nonce + ':' + HA2\n response = md5(encrypt_response.encode('UTF-8')).hexdigest()\n\n digest_auth = 'Digest '\n digest_auth += 'username=\\\"' + self.username + \"\\\", \"\n digest_auth += 'realm=\\\"' + self.realm + \"\\\", \"\n digest_auth += \"algorithm=\\\"MD5\\\", \"\n digest_auth += 'nonce=\\\"' + self.nonce + \"\\\", \"\n digest_auth += 'uri=\\\"' + self.url + \"\\\", \"\n digest_auth += 'response=\\\"' + response + '\\\"'\n return digest_auth", "response": "RFC 2617. 1. 
1"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef generate_basic(self):\n from base64 import b64encode\n if not self.basic_auth:\n creds = self.username + ':' + self.password\n self.basic_auth = 'Basic '\n self.basic_auth += b64encode(creds.encode('UTF-8')).decode('UTF-8')\n return self.basic_auth", "response": "RFC 2617. 1. 1 Basic Auth"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nretry calling the decorated function using an exponential backoff. :param callback_by_exception: callback/method invocation on certain exceptions :type callback_by_exception: None or dict", "response": "def retry(ExceptionToCheck, tries=10, timeout_secs=1.0, logger=None, callback_by_exception=None):\n \"\"\"\n Retry calling the decorated function using an exponential backoff.\n :param callback_by_exception: callback/method invocation on certain exceptions\n :type callback_by_exception: None or dict\n \"\"\"\n def deco_retry(f):\n def f_retry(*args, **kwargs):\n mtries, mdelay = tries, timeout_secs\n run_one_last_time = True\n while mtries > 1:\n try:\n return f(*args, **kwargs)\n except ExceptionToCheck as e:\n # check if this exception is something the caller wants special handling for\n callback_errors = callback_by_exception or {}\n for error_type in callback_errors:\n if isinstance(e, error_type):\n callback_logic = callback_by_exception[error_type]\n should_break_out = run_one_last_time = False\n if isinstance(callback_logic, (list, tuple)):\n callback_logic, should_break_out = callback_logic\n if isinstance(should_break_out, (list, tuple)):\n should_break_out, run_one_last_time = should_break_out\n callback_logic()\n if should_break_out: # caller requests we stop handling this exception\n break\n # traceback.print_exc()\n half_interval = mdelay * 0.10 # interval size\n actual_delay = random.uniform(mdelay - half_interval, mdelay + half_interval)\n msg = \"Retrying in %.2f seconds 
...\" % actual_delay\n if logger is None:\n logging.exception(msg)\n else:\n logger.exception(msg)\n time.sleep(actual_delay)\n mtries -= 1\n mdelay *= 2\n if run_one_last_time: # one exception may be all the caller wanted in certain cases\n return f(*args, **kwargs)\n return f_retry # true decorator\n return deco_retry"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild RTSP url for stream.", "response": "def stream_url(self):\n \"\"\"Build url for stream.\"\"\"\n rtsp_url = RTSP_URL.format(\n host=self.config.host, video=self.video_query,\n audio=self.audio_query, event=self.event_query)\n _LOGGER.debug(rtsp_url)\n return rtsp_url"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef session_callback(self, signal):\n if signal == SIGNAL_DATA:\n self.event.new_event(self.data)\n\n elif signal == SIGNAL_FAILED:\n self.retry()\n\n if signal in [SIGNAL_PLAYING, SIGNAL_FAILED] and \\\n self.connection_status_callback:\n self.connection_status_callback(signal)", "response": "Signal from stream session."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef retry(self):\n self.stream = None\n self.config.loop.call_later(RETRY_TIMER, self.start)\n _LOGGER.debug('Reconnecting to %s', self.config.host)", "response": "Retry connection to the device."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef connect_mysql():\n MySQLConnection.get_characterset_info = MySQLConnection.get_charset\n\n db = create_engine(engine_name)\n db.echo = True\n db.connect()\n \n return db", "response": "Connect to MySQL and return an inspector object"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_perm(user_id, permission_code):\n try:\n perm = db.DBSession.query(Perm).filter(Perm.code==permission_code).one()\n except 
NoResultFound:\n raise PermissionError(\"Nonexistent permission type: %s\"%(permission_code))\n\n\n try:\n res = db.DBSession.query(User).join(RoleUser, RoleUser.user_id==User.id).\\\n join(Perm, Perm.id==perm.id).\\\n join(RolePerm, RolePerm.perm_id==Perm.id).filter(User.id==user_id).one()\n except NoResultFound:\n raise PermissionError(\"Permission denied. User %s does not have permission %s\"%\n (user_id, permission_code))", "response": "Checks whether a user has permission to perform an action."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef required_perms(*req_perms):\n def dec_wrapper(wfunc):\n @wraps(wfunc)\n def wrapped(*args, **kwargs):\n user_id = kwargs.get(\"user_id\")\n for perm in req_perms:\n check_perm(user_id, perm)\n\n return wfunc(*args, **kwargs)\n\n return wrapped\n return dec_wrapper", "response": "Decorator to check if user has required perms"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngives a time period name fetch the hydra - compatible time abbreviation.", "response": "def get_time_period(period_name):\n \"\"\"\n Given a time period name, fetch the hydra-compatible time\n abbreviation.\n \"\"\"\n time_abbreviation = time_map.get(period_name.lower())\n\n if time_abbreviation is None:\n raise Exception(\"Symbol %s not recognised as a time period\"%period_name)\n\n return time_abbreviation"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts a string timestamp into a date time.", "response": "def get_datetime(timestamp):\n \"\"\"\n Turn a string timestamp into a date time. 
First tries to use dateutil.\n Failing that, it tries to guess the time format and converts it manually\n using strptime.\n\n @returns: A timezone unaware timestamp.\n \"\"\"\n timestamp_is_float = False\n try:\n float(timestamp)\n timestamp_is_float = True\n except (ValueError, TypeError):\n pass\n\n if timestamp_is_float:\n raise ValueError(\"Timestamp %s is a float\"%(timestamp,))\n\n #First try to use date util. Failing that, continue\n try:\n parsed_dt = parse(timestamp, dayfirst=False)\n if parsed_dt.tzinfo is None:\n return parsed_dt\n else:\n\n return parsed_dt.replace(tzinfo=None)\n except:\n pass\n\n if isinstance(timestamp, datetime):\n return timestamp\n\n fmt = guess_timefmt(timestamp)\n if fmt is None:\n fmt = FORMAT\n\n # and proceed as usual\n try:\n ts_time = datetime.strptime(timestamp, fmt)\n except ValueError as e:\n if str(e).split(' ', 1)[0].strip() == 'unconverted':\n utcoffset = str(e).split()[3].strip()\n timestamp = timestamp.replace(utcoffset, '')\n ts_time = datetime.strptime(timestamp, fmt)\n # Apply offset\n tzoffset = timedelta(hours=int(utcoffset[0:3]),\n minutes=int(utcoffset[3:5]))\n ts_time -= tzoffset\n else:\n raise e\n\n return ts_time"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a timestamp as defined in the soap interface to the time format stored in the database.", "response": "def timestamp_to_ordinal(timestamp):\n \"\"\"Convert a timestamp as defined in the soap interface to the time format\n stored in the database.\n \"\"\"\n\n if timestamp is None:\n return None\n\n ts_time = get_datetime(timestamp)\n # Convert time to Gregorian ordinal (1 = January 1st, year 1)\n ordinal_ts_time = Decimal(ts_time.toordinal())\n total_seconds = (ts_time -\n datetime(ts_time.year,\n ts_time.month,\n ts_time.day,\n 0, 0, 0)).total_seconds()\n\n fraction = (Decimal(repr(total_seconds)) / Decimal(86400)).quantize(Decimal('.00000000000000000001'),rounding=ROUND_HALF_UP)\n ordinal_ts_time += 
fraction\n log.debug(\"%s converted to %s\", timestamp, ordinal_ts_time)\n\n return ordinal_ts_time"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef date_to_string(date, seasonal=False):\n\n seasonal_key = config.get('DEFAULT', 'seasonal_key', '9999')\n if seasonal:\n FORMAT = seasonal_key+'-%m-%dT%H:%M:%S.%f'\n else:\n FORMAT = '%Y-%m-%dT%H:%M:%S.%f'\n return date.strftime(FORMAT)", "response": "Convert a date to a standard string used by Hydra."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef guess_timefmt(datestr):\n\n if isinstance(datestr, float) or isinstance(datestr, int):\n return None\n\n seasonal_key = str(config.get('DEFAULT', 'seasonal_key', '9999'))\n\n #replace 'T' with space to handle ISO times.\n if datestr.find('T') > 0:\n dt_delim = 'T'\n else:\n dt_delim = ' '\n\n delimiters = ['-', '.', ' ', '/']\n formatstrings = [['%Y', '%m', '%d'],\n ['%d', '%m', '%Y'],\n ['%d', '%b', '%Y'],\n ['XXXX', '%m', '%d'],\n ['%d', '%m', 'XXXX'],\n ['%d', '%b', 'XXXX'],\n [seasonal_key, '%m', '%d'],\n ['%d', '%m', seasonal_key],\n ['%d', '%b', seasonal_key]]\n\n timeformats = ['%H:%M:%S.%f', '%H:%M:%S', '%H:%M', '%H:%M:%S.%f000Z', '%H:%M:%S.%fZ']\n\n # Check if a time is indicated or not\n for timefmt in timeformats:\n try:\n datetime.strptime(datestr.split(dt_delim)[-1].strip(), timefmt)\n usetime = True\n break\n except ValueError:\n usetime = False\n\n # Check the simple ones:\n for fmt in formatstrings:\n for delim in delimiters:\n datefmt = fmt[0] + delim + fmt[1] + delim + fmt[2]\n if usetime:\n for timefmt in timeformats:\n complfmt = datefmt + dt_delim + timefmt\n try:\n datetime.strptime(datestr, complfmt)\n return complfmt\n except ValueError:\n pass\n else:\n try:\n datetime.strptime(datestr, datefmt)\n return datefmt\n except ValueError:\n pass\n\n # Check for other formats:\n custom_formats = ['%d/%m/%Y', '%b %d 
%Y', '%B %d %Y','%d/%m/XXXX', '%d/%m/'+seasonal_key]\n\n for fmt in custom_formats:\n if usetime:\n for timefmt in timeformats:\n complfmt = fmt + dt_delim + timefmt\n try:\n datetime.strptime(datestr, complfmt)\n return complfmt\n except ValueError:\n pass\n\n else:\n try:\n datetime.strptime(datestr, fmt)\n return fmt\n except ValueError:\n pass\n\n return None", "response": "Try to guess the time format for a given date string."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef reindex_timeseries(ts_string, new_timestamps):\n #If a single timestamp is passed in, turn it into a list\n #Reindexing can't work if it's not a list\n if not isinstance(new_timestamps, list):\n new_timestamps = [new_timestamps]\n\n #Convert the incoming timestamps to datetimes\n #if they are not datetimes.\n new_timestamps_converted = []\n for t in new_timestamps:\n new_timestamps_converted.append(get_datetime(t))\n\n new_timestamps = new_timestamps_converted\n\n seasonal_year = config.get('DEFAULT','seasonal_year', '1678')\n seasonal_key = config.get('DEFAULT', 'seasonal_key', '9999')\n\n ts = ts_string.replace(seasonal_key, seasonal_year)\n\n timeseries = pd.read_json(ts)\n\n idx = timeseries.index\n\n ts_timestamps = new_timestamps\n\n #'Fix' the incoming timestamp in case it's a seasonal value\n if type(idx) == pd.DatetimeIndex:\n if set(idx.year) == set([int(seasonal_year)]):\n if isinstance(new_timestamps, list):\n seasonal_timestamp = []\n for t in ts_timestamps:\n t_1900 = t.replace(year=int(seasonal_year))\n seasonal_timestamp.append(t_1900)\n ts_timestamps = seasonal_timestamp\n\n #Reindex the timeseries to reflect the requested timestamps\n reindexed_ts = timeseries.reindex(ts_timestamps, method='ffill')\n\n i = reindexed_ts.index\n\n reindexed_ts.index = pd.Index(new_timestamps, names=i.names)\n\n #If there are no values at all, just return None\n if len(reindexed_ts.dropna()) == 0:\n return None\n\n #Replace all 
numpy NAN values with None\n pandas_ts = reindexed_ts.where(reindexed_ts.notnull(), None)\n\n return pandas_ts", "response": "Reindex a timeseries with the supplied timestamps."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_time_step(time_step, target='s', units_ref=None):\n log.info(\"Parsing time step %s\", time_step)\n # export numerical value from string using regex\n value = re.findall(r'\\d+', time_step)[0]\n valuelen = len(value)\n\n try:\n value = float(value)\n except ValueError:\n raise HydraPluginError(\"Unable to extract number of time steps (%s) from time step %s\" % (value, time_step))\n\n unit = time_step[valuelen:].strip()\n\n period = get_time_period(unit)\n\n log.info(\"Time period is %s\", period)\n\n converted_time_step = units_ref.convert(value, period, target)\n\n log.info(\"Converted time step is %s %s\", converted_time_step, target)\n\n return float(converted_time_step), value, period", "response": "Read in the time step and convert it to seconds."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a list of datetimes based on a start time, end time and time step. If such a list is already passed in, then this is not necessary. Often either the start_time, end_time, time_step is passed into an app or the time_axis is passed in directly. This function returns a time_axis in both situations.", "response": "def get_time_axis(start_time, end_time, time_step, time_axis=None):\n \"\"\"\n Create a list of datetimes based on a start time, end time and\n time step. If such a list is already passed in, then this is not\n necessary.\n\n Often either the start_time, end_time, time_step is passed into an\n app or the time_axis is passed in directly. 
This function returns a\n time_axis in both situations.\n \"\"\"\n\n #Do this import here to avoid a circular dependency\n from ..lib import units\n\n if time_axis is not None:\n actual_dates_axis = []\n for t in time_axis:\n #If the user has entered the time_axis with commas, remove them.\n t = t.replace(',', '').strip()\n if t == '':\n continue\n actual_dates_axis.append(get_datetime(t))\n return actual_dates_axis\n\n else:\n if start_time is None:\n raise HydraPluginError(\"A start time must be specified\")\n if end_time is None:\n raise HydraPluginError(\"An end time must be specified\")\n if time_step is None:\n raise HydraPluginError(\"A time-step must be specified\")\n\n start_date = get_datetime(start_time)\n end_date = get_datetime(end_time)\n delta_t, value, output_units = parse_time_step(time_step, units_ref=units)\n\n time_axis = [start_date]\n\n value = int(value)\n while start_date < end_date:\n #Months and years are a special case, so treat them differently\n if(output_units.lower() == \"mon\"):\n start_date = start_date + relativedelta(months=value)\n elif (output_units.lower() == \"yr\"):\n start_date = start_date + relativedelta(years=value)\n else:\n start_date += timedelta(seconds=delta_t)\n time_axis.append(start_date)\n return time_axis"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_all_attributes(network):\n attrs = network.attributes\n for n in network.nodes:\n attrs.extend(n.attributes)\n for l in network.links:\n attrs.extend(l.attributes)\n for g in network.resourcegroups:\n attrs.extend(g.attributes)\n\n return attrs", "response": "Get all the complex model attributes in the network."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a network to the database.", "response": "def add_network(network,**kwargs):\n \"\"\"\n Takes an entire network complex model and saves it to the DB. 
This\n complex model includes links & scenarios (with resource data). Returns\n the network's complex model.\n\n As links connect two nodes using the node_ids, if the nodes are new\n they will not yet have node_ids. In this case, use negative ids as\n temporary IDs until the node has been given a permanent ID.\n\n All inter-object referencing of new objects should be done using\n negative IDs in the client.\n\n The returned object will have positive IDs\n\n \"\"\"\n db.DBSession.autoflush = False\n\n start_time = datetime.datetime.now()\n log.debug(\"Adding network\")\n\n insert_start = datetime.datetime.now()\n\n proj_i = db.DBSession.query(Project).filter(Project.id == network.project_id).first()\n if proj_i is None:\n raise HydraError(\"Project %s not found. A valid project ID must be specified on the Network\"%(network.project_id))\n\n existing_net = db.DBSession.query(Network).filter(Network.project_id == network.project_id, Network.name==network.name).first()\n if existing_net is not None:\n raise HydraError(\"A network with the name %s is already in project %s\"%(network.name, network.project_id))\n\n user_id = kwargs.get('user_id')\n proj_i.check_write_permission(user_id)\n\n net_i = Network()\n net_i.project_id = network.project_id\n net_i.name = network.name\n net_i.description = network.description\n net_i.created_by = user_id\n net_i.projection = network.projection\n\n if network.layout is not None:\n net_i.layout = network.get_layout()\n\n db.DBSession.add(net_i)\n db.DBSession.flush()\n #The network's ID is only assigned by the DB after the flush\n network.id = net_i.id\n #These two lists are used for comparison and lookup, so when\n #new attributes are added, these lists are extended.\n\n #List of all the resource attributes\n all_resource_attrs = {}\n\n name_map = {network.name:net_i}\n network_attrs, network_defaults = _bulk_add_resource_attrs(net_i.id, 'NETWORK', [network], name_map)\n hdb.add_resource_types(net_i, network.types)\n\n all_resource_attrs.update(network_attrs)\n\n log.info(\"Network attributes added in %s\", 
get_timing(start_time))\n node_id_map, node_attrs, node_datasets = _add_nodes(net_i, network.nodes)\n all_resource_attrs.update(node_attrs)\n\n link_id_map, link_attrs, link_datasets = _add_links(net_i, network.links, node_id_map)\n all_resource_attrs.update(link_attrs)\n\n grp_id_map, grp_attrs, grp_datasets = _add_resource_groups(net_i, network.resourcegroups)\n all_resource_attrs.update(grp_attrs)\n\n defaults = list(grp_datasets.values()) + list(link_datasets.values()) \\\n + list(node_datasets.values()) + list(network_defaults.values())\n\n start_time = datetime.datetime.now()\n\n scenario_names = []\n if network.scenarios is not None:\n log.info(\"Adding scenarios to network\")\n for s in network.scenarios:\n\n if s.name in scenario_names:\n raise HydraError(\"Duplicate scenario name: %s\"%(s.name))\n\n scen = Scenario()\n scen.name = s.name\n scen.description = s.description\n scen.layout = s.get_layout()\n scen.start_time = str(timestamp_to_ordinal(s.start_time)) if s.start_time else None\n scen.end_time = str(timestamp_to_ordinal(s.end_time)) if s.end_time else None\n scen.time_step = s.time_step\n scen.created_by = user_id\n\n scenario_names.append(s.name)\n\n #extract the data from each resourcescenario\n incoming_datasets = []\n scenario_resource_attrs = []\n for r_scen in s.resourcescenarios:\n ra = all_resource_attrs[r_scen.resource_attr_id]\n incoming_datasets.append(r_scen.dataset)\n scenario_resource_attrs.append(ra)\n\n data_start_time = datetime.datetime.now()\n\n for default in defaults:\n scen.add_resource_scenario(JSONObject(default),\n JSONObject({'id':default['dataset_id']}),\n source=kwargs.get('app_name'))\n\n datasets = data._bulk_insert_data(\n incoming_datasets,\n user_id,\n kwargs.get('app_name')\n )\n\n log.info(\"Data bulk insert took %s\", get_timing(data_start_time))\n ra_start_time = datetime.datetime.now()\n for i, ra in enumerate(scenario_resource_attrs):\n scen.add_resource_scenario(ra, datasets[i], 
source=kwargs.get('app_name'))\n\n log.info(\"Resource scenarios added in %s\", get_timing(ra_start_time))\n\n item_start_time = datetime.datetime.now()\n if s.resourcegroupitems is not None:\n for group_item in s.resourcegroupitems:\n group_item_i = ResourceGroupItem()\n group_item_i.group = grp_id_map[group_item.group_id]\n group_item_i.ref_key = group_item.ref_key\n if group_item.ref_key == 'NODE':\n group_item_i.node = node_id_map[group_item.ref_id]\n elif group_item.ref_key == 'LINK':\n group_item_i.link = link_id_map[group_item.ref_id]\n elif group_item.ref_key == 'GROUP':\n group_item_i.subgroup = grp_id_map[group_item.ref_id]\n else:\n raise HydraError(\"A ref key of %s is not valid for a \"\n \"resource group item.\" % group_item.ref_key)\n\n scen.resourcegroupitems.append(group_item_i)\n log.info(\"Group items insert took %s\", get_timing(item_start_time))\n net_i.scenarios.append(scen)\n\n log.info(\"Scenarios added in %s\", get_timing(start_time))\n net_i.set_owner(user_id)\n\n db.DBSession.flush()\n log.info(\"Insertion of network took: %s\",(datetime.datetime.now()-insert_start))\n\n return net_i"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets all the resource attributes for a given network.", "response": "def _get_all_resource_attributes(network_id, template_id=None):\n \"\"\"\n Get all the attributes for the nodes, links and groups of a network.\n Return these attributes as a dictionary, keyed on type (NODE, LINK, GROUP)\n then by ID of the node or link.\n \"\"\"\n base_qry = db.DBSession.query(\n ResourceAttr.id.label('id'),\n ResourceAttr.ref_key.label('ref_key'),\n ResourceAttr.cr_date.label('cr_date'),\n ResourceAttr.attr_is_var.label('attr_is_var'),\n ResourceAttr.node_id.label('node_id'),\n ResourceAttr.link_id.label('link_id'),\n ResourceAttr.group_id.label('group_id'),\n ResourceAttr.network_id.label('network_id'),\n ResourceAttr.attr_id.label('attr_id'),\n Attr.name.label('name'),\n 
Attr.dimension_id.label('dimension_id'),\n ).filter(Attr.id==ResourceAttr.attr_id)\n\n\n all_node_attribute_qry = base_qry.join(Node).filter(Node.network_id==network_id)\n\n all_link_attribute_qry = base_qry.join(Link).filter(Link.network_id==network_id)\n\n all_group_attribute_qry = base_qry.join(ResourceGroup).filter(ResourceGroup.network_id==network_id)\n network_attribute_qry = base_qry.filter(ResourceAttr.network_id==network_id)\n\n\n #Filter the group attributes by template\n if template_id is not None:\n all_node_attribute_qry = all_node_attribute_qry.join(ResourceType).join(TemplateType).join(TypeAttr).filter(TemplateType.template_id==template_id).filter(ResourceAttr.attr_id==TypeAttr.attr_id)\n all_link_attribute_qry = all_link_attribute_qry.join(ResourceType).join(TemplateType).join(TypeAttr).filter(TemplateType.template_id==template_id).filter(ResourceAttr.attr_id==TypeAttr.attr_id)\n all_group_attribute_qry = all_group_attribute_qry.join(ResourceType).join(TemplateType).join(TypeAttr).filter(TemplateType.template_id==template_id).filter(ResourceAttr.attr_id==TypeAttr.attr_id)\n network_attribute_qry = network_attribute_qry.join(ResourceType, ResourceAttr.network_id==ResourceType.network_id).join(TemplateType).join(TypeAttr).filter(TemplateType.template_id==template_id).filter(ResourceAttr.attr_id==TypeAttr.attr_id)\n\n x = time.time()\n logging.info(\"Getting all attributes using execute\")\n attribute_qry = all_node_attribute_qry.union(all_link_attribute_qry, all_group_attribute_qry, network_attribute_qry)\n all_attributes = db.DBSession.execute(attribute_qry.statement).fetchall()\n log.info(\"%s attrs retrieved in %s\", len(all_attributes), time.time()-x)\n\n logging.info(\"Attributes retrieved. 
Processing results...\")\n x = time.time()\n node_attr_dict = dict()\n link_attr_dict = dict()\n group_attr_dict = dict()\n network_attr_dict = dict()\n\n for attr in all_attributes:\n if attr.ref_key == 'NODE':\n nodeattr = node_attr_dict.get(attr.node_id, [])\n nodeattr.append(attr)\n node_attr_dict[attr.node_id] = nodeattr\n elif attr.ref_key == 'LINK':\n linkattr = link_attr_dict.get(attr.link_id, [])\n linkattr.append(attr)\n link_attr_dict[attr.link_id] = linkattr\n elif attr.ref_key == 'GROUP':\n groupattr = group_attr_dict.get(attr.group_id, [])\n groupattr.append(attr)\n group_attr_dict[attr.group_id] = groupattr\n elif attr.ref_key == 'NETWORK':\n networkattr = network_attr_dict.get(attr.network_id, [])\n networkattr.append(attr)\n network_attr_dict[attr.network_id] = networkattr\n\n all_attributes = {\n 'NODE' : node_attr_dict,\n 'LINK' : link_attr_dict,\n 'GROUP': group_attr_dict,\n 'NETWORK': network_attr_dict,\n }\n\n logging.info(\"Attributes processed in %s\", time.time()-x)\n return all_attributes"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_all_templates(network_id, template_id):\n base_qry = db.DBSession.query(\n ResourceType.ref_key.label('ref_key'),\n ResourceType.node_id.label('node_id'),\n ResourceType.link_id.label('link_id'),\n ResourceType.group_id.label('group_id'),\n ResourceType.network_id.label('network_id'),\n Template.name.label('template_name'),\n Template.id.label('template_id'),\n TemplateType.id.label('type_id'),\n TemplateType.layout.label('layout'),\n TemplateType.name.label('type_name'),\n ).filter(TemplateType.id==ResourceType.type_id,\n Template.id==TemplateType.template_id)\n\n\n all_node_type_qry = base_qry.filter(Node.id==ResourceType.node_id,\n Node.network_id==network_id)\n\n all_link_type_qry = base_qry.filter(Link.id==ResourceType.link_id,\n Link.network_id==network_id)\n\n all_group_type_qry = 
base_qry.filter(ResourceGroup.id==ResourceType.group_id,\n ResourceGroup.network_id==network_id)\n\n network_type_qry = base_qry.filter(ResourceType.network_id==network_id)\n\n #Filter the group attributes by template\n if template_id is not None:\n all_node_type_qry = all_node_type_qry.filter(Template.id==template_id)\n all_link_type_qry = all_link_type_qry.filter(Template.id==template_id)\n all_group_type_qry = all_group_type_qry.filter(Template.id==template_id)\n\n x = time.time()\n log.info(\"Getting all types\")\n type_qry = all_node_type_qry.union(all_link_type_qry, all_group_type_qry, network_type_qry)\n all_types = db.DBSession.execute(type_qry.statement).fetchall()\n log.info(\"%s types retrieved in %s\", len(all_types), time.time()-x)\n\n\n log.info(\"Types retrieved. Processing results...\")\n x = time.time()\n node_type_dict = dict()\n link_type_dict = dict()\n group_type_dict = dict()\n network_type_dict = dict()\n\n for t in all_types:\n templatetype = JSONObject({\n 'template_id':t.template_id,\n 'id':t.type_id,\n 'template_name':t.template_name,\n 'layout': t.layout,\n 'name': t.type_name,})\n\n if t.ref_key == 'NODE':\n nodetype = node_type_dict.get(t.node_id, [])\n nodetype.append(templatetype)\n node_type_dict[t.node_id] = nodetype\n elif t.ref_key == 'LINK':\n linktype = link_type_dict.get(t.link_id, [])\n linktype.append(templatetype)\n link_type_dict[t.link_id] = linktype\n elif t.ref_key == 'GROUP':\n grouptype = group_type_dict.get(t.group_id, [])\n grouptype.append(templatetype)\n group_type_dict[t.group_id] = grouptype\n elif t.ref_key == 'NETWORK':\n nettype = network_type_dict.get(t.network_id, [])\n nettype.append(templatetype)\n network_type_dict[t.network_id] = nettype\n\n\n all_types = {\n 'NODE' : node_type_dict,\n 'LINK' : link_type_dict,\n 'GROUP': group_type_dict,\n 'NETWORK': network_type_dict,\n }\n\n logging.info(\"Types processed in %s\", time.time()-x)\n return all_types", "response": "Get all the templates for a 
given network, optionally filtered by a template."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_all_group_items(network_id):\n base_qry = db.DBSession.query(ResourceGroupItem)\n\n item_qry = base_qry.join(Scenario).filter(Scenario.network_id==network_id)\n\n x = time.time()\n logging.info(\"Getting all items\")\n all_items = db.DBSession.execute(item_qry.statement).fetchall()\n log.info(\"%s group items retrieved in %s\", len(all_items), time.time()-x)\n\n\n logging.info(\"items retrieved. Processing results...\")\n x = time.time()\n item_dict = dict()\n for item in all_items:\n\n items = item_dict.get(item.scenario_id, [])\n items.append(item)\n item_dict[item.scenario_id] = items\n\n logging.info(\"items processed in %s\", time.time()-x)\n\n return item_dict", "response": "Get all the resource group items in the network across all scenarios\n "} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget all the resource scenarios in a network across all scenarios", "response": "def _get_all_resourcescenarios(network_id, user_id):\n \"\"\"\n Get all the resource scenarios in a network, across all scenarios\n returns a dictionary of dict objects, keyed on scenario_id\n \"\"\"\n\n rs_qry = db.DBSession.query(\n Dataset.type,\n Dataset.unit_id,\n Dataset.name,\n Dataset.hash,\n Dataset.cr_date,\n Dataset.created_by,\n Dataset.hidden,\n Dataset.value,\n ResourceScenario.dataset_id,\n ResourceScenario.scenario_id,\n ResourceScenario.resource_attr_id,\n ResourceScenario.source,\n ResourceAttr.attr_id,\n ).outerjoin(DatasetOwner, and_(DatasetOwner.dataset_id==Dataset.id, DatasetOwner.user_id==user_id)).filter(\n or_(Dataset.hidden=='N', Dataset.created_by==user_id, DatasetOwner.user_id != None),\n ResourceAttr.id == ResourceScenario.resource_attr_id,\n Scenario.id==ResourceScenario.scenario_id,\n Scenario.network_id==network_id,\n Dataset.id==ResourceScenario.dataset_id)\n\n x = 
time.time()\n logging.info(\"Getting all resource scenarios\")\n all_rs = db.DBSession.execute(rs_qry.statement).fetchall()\n log.info(\"%s resource scenarios retrieved in %s\", len(all_rs), time.time()-x)\n\n\n logging.info(\"resource scenarios retrieved. Processing results...\")\n x = time.time()\n rs_dict = dict()\n for rs in all_rs:\n rs_obj = JSONObject(rs)\n rs_attr = JSONObject({'attr_id':rs.attr_id})\n\n value = rs.value\n\n rs_dataset = JSONDataset({\n 'id':rs.dataset_id,\n 'type' : rs.type,\n 'unit_id' : rs.unit_id,\n 'name' : rs.name,\n 'hash' : rs.hash,\n 'cr_date':rs.cr_date,\n 'created_by':rs.created_by,\n 'hidden':rs.hidden,\n 'value':value,\n 'metadata':{},\n })\n rs_obj.resourceattr = rs_attr\n rs_obj.value = rs_dataset\n rs_obj.dataset = rs_dataset\n\n scenario_rs = rs_dict.get(rs.scenario_id, [])\n scenario_rs.append(rs_obj)\n rs_dict[rs.scenario_id] = scenario_rs\n\n logging.info(\"resource scenarios processed in %s\", time.time()-x)\n\n return rs_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_metadata(network_id, user_id):\n log.info(\"Getting Metadata\")\n dataset_qry = db.DBSession.query(\n Dataset\n ).outerjoin(DatasetOwner, and_(DatasetOwner.dataset_id==Dataset.id, DatasetOwner.user_id==user_id)).filter(\n or_(Dataset.hidden=='N', DatasetOwner.user_id != None),\n Scenario.id==ResourceScenario.scenario_id,\n Scenario.network_id==network_id,\n Dataset.id==ResourceScenario.dataset_id).distinct().subquery()\n\n rs_qry = db.DBSession.query(\n Metadata\n ).join(dataset_qry, Metadata.dataset_id==dataset_qry.c.id)\n\n x = time.time()\n logging.info(\"Getting all metadata\")\n all_metadata = db.DBSession.execute(rs_qry.statement).fetchall()\n log.info(\"%s metadata jointly retrieved in %s\",len(all_metadata), time.time()-x)\n\n logging.info(\"metadata retrieved. 
Processing results...\")\n x = time.time()\n metadata_dict = dict()\n for m in all_metadata:\n if metadata_dict.get(m.dataset_id):\n metadata_dict[m.dataset_id][m.key] = six.text_type(m.value)\n else:\n metadata_dict[m.dataset_id] = {m.key : six.text_type(m.value)}\n\n logging.info(\"metadata processed in %s\", time.time()-x)\n\n return metadata_dict", "response": "Get all the metadata in a network across all scenarios\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_network_owners(network_id):\n owners_i = db.DBSession.query(NetworkOwner).filter(\n NetworkOwner.network_id==network_id).options(noload('network')).options(joinedload_all('user')).all()\n\n owners = [JSONObject(owner_i) for owner_i in owners_i]\n\n return owners", "response": "Get all the nodes in a network"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_nodes(network_id, template_id=None):\n extras = {'types':[], 'attributes':[]}\n\n node_qry = db.DBSession.query(Node).filter(\n Node.network_id==network_id,\n Node.status=='A').options(\n noload('network')\n )\n if template_id is not None:\n node_qry = node_qry.filter(ResourceType.node_id==Node.id,\n TemplateType.id==ResourceType.type_id,\n TemplateType.template_id==template_id)\n node_res = db.DBSession.execute(node_qry.statement).fetchall()\n\n nodes = []\n for n in node_res:\n nodes.append(JSONObject(n, extras=extras))\n\n return nodes", "response": "Get all the nodes in a network"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets all the links in a network", "response": "def _get_links(network_id, template_id=None):\n \"\"\"\n Get all the links in a network\n \"\"\"\n extras = {'types':[], 'attributes':[]}\n link_qry = db.DBSession.query(Link).filter(\n Link.network_id==network_id,\n Link.status=='A').options(\n noload('network')\n )\n if template_id is not None:\n link_qry = 
link_qry.filter(ResourceType.link_id==Link.id,\n TemplateType.id==ResourceType.type_id,\n TemplateType.template_id==template_id)\n\n link_res = db.DBSession.execute(link_qry.statement).fetchall()\n\n links = []\n for l in link_res:\n links.append(JSONObject(l, extras=extras))\n\n return links"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_groups(network_id, template_id=None):\n extras = {'types':[], 'attributes':[]}\n group_qry = db.DBSession.query(ResourceGroup).filter(\n ResourceGroup.network_id==network_id,\n ResourceGroup.status=='A').options(\n noload('network')\n )\n\n if template_id is not None:\n group_qry = group_qry.filter(ResourceType.group_id==ResourceGroup.id,\n TemplateType.id==ResourceType.type_id,\n TemplateType.template_id==template_id)\n\n group_res = db.DBSession.execute(group_qry.statement).fetchall()\n groups = []\n for g in group_res:\n groups.append(JSONObject(g, extras=extras))\n\n return groups", "response": "Get all the resource groups in a network"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_scenarios(network_id, include_data, user_id, scenario_ids=None):\n scen_qry = db.DBSession.query(Scenario).filter(\n Scenario.network_id == network_id).options(\n noload('network')).filter(\n Scenario.status == 'A')\n\n if scenario_ids:\n logging.info(\"Filtering by scenario_ids %s\",scenario_ids)\n scen_qry = scen_qry.filter(Scenario.id.in_(scenario_ids))\n extras = {'resourcescenarios': [], 'resourcegroupitems': []}\n scens = [JSONObject(s,extras=extras) for s in db.DBSession.execute(scen_qry.statement).fetchall()]\n\n all_resource_group_items = _get_all_group_items(network_id)\n\n if include_data == 'Y' or include_data == True:\n all_rs = _get_all_resourcescenarios(network_id, user_id)\n metadata = _get_metadata(network_id, user_id)\n\n for s in scens:\n s.resourcegroupitems = 
all_resource_group_items.get(s.id, [])\n\n if include_data == 'Y' or include_data == True:\n s.resourcescenarios = all_rs.get(s.id, [])\n\n for rs in s.resourcescenarios:\n rs.dataset.metadata = metadata.get(rs.dataset_id, {})\n\n return scens", "response": "Get all the scenarios in a network"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a single network as a dictionary.", "response": "def get_network(network_id, summary=False, include_data='N', scenario_ids=None, template_id=None, **kwargs):\n \"\"\"\n Return a whole network as a dictionary.\n network_id: ID of the network to retrieve\n include_data: 'Y' or 'N'. Indicate whether scenario data is to be returned.\n This has a significant speed impact as retrieving large amounts\n of data can be expensive.\n scenario_ids: list of IDS to be returned. Used if a network has multiple\n scenarios but you only want one returned. Using this filter\n will speed up this function call.\n template_id: Return the network with only attributes associated with this\n template on the network, groups, nodes and links.\n \"\"\"\n log.debug(\"getting network %s\"%network_id)\n user_id = kwargs.get('user_id')\n\n network_id = int(network_id)\n\n try:\n log.debug(\"Querying Network %s\", network_id)\n net_i = db.DBSession.query(Network).filter(\n Network.id == network_id).options(\n noload('scenarios')).options(\n noload('nodes')).options(\n noload('links')).options(\n noload('types')).options(\n noload('attributes')).options(\n noload('resourcegroups')).one()\n\n net_i.check_read_permission(user_id)\n\n net = JSONObject(net_i)\n\n net.nodes = _get_nodes(network_id, template_id=template_id)\n net.links = _get_links(network_id, template_id=template_id)\n net.resourcegroups = _get_groups(network_id, template_id=template_id)\n net.owners = _get_network_owners(network_id)\n\n if summary is False:\n all_attributes = _get_all_resource_attributes(network_id, template_id)\n log.info(\"Setting 
attributes\")\n net.attributes = all_attributes['NETWORK'].get(network_id, [])\n for node_i in net.nodes:\n node_i.attributes = all_attributes['NODE'].get(node_i.id, [])\n log.info(\"Node attributes set\")\n for link_i in net.links:\n link_i.attributes = all_attributes['LINK'].get(link_i.id, [])\n log.info(\"Link attributes set\")\n for group_i in net.resourcegroups:\n group_i.attributes = all_attributes['GROUP'].get(group_i.id, [])\n log.info(\"Group attributes set\")\n\n\n log.info(\"Setting types\")\n all_types = _get_all_templates(network_id, template_id)\n net.types = all_types['NETWORK'].get(network_id, [])\n for node_i in net.nodes:\n node_i.types = all_types['NODE'].get(node_i.id, [])\n for link_i in net.links:\n link_i.types = all_types['LINK'].get(link_i.id, [])\n for group_i in net.resourcegroups:\n group_i.types = all_types['GROUP'].get(group_i.id, [])\n\n log.info(\"Getting scenarios\")\n\n net.scenarios = _get_scenarios(network_id, include_data, user_id, scenario_ids)\n\n except NoResultFound:\n raise ResourceNotFoundError(\"Network (network_id=%s) not found.\" %\n network_id)\n\n return net"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting all the nodes in a network.", "response": "def get_nodes(network_id, template_id=None, **kwargs):\n \"\"\"\n Get all the nodes in a network.\n args:\n network_id (int): The network in which to search\n template_id (int): Only return nodes whose type is in this template.\n \"\"\"\n user_id = kwargs.get('user_id')\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n net_i.check_read_permission(user_id=user_id)\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n\n node_qry = db.DBSession.query(Node).filter(\n Node.network_id==network_id,\n Node.status=='A').options(\n noload('network')\n ).options(\n joinedload_all('types.templatetype')\n ).options(\n joinedload_all('attributes.attr')\n )\n if template_id is 
not None:\n node_qry = node_qry.filter(ResourceType.node_id==Node.id,\n TemplateType.id==ResourceType.type_id,\n TemplateType.template_id==template_id)\n nodes = node_qry.all()\n\n return nodes"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_links(network_id, template_id=None, **kwargs):\n user_id = kwargs.get('user_id')\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n net_i.check_read_permission(user_id=user_id)\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n\n link_qry = db.DBSession.query(Link).filter(\n Link.network_id==network_id,\n Link.status=='A').options(\n noload('network')\n ).options(\n joinedload_all('types.templatetype')\n ).options(\n joinedload_all('attributes.attr')\n )\n\n if template_id is not None:\n link_qry = link_qry.filter(ResourceType.link_id==Link.id,\n TemplateType.id==ResourceType.type_id,\n TemplateType.template_id==template_id)\n\n links = link_qry.all()\n return links", "response": "Get all the links in a network."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_groups(network_id, template_id=None, **kwargs):\n user_id = kwargs.get('user_id')\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n net_i.check_read_permission(user_id=user_id)\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n\n group_qry = db.DBSession.query(ResourceGroup).filter(\n ResourceGroup.network_id==network_id,\n ResourceGroup.status=='A').options(\n noload('network')\n ).options(\n joinedload_all('types.templatetype')\n ).options(\n joinedload_all('attributes.attr')\n )\n if template_id is not None:\n group_qry = group_qry.filter(ResourceType.group_id==ResourceGroup.id,\n TemplateType.id==ResourceType.type_id,\n TemplateType.template_id==template_id)\n\n 
groups = group_qry.all()\n\n return groups", "response": "Get all the resource groups in a network."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_network_by_name(project_id, network_name,**kwargs):\n\n try:\n res = db.DBSession.query(Network.id).filter(func.lower(Network.name).like(network_name.lower()), Network.project_id == project_id).one()\n net = get_network(res.id, 'Y', None, **kwargs)\n return net\n except NoResultFound:\n raise ResourceNotFoundError(\"Network with name %s not found\"%(network_name))", "response": "Get a network as a complex model."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks if a network exists in the database.", "response": "def network_exists(project_id, network_name,**kwargs):\n \"\"\"\n Check whether a network with the given name exists in the given project.\n Returns 'Y' if it exists, 'N' otherwise.\n \"\"\"\n try:\n db.DBSession.query(Network.id).filter(func.lower(Network.name).like(network_name.lower()), Network.project_id == project_id).one()\n return 'Y'\n except NoResultFound:\n return 'N'"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates a network in the database.", "response": "def update_network(network,\n update_nodes = True,\n update_links = True,\n update_groups = True,\n update_scenarios = True,\n **kwargs):\n \"\"\"\n Update an entire network\n \"\"\"\n log.info(\"Updating Network %s\", network.name)\n user_id = kwargs.get('user_id')\n #check_perm('update_network')\n\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network.id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Network with id %s not found\"%(network.id))\n\n net_i.project_id = network.project_id\n net_i.name = network.name\n net_i.description = network.description\n net_i.projection = network.projection\n net_i.layout = network.get_layout()\n\n all_resource_attrs = {}\n new_network_attributes = 
_update_attributes(net_i, network.attributes)\n all_resource_attrs.update(new_network_attributes)\n hdb.add_resource_types(net_i, network.types)\n\n #Maps temporary node_ids to real node_ids\n node_id_map = dict()\n\n if network.nodes is not None and update_nodes is True:\n log.info(\"Updating nodes\")\n t0 = time.time()\n #First add all the nodes\n node_id_map = dict([(n.id, n) for n in net_i.nodes])\n for node in network.nodes:\n #If we get a negative or null node id, we know\n #it is a new node.\n if node.id is not None and node.id > 0:\n n = node_id_map[node.id]\n n.name = node.name\n n.description = node.description\n n.x = node.x\n n.y = node.y\n n.status = node.status\n n.layout = node.get_layout()\n else:\n log.info(\"Adding new node %s\", node.name)\n n = net_i.add_node(node.name,\n node.description,\n node.get_layout(),\n node.x,\n node.y)\n net_i.nodes.append(n)\n node_id_map[n.id] = n\n\n all_resource_attrs.update(_update_attributes(n, node.attributes))\n hdb.add_resource_types(n, node.types)\n log.info(\"Updating nodes took %s\", time.time() - t0)\n\n link_id_map = dict()\n if network.links is not None and update_links is True:\n log.info(\"Updating links\")\n t0 = time.time()\n link_id_map = dict([(l.id, l) for l in net_i.links])\n for link in network.links:\n node_1 = node_id_map[link.node_1_id]\n\n node_2 = node_id_map[link.node_2_id]\n\n if link.id is None or link.id < 0:\n log.info(\"Adding new link %s\", link.name)\n l = net_i.add_link(link.name,\n link.description,\n link.get_layout(),\n node_1,\n node_2)\n net_i.links.append(l)\n link_id_map[link.id] = l\n else:\n l = link_id_map[link.id]\n l.name = link.name\n l.description = link.description\n l.node_a = node_1\n l.node_b = node_2\n l.layout = link.get_layout()\n\n\n all_resource_attrs.update(_update_attributes(l, link.attributes))\n hdb.add_resource_types(l, link.types)\n log.info(\"Updating links took %s\", time.time() - t0)\n\n group_id_map = dict()\n #Next all the groups\n if 
network.resourcegroups is not None and update_groups is True:\n log.info(\"Updating groups\")\n t0 = time.time()\n group_id_map = dict([(g.id, g) for g in net_i.resourcegroups])\n for group in network.resourcegroups:\n #If we get a negative or null group id, we know\n #it is a new group.\n if group.id is not None and group.id > 0:\n g_i = group_id_map[group.id]\n g_i.name = group.name\n g_i.description = group.description\n g_i.status = group.status\n else:\n log.info(\"Adding new group %s\", group.name)\n g_i = net_i.add_group(group.name,\n group.description,\n group.status)\n net_i.resourcegroups.append(g_i)\n group_id_map[g_i.id] = g_i\n\n all_resource_attrs.update(_update_attributes(g_i, group.attributes))\n hdb.add_resource_types(g_i, group.types)\n group_id_map[group.id] = g_i\n log.info(\"Updating groups took %s\", time.time() - t0)\n\n errors = []\n if network.scenarios is not None and update_scenarios is True:\n for s in network.scenarios:\n add_scenario = False\n if s.id is not None:\n if s.id > 0:\n try:\n scen_i = db.DBSession.query(Scenario).filter(Scenario.id==s.id).one()\n if scen_i.locked == 'Y':\n errors.append('Scenario %s was not updated as it is locked'%(s.id))\n continue\n\n scenario.update_scenario(s, flush=False, **kwargs)\n except NoResultFound:\n raise ResourceNotFoundError(\"Scenario %s not found\"%(s.id))\n else:\n add_scenario = True\n else:\n add_scenario = True\n\n if add_scenario is True:\n log.info(\"Adding new scenario %s to network\", s.name)\n scenario.add_scenario(network.id, s, **kwargs)\n\n db.DBSession.flush()\n\n updated_net = get_network(network.id, summary=True, **kwargs)\n return updated_net"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_network_status(network_id,status,**kwargs):\n user_id = kwargs.get('user_id')\n #check_perm(user_id, 'delete_network')\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n 
net_i.check_write_permission(user_id)\n net_i.status = status\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n db.DBSession.flush()\n return 'OK'", "response": "Set the status of a network to the given value."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngives a network return its maximum extents.", "response": "def get_network_extents(network_id,**kwargs):\n \"\"\"\n Given a network, return its maximum extents.\n This would be the minimum x value of all nodes,\n the minimum y value of all nodes,\n the maximum x value of all nodes and\n maximum y value of all nodes.\n\n @returns NetworkExtents object\n \"\"\"\n rs = db.DBSession.query(Node.x, Node.y).filter(Node.network_id==network_id).all()\n if len(rs) == 0:\n return dict(\n network_id = network_id,\n min_x=None,\n max_x=None,\n min_y=None,\n max_y=None,\n )\n\n # Compute min/max extent of the network.\n x = [r.x for r in rs if r.x is not None]\n if len(x) > 0:\n x_min = min(x)\n x_max = max(x)\n else:\n # Default x extent if all None values\n x_min, x_max = 0, 1\n\n y = [r.y for r in rs if r.y is not None]\n if len(y) > 0:\n y_min = min(y)\n y_max = max(y)\n else:\n # Default y extent if all None values\n y_min, y_max = 0, 1\n\n ne = JSONObject(dict(\n network_id = network_id,\n min_x=x_min,\n max_x=x_max,\n min_y=y_min,\n max_y=y_max,\n ))\n return ne"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_nodes(network_id, nodes,**kwargs):\n start_time = datetime.datetime.now()\n\n names=[] # used to check uniqueness of node name\n for n_i in nodes:\n if n_i.name in names:\n raise HydraError(\"Duplicate Node Name: %s\"%(n_i.name))\n names.append(n_i.name)\n\n user_id = kwargs.get('user_id')\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n net_i.check_write_permission(user_id)\n except NoResultFound:\n raise 
ResourceNotFoundError(\"Network %s not found\"%(network_id))\n\n _add_nodes_to_database(net_i, nodes)\n\n net_i.project_id=net_i.project_id\n db.DBSession.flush()\n\n node_s = db.DBSession.query(Node).filter(Node.network_id==network_id).all()\n\n #Maps temporary node_ids to real node_ids\n node_id_map = dict()\n\n iface_nodes = dict()\n for n_i in node_s:\n iface_nodes[n_i.name] = n_i\n\n for node in nodes:\n node_id_map[node.id] = iface_nodes[node.name]\n\n _bulk_add_resource_attrs(network_id, 'NODE', nodes, iface_nodes)\n\n log.info(\"Nodes added in %s\", get_timing(start_time))\n return node_s", "response": "Add nodes to network"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_links(network_id, links,**kwargs):\n '''\n add links to network\n '''\n start_time = datetime.datetime.now()\n user_id = kwargs.get('user_id')\n names=[] # used to check uniqueness of link name before saving links to database\n for l_i in links:\n if l_i.name in names:\n raise HydraError(\"Duplicate Link Name: %s\"%(l_i.name))\n names.append(l_i.name)\n\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n net_i.check_write_permission(user_id)\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n node_id_map=dict()\n for node in net_i.nodes:\n node_id_map[node.id]=node\n _add_links_to_database(net_i, links, node_id_map)\n\n net_i.project_id=net_i.project_id\n db.DBSession.flush()\n link_s = db.DBSession.query(Link).filter(Link.network_id==network_id).all()\n iface_links = {}\n for l_i in link_s:\n iface_links[l_i.name] = l_i\n link_attrs = _bulk_add_resource_attrs(net_i.id, 'LINK', links, iface_links)\n log.info(\"Links added in %s\", get_timing(start_time))\n return link_s", "response": "Add links to a network."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update_node(node, 
flush=True, **kwargs):\n user_id = kwargs.get('user_id')\n try:\n node_i = db.DBSession.query(Node).filter(Node.id == node.id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Node %s not found\"%(node.id))\n\n node_i.network.check_write_permission(user_id)\n\n node_i.name = node.name if node.name is not None else node_i.name\n node_i.x = node.x if node.x is not None else node_i.x\n node_i.y = node.y if node.y is not None else node_i.y\n node_i.description = node.description if node.description is not None else node_i.description\n node_i.layout = node.get_layout() if node.layout is not None else node_i.layout\n\n if node.attributes is not None:\n _update_attributes(node_i, node.attributes)\n\n if node.types is not None:\n hdb.add_resource_types(node_i, node.types)\n\n if flush is True:\n db.DBSession.flush()\n\n return node_i", "response": "Update a node.\n If new attributes are present, they will be added to the node.\n The non-presence of attributes does not remove them.\n\n The flush argument indicates whether dbsession.flush should be called. This\n is set to False when update_node is called from another function which does\n the flush."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_nodes(nodes,**kwargs):\n user_id = kwargs.get('user_id')\n updated_nodes = []\n for n in nodes:\n updated_node_i = update_node(n, flush=False, user_id=user_id)\n updated_nodes.append(updated_node_i)\n\n db.DBSession.flush()\n\n return updated_nodes", "response": "Update multiple nodes.\n If new attributes are present, they will be added to the node.\n The non-presence of attributes does not remove them.\n\n %TODO:merge this with the 'update_nodes' functionality in the 'update_network'\n function, so we're not duplicating functionality. 
D.R.Y!\n\n returns: a list of updated nodes"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set_node_status(node_id, status, **kwargs):\n user_id = kwargs.get('user_id')\n try:\n node_i = db.DBSession.query(Node).filter(Node.id == node_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Node %s not found\"%(node_id))\n\n node_i.network.check_write_permission(user_id)\n\n node_i.status = status\n\n for link in node_i.links_to:\n link.status = status\n for link in node_i.links_from:\n link.status = status\n\n db.DBSession.flush()\n\n return node_i", "response": "Set the status of a node, and of all links connected to it."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef purge_network(network_id, purge_data,**kwargs):\n user_id = kwargs.get('user_id')\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n\n log.info(\"Deleting network %s, id=%s\", net_i.name, network_id)\n\n net_i.check_write_permission(user_id)\n db.DBSession.delete(net_i)\n db.DBSession.flush()\n return 'OK'", "response": "Remove a network from DB completely"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes all the datasets that are unique to a resource.", "response": "def _purge_datasets_unique_to_resource(ref_key, ref_id):\n \"\"\"\n Find the number of times a resource and dataset combination\n occurs. 
If this equals the number of times the dataset appears, then\n we can say this dataset is unique to this resource, therefore it can be deleted\n \"\"\"\n count_qry = db.DBSession.query(ResourceScenario.dataset_id,\n func.count(ResourceScenario.dataset_id)).group_by(\n ResourceScenario.dataset_id).filter(\n ResourceScenario.resource_attr_id==ResourceAttr.id)\n\n if ref_key == 'NODE':\n count_qry = count_qry.filter(ResourceAttr.node_id==ref_id)\n elif ref_key == 'LINK':\n count_qry = count_qry.filter(ResourceAttr.link_id==ref_id)\n elif ref_key == 'GROUP':\n count_qry = count_qry.filter(ResourceAttr.group_id==ref_id)\n\n count_rs = count_qry.all()\n\n for dataset_id, count in count_rs:\n full_dataset_count = db.DBSession.query(ResourceScenario).filter(ResourceScenario.dataset_id==dataset_id).count()\n if full_dataset_count == count:\n \"\"\"First delete all the resource scenarios\"\"\"\n datasets_rs_to_delete = db.DBSession.query(ResourceScenario).filter(ResourceScenario.dataset_id==dataset_id).all()\n for dataset_rs in datasets_rs_to_delete:\n db.DBSession.delete(dataset_rs)\n\n \"\"\"Then delete all the datasets\"\"\"\n dataset_to_delete = db.DBSession.query(Dataset).filter(Dataset.id==dataset_id).one()\n log.info(\"Deleting %s dataset %s (%s)\", ref_key, dataset_to_delete.name, dataset_to_delete.id)\n db.DBSession.delete(dataset_to_delete)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_node(node_id, purge_data,**kwargs):\n user_id = kwargs.get('user_id')\n try:\n node_i = db.DBSession.query(Node).filter(Node.id == node_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Node %s not found\"%(node_id))\n\n group_items = db.DBSession.query(ResourceGroupItem).filter(\n ResourceGroupItem.node_id==node_id).all()\n for gi in group_items:\n db.DBSession.delete(gi)\n\n if purge_data == 'Y':\n _purge_datasets_unique_to_resource('NODE', node_id)\n\n log.info(\"Deleting node %s, id=%s\", node_i.name, node_id)\n\n 
node_i.network.check_write_permission(user_id)\n db.DBSession.delete(node_i)\n db.DBSession.flush()\n return 'OK'", "response": "Delete a node from DB completely"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a link to a network", "response": "def add_link(network_id, link,**kwargs):\n \"\"\"\n Add a link to a network\n \"\"\"\n user_id = kwargs.get('user_id')\n\n #check_perm(user_id, 'edit_topology')\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n net_i.check_write_permission(user_id)\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n\n try:\n node_1 = db.DBSession.query(Node).filter(Node.id==link.node_1_id).one()\n node_2 = db.DBSession.query(Node).filter(Node.id==link.node_2_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Nodes for link not found\")\n\n link_i = net_i.add_link(link.name, link.description, link.layout, node_1, node_2)\n\n hdb.add_resource_attributes(link_i, link.attributes)\n\n db.DBSession.flush()\n\n if link.types is not None and len(link.types) > 0:\n res_types = []\n res_attrs = []\n res_scenarios = {}\n for typesummary in link.types:\n ra, rt, rs = template.set_resource_type(link_i,\n typesummary.id,\n **kwargs)\n res_types.append(rt)\n res_attrs.extend(ra)\n res_scenarios.update(rs)#rs is a dict\n\n if len(res_types) > 0:\n db.DBSession.bulk_insert_mappings(ResourceType, res_types)\n if len(res_attrs) > 0:\n db.DBSession.bulk_insert_mappings(ResourceAttr, res_attrs)\n\n new_res_attrs = db.DBSession.query(ResourceAttr).order_by(ResourceAttr.id.desc()).limit(len(res_attrs)).all()\n all_rs = []\n for ra in new_res_attrs:\n ra_id = ra.id\n if ra.attr_id in res_scenarios:\n rs_list = res_scenarios[ra.attr_id]\n for rs in rs_list:\n rs_list[rs]['resource_attr_id'] = ra_id\n all_rs.append(rs_list[rs])\n\n if len(all_rs) > 0:\n db.DBSession.bulk_insert_mappings(ResourceScenario, all_rs)\n\n 
db.DBSession.refresh(link_i)\n\n return link_i"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the status of a link.", "response": "def set_link_status(link_id, status, **kwargs):\n \"\"\"\n Set the status of a link\n \"\"\"\n user_id = kwargs.get('user_id')\n #check_perm(user_id, 'edit_topology')\n try:\n link_i = db.DBSession.query(Link).filter(Link.id == link_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Link %s not found\"%(link_id))\n\n link_i.network.check_write_permission(user_id)\n\n link_i.status = status\n db.DBSession.flush()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_link(link_id, purge_data,**kwargs):\n user_id = kwargs.get('user_id')\n try:\n link_i = db.DBSession.query(Link).filter(Link.id == link_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Link %s not found\"%(link_id))\n\n group_items = db.DBSession.query(ResourceGroupItem).filter(\n ResourceGroupItem.link_id==link_id).all()\n for gi in group_items:\n db.DBSession.delete(gi)\n\n if purge_data == 'Y':\n _purge_datasets_unique_to_resource('LINK', link_id)\n\n log.info(\"Deleting link %s, id=%s\", link_i.name, link_id)\n\n link_i.network.check_write_permission(user_id)\n db.DBSession.delete(link_i)\n db.DBSession.flush()", "response": "Delete a link from DB completely"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a resourcegroup to a network", "response": "def add_group(network_id, group,**kwargs):\n \"\"\"\n Add a resourcegroup to a network\n \"\"\"\n\n user_id = kwargs.get('user_id')\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n net_i.check_write_permission(user_id=user_id)\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n\n res_grp_i = net_i.add_group(group.name, group.description, group.status)\n\n 
hdb.add_resource_attributes(res_grp_i, group.attributes)\n\n db.DBSession.flush()\n if group.types is not None and len(group.types) > 0:\n res_types = []\n res_attrs = []\n res_scenarios = {}\n for typesummary in group.types:\n ra, rt, rs = template.set_resource_type(res_grp_i,\n typesummary.id,\n **kwargs)\n res_types.append(rt)\n res_attrs.extend(ra)\n res_scenarios.update(rs)#rs is a dict\n if len(res_types) > 0:\n db.DBSession.bulk_insert_mappings(ResourceType, res_types)\n if len(res_attrs) > 0:\n db.DBSession.bulk_insert_mappings(ResourceAttr, res_attrs)\n\n new_res_attrs = db.DBSession.query(ResourceAttr).order_by(ResourceAttr.id.desc()).limit(len(res_attrs)).all()\n all_rs = []\n for ra in new_res_attrs:\n ra_id = ra.id\n if ra.attr_id in res_scenarios:\n rs_list = res_scenarios[ra.attr_id]\n for rs in rs_list:\n rs_list[rs]['resource_attr_id'] = ra_id\n all_rs.append(rs_list[rs])\n\n if len(all_rs) > 0:\n db.DBSession.bulk_insert_mappings(ResourceScenario, all_rs)\n\n\n db.DBSession.refresh(res_grp_i)\n\n return res_grp_i"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate a group. If new attributes are present, they will be added to the group. 
The non-presence of attributes does not remove them.", "response": "def update_group(group,**kwargs):\n \"\"\"\n Update a group.\n If new attributes are present, they will be added to the group.\n The non-presence of attributes does not remove them.\n \"\"\"\n user_id = kwargs.get('user_id')\n try:\n group_i = db.DBSession.query(ResourceGroup).filter(ResourceGroup.id == group.id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"group %s not found\"%(group.id))\n\n group_i.network.check_write_permission(user_id)\n\n group_i.name = group.name if group.name is not None else group_i.name\n group_i.description = group.description if group.description else group_i.description\n\n if group.attributes is not None:\n _update_attributes(group_i, group.attributes)\n\n if group.types is not None:\n hdb.add_resource_types(group_i, group.types)\n\n db.DBSession.flush()\n\n return group_i"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_group_status(group_id, status, **kwargs):\n user_id = kwargs.get('user_id')\n try:\n group_i = db.DBSession.query(ResourceGroup).filter(ResourceGroup.id == group_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"ResourceGroup %s not found\"%(group_id))\n\n group_i.network.check_write_permission(user_id)\n\n group_i.status = status\n\n db.DBSession.flush()\n\n return group_i", "response": "Set the status of a resource group."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_group(group_id, purge_data,**kwargs):\n user_id = kwargs.get('user_id')\n try:\n group_i = db.DBSession.query(ResourceGroup).filter(ResourceGroup.id == group_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Group %s not found\"%(group_id))\n\n group_items = db.DBSession.query(ResourceGroupItem).filter(\n ResourceGroupItem.group_id==group_id).all()\n for gi in group_items:\n db.DBSession.delete(gi)\n\n if purge_data == 'Y':\n 
_purge_datasets_unique_to_resource('GROUP', group_id)\n\n log.info(\"Deleting group %s, id=%s\", group_i.name, group_id)\n\n group_i.network.check_write_permission(user_id)\n db.DBSession.delete(group_i)\n db.DBSession.flush()", "response": "Delete group from DB completely"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_scenarios(network_id,**kwargs):\n\n user_id = kwargs.get('user_id')\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n net_i.check_read_permission(user_id=user_id)\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n\n return net_i.scenarios", "response": "Get all the scenarios in a given network."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef validate_network_topology(network_id,**kwargs):\n\n user_id = kwargs.get('user_id')\n try:\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()\n net_i.check_write_permission(user_id=user_id)\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n\n nodes = []\n for node_i in net_i.nodes:\n if node_i.status == 'A':\n nodes.append(node_i.id)\n\n link_nodes = []\n for link_i in net_i.links:\n if link_i.status != 'A':\n continue\n if link_i.node_1_id not in link_nodes:\n link_nodes.append(link_i.node_1_id)\n\n if link_i.node_2_id not in link_nodes:\n link_nodes.append(link_i.node_2_id)\n\n nodes = set(nodes)\n link_nodes = set(link_nodes)\n\n isolated_nodes = nodes - link_nodes\n\n return isolated_nodes", "response": "Check the network topology by finding isolated nodes, i.e. active nodes not connected to any active link."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_resources_of_type(network_id, type_id, **kwargs):\n\n nodes_with_type = db.DBSession.query(Node).join(ResourceType).filter(Node.network_id==network_id, 
ResourceType.type_id==type_id).all()\n links_with_type = db.DBSession.query(Link).join(ResourceType).filter(Link.network_id==network_id, ResourceType.type_id==type_id).all()\n groups_with_type = db.DBSession.query(ResourceGroup).join(ResourceType).filter(ResourceGroup.network_id==network_id, ResourceType.type_id==type_id).all()\n\n return nodes_with_type, links_with_type, groups_with_type", "response": "Get the nodes links and resource groups which have the type specified."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef clean_up_network(network_id, **kwargs):\n user_id = kwargs.get('user_id')\n #check_perm(user_id, 'delete_network')\n try:\n log.debug(\"Querying Network %s\", network_id)\n net_i = db.DBSession.query(Network).filter(Network.id == network_id).\\\n options(noload('scenarios')).options(noload('nodes')).options(noload('links')).options(noload('resourcegroups')).options(joinedload_all('types.templatetype.template')).one()\n net_i.attributes\n\n #Define the basic resource queries\n node_qry = db.DBSession.query(Node).filter(Node.network_id==network_id).filter(Node.status=='X').all()\n\n link_qry = db.DBSession.query(Link).filter(Link.network_id==network_id).filter(Link.status=='X').all()\n\n group_qry = db.DBSession.query(ResourceGroup).filter(ResourceGroup.network_id==network_id).filter(ResourceGroup.status=='X').all()\n\n scenario_qry = db.DBSession.query(Scenario).filter(Scenario.network_id==network_id).filter(Scenario.status=='X').all()\n\n\n for n in node_qry:\n db.DBSession.delete(n)\n for l in link_qry:\n db.DBSession.delete(l)\n for g in group_qry:\n db.DBSession.delete(g)\n for s in scenario_qry:\n db.DBSession.delete(s)\n\n except NoResultFound:\n raise ResourceNotFoundError(\"Network %s not found\"%(network_id))\n db.DBSession.flush()\n return 'OK'", "response": "Delete any deleted nodes links resourcegroups and scenarios in a given network"} {"SOURCE": "codesearchnet", 
"instruction": "How would you code a function in Python 3 to\nget all resource attributes in a network.", "response": "def get_all_resource_attributes_in_network(attr_id, network_id, **kwargs):\n \"\"\"\n Find every resource attribute in the network matching the supplied attr_id\n \"\"\"\n\n user_id = kwargs.get('user_id')\n\n try:\n a = db.DBSession.query(Attr).filter(Attr.id == attr_id).one()\n except NoResultFound:\n raise HydraError(\"Attribute %s not found\"%(attr_id,))\n\n ra_qry = db.DBSession.query(ResourceAttr).filter(\n ResourceAttr.attr_id==attr_id,\n or_(Network.id == network_id,\n Node.network_id==network_id,\n Link.network_id==network_id,\n ResourceGroup.network_id==network_id)\n ).outerjoin('node')\\\n .outerjoin('link')\\\n .outerjoin('network')\\\n .outerjoin('resourcegroup')\\\n .options(joinedload_all('node'))\\\n .options(joinedload_all('link'))\\\n .options(joinedload_all('resourcegroup'))\\\n .options(joinedload_all('network'))\n\n resourceattrs = ra_qry.all()\n\n json_ra = []\n #Load the metadata too\n for ra in resourceattrs:\n ra_j = JSONObject(ra, extras={'node':JSONObject(ra.node) if ra.node else None,\n 'link':JSONObject(ra.link) if ra.link else None,\n 'resourcegroup':JSONObject(ra.resourcegroup) if ra.resourcegroup else None,\n 'network':JSONObject(ra.network) if ra.network else None})\n \n if ra_j.node is not None:\n ra_j.resource = ra_j.node\n elif ra_j.link is not None:\n ra_j.resource = ra_j.link\n elif ra_j.resourcegroup is not None:\n ra_j.resource = ra_j.resourcegroup\n elif ra.network is not None:\n ra_j.resource = ra_j.network\n \n json_ra.append(ra_j)\n\n return json_ra"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef clone_network(network_id, recipient_user_id=None, new_network_name=None, project_id=None, project_name=None, new_project=True, **kwargs):\n\n user_id = kwargs['user_id']\n\n ex_net = db.DBSession.query(Network).filter(Network.id==network_id).one()\n\n 
ex_net.check_read_permission(user_id)\n\n if project_id is None and new_project == True:\n\n log.info(\"Creating a new project for cloned network\")\n\n ex_proj = db.DBSession.query(Project).filter(Project.id==ex_net.project_id).one()\n\n user = db.DBSession.query(User).filter(User.id==user_id).one()\n\n project = Project()\n if project_name is None or project_name==\"\":\n project_name=ex_proj.name + \" (Cloned by %s)\" % user.display_name\n\n #check a project with this name doesn't already exist:\n ex_project = db.DBSession.query(Project).filter(Project.name==project_name,\n Project.created_by==user_id).all()\n #If it exists, use it.\n if len(ex_project) > 0:\n project=ex_project[0]\n else:\n project.name = project_name\n project.created_by = user_id\n\n project.set_owner(user_id)\n\n if recipient_user_id is not None:\n project.set_owner(recipient_user_id)\n\n db.DBSession.add(project)\n db.DBSession.flush()\n\n project_id=project.id\n\n elif project_id is None:\n log.info(\"Using current project for cloned network\")\n project_id=ex_net.project_id\n\n if new_network_name is None or new_network_name == \"\":\n new_network_name=ex_net.name\n\n log.info('Cloning Network...')\n\n #Find if there are any networks with this name in the project already\n ex_network = db.DBSession.query(Network).filter(Network.project_id==project_id,\n Network.name.like(\"{0}%\".format(new_network_name))).all()\n\n if len(ex_network) > 0:\n new_network_name = new_network_name + \" \" + str(len(ex_network))\n\n newnet = Network()\n\n newnet.project_id = project_id\n newnet.name = new_network_name\n newnet.description = ex_net.description\n newnet.layout = ex_net.layout\n newnet.status = ex_net.status\n newnet.projection = ex_net.projection\n newnet.created_by = user_id\n\n newnet.set_owner(user_id)\n if recipient_user_id is not None:\n newnet.set_owner(recipient_user_id)\n\n db.DBSession.add(newnet)\n\n db.DBSession.flush()\n\n newnetworkid = newnet.id\n\n log.info('Cloning Nodes')\n node_id_map 
= _clone_nodes(network_id, newnetworkid)\n\n log.info('Cloning Links')\n link_id_map = _clone_links(network_id, newnetworkid, node_id_map)\n\n log.info('Cloning Groups')\n group_id_map = _clone_groups(network_id,\n newnetworkid,\n node_id_map,\n link_id_map)\n\n log.info(\"Cloning Resource Attributes\")\n ra_id_map = _clone_resourceattrs(network_id, newnetworkid, node_id_map, link_id_map, group_id_map)\n\n log.info(\"Cloning Resource Types\")\n _clone_resourcetypes(network_id, newnetworkid, node_id_map, link_id_map, group_id_map)\n\n log.info('Cloning Scenarios')\n _clone_scenarios(network_id, newnetworkid, ra_id_map, node_id_map, link_id_map, group_id_map, user_id)\n\n db.DBSession.flush()\n\n return newnetworkid", "response": "Clone a network with the specified name and project."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncopy data from source to target scenario.", "response": "def copy_data_from_scenario(resource_attrs, source_scenario_id, target_scenario_id, **kwargs):\n \"\"\"\n For a given list of resource attribute IDs, copy the dataset_ids from\n the resource scenarios in the source scenario to those in the 'target' scenario.\n \"\"\"\n\n #Get all the resource scenarios we wish to update\n target_resourcescenarios = db.DBSession.query(ResourceScenario).filter(\n ResourceScenario.scenario_id==target_scenario_id,\n ResourceScenario.resource_attr_id.in_(resource_attrs)).all()\n\n target_rs_dict = {}\n for target_rs in target_resourcescenarios:\n target_rs_dict[target_rs.resource_attr_id] = target_rs\n\n #get all the resource scenarios we are using to get our datasets source.\n source_resourcescenarios = db.DBSession.query(ResourceScenario).filter(\n ResourceScenario.scenario_id==source_scenario_id,\n ResourceScenario.resource_attr_id.in_(resource_attrs)).all()\n\n #If there is an RS in scenario 'source' but not in 'target', then create\n #a new one in 'target'\n for source_rs in source_resourcescenarios:\n target_rs = 
target_rs_dict.get(source_rs.resource_attr_id)\n if target_rs is not None:\n target_rs.dataset_id = source_rs.dataset_id\n else:\n target_rs = ResourceScenario()\n target_rs.scenario_id = target_scenario_id\n target_rs.dataset_id = source_rs.dataset_id\n target_rs.resource_attr_id = source_rs.resource_attr_id\n db.DBSession.add(target_rs)\n\n db.DBSession.flush()\n\n return target_resourcescenarios"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the specified scenario", "response": "def get_scenario(scenario_id,**kwargs):\n \"\"\"\n Get the specified scenario\n \"\"\"\n\n user_id = kwargs.get('user_id')\n\n scen_i = _get_scenario(scenario_id, user_id)\n\n scen_j = JSONObject(scen_i)\n rscen_rs = db.DBSession.query(ResourceScenario).filter(ResourceScenario.scenario_id==scenario_id).options(joinedload_all('dataset.metadata')).all()\n\n #lazy load resource attributes and attributes\n for rs in rscen_rs:\n rs.resourceattr\n rs.resourceattr.attr\n\n rgi_rs = db.DBSession.query(ResourceGroupItem).filter(ResourceGroupItem.scenario_id==scenario_id).all()\n\n scen_j.resourcescenarios = []\n for rs in rscen_rs:\n rs_j = JSONObject(rs, extras={'resourceattr':JSONObject(rs.resourceattr)})\n if rs.dataset.check_read_permission(user_id, do_raise=False) is False:\n rs_j.dataset['value'] = None\n rs_j.dataset.metadata = JSONObject({})\n scen_j.resourcescenarios.append(rs_j)\n\n scen_j.resourcegroupitems =[JSONObject(r) for r in rgi_rs]\n\n return scen_j"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_scenario(network_id, scenario,**kwargs):\n user_id = int(kwargs.get('user_id'))\n log.info(\"Adding scenarios to network\")\n\n _check_network_ownership(network_id, user_id)\n\n existing_scen = db.DBSession.query(Scenario).filter(Scenario.name==scenario.name, Scenario.network_id==network_id).first()\n if existing_scen is not None:\n raise HydraError(\"Scenario with 
name %s already exists in network %s\"%(scenario.name, network_id))\n\n scen = Scenario()\n scen.name = scenario.name\n scen.description = scenario.description\n scen.layout = scenario.get_layout()\n scen.network_id = network_id\n scen.created_by = user_id\n scen.start_time = str(timestamp_to_ordinal(scenario.start_time)) if scenario.start_time else None\n scen.end_time = str(timestamp_to_ordinal(scenario.end_time)) if scenario.end_time else None\n scen.time_step = scenario.time_step\n scen.resourcescenarios = []\n scen.resourcegroupitems = []\n\n #Just in case someone puts in a negative ID for the scenario.\n if scenario.id is not None and scenario.id < 0:\n scenario.id = None\n\n if scenario.resourcescenarios is not None:\n #extract the data from each resourcescenario so it can all be\n #inserted in one go, rather than one at a time\n all_data = [r.dataset for r in scenario.resourcescenarios]\n\n datasets = data._bulk_insert_data(all_data, user_id=user_id)\n\n #record all the resource attribute ids\n resource_attr_ids = [r.resource_attr_id for r in scenario.resourcescenarios]\n\n #get all the resource scenarios into a list and bulk insert them\n for i, ra_id in enumerate(resource_attr_ids):\n rs_i = ResourceScenario()\n rs_i.resource_attr_id = ra_id\n rs_i.dataset_id = datasets[i].id\n rs_i.scenario_id = scen.id\n rs_i.dataset = datasets[i]\n scen.resourcescenarios.append(rs_i)\n\n if scenario.resourcegroupitems is not None:\n #Again doing bulk insert.\n for group_item in scenario.resourcegroupitems:\n group_item_i = ResourceGroupItem()\n group_item_i.scenario_id = scen.id\n group_item_i.group_id = group_item.group_id\n group_item_i.ref_key = group_item.ref_key\n if group_item.ref_key == 'NODE':\n group_item_i.node_id = group_item.ref_id\n elif group_item.ref_key == 'LINK':\n group_item_i.link_id = group_item.ref_id\n elif group_item.ref_key == 'GROUP':\n group_item_i.subgroup_id = group_item.ref_id\n scen.resourcegroupitems.append(group_item_i)\n db.DBSession.add(scen)\n 
db.DBSession.flush()\n return scen", "response": "Add a scenario to a network."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate a single scenario in the DB.", "response": "def update_scenario(scenario,update_data=True,update_groups=True,flush=True,**kwargs):\n \"\"\"\n Update a single scenario\n as all resources already exist, there is no need to worry\n about negative IDS\n\n flush = True flushes to the DB at the end of the function.\n flush = False does not flush, assuming that it will happen as part\n of another process, like update_network.\n \"\"\"\n user_id = kwargs.get('user_id')\n scen = _get_scenario(scenario.id, user_id)\n\n if scen.locked == 'Y':\n raise PermissionError('Scenario is locked. Unlock before editing.')\n\n start_time = None\n if isinstance(scenario.start_time, float):\n start_time = six.text_type(scenario.start_time)\n else:\n start_time = timestamp_to_ordinal(scenario.start_time)\n if start_time is not None:\n start_time = six.text_type(start_time)\n\n end_time = None\n if isinstance(scenario.end_time, float):\n end_time = six.text_type(scenario.end_time)\n else:\n end_time = timestamp_to_ordinal(scenario.end_time)\n if end_time is not None:\n end_time = six.text_type(end_time)\n\n scen.name = scenario.name\n scen.description = scenario.description\n scen.layout = scenario.get_layout()\n scen.start_time = start_time\n scen.end_time = end_time\n scen.time_step = scenario.time_step\n\n if scenario.resourcescenarios == None:\n scenario.resourcescenarios = []\n if scenario.resourcegroupitems == None:\n scenario.resourcegroupitems = []\n\n #lazy load resourcescenarios from the DB\n scen.resourcescenarios\n\n if update_data is True:\n datasets = [rs.dataset for rs in scenario.resourcescenarios]\n updated_datasets = data._bulk_insert_data(datasets, user_id, kwargs.get('app_name'))\n for i, r_scen in enumerate(scenario.resourcescenarios):\n _update_resourcescenario(scen, r_scen, dataset=updated_datasets[i], 
user_id=user_id, source=kwargs.get('app_name'))\n\n #lazy load resource group items from the DB\n scen.resourcegroupitems\n\n if update_groups is True:\n #Get all the existing resource group items for this scenario.\n #Then process all the items sent to this handler.\n #Any in the DB that are not passed in here are removed.\n for group_item in scenario.resourcegroupitems:\n _add_resourcegroupitem(group_item, scenario.id)\n\n if flush is True:\n db.DBSession.flush()\n\n return scen"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_scenario_status(scenario_id, status, **kwargs):\n\n user_id = kwargs.get('user_id')\n\n _check_can_edit_scenario(scenario_id, kwargs['user_id'])\n\n scenario_i = _get_scenario(scenario_id, user_id)\n\n scenario_i.status = status\n db.DBSession.flush()\n return 'OK'", "response": "Set the status of a scenario."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndelete a scenario.", "response": "def purge_scenario(scenario_id, **kwargs):\n \"\"\"\n Delete a scenario.\n \"\"\"\n\n _check_can_edit_scenario(scenario_id, kwargs['user_id'])\n\n user_id = kwargs.get('user_id')\n\n scenario_i = _get_scenario(scenario_id, user_id)\n\n db.DBSession.delete(scenario_i)\n db.DBSession.flush()\n return 'OK'"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nturn a dictionary into a named tuple so it can be passed into the constructor of a complex model generator.", "response": "def _get_as_obj(obj_dict, name):\n \"\"\"\n Turn a dictionary into a named tuple so it can be\n passed into the constructor of a complex model generator.\n \"\"\"\n if obj_dict.get('_sa_instance_state'):\n del obj_dict['_sa_instance_state']\n obj = namedtuple(name, tuple(obj_dict.keys()))\n for k, v in obj_dict.items():\n setattr(obj, k, v)\n log.info(\"%s = %s\",k,getattr(obj,k))\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate a brief explanation for the following Python 3 code\ndef get_resource_scenario(resource_attr_id, scenario_id, **kwargs):\n user_id = kwargs.get('user_id')\n\n _get_scenario(scenario_id, user_id)\n\n try:\n rs = db.DBSession.query(ResourceScenario).filter(ResourceScenario.resource_attr_id==resource_attr_id,\n ResourceScenario.scenario_id == scenario_id).options(joinedload_all('dataset')).options(joinedload_all('dataset.metadata')).one()\n\n return rs\n except NoResultFound:\n raise ResourceNotFoundError(\"resource scenario for %s not found in scenario %s\"%(resource_attr_id, scenario_id))", "response": "Get the resource scenario object for a given resource attribute and scenario."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef bulk_update_resourcedata(scenario_ids, resource_scenarios,**kwargs):\n user_id = kwargs.get('user_id')\n res = None\n\n res = {}\n\n net_ids = db.DBSession.query(Scenario.network_id).filter(Scenario.id.in_(scenario_ids)).all()\n\n if len(set(net_ids)) != 1:\n raise HydraError(\"Scenario IDS are not in the same network\")\n\n for scenario_id in scenario_ids:\n _check_can_edit_scenario(scenario_id, kwargs['user_id'])\n\n scen_i = _get_scenario(scenario_id, user_id)\n res[scenario_id] = []\n\n for rs in resource_scenarios:\n if rs.dataset is not None:\n updated_rs = _update_resourcescenario(scen_i, rs, user_id=user_id, source=kwargs.get('app_name'))\n res[scenario_id].append(updated_rs)\n else:\n _delete_resourcescenario(scenario_id, rs.resource_attr_id)\n\n db.DBSession.flush()\n\n return res", "response": "Bulk update the data associated with a list of scenarios."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_resourcedata(scenario_id, resource_scenarios,**kwargs):\n user_id = kwargs.get('user_id')\n res = None\n\n _check_can_edit_scenario(scenario_id, kwargs['user_id'])\n\n scen_i = 
_get_scenario(scenario_id, user_id)\n\n res = []\n for rs in resource_scenarios:\n if rs.dataset is not None:\n updated_rs = _update_resourcescenario(scen_i, rs, user_id=user_id, source=kwargs.get('app_name'))\n res.append(updated_rs)\n else:\n _delete_resourcescenario(scenario_id, rs.resource_attr_id)\n\n db.DBSession.flush()\n\n return res", "response": "Update the data associated with a resource scenario."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes the data associated with a resource in a scenario.", "response": "def delete_resource_scenario(scenario_id, resource_attr_id, quiet=False, **kwargs):\n \"\"\"\n Remove the data associated with a resource in a scenario.\n \"\"\"\n _check_can_edit_scenario(scenario_id, kwargs['user_id'])\n\n _delete_resourcescenario(scenario_id, resource_attr_id, suppress_error=quiet)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_resourcedata(scenario_id, resource_scenario, quiet = False, **kwargs):\n\n _check_can_edit_scenario(scenario_id, kwargs['user_id'])\n\n _delete_resourcescenario(scenario_id, resource_scenario.resource_attr_id, suppress_error=quiet)", "response": "Delete the data associated with a resource in a scenario."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate the value of a resource attribute in a scenario.", "response": "def _update_resourcescenario(scenario, resource_scenario, dataset=None, new=False, user_id=None, source=None):\n \"\"\"\n Insert or Update the value of a resource's attribute by first getting the\n resource, then parsing the input data, then assigning the value.\n\n returns a ResourceScenario object.\n \"\"\"\n if scenario is None:\n scenario = db.DBSession.query(Scenario).filter(Scenario.id==1).one()\n\n ra_id = resource_scenario.resource_attr_id\n\n log.debug(\"Assigning resource attribute: %s\",ra_id)\n try:\n r_scen_i = 
db.DBSession.query(ResourceScenario).filter(\n ResourceScenario.scenario_id==scenario.id,\n ResourceScenario.resource_attr_id==ra_id).one()\n except NoResultFound as e:\n log.info(\"Creating new RS\")\n r_scen_i = ResourceScenario()\n r_scen_i.resource_attr_id = resource_scenario.resource_attr_id\n r_scen_i.scenario_id = scenario.id\n r_scen_i.scenario = scenario\n\n db.DBSession.add(r_scen_i)\n\n\n if scenario.locked == 'Y':\n log.info(\"Scenario %s is locked\",scenario.id)\n return r_scen_i\n\n\n if dataset is not None:\n r_scen_i.dataset = dataset\n\n return r_scen_i\n\n dataset = resource_scenario.dataset\n\n value = dataset.parse_value()\n\n log.info(\"Assigning %s to resource attribute: %s\", value, ra_id)\n\n if value is None:\n log.info(\"Cannot set data on resource attribute %s\",ra_id)\n return None\n\n metadata = dataset.get_metadata_as_dict(source=source, user_id=user_id)\n data_unit_id = dataset.unit_id\n\n data_hash = dataset.get_hash(value, metadata)\n\n assign_value(r_scen_i,\n dataset.type.lower(),\n value,\n data_unit_id,\n dataset.name,\n metadata=metadata,\n data_hash=data_hash,\n user_id=user_id,\n source=source)\n return r_scen_i"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef assign_value(rs, data_type, val,\n unit_id, name, metadata={}, data_hash=None, user_id=None, source=None):\n \"\"\"\n Insert or update a piece of data in a scenario.\n If the dataset is being shared by other resource scenarios, a new dataset is inserted.\n If the dataset is ONLY being used by the resource scenario in question, the dataset\n is updated to avoid unnecessary duplication.\n \"\"\"\n\n log.debug(\"Assigning value %s to rs %s in scenario %s\",\n name, rs.resource_attr_id, rs.scenario_id)\n\n if rs.scenario.locked == 'Y':\n raise PermissionError(\"Cannot assign value. 
Scenario %s is locked\"\n %(rs.scenario_id))\n\n #Check if this RS is the only RS in the DB connected to this dataset.\n #If no result is found, the RS isn't in the DB yet, so the condition is false.\n update_dataset = False # Default behaviour is to create a new dataset.\n\n if rs.dataset is not None:\n\n #Has this dataset changed?\n if rs.dataset.hash == data_hash:\n log.debug(\"Dataset has not changed. Returning.\")\n return\n\n connected_rs = db.DBSession.query(ResourceScenario).filter(ResourceScenario.dataset_id==rs.dataset.id).all()\n #If there's no RS found, then the incoming rs is new, so the dataset can be altered\n #without fear of affecting something else.\n if len(connected_rs) == 0:\n #If it's 1, the RS exists in the DB, but it's the only one using this dataset or\n #The RS isn't in the DB yet and the dataset is being used by 1 other RS.\n update_dataset = True\n\n if len(connected_rs) == 1 :\n if connected_rs[0].scenario_id == rs.scenario_id and connected_rs[0].resource_attr_id==rs.resource_attr_id:\n update_dataset = True\n else:\n update_dataset=False\n\n if update_dataset is True:\n log.info(\"Updating dataset '%s'\", name)\n dataset = data.update_dataset(rs.dataset.id, name, data_type, val, unit_id, metadata, flush=False, **dict(user_id=user_id))\n rs.dataset = dataset\n rs.dataset_id = dataset.id\n log.info(\"Set RS dataset id to %s\"%dataset.id)\n else:\n log.info(\"Creating new dataset %s in scenario %s\", name, rs.scenario_id)\n dataset = data.add_dataset(data_type,\n val,\n unit_id,\n metadata=metadata,\n name=name,\n **dict(user_id=user_id))\n rs.dataset = dataset\n rs.source = source\n\n db.DBSession.flush()", "response": "Assign a value to a resource in a scenario."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_data_to_attribute(scenario_id, resource_attr_id, dataset,**kwargs):\n user_id = kwargs.get('user_id')\n\n _check_can_edit_scenario(scenario_id, 
user_id)\n\n scenario_i = _get_scenario(scenario_id, user_id)\n\n try:\n r_scen_i = db.DBSession.query(ResourceScenario).filter(\n ResourceScenario.scenario_id==scenario_id,\n ResourceScenario.resource_attr_id==resource_attr_id).one()\n log.info(\"Existing resource scenario found for %s in scenario %s\", resource_attr_id, scenario_id)\n except NoResultFound:\n log.info(\"No existing resource scenarios found for %s in scenario %s. Adding a new one.\", resource_attr_id, scenario_id)\n r_scen_i = ResourceScenario()\n r_scen_i.scenario_id = scenario_id\n r_scen_i.resource_attr_id = resource_attr_id\n scenario_i.resourcescenarios.append(r_scen_i)\n\n data_type = dataset.type.lower()\n\n value = dataset.parse_value()\n\n dataset_metadata = dataset.get_metadata_as_dict(user_id=kwargs.get('user_id'),\n source=kwargs.get('source'))\n if value is None:\n raise HydraError(\"Cannot set value to attribute. \"\n \"No value was sent with dataset %s\" % dataset.id)\n\n data_hash = dataset.get_hash(value, dataset_metadata)\n\n assign_value(r_scen_i, data_type, value, dataset.unit_id, dataset.name,\n metadata=dataset_metadata, data_hash=data_hash, user_id=user_id)\n\n db.DBSession.flush()\n return r_scen_i", "response": "Add data to a resource attribute in a scenario, creating or updating the resource scenario as needed."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets all the datasets used in the specified scenario", "response": "def get_scenario_data(scenario_id,**kwargs):\n \"\"\"\n Get all the datasets used in the specified scenario\n @returns a list of dictionaries\n \"\"\"\n user_id = kwargs.get('user_id')\n\n scenario_data = db.DBSession.query(Dataset).filter(Dataset.id==ResourceScenario.dataset_id, ResourceScenario.scenario_id==scenario_id).options(joinedload_all('metadata')).distinct().all()\n\n for sd in scenario_data:\n if sd.hidden == 'Y':\n try:\n sd.check_read_permission(user_id)\n except:\n sd.value = None\n 
sd.metadata = []\n\n db.DBSession.expunge_all()\n\n log.info(\"Retrieved %s datasets\", len(scenario_data))\n return scenario_data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting all the resources and resource scenarios for a given attribute or set of attributes.", "response": "def get_attribute_data(attr_ids, node_ids, **kwargs):\n \"\"\"\n For a given attribute or set of attributes, return all the resources and\n resource scenarios in the network\n \"\"\"\n node_attrs = db.DBSession.query(ResourceAttr).\\\n options(joinedload_all('attr')).\\\n filter(ResourceAttr.node_id.in_(node_ids),\n ResourceAttr.attr_id.in_(attr_ids)).all()\n\n ra_ids = []\n for ra in node_attrs:\n ra_ids.append(ra.id)\n\n\n resource_scenarios = db.DBSession.query(ResourceScenario).filter(ResourceScenario.resource_attr_id.in_(ra_ids)).options(joinedload('resourceattr')).options(joinedload_all('dataset.metadata')).order_by(ResourceScenario.scenario_id).all()\n\n\n for rs in resource_scenarios:\n if rs.dataset.hidden == 'Y':\n try:\n rs.dataset.check_read_permission(kwargs.get('user_id'))\n except:\n rs.dataset.value = None\n db.DBSession.expunge(rs)\n\n return node_attrs, resource_scenarios"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget all the resource scenarios for a given ref_key in a given scenario.", "response": "def get_resource_data(ref_key, ref_id, scenario_id, type_id=None, expunge_session=True, **kwargs):\n \"\"\"\n Get all the resource scenarios for a given resource\n in a given scenario. 
If type_id is specified, only\n return the resource scenarios for the attributes\n within the type.\n \"\"\"\n\n user_id = kwargs.get('user_id')\n\n resource_data_qry = db.DBSession.query(ResourceScenario).filter(\n ResourceScenario.dataset_id == Dataset.id,\n ResourceAttr.id == ResourceScenario.resource_attr_id,\n ResourceScenario.scenario_id == scenario_id,\n ResourceAttr.ref_key == ref_key,\n or_(\n ResourceAttr.network_id==ref_id,\n ResourceAttr.node_id==ref_id,\n ResourceAttr.link_id==ref_id,\n ResourceAttr.group_id==ref_id\n )).distinct().\\\n options(joinedload('resourceattr')).\\\n options(joinedload_all('dataset.metadata')).\\\n order_by(ResourceAttr.attr_is_var)\n\n if type_id is not None:\n attr_ids = []\n rs = db.DBSession.query(TypeAttr).filter(TypeAttr.type_id==type_id).all()\n for r in rs:\n attr_ids.append(r.attr_id)\n\n resource_data_qry = resource_data_qry.filter(ResourceAttr.attr_id.in_(attr_ids))\n\n resource_data = resource_data_qry.all()\n\n for rs in resource_data:\n\n #TODO: Design a mechanism to read the value of the dataset if it's stored externally\n\n if rs.dataset.hidden == 'Y':\n try:\n rs.dataset.check_read_permission(user_id)\n except:\n rs.dataset.value = None\n\n if expunge_session == True:\n db.DBSession.expunge_all()\n\n return resource_data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_attribute_datasets(attr_id, scenario_id, **kwargs):\n\n user_id = kwargs.get('user_id')\n\n scenario_i = _get_scenario(scenario_id, user_id)\n\n try:\n a = db.DBSession.query(Attr).filter(Attr.id == attr_id).one()\n except NoResultFound:\n raise HydraError(\"Attribute %s not found\"%(attr_id,))\n\n rs_qry = db.DBSession.query(ResourceScenario).filter(\n ResourceAttr.attr_id==attr_id,\n ResourceScenario.scenario_id==scenario_i.id,\n ResourceScenario.resource_attr_id==ResourceAttr.id\n ).options(joinedload_all('dataset'))\\\n .options(joinedload_all('resourceattr'))\\\n 
.options(joinedload_all('resourceattr.node'))\\\n .options(joinedload_all('resourceattr.link'))\\\n .options(joinedload_all('resourceattr.resourcegroup'))\\\n .options(joinedload_all('resourceattr.network'))\n\n resourcescenarios = rs_qry.all()\n\n json_rs = []\n #Load the metadata too\n for rs in resourcescenarios:\n rs.dataset.metadata\n tmp_rs = JSONObject(rs)\n tmp_rs.resourceattr=JSONObject(rs.resourceattr)\n if rs.resourceattr.node_id is not None:\n tmp_rs.resourceattr.node = JSONObject(rs.resourceattr.node)\n elif rs.resourceattr.link_id is not None:\n tmp_rs.resourceattr.link = JSONObject(rs.resourceattr.link)\n elif rs.resourceattr.group_id is not None:\n tmp_rs.resourceattr.resourcegroup = JSONObject(rs.resourceattr.resourcegroup)\n elif rs.resourceattr.network_id is not None:\n tmp_rs.resourceattr.network = JSONObject(rs.resourceattr.network)\n\n json_rs.append(tmp_rs)\n\n\n return json_rs", "response": "Retrieve all the datasets in a scenario for a given attribute."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_resourcegroupitems(group_id, scenario_id, **kwargs):\n\n \"\"\"\n Get all the items in a group, in a scenario. 
If group_id is None, return\n all items across all groups in the scenario.\n \"\"\"\n\n rgi_qry = db.DBSession.query(ResourceGroupItem).\\\n filter(ResourceGroupItem.scenario_id==scenario_id)\n\n if group_id is not None:\n rgi_qry = rgi_qry.filter(ResourceGroupItem.group_id==group_id)\n\n rgi = rgi_qry.all()\n\n return rgi", "response": "Get all the items in a group in a scenario."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeleting specified items in a group in a scenario.", "response": "def delete_resourcegroupitems(scenario_id, item_ids, **kwargs):\n \"\"\"\n Delete specified items in a group, in a scenario.\n \"\"\"\n user_id = int(kwargs.get('user_id'))\n #check the scenario exists\n _get_scenario(scenario_id, user_id)\n for item_id in item_ids:\n rgi = db.DBSession.query(ResourceGroupItem).\\\n filter(ResourceGroupItem.id==item_id).one()\n db.DBSession.delete(rgi)\n\n db.DBSession.flush()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndeletes all items in a group in a scenario.", "response": "def empty_group(group_id, scenario_id, **kwargs):\n \"\"\"\n Delete all items in a group, in a scenario.\n \"\"\"\n user_id = int(kwargs.get('user_id'))\n #check the scenario exists\n _get_scenario(scenario_id, user_id)\n\n db.DBSession.query(ResourceGroupItem).\\\n filter(ResourceGroupItem.group_id==group_id).\\\n filter(ResourceGroupItem.scenario_id==scenario_id).\\\n delete()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_resourcegroupitems(scenario_id, items, scenario=None, **kwargs):\n\n \"\"\"\n Add items to a group, in a scenario.\n \"\"\"\n user_id = int(kwargs.get('user_id'))\n\n if scenario is None:\n scenario = _get_scenario(scenario_id, user_id)\n\n _check_network_ownership(scenario.network_id, user_id)\n\n newitems = []\n for group_item in items:\n group_item_i = 
_add_resourcegroupitem(group_item, scenario.id)\n newitems.append(group_item_i)\n\n db.DBSession.flush()\n\n return newitems", "response": "Add items to a group in a scenario."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _add_resourcegroupitem(group_item, scenario_id):\n if group_item.id and group_item.id > 0:\n try:\n group_item_i = db.DBSession.query(ResourceGroupItem).filter(ResourceGroupItem.id == group_item.id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"ResourceGroupItem %s not found\" % (group_item.id))\n\n else:\n group_item_i = ResourceGroupItem()\n group_item_i.group_id = group_item.group_id\n if scenario_id is not None:\n group_item_i.scenario_id = scenario_id\n\n db.DBSession.add(group_item_i)\n\n ref_key = group_item.ref_key\n group_item_i.ref_key = ref_key\n if ref_key == 'NODE':\n group_item_i.node_id =group_item.ref_id if group_item.ref_id else group_item.node_id\n elif ref_key == 'LINK':\n group_item_i.link_id =group_item.ref_id if group_item.ref_id else group_item.link_id\n elif ref_key == 'GROUP':\n group_item_i.subgroup_id = group_item.ref_id if group_item.ref_id else group_item.subgroup_id\n\n return group_item_i", "response": "Add a resource group item to the database."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate the value of a resource attribute mapping between two scenarios.", "response": "def update_value_from_mapping(source_resource_attr_id, target_resource_attr_id, source_scenario_id, target_scenario_id, **kwargs):\n \"\"\"\n Using a resource attribute mapping, take the value from the source and apply\n it to the target. 
Both source and target scenarios must be specified (and therefore\n must exist).\n \"\"\"\n user_id = int(kwargs.get('user_id'))\n\n rm = aliased(ResourceAttrMap, name='rm')\n #Check the mapping exists.\n mapping = db.DBSession.query(rm).filter(\n or_(\n and_(\n rm.resource_attr_id_a == source_resource_attr_id,\n rm.resource_attr_id_b == target_resource_attr_id\n ),\n and_(\n rm.resource_attr_id_a == target_resource_attr_id,\n rm.resource_attr_id_b == source_resource_attr_id\n )\n )\n ).first()\n\n if mapping is None:\n raise ResourceNotFoundError(\"Mapping between %s and %s not found\"%\n (source_resource_attr_id,\n target_resource_attr_id))\n\n #check scenarios exist\n s1 = _get_scenario(source_scenario_id, user_id)\n s2 = _get_scenario(target_scenario_id, user_id)\n\n rs = aliased(ResourceScenario, name='rs')\n rs1 = db.DBSession.query(rs).filter(rs.resource_attr_id == source_resource_attr_id,\n rs.scenario_id == source_scenario_id).first()\n rs2 = db.DBSession.query(rs).filter(rs.resource_attr_id == target_resource_attr_id,\n rs.scenario_id == target_scenario_id).first()\n\n #3 possibilities worth considering:\n #1: Both RS exist, so update the target RS\n #2: Target RS does not exist, so create it with the dataset from RS1\n #3: Source RS does not exist, so it must be removed from the target scenario if it exists\n return_value = None #Either return None or return a new or updated resource scenario\n if rs1 is not None:\n if rs2 is not None:\n log.info(\"Destination Resource Scenario exists. Updating dataset ID\")\n rs2.dataset_id = rs1.dataset_id\n else:\n log.info(\"Destination has no data, so making a new Resource Scenario\")\n rs2 = ResourceScenario(resource_attr_id=target_resource_attr_id, scenario_id=target_scenario_id, dataset_id=rs1.dataset_id)\n db.DBSession.add(rs2)\n db.DBSession.flush()\n return_value = rs2\n else:\n log.info(\"Source Resource Scenario does not exist. 
Deleting destination Resource Scenario\")\n if rs2 is not None:\n db.DBSession.delete(rs2)\n\n db.DBSession.flush()\n return return_value"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting all available plugins and their details.", "response": "def get_plugins(**kwargs):\n \"\"\"\n Get all available plugins\n \"\"\"\n \n plugins = []\n plugin_paths = []\n \n #Look in directory or set of directories for\n #plugins\n \n base_plugin_dir = config.get('plugin', 'default_directory')\n plugin_xsd_path = config.get('plugin', 'plugin_xsd_path')\n\n base_plugin_dir_contents = os.listdir(base_plugin_dir)\n for directory in base_plugin_dir_contents:\n #ignore hidden files\n if directory[0] == '.' or directory == 'xml':\n continue\n\n #Is this a file or a directory? If it's a directory, it's a plugin.\n path = os.path.join(base_plugin_dir, directory)\n if os.path.isdir(path):\n plugin_paths.append(path)\n \n\n #For each plugin, get its details (an XML string)\n \n #Retrieve the xml schema for validating the XML to make sure\n #what is being provided to the UI is correct.\n xmlschema_doc = etree.parse(plugin_xsd_path)\n \n xmlschema = etree.XMLSchema(xmlschema_doc)\n \n #Get the xml description file from the plugin directory. If there\n #is no xml file, the plugin is unusable.\n for plugin_dir in plugin_paths:\n full_plugin_path = os.path.join(plugin_dir, 'trunk')\n \n dir_contents = os.listdir(full_plugin_path)\n #look for a plugin.xml file in the plugin directory \n for file_name in dir_contents:\n file_path = os.path.join(full_plugin_path, file_name)\n if file_name == 'plugin.xml':\n \n #validate the xml using the xml schema for defining\n #plugin details\n try:\n y = open(file_path, 'r')\n \n xml_tree = etree.parse(y)\n\n xmlschema.assertValid(xml_tree)\n \n plugins.append(etree.tostring(xml_tree))\n except Exception as e:\n log.critical(\"Schema %s did not validate! 
(error was %s)\"%(file_name, e))\n\n break\n else:\n log.warning(\"No xml plugin details found for %s. Ignoring\", plugin_dir)\n\n return plugins"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run_plugin(plugin,**kwargs):\n \n args = [sys.executable]\n\n #Get plugin executable\n home = os.path.expanduser('~')\n path_to_plugin = os.path.join(home, 'svn/HYDRA/HydraPlugins', plugin.location)\n args.append(path_to_plugin)\n \n #Parse plugin arguments into a string\n plugin_params = \" \"\n for p in plugin.params:\n param = \"--%s=%s \"%(p.name, p.value)\n args.append(\"--%s\"%p.name)\n args.append(p.value)\n plugin_params = plugin_params + param\n\n log_dir = config.get('plugin', 'result_file')\n log_file = os.path.join(home, log_dir, plugin.name)\n\n #this reads all the logs so far. We're not interested in them\n #Everything after this is new content to the file\n try:\n f = open(log_file, 'r')\n f.read()\n except:\n f = open(log_file, 'w')\n f.close()\n f = open(log_file, 'r')\n\n pid = subprocess.Popen(args).pid\n #run plugin\n #os.system(\"%s %s\"%(path_to_plugin, plugin_params))\n \n log.info(\"Process started! 
PID: %s\", pid)\n\n return str(pid)", "response": "Run a plugin and return the pid"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef load_config():\n global localfiles\n global localfile\n global repofile\n global repofiles\n global userfile\n global userfiles\n global sysfile\n global sysfiles\n global CONFIG\n logging.basicConfig(level='INFO')\n\n config = ConfigParser.ConfigParser(allow_no_value=True)\n\n modulepath = os.path.dirname(os.path.abspath(__file__))\n\n localfile = os.path.join(os.getcwd(), 'hydra.ini')\n localfiles = glob.glob(localfile)\n\n repofile = os.path.join(modulepath, 'hydra.ini')\n repofiles = glob.glob(repofile)\n\n if os.name == 'nt':\n import winpaths\n userfile = os.path.join(os.path.expanduser('~'),'AppData','Local','hydra.ini')\n userfiles = glob.glob(userfile)\n\n sysfile = os.path.join(winpaths.get_common_documents(), 'Hydra','hydra.ini')\n sysfiles = glob.glob(sysfile)\n else:\n userfile = os.path.join(os.path.expanduser('~'), '.hydra', 'hydra.ini')\n userfiles = glob.glob(userfile)\n\n sysfile = os.path.join('etc','hydra','hydra.ini')\n sysfiles = glob.glob(sysfile)\n\n\n for ini_file in repofiles:\n logging.debug(\"Repofile: %s\"%ini_file)\n config.read(ini_file)\n for ini_file in sysfiles:\n logging.debug(\"Sysfile: %s\"%ini_file)\n config.read(ini_file)\n for ini_file in userfiles:\n logging.debug(\"Userfile: %s\"%ini_file)\n config.read(ini_file)\n for ini_file in localfiles:\n logging.info(\"Localfile: %s\"%ini_file)\n config.read(ini_file)\n\n env_value = os.environ.get('HYDRA_CONFIG')\n if env_value is not None:\n if os.path.exists(env_value):\n config.read(ini_file)\n else:\n logging.warning('HYDRA_CONFIG set as %s but file does not exist', env_value)\n\n\n if os.name == 'nt':\n set_windows_env_variables(config)\n\n try:\n home_dir = config.get('DEFAULT', 'home_dir')\n except:\n home_dir = os.environ.get('HYDRA_HOME_DIR', '~')\n config.set('DEFAULT', 'home_dir', 
os.path.expanduser(home_dir))\n\n try:\n hydra_base = config.get('DEFAULT', 'hydra_base_dir')\n except:\n hydra_base = os.environ.get('HYDRA_BASE_DIR', modulepath)\n config.set('DEFAULT', 'hydra_base_dir', os.path.expanduser(hydra_base))\n\n read_values_from_environment(config, 'mysqld', 'server_name')\n\n\n CONFIG = config\n\n return config", "response": "Load Hydra config files in order of increasing precedence (repo, system, user, then local), with an optional HYDRA_CONFIG override."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_mysql_db(db_url):\n\n #Remove trailing whitespace and forward slashes\n db_url = db_url.strip().strip('/')\n\n\n #Check this is a mysql URL\n if db_url.find('mysql') >= 0:\n\n\n #Get the DB name from config and check if it's in the URL\n db_name = config.get('mysqld', 'db_name', 'hydradb')\n if db_url.find(db_name) >= 0:\n no_db_url = db_url.rsplit(\"/\", 1)[0]\n else:\n #Check that there is a hostname specified, as we'll be using the '@' symbol soon.\n if db_url.find('@') == -1:\n raise HydraError(\"No Hostname specified in DB url\")\n\n #Check if there's a DB name specified that's different to the one in config.\n host_and_db_name = db_url.split('@')[1]\n if host_and_db_name.find('/') >= 0:\n no_db_url, db_name = db_url.rsplit(\"/\", 1)\n else:\n no_db_url = db_url\n db_url = no_db_url + \"/\" + db_name\n\n db_url = \"{}?charset=utf8&use_unicode=1\".format(db_url)\n\n if config.get('mysqld', 'auto_create', 'Y') == 'Y':\n tmp_engine = create_engine(no_db_url)\n log.warning(\"Creating database {0} as it does not exist.\".format(db_name))\n tmp_engine.execute(\"CREATE DATABASE IF NOT EXISTS {0}\".format(db_name))\n\n return db_url", "response": "Create a MySQL database if it doesn't exist."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_project(project,**kwargs):\n user_id = kwargs.get('user_id')\n\n existing_proj = 
get_project_by_name(project.name,user_id=user_id)\n\n if len(existing_proj) > 0:\n raise HydraError(\"A Project with the name \\\"%s\\\" already exists\"%(project.name,))\n\n #check_perm(user_id, 'add_project')\n proj_i = Project()\n proj_i.name = project.name\n proj_i.description = project.description\n proj_i.created_by = user_id\n\n attr_map = hdb.add_resource_attributes(proj_i, project.attributes)\n db.DBSession.flush() #Needed to get the resource attr's ID\n proj_data = _add_project_attribute_data(proj_i, attr_map, project.attribute_data)\n proj_i.attribute_data = proj_data\n\n proj_i.set_owner(user_id)\n\n db.DBSession.add(proj_i)\n db.DBSession.flush()\n\n return proj_i", "response": "Add a new project returns a project object"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate a project Returns a project", "response": "def update_project(project,**kwargs):\n \"\"\"\n Update a project\n returns a project complexmodel\n \"\"\"\n\n user_id = kwargs.get('user_id')\n #check_perm(user_id, 'update_project')\n proj_i = _get_project(project.id)\n\n proj_i.check_write_permission(user_id)\n\n proj_i.name = project.name\n proj_i.description = project.description\n\n attr_map = hdb.add_resource_attributes(proj_i, project.attributes)\n proj_data = _add_project_attribute_data(proj_i, attr_map, project.attribute_data)\n proj_i.attribute_data = proj_data\n db.DBSession.flush()\n\n return proj_i"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_project(project_id, include_deleted_networks=False, **kwargs):\n user_id = kwargs.get('user_id')\n proj_i = _get_project(project_id)\n\n #lazy load owners\n proj_i.owners\n\n proj_i.check_read_permission(user_id)\n\n proj_j = JSONObject(proj_i)\n\n proj_j.networks = []\n for net_i in proj_i.networks:\n #lazy load owners\n net_i.owners\n net_i.scenarios\n\n if include_deleted_networks==False and 
net_i.status.lower() == 'x':\n continue\n\n can_read_network = net_i.check_read_permission(user_id, do_raise=False)\n if can_read_network is False:\n continue\n\n net_j = JSONObject(net_i)\n proj_j.networks.append(net_j)\n\n return proj_j", "response": "get a project complexmodel"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_project_by_network_id(network_id,**kwargs):\n user_id = kwargs.get('user_id')\n\n projects_i = db.DBSession.query(Project).join(ProjectOwner).join(Network, Project.id==Network.project_id).filter(\n Network.id==network_id,\n ProjectOwner.user_id==user_id).order_by('name').all()\n\n ret_project = None\n for project_i in projects_i:\n try:\n project_i.check_read_permission(user_id)\n ret_project = project_i\n except:\n log.info(\"Can't return project %s. User %s does not have permission to read it.\", project_i.id, user_id)\n return ret_project", "response": "get a project by a network_id"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_project_by_name(project_name,**kwargs):\n user_id = kwargs.get('user_id')\n\n projects_i = db.DBSession.query(Project).join(ProjectOwner).filter(\n Project.name==project_name,\n ProjectOwner.user_id==user_id).order_by('name').all()\n\n ret_projects = []\n for project_i in projects_i:\n try:\n project_i.check_read_permission(user_id)\n ret_projects.append(project_i)\n except:\n log.info(\"Can't return project %s. 
User %s does not have permission to read it.\", project_i.id, user_id)\n\n return ret_projects", "response": "get a project by name"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts SQLAlchemy object to a named tuple.", "response": "def to_named_tuple(obj, visited_children=None, back_relationships=None, levels=None, ignore=[], extras={}):\n \"\"\"\n Altered from an example found on stackoverflow\n http://stackoverflow.com/questions/23554119/convert-sqlalchemy-orm-result-to-dict\n \"\"\"\n\n if visited_children is None:\n visited_children = []\n\n if back_relationships is None:\n back_relationships = []\n\n serialized_data = {c.key: getattr(obj, c.key) for c in obj.__table__.columns}\n\n\n #Any other non-column data to include in the keyed tuple\n for k, v in extras.items():\n serialized_data[k] = v\n\n relationships = class_mapper(obj.__class__).relationships\n\n #Set the attributes to 'None' first, so the attributes are there, even if they don't\n #get filled in:\n for name, relation in relationships.items():\n if relation.uselist:\n serialized_data[name] = tuple([])\n else:\n serialized_data[name] = None\n\n\n visitable_relationships = [(name, rel) for name, rel in relationships.items() if name not in back_relationships]\n\n if levels is not None and levels > 0:\n for name, relation in visitable_relationships:\n\n levels = levels - 1\n\n if name in ignore:\n continue\n\n if relation.backref:\n back_relationships.append(relation.backref)\n\n relationship_children = getattr(obj, name)\n\n if relationship_children is not None:\n if relation.uselist:\n children = []\n for child in [c for c in relationship_children if c not in visited_children]:\n visited_children.append(child)\n children.append(to_named_tuple(child, visited_children, back_relationships, ignore=ignore, levels=levels))\n serialized_data[name] = tuple(children)\n else:\n serialized_data[name] = to_named_tuple(relationship_children, visited_children, 
back_relationships, ignore=ignore, levels=levels)\n\n vals = []\n cols = []\n for k, v in serialized_data.items():\n vals.append(k)\n cols.append(v)\n\n result = KeyedTuple(cols, vals)\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting all the projects owned by the specified user.", "response": "def get_projects(uid, include_shared_projects=True, projects_ids_list_filter=None, **kwargs):\n \"\"\"\n Get all the projects owned by the specified user.\n These include projects created by the user, but also ones shared with the user.\n For shared projects, only include networks in those projects which are accessible to the user.\n\n the include_shared_projects flag indicates whether to include projects which have been shared\n with the user, or to only return projects created directly by this user.\n \"\"\"\n req_user_id = kwargs.get('user_id')\n\n ##Don't load the project's networks. Load them separately, as the networks\n #must be checked individually for ownership\n projects_qry = db.DBSession.query(Project)\n\n log.info(\"Getting projects for %s\", uid)\n\n if include_shared_projects is True:\n projects_qry = projects_qry.join(ProjectOwner).filter(Project.status=='A',\n or_(ProjectOwner.user_id==uid,\n Project.created_by==uid))\n else:\n projects_qry = projects_qry.join(ProjectOwner).filter(Project.created_by==uid)\n\n if projects_ids_list_filter is not None:\n # Filtering the search of project id\n if isinstance(projects_ids_list_filter, str):\n # Trying to read a csv string\n projects_ids_list_filter = eval(projects_ids_list_filter)\n if type(projects_ids_list_filter) is int:\n projects_qry = projects_qry.filter(Project.id==projects_ids_list_filter)\n else:\n projects_qry = projects_qry.filter(Project.id.in_(projects_ids_list_filter))\n\n\n\n projects_qry = projects_qry.options(noload('networks')).order_by('id')\n\n projects_i = projects_qry.all()\n\n log.info(\"Project query done for user %s. 
%s projects found\", uid, len(projects_i))\n\n user = db.DBSession.query(User).filter(User.id==req_user_id).one()\n isadmin = user.is_admin()\n\n #Load each\n projects_j = []\n for project_i in projects_i:\n #Ensure the requesting user is allowed to see the project\n project_i.check_read_permission(req_user_id)\n #lazy load owners\n project_i.owners\n\n network_qry = db.DBSession.query(Network)\\\n .filter(Network.project_id==project_i.id,\\\n Network.status=='A')\n if not isadmin:\n network_qry.outerjoin(NetworkOwner)\\\n .filter(or_(\n and_(NetworkOwner.user_id != None,\n NetworkOwner.view == 'Y'),\n Network.created_by == uid\n ))\n\n networks_i = network_qry.all()\n\n networks_j = []\n for network_i in networks_i:\n network_i.owners\n net_j = JSONObject(network_i)\n if net_j.layout is not None:\n net_j.layout = JSONObject(net_j.layout)\n else:\n net_j.layout = JSONObject({})\n networks_j.append(net_j)\n\n project_j = JSONObject(project_i)\n project_j.networks = networks_j\n projects_j.append(project_j)\n\n log.info(\"Networks loaded projects for user %s\", uid)\n\n return projects_j"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the status of a project to X", "response": "def set_project_status(project_id, status, **kwargs):\n \"\"\"\n Set the status of a project to 'X'\n \"\"\"\n user_id = kwargs.get('user_id')\n #check_perm(user_id, 'delete_project')\n project = _get_project(project_id)\n project.check_write_permission(user_id)\n project.status = status\n db.DBSession.flush()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the status of a project to 'X'", "response": "def delete_project(project_id,**kwargs):\n \"\"\"\n Set the status of a project to 'X'\n \"\"\"\n user_id = kwargs.get('user_id')\n #check_perm(user_id, 'delete_project')\n project = _get_project(project_id)\n project.check_write_permission(user_id)\n db.DBSession.delete(project)\n db.DBSession.flush()\n\n return 'OK'"} 
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget all networks in a project Returns an array of network objects.", "response": "def get_networks(project_id, include_data='N', **kwargs):\n \"\"\"\n Get all networks in a project\n Returns an array of network objects.\n \"\"\"\n log.info(\"Getting networks for project %s\", project_id)\n user_id = kwargs.get('user_id')\n project = _get_project(project_id)\n project.check_read_permission(user_id)\n\n rs = db.DBSession.query(Network.id, Network.status).filter(Network.project_id==project_id).all()\n networks=[]\n for r in rs:\n if r.status != 'A':\n continue\n try:\n net = network.get_network(r.id, summary=True, include_data=include_data, **kwargs)\n log.info(\"Network %s retrieved\", net.name)\n networks.append(net)\n except PermissionError:\n log.info(\"Not returning network %s as user %s does not have \"\n \"permission to read it.\"%(r.id, user_id))\n\n return networks"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the project that a network is in", "response": "def get_network_project(network_id, **kwargs):\n \"\"\"\n get the project that a network is in\n \"\"\"\n\n net_proj = db.DBSession.query(Project).join(Network, and_(Project.id==Network.project_id, Network.id==network_id)).first()\n\n if net_proj is None:\n raise HydraError(\"Network %s not found\"% network_id)\n\n return net_proj"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsave a reference to the types used for this resource.", "response": "def add_resource_types(resource_i, types):\n \"\"\"\n Save a reference to the types used for this resource.\n\n @returns a list of type_ids representing the type ids\n on the resource.\n\n \"\"\"\n if types is None:\n return []\n\n existing_type_ids = []\n if resource_i.types:\n for t in resource_i.types:\n existing_type_ids.append(t.type_id)\n\n new_type_ids = []\n for templatetype in types:\n\n if templatetype.id 
in existing_type_ids:\n continue\n\n rt_i = ResourceType()\n rt_i.type_id = templatetype.id\n rt_i.ref_key = resource_i.ref_key\n if resource_i.ref_key == 'NODE':\n rt_i.node_id = resource_i.id\n elif resource_i.ref_key == 'LINK':\n rt_i.link_id = resource_i.id\n elif resource_i.ref_key == 'GROUP':\n rt_i.group_id = resource_i.id\n resource_i.types.append(rt_i)\n new_type_ids.append(templatetype.id)\n\n return new_type_ids"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_default_users_and_perms():\n\n # perms = db.DBSession.query(Perm).all()\n # if len(perms) > 0:\n # return\n\n default_perms = ( (\"add_user\", \"Add User\"),\n (\"edit_user\", \"Edit User\"),\n (\"add_role\", \"Add Role\"),\n (\"edit_role\", \"Edit Role\"),\n (\"add_perm\", \"Add Permission\"),\n (\"edit_perm\", \"Edit Permission\"),\n\n (\"add_network\", \"Add network\"),\n (\"edit_network\", \"Edit network\"),\n (\"delete_network\", \"Delete network\"),\n (\"share_network\", \"Share network\"),\n (\"edit_topology\", \"Edit network topology\"),\n\n (\"add_project\", \"Add Project\"),\n (\"edit_project\", \"Edit Project\"),\n (\"delete_project\", \"Delete Project\"),\n (\"share_project\", \"Share Project\"),\n\n (\"edit_data\", \"Edit network data\"),\n (\"view_data\", \"View network data\"),\n\n (\"add_template\", \"Add Template\"),\n (\"edit_template\", \"Edit Template\"),\n\n (\"add_dimension\", \"Add Dimension\"),\n (\"update_dimension\", \"Update Dimension\"),\n (\"delete_dimension\", \"Delete Dimension\"),\n\n (\"add_unit\", \"Add Unit\"),\n (\"update_unit\", \"Update Unit\"),\n (\"delete_unit\", \"Delete Unit\")\n\n\n )\n\n default_roles = (\n (\"admin\", \"Administrator\"),\n (\"dev\", \"Developer\"),\n (\"modeller\", \"Modeller / Analyst\"),\n (\"manager\", \"Manager\"),\n (\"grad\", \"Graduate\"),\n (\"developer\", \"Developer\"),\n (\"decision\", \"Decision Maker\"),\n )\n\n roleperms = (\n # Admin 
permissions\n ('admin', \"add_user\"),\n ('admin', \"edit_user\"),\n ('admin', \"add_role\"),\n ('admin', \"edit_role\"),\n ('admin', \"add_perm\"),\n ('admin', \"edit_perm\"),\n ('admin', \"add_network\"),\n ('admin', \"edit_network\"),\n ('admin', \"delete_network\"),\n ('admin', \"share_network\"),\n ('admin', \"add_project\"),\n ('admin', \"edit_project\"),\n ('admin', \"delete_project\"),\n ('admin', \"share_project\"),\n ('admin', \"edit_topology\"),\n ('admin', \"edit_data\"),\n ('admin', \"view_data\"),\n ('admin', \"add_template\"),\n ('admin', \"edit_template\"),\n\n ('admin', \"add_dimension\"),\n ('admin', \"update_dimension\"),\n ('admin', \"delete_dimension\"),\n\n ('admin', \"add_unit\"),\n ('admin', \"update_unit\"),\n ('admin', \"delete_unit\"),\n\n # Developer permissions\n (\"developer\", \"add_network\"),\n (\"developer\", \"edit_network\"),\n (\"developer\", \"delete_network\"),\n (\"developer\", \"share_network\"),\n (\"developer\", \"add_project\"),\n (\"developer\", \"edit_project\"),\n (\"developer\", \"delete_project\"),\n (\"developer\", \"share_project\"),\n (\"developer\", \"edit_topology\"),\n (\"developer\", \"edit_data\"),\n (\"developer\", \"view_data\"),\n (\"developer\", \"add_template\"),\n (\"developer\", \"edit_template\"),\n\n ('developer', \"add_dimension\"),\n ('developer', \"update_dimension\"),\n ('developer', \"delete_dimension\"),\n\n ('developer', \"add_unit\"),\n ('developer', \"update_unit\"),\n ('developer', \"delete_unit\"),\n\n # modeller permissions\n (\"modeller\", \"add_network\"),\n (\"modeller\", \"edit_network\"),\n (\"modeller\", \"delete_network\"),\n (\"modeller\", \"share_network\"),\n (\"modeller\", \"edit_topology\"),\n (\"modeller\", \"add_project\"),\n (\"modeller\", \"edit_project\"),\n (\"modeller\", \"delete_project\"),\n (\"modeller\", \"share_project\"),\n (\"modeller\", \"edit_data\"),\n (\"modeller\", \"view_data\"),\n\n # Manager permissions\n (\"manager\", \"edit_data\"),\n (\"manager\", 
\"view_data\"),\n )\n\n # Map for code to ID\n id_maps_dict = {\n \"perm\": {},\n \"role\": {}\n }\n # Adding perms\n perm_dict = {}\n for code, name in default_perms:\n perm = Perm(code=code, name=name)\n perm_dict[code] = perm\n perms_by_name = db.DBSession.query(Perm).filter(Perm.code==code).all()\n if len(perms_by_name)==0:\n # Adding perm\n log.debug(\"# Adding PERM {}\".format(code))\n db.DBSession.add(perm)\n db.DBSession.flush()\n\n perm_by_name = db.DBSession.query(Perm).filter(Perm.code==code).one()\n id_maps_dict[\"perm\"][code] = perm_by_name.id\n\n # Adding roles\n role_dict = {}\n for code, name in default_roles:\n role = Role(code=code, name=name)\n role_dict[code] = role\n roles_by_name = db.DBSession.query(Role).filter(Role.code==code).all()\n if len(roles_by_name)==0:\n # Adding perm\n log.debug(\"# Adding ROLE {}\".format(code))\n db.DBSession.add(role)\n db.DBSession.flush()\n\n role_by_name = db.DBSession.query(Role).filter(Role.code==code).one()\n id_maps_dict[\"role\"][code] = role_by_name.id\n\n # Adding connections\n for role_code, perm_code in roleperms:\n #log.info(\"Link Role:{}({}) <---> Perm:{}({})\".format(role_code, id_maps_dict[\"role\"][role_code], perm_code, id_maps_dict[\"perm\"][perm_code]))\n\n links_found = db.DBSession.query(RolePerm).filter(RolePerm.role_id==id_maps_dict[\"role\"][role_code]).filter(RolePerm.perm_id==id_maps_dict[\"perm\"][perm_code]).all()\n if len(links_found)==0:\n # Adding link\n log.debug(\"# Adding link\")\n roleperm = RolePerm()\n # roleperm.role = role_dict[role_code]\n # roleperm.perm = perm_dict[perm_code]\n roleperm.role_id = id_maps_dict[\"role\"][role_code]\n roleperm.perm_id = id_maps_dict[\"perm\"][perm_code]\n db.DBSession.add(roleperm)\n db.DBSession.flush()\n\n db.DBSession.flush()", "response": "Creates the default roles and perms for the current user and creates the default roles and perm."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it 
does\ndef create_default_units_and_dimensions():\n default_units_file_location = os.path.realpath(\\\n os.path.join(os.path.dirname(os.path.realpath(__file__)),\n '../',\n 'static',\n 'default_units_and_dimensions.json'))\n\n d=None\n\n with open(default_units_file_location) as json_data:\n d = json.load(json_data)\n json_data.close()\n\n for json_dimension in d[\"dimension\"]:\n new_dimension = None\n dimension_name = get_utf8_encoded_string(json_dimension[\"name\"])\n\n db_dimensions_by_name = db.DBSession.query(Dimension).filter(Dimension.name==dimension_name).all()\n\n if len(db_dimensions_by_name) == 0:\n # Adding the dimension\n log.debug(\"Adding Dimension `{}`\".format(dimension_name))\n new_dimension = Dimension()\n if \"id\" in json_dimension:\n # If ID is specified\n new_dimension.id = json_dimension[\"id\"]\n\n new_dimension.name = dimension_name\n\n db.DBSession.add(new_dimension)\n db.DBSession.flush()\n\n # Get the dimension by name\n new_dimension = get_dimension_from_db_by_name(dimension_name)\n\n for json_unit in json_dimension[\"unit\"]:\n db_units_by_name = db.DBSession.query(Unit).filter(Unit.abbreviation==get_utf8_encoded_string(json_unit['abbr'])).all()\n if len(db_units_by_name) == 0:\n # Adding the unit\n log.debug(\"Adding Unit %s in %s\",json_unit['abbr'], json_dimension[\"name\"])\n new_unit = Unit()\n if \"id\" in json_unit:\n new_unit.id = json_unit[\"id\"]\n new_unit.dimension_id = new_dimension.id\n new_unit.name = get_utf8_encoded_string(json_unit['name'])\n new_unit.abbreviation = get_utf8_encoded_string(json_unit['abbr'])\n new_unit.lf = get_utf8_encoded_string(json_unit['lf'])\n new_unit.cf = get_utf8_encoded_string(json_unit['cf'])\n if \"description\" in json_unit:\n # If Description is specified\n new_unit.description = get_utf8_encoded_string(json_unit[\"description\"])\n\n # Save on DB\n db.DBSession.add(new_unit)\n db.DBSession.flush()\n else:\n #log.critical(\"UNIT {}.{} 
EXISTANT\".format(dimension_name,json_unit['abbr']))\n pass\n try:\n # Needed for tests: on HWI this fails, so we catch the exception and continue\n db.DBSession.commit()\n except Exception as e:\n # Needed for HWI\n pass\n return", "response": "Creates the default units and dimensions in the database, loading them from the default_units_and_dimensions.json file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_dimension_from_db_by_name(dimension_name):\n try:\n dimension = db.DBSession.query(Dimension).filter(Dimension.name==dimension_name).one()\n return JSONObject(dimension)\n except NoResultFound:\n raise ResourceNotFoundError(\"Dimension %s not found\"%(dimension_name))", "response": "Gets a dimension from the DB table."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_rules(scenario_id, **kwargs):\n rules = db.DBSession.query(Rule).filter(Rule.scenario_id==scenario_id, Rule.status=='A').all()\n\n return rules", "response": "Get all the rules for a given scenario."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_attribute_by_id(attr_id, **kwargs):\n\n try:\n attr_i = db.DBSession.query(Attr).filter(Attr.id==attr_id).one()\n return attr_i\n except NoResultFound:\n return None", "response": "Get a specific attribute by its ID."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_template_attributes(template_id, **kwargs):\n\n try:\n attrs_i = db.DBSession.query(Attr).filter(TemplateType.template_id==template_id).filter(TypeAttr.type_id==TemplateType.id).filter(Attr.id==TypeAttr.attr_id).all()\n log.debug(attrs_i)\n return attrs_i\n except NoResultFound:\n return None", "response": "Get all the attributes associated with a given template."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
get_attribute_by_name_and_dimension(name, dimension_id=None,**kwargs):\n try:\n attr_i = db.DBSession.query(Attr).filter(and_(Attr.name==name, Attr.dimension_id==dimension_id)).one()\n log.debug(\"Attribute retrieved\")\n return attr_i\n except NoResultFound:\n return None", "response": "Get a specific attribute by its name and dimension_id."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a generic attribute to the resource attribute list and return a dictionary of the new attribute.", "response": "def add_attribute(attr,**kwargs):\n \"\"\"\n Add a generic attribute, which can then be used in creating\n a resource attribute, and put into a type.\n\n .. code-block:: python\n\n (Attr){\n id = 1020\n name = \"Test Attr\"\n dimension_id = 123\n }\n\n \"\"\"\n log.debug(\"Adding attribute: %s\", attr.name)\n\n try:\n attr_i = db.DBSession.query(Attr).filter(Attr.name == attr.name,\n Attr.dimension_id == attr.dimension_id).one()\n log.info(\"Attr already exists\")\n except NoResultFound:\n attr_i = Attr(name = attr.name, dimension_id = attr.dimension_id)\n attr_i.description = attr.description\n db.DBSession.add(attr_i)\n db.DBSession.flush()\n log.info(\"New attr added\")\n return JSONObject(attr_i)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_attribute(attr,**kwargs):\n\n log.debug(\"Updating attribute: %s\", attr.name)\n attr_i = _get_attr(attr.id)\n attr_i.name = attr.name\n attr_i.dimension_id = attr.dimension_id\n attr_i.description = attr.description\n\n #Make sure an update hasn't caused an inconsistency.\n #check_sion(attr_i.id)\n\n db.DBSession.flush()\n return JSONObject(attr_i)", "response": "Update an attribute in a resource attribute list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a list of generic attributes to the internal list of resource attributes and returns a list of resource attributes.", 
"response": "def add_attributes(attrs,**kwargs):\n \"\"\"\n Add a list of generic attributes, which can then be used in creating\n a resource attribute, and put into a type.\n\n .. code-block:: python\n\n (Attr){\n id = 1020\n name = \"Test Attr\"\n dimen = \"very big\"\n }\n\n \"\"\"\n\n #Check to see if any of the attributes being added are already there.\n #If they are there already, don't add a new one. If an attribute\n #with the same name is there already but with a different dimension,\n #add a new attribute.\n\n # All existing attributes\n all_attrs = db.DBSession.query(Attr).all()\n attr_dict = {}\n for attr in all_attrs:\n attr_dict[(attr.name.lower(), attr.dimension_id)] = JSONObject(attr)\n\n attrs_to_add = []\n existing_attrs = []\n for potential_new_attr in attrs:\n if potential_new_attr is not None:\n # If the attribute is None we cannot manage it\n log.debug(\"Adding attribute: %s\", potential_new_attr)\n\n if attr_dict.get((potential_new_attr.name.lower(), potential_new_attr.dimension_id)) is None:\n attrs_to_add.append(JSONObject(potential_new_attr))\n else:\n existing_attrs.append(attr_dict.get((potential_new_attr.name.lower(), potential_new_attr.dimension_id)))\n\n new_attrs = []\n for attr in attrs_to_add:\n attr_i = Attr()\n attr_i.name = attr.name\n attr_i.dimension_id = attr.dimension_id\n attr_i.description = attr.description\n db.DBSession.add(attr_i)\n new_attrs.append(attr_i)\n\n db.DBSession.flush()\n \n\n new_attrs = new_attrs + existing_attrs\n\n return [JSONObject(a) for a in new_attrs]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets all attributes in the hierarchy", "response": "def get_attributes(**kwargs):\n \"\"\"\n Get all attributes\n \"\"\"\n\n attrs = db.DBSession.query(Attr).order_by(Attr.name).all()\n\n return attrs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate a resource attribute.", "response": "def 
update_resource_attribute(resource_attr_id, is_var, **kwargs):\n \"\"\"\n Update the 'is_var' flag of a resource attribute.\n \"\"\"\n user_id = kwargs.get('user_id')\n try:\n ra = db.DBSession.query(ResourceAttr).filter(ResourceAttr.id == resource_attr_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Resource Attribute %s not found\"%(resource_attr_id))\n\n ra.check_write_permission(user_id)\n\n ra.is_var = is_var\n\n return 'OK'"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes a resource attribute and all associated data.", "response": "def delete_resource_attribute(resource_attr_id, **kwargs):\n \"\"\"\n Deletes a resource attribute and all associated data.\n \"\"\"\n user_id = kwargs.get('user_id')\n try:\n ra = db.DBSession.query(ResourceAttr).filter(ResourceAttr.id == resource_attr_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Resource Attribute %s not found\"%(resource_attr_id))\n\n ra.check_write_permission(user_id)\n db.DBSession.delete(ra)\n db.DBSession.flush()\n return 'OK'"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_resource_attribute(resource_type, resource_id, attr_id, is_var, error_on_duplicate=True, **kwargs):\n\n attr = db.DBSession.query(Attr).filter(Attr.id==attr_id).first()\n\n if attr is None:\n raise ResourceNotFoundError(\"Attribute with ID %s does not exist.\"%attr_id)\n\n resource_i = _get_resource(resource_type, resource_id)\n\n resourceattr_qry = db.DBSession.query(ResourceAttr).filter(ResourceAttr.ref_key==resource_type)\n\n if resource_type == 'NETWORK':\n resourceattr_qry = resourceattr_qry.filter(ResourceAttr.network_id==resource_id)\n elif resource_type == 'NODE':\n resourceattr_qry = resourceattr_qry.filter(ResourceAttr.node_id==resource_id)\n elif resource_type == 'LINK':\n resourceattr_qry = resourceattr_qry.filter(ResourceAttr.link_id==resource_id)\n elif 
resource_type == 'GROUP':\n resourceattr_qry = resourceattr_qry.filter(ResourceAttr.group_id==resource_id)\n elif resource_type == 'PROJECT':\n resourceattr_qry = resourceattr_qry.filter(ResourceAttr.project_id==resource_id)\n else:\n raise HydraError('Resource type \"{}\" not recognised.'.format(resource_type))\n resource_attrs = resourceattr_qry.all()\n\n for ra in resource_attrs:\n if ra.attr_id == attr_id:\n if not error_on_duplicate:\n return ra\n\n raise HydraError(\"Duplicate attribute. %s %s already has attribute %s\"\n %(resource_type, resource_i.get_name(), attr.name))\n\n attr_is_var = 'Y' if is_var == 'Y' else 'N'\n\n new_ra = resource_i.add_attribute(attr_id, attr_is_var)\n db.DBSession.flush()\n\n return new_ra", "response": "Add a resource attribute to a resource."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_resource_attrs_from_type(type_id, resource_type, resource_id,**kwargs):\n type_i = _get_templatetype(type_id)\n\n resource_i = _get_resource(resource_type, resource_id)\n\n resourceattr_qry = db.DBSession.query(ResourceAttr).filter(ResourceAttr.ref_key==resource_type)\n\n if resource_type == 'NETWORK':\n resourceattr_qry.filter(ResourceAttr.network_id==resource_id)\n elif resource_type == 'NODE':\n resourceattr_qry.filter(ResourceAttr.node_id==resource_id)\n elif resource_type == 'LINK':\n resourceattr_qry.filter(ResourceAttr.link_id==resource_id)\n elif resource_type == 'GROUP':\n resourceattr_qry.filter(ResourceAttr.group_id==resource_id)\n elif resource_type == 'PROJECT':\n resourceattr_qry.filter(ResourceAttr.project_id==resource_id)\n\n resource_attrs = resourceattr_qry.all()\n\n attrs = {}\n for res_attr in resource_attrs:\n attrs[res_attr.attr_id] = res_attr\n\n new_resource_attrs = []\n for item in type_i.typeattrs:\n if attrs.get(item.attr_id) is None:\n ra = resource_i.add_attribute(item.attr_id)\n new_resource_attrs.append(ra)\n\n db.DBSession.flush()\n\n return 
new_resource_attrs", "response": "Adds all the attributes defined by a type to a node."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget all the resource attributes for a given resource type within a given network.", "response": "def get_all_resource_attributes(ref_key, network_id, template_id=None, **kwargs):\n \"\"\"\n Get all the resource attributes for a given resource type in the network.\n That includes all the resource attributes for a given type within the network.\n For example, if the ref_key is 'NODE', then it will return all the attributes\n of all nodes in the network. This function allows a front end to pre-load an entire\n network's resource attribute information to reduce the number of function calls.\n If template_id is specified, only\n return the resource attributes within that template.\n \"\"\"\n\n user_id = kwargs.get('user_id')\n\n resource_attr_qry = db.DBSession.query(ResourceAttr).\\\n outerjoin(Node, Node.id==ResourceAttr.node_id).\\\n outerjoin(Link, Link.id==ResourceAttr.link_id).\\\n outerjoin(ResourceGroup, ResourceGroup.id==ResourceAttr.group_id).filter(\n ResourceAttr.ref_key == ref_key,\n or_(\n and_(ResourceAttr.node_id != None,\n ResourceAttr.node_id == Node.id,\n Node.network_id==network_id),\n\n and_(ResourceAttr.link_id != None,\n ResourceAttr.link_id == Link.id,\n Link.network_id==network_id),\n\n and_(ResourceAttr.group_id != None,\n ResourceAttr.group_id == ResourceGroup.id,\n ResourceGroup.network_id==network_id)\n ))\n\n if template_id is not None:\n attr_ids = []\n rs = db.DBSession.query(TypeAttr).join(TemplateType,\n TemplateType.id==TypeAttr.type_id).filter(\n TemplateType.template_id==template_id).all()\n for r in rs:\n attr_ids.append(r.attr_id)\n\n resource_attr_qry = resource_attr_qry.filter(ResourceAttr.attr_id.in_(attr_ids))\n\n resource_attrs = resource_attr_qry.all()\n\n return resource_attrs"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 
code\ndef get_resource_attributes(ref_key, ref_id, type_id=None, **kwargs):\n\n user_id = kwargs.get('user_id')\n\n resource_attr_qry = db.DBSession.query(ResourceAttr).filter(\n ResourceAttr.ref_key == ref_key,\n or_(\n ResourceAttr.network_id==ref_id,\n ResourceAttr.node_id==ref_id,\n ResourceAttr.link_id==ref_id,\n ResourceAttr.group_id==ref_id\n ))\n\n if type_id is not None:\n attr_ids = []\n rs = db.DBSession.query(TypeAttr).filter(TypeAttr.type_id==type_id).all()\n for r in rs:\n attr_ids.append(r.attr_id)\n\n resource_attr_qry = resource_attr_qry.filter(ResourceAttr.attr_id.in_(attr_ids))\n\n resource_attrs = resource_attr_qry.all()\n\n return resource_attrs", "response": "Get all the resource attributes for a given resource."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck that the dimension of the resource attribute data is consistent with the definition of the attribute.", "response": "def check_attr_dimension(attr_id, **kwargs):\n \"\"\"\n Check that the dimension of the resource attribute data is consistent\n with the definition of the attribute.\n If the attribute says 'volume', make sure every dataset connected\n with this attribute via a resource attribute also has a dimension\n of 'volume'.\n \"\"\"\n attr_i = _get_attr(attr_id)\n\n datasets = db.DBSession.query(Dataset).filter(Dataset.id == ResourceScenario.dataset_id,\n ResourceScenario.resource_attr_id == ResourceAttr.id,\n ResourceAttr.attr_id == attr_id).all()\n bad_datasets = []\n for d in datasets:\n if attr_i.dimension_id is None and d.unit is not None or \\\n attr_i.dimension_id is not None and d.unit is None or \\\n units.get_dimension_by_unit_id(d.unit_id) != attr_i.dimension_id:\n # If there is an inconsistency\n bad_datasets.append(d.id)\n\n if len(bad_datasets) > 0:\n raise HydraError(\"Datasets %s have a different dimension_id to attribute %s\"%(bad_datasets, attr_id))\n\n return 'OK'"} {"SOURCE": "codesearchnet", "instruction": "Here 
you have a function in Python 3, explain what it does\ndef get_resource_attribute(resource_attr_id, **kwargs):\n\n resource_attr_qry = db.DBSession.query(ResourceAttr).filter(\n ResourceAttr.id == resource_attr_id,\n )\n\n resource_attr = resource_attr_qry.first()\n\n if resource_attr is None:\n raise ResourceNotFoundError(\"Resource attribute %s does not exist\"%(resource_attr_id))\n\n return resource_attr", "response": "Get a specific resource attribute by ID."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndefine one resource attribute from one network as being the same as that from another network.", "response": "def set_attribute_mapping(resource_attr_a, resource_attr_b, **kwargs):\n \"\"\"\n Define one resource attribute from one network as being the same as\n that from another network.\n \"\"\"\n user_id = kwargs.get('user_id')\n ra_1 = get_resource_attribute(resource_attr_a)\n ra_2 = get_resource_attribute(resource_attr_b)\n\n mapping = ResourceAttrMap(resource_attr_id_a = resource_attr_a,\n resource_attr_id_b = resource_attr_b,\n network_a_id = ra_1.get_network().id,\n network_b_id = ra_2.get_network().id )\n\n db.DBSession.add(mapping)\n\n db.DBSession.flush()\n\n return mapping"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndelete the mapping between two resource attributes.", "response": "def delete_attribute_mapping(resource_attr_a, resource_attr_b, **kwargs):\n \"\"\"\n Delete the mapping between two resource attributes.\n \"\"\"\n user_id = kwargs.get('user_id')\n\n rm = aliased(ResourceAttrMap, name='rm')\n\n log.info(\"Trying to delete attribute map. %s -> %s\", resource_attr_a, resource_attr_b)\n mapping = db.DBSession.query(rm).filter(\n rm.resource_attr_id_a == resource_attr_a,\n rm.resource_attr_id_b == resource_attr_b).first()\n\n if mapping is not None:\n log.info(\"Deleting attribute map. 
%s -> %s\", resource_attr_a, resource_attr_b)\n db.DBSession.delete(mapping)\n db.DBSession.flush()\n\n return 'OK'"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delete_mappings_in_network(network_id, network_2_id=None, **kwargs):\n qry = db.DBSession.query(ResourceAttrMap).filter(or_(ResourceAttrMap.network_a_id == network_id, ResourceAttrMap.network_b_id == network_id))\n\n if network_2_id is not None:\n qry = qry.filter(or_(ResourceAttrMap.network_a_id==network_2_id, ResourceAttrMap.network_b_id==network_2_id))\n\n mappings = qry.all()\n\n for m in mappings:\n db.DBSession.delete(m)\n db.DBSession.flush()\n\n return 'OK'", "response": "Delete all the resource attribute mappings in a network."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget all the resource attribute mappings in a network.", "response": "def get_mappings_in_network(network_id, network_2_id=None, **kwargs):\n \"\"\"\n Get all the resource attribute mappings in a network. If another network\n is specified, only return the mappings between the two networks.\n \"\"\"\n qry = db.DBSession.query(ResourceAttrMap).filter(or_(ResourceAttrMap.network_a_id == network_id, ResourceAttrMap.network_b_id == network_id))\n\n if network_2_id is not None:\n qry = qry.filter(or_(ResourceAttrMap.network_a_id==network_2_id, ResourceAttrMap.network_b_id==network_2_id))\n\n return qry.all()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets all the resource attribute mappings between two nodes.", "response": "def get_node_mappings(node_id, node_2_id=None, **kwargs):\n \"\"\"\n Get all the resource attribute mappings in a network. 
If another node\n is specified, only return the mappings between the two nodes.\n \"\"\"\n qry = db.DBSession.query(ResourceAttrMap).filter(\n or_(\n and_(\n ResourceAttrMap.resource_attr_id_a == ResourceAttr.id,\n ResourceAttr.node_id == node_id),\n and_(\n ResourceAttrMap.resource_attr_id_b == ResourceAttr.id,\n ResourceAttr.node_id == node_id)))\n\n if node_2_id is not None:\n aliased_ra = aliased(ResourceAttr, name=\"ra2\")\n qry = qry.filter(or_(\n and_(\n ResourceAttrMap.resource_attr_id_a == aliased_ra.id,\n aliased_ra.node_id == node_2_id),\n and_(\n ResourceAttrMap.resource_attr_id_b == aliased_ra.id,\n aliased_ra.node_id == node_2_id)))\n\n return qry.all()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets all the resource attribute mappings associated with a link.", "response": "def get_link_mappings(link_id, link_2_id=None, **kwargs):\n \"\"\"\n Get all the resource attribute mappings associated with a link. If another link\n is specified, only return the mappings between the two links.\n \"\"\"\n qry = db.DBSession.query(ResourceAttrMap).filter(\n or_(\n and_(\n ResourceAttrMap.resource_attr_id_a == ResourceAttr.id,\n ResourceAttr.link_id == link_id),\n and_(\n ResourceAttrMap.resource_attr_id_b == ResourceAttr.id,\n ResourceAttr.link_id == link_id)))\n\n if link_2_id is not None:\n aliased_ra = aliased(ResourceAttr, name=\"ra2\")\n qry = qry.filter(or_(\n and_(\n ResourceAttrMap.resource_attr_id_a == aliased_ra.id,\n aliased_ra.link_id == link_2_id),\n and_(\n ResourceAttrMap.resource_attr_id_b == aliased_ra.id,\n aliased_ra.link_id == link_2_id)))\n\n return qry.all()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_network_mappings(network_id, network_2_id=None, **kwargs):\n qry = db.DBSession.query(ResourceAttrMap).filter(\n or_(\n and_(\n ResourceAttrMap.resource_attr_id_a == ResourceAttr.id,\n ResourceAttr.network_id == network_id),\n and_(\n 
ResourceAttrMap.resource_attr_id_b == ResourceAttr.id,\n ResourceAttr.network_id == network_id)))\n\n if network_2_id is not None:\n aliased_ra = aliased(ResourceAttr, name=\"ra2\")\n qry = qry.filter(or_(\n and_(\n ResourceAttrMap.resource_attr_id_a == aliased_ra.id,\n aliased_ra.network_id == network_2_id),\n and_(\n ResourceAttrMap.resource_attr_id_b == aliased_ra.id,\n aliased_ra.network_id == network_2_id)))\n\n return qry.all()", "response": "Get all the mappings between two networks."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a specific attribute group", "response": "def get_attribute_group(group_id, **kwargs):\n \"\"\"\n Get a specific attribute group\n \"\"\"\n\n user_id=kwargs.get('user_id')\n\n try:\n group_i = db.DBSession.query(AttrGroup).filter(\n AttrGroup.id==group_id).one()\n group_i.project.check_read_permission(user_id)\n except NoResultFound:\n raise HydraError(\"Group %s not found\" % (group_id,))\n\n return group_i"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_attribute_group(attributegroup, **kwargs):\n log.info(\"attributegroup.project_id %s\",attributegroup.project_id) # It is None while it should be valued\n user_id=kwargs.get('user_id')\n project_i = db.DBSession.query(Project).filter(Project.id==attributegroup.project_id).one()\n project_i.check_write_permission(user_id)\n try:\n\n group_i = db.DBSession.query(AttrGroup).filter(\n AttrGroup.name==attributegroup.name,\n AttrGroup.project_id==attributegroup.project_id).one()\n log.info(\"Group %s already exists in project %s\", attributegroup.name, attributegroup.project_id)\n\n except NoResultFound:\n\n group_i = AttrGroup()\n group_i.project_id = attributegroup.project_id\n group_i.name = attributegroup.name\n group_i.description = attributegroup.description\n group_i.layout = attributegroup.get_layout()\n group_i.exclusive = attributegroup.exclusive\n\n 
db.DBSession.add(group_i)\n db.DBSession.flush()\n\n log.info(\"Attribute Group %s added to project %s\", attributegroup.name, attributegroup.project_id)\n\n return group_i", "response": "Adds an attribute group to a project, creating it only if it does not already exist."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating an attribute group.", "response": "def update_attribute_group(attributegroup, **kwargs):\n \"\"\"\n Update an existing attribute group.\n\n An attribute group is a container for attributes which need to be grouped\n in some logical way. For example, if the 'attr_is_var' flag isn't expressive\n enough to delineate different groupings.\n\n An attribute group looks like:\n {\n 'project_id' : XXX,\n 'name' : 'my group name'\n 'description' : 'my group description' (optional)\n 'layout' : 'my group layout' (optional)\n 'exclusive' : 'N' (or 'Y' ) (optional, default to 'N')\n }\n \"\"\"\n user_id=kwargs.get('user_id')\n\n if attributegroup.id is None:\n raise HydraError(\"cannot update attribute group. 
no ID specified\")\n\n try:\n\n group_i = db.DBSession.query(AttrGroup).filter(AttrGroup.id==attributegroup.id).one()\n group_i.project.check_write_permission(user_id)\n\n group_i.name = attributegroup.name\n group_i.description = attributegroup.description\n group_i.layout = attributegroup.layout\n group_i.exclusive = attributegroup.exclusive\n\n db.DBSession.flush()\n\n log.info(\"Group %s in project %s updated\", attributegroup.id, attributegroup.project_id)\n except NoResultFound:\n\n raise HydraError('No Attribute Group %s was found in project %s' % (attributegroup.id, attributegroup.project_id))\n\n\n return group_i"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_attribute_group(group_id, **kwargs):\n user_id = kwargs['user_id']\n\n try:\n\n group_i = db.DBSession.query(AttrGroup).filter(AttrGroup.id==group_id).one()\n\n group_i.project.check_write_permission(user_id)\n\n db.DBSession.delete(group_i)\n db.DBSession.flush()\n\n log.info(\"Group %s in project %s deleted\", group_i.id, group_i.project_id)\n except NoResultFound:\n\n raise HydraError('No Attribute Group %s was found' % (group_id,))\n\n\n return 'OK'", "response": "Delete an attribute group."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting all the group items in a network", "response": "def get_network_attributegroup_items(network_id, **kwargs):\n \"\"\"\n Get all the group items in a network\n \"\"\"\n\n user_id=kwargs.get('user_id')\n\n\n net_i = _get_network(network_id)\n\n net_i.check_read_permission(user_id)\n\n group_items_i = db.DBSession.query(AttrGroupItem).filter(\n AttrGroupItem.network_id==network_id).all()\n\n return group_items_i"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting all the items in a specified group within a network", "response": "def get_group_attributegroup_items(network_id, group_id, **kwargs):\n \"\"\"\n Get all the items in a specified 
group, within a network\n \"\"\"\n user_id=kwargs.get('user_id')\n\n network_i = _get_network(network_id)\n\n network_i.check_read_permission(user_id)\n\n group_items_i = db.DBSession.query(AttrGroupItem).filter(\n AttrGroupItem.network_id==network_id,\n AttrGroupItem.group_id==group_id).all()\n\n return group_items_i"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_attribute_item_groups(network_id, attr_id, **kwargs):\n user_id=kwargs.get('user_id')\n\n network_i = _get_network(network_id)\n\n network_i.check_read_permission(user_id)\n\n group_items_i = db.DBSession.query(AttrGroupItem).filter(\n AttrGroupItem.network_id==network_id,\n AttrGroupItem.attr_id==attr_id).all()\n\n return group_items_i", "response": "Get all the group items in a network for a given attribute."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_attribute_group_items(attributegroupitems, **kwargs):\n\n \"\"\"\n Populate attribute groups with items.\n ** attributegroupitems : a list of items, of the form:\n ```{\n 'attr_id' : X,\n 'group_id' : Y,\n 'network_id' : Z,\n }```\n\n Note that this approach supports the possibility of populating groups\n within multiple networks at the same time.\n\n When adding a group item, the function checks whether it can be added,\n based on the 'exclusivity' setup of the groups -- if a group is specified\n as being 'exclusive', then any attributes within that group cannot appear\n in any other group (within a network).\n \"\"\"\n\n user_id=kwargs.get('user_id')\n\n if not isinstance(attributegroupitems, list):\n raise HydraError(\"Cannot add attribute group items. 
Attributegroupitems must be a list\")\n\n new_agis_i = []\n\n group_lookup = {}\n\n #for each network, keep track of what attributes are contained in which groups it's in\n #structure: {NETWORK_ID : {ATTR_ID: [GROUP_ID]}\n agi_lookup = {}\n\n network_lookup = {}\n\n #'agi' = shorthand for 'attribute group item'\n for agi in attributegroupitems:\n\n\n network_i = network_lookup.get(agi.network_id)\n\n if network_i is None:\n network_i = _get_network(agi.network_id)\n network_lookup[agi.network_id] = network_i\n\n network_i.check_write_permission(user_id)\n\n\n #Get the group so we can check for exclusivity constraints\n group_i = group_lookup.get(agi.group_id)\n if group_i is None:\n group_lookup[agi.group_id] = _get_attr_group(agi.group_id)\n\n network_agis = agi_lookup\n\n #Create a map of all agis currently in the network\n if agi_lookup.get(agi.network_id) is None:\n agi_lookup[agi.network_id] = {}\n network_agis = _get_attributegroupitems(agi.network_id)\n log.info(network_agis)\n for net_agi in network_agis:\n\n if net_agi.group_id not in group_lookup:\n group_lookup[net_agi.group_id] = _get_attr_group(net_agi.group_id)\n\n if agi_lookup.get(net_agi.network_id) is None:\n agi_lookup[net_agi.network_id][net_agi.attr_id] = [net_agi.group_id]\n else:\n if agi_lookup[net_agi.network_id].get(net_agi.attr_id) is None:\n agi_lookup[net_agi.network_id][net_agi.attr_id] = [net_agi.group_id]\n elif net_agi.group_id not in agi_lookup[net_agi.network_id][net_agi.attr_id]:\n agi_lookup[net_agi.network_id][net_agi.attr_id].append(net_agi.group_id)\n #Does this agi exist anywhere else inside this network?\n #Go through all the groups that this attr is in and make sure it's not exclusive\n if agi_lookup[agi.network_id].get(agi.attr_id) is not None:\n for group_id in agi_lookup[agi.network_id][agi.attr_id]:\n group = group_lookup[group_id]\n #Another group has been found.\n if group.exclusive == 'Y':\n #The other group is exclusive, so this attr can't be added\n raise 
HydraError(\"Attribute %s is already in Group %s for network %s. This group is exclusive, so attr %s cannot exist in another group.\"%(agi.attr_id, group.id, agi.network_id, agi.attr_id))\n\n #Now check that if this group is exclusive, then the attr isn't in\n #any other groups\n if group_lookup[agi.group_id].exclusive == 'Y':\n if len(agi_lookup[agi.network_id][agi.attr_id]) > 0:\n #The other group is exclusive, so this attr can't be added\n raise HydraError(\"Cannot add attribute %s to group %s. This group is exclusive, but attr %s has been found in other groups (%s)\" % (agi.attr_id, agi.group_id, agi.attr_id, agi_lookup[agi.network_id][agi.attr_id]))\n\n\n agi_i = AttrGroupItem()\n agi_i.network_id = agi.network_id\n agi_i.group_id = agi.group_id\n agi_i.attr_id = agi.attr_id\n\n #Update the lookup table in preparation for the next pass.\n if agi_lookup[agi.network_id].get(agi.attr_id) is None:\n agi_lookup[agi.network_id][agi.attr_id] = [agi.group_id]\n elif agi.group_id not in agi_lookup[agi.network_id][agi.attr_id]:\n agi_lookup[agi.network_id][agi.attr_id].append(agi.group_id)\n\n\n db.DBSession.add(agi_i)\n\n new_agis_i.append(agi_i)\n log.info(agi_lookup)\n\n db.DBSession.flush()\n\n return new_agis_i", "response": "Add a list of items to the attribute group."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeleting the items from the attribute group.", "response": "def delete_attribute_group_items(attributegroupitems, **kwargs):\n\n \"\"\"\n remove attribute groups items .\n ** attributegroupitems : a list of items, of the form:\n ```{\n 'attr_id' : X,\n 'group_id' : Y,\n 'network_id' : Z,\n }```\n \"\"\"\n\n\n user_id=kwargs.get('user_id')\n\n log.info(\"Deleting %s attribute group items\", len(attributegroupitems))\n\n #if there area attributegroupitems from different networks, keep track of those\n #networks to ensure the user actually has permission to remove the items.\n network_lookup = {}\n\n if not 
isinstance(attributegroupitems, list):\n raise HydraError(\"Cannot delete attribute group items. Attributegroupitems must be a list\")\n\n #'agi' = shorthand for 'attribute group item'\n for agi in attributegroupitems:\n\n network_i = network_lookup.get(agi.network_id)\n\n if network_i is None:\n network_i = _get_network(agi.network_id)\n network_lookup[agi.network_id] = network_i\n\n network_i.check_write_permission(user_id)\n\n agi_i = db.DBSession.query(AttrGroupItem).filter(AttrGroupItem.network_id == agi.network_id,\n AttrGroupItem.group_id == agi.group_id,\n AttrGroupItem.attr_id == agi.attr_id).first()\n\n if agi_i is not None:\n db.DBSession.delete(agi_i)\n\n db.DBSession.flush()\n\n log.info(\"Attribute group items deleted\")\n\n return 'OK'"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef share_network(network_id, usernames, read_only, share,**kwargs):\n\n user_id = kwargs.get('user_id')\n net_i = _get_network(network_id)\n net_i.check_share_permission(user_id)\n\n if read_only == 'Y':\n write = 'N'\n share = 'N'\n else:\n write = 'Y'\n\n if net_i.created_by != int(user_id) and share == 'Y':\n raise HydraError(\"Cannot share the 'sharing' ability as user %s is not\"\n \" the owner of network %s\"%\n (user_id, network_id))\n\n for username in usernames:\n user_i = _get_user(username)\n #Set the ownership on the network itself\n net_i.set_owner(user_i.id, write=write, share=share)\n for o in net_i.project.owners:\n if o.user_id == user_i.id:\n break\n else:\n #Give the user read access to the containing project\n net_i.project.set_owner(user_i.id, write='N', share='N')\n db.DBSession.flush()", "response": "Share a network with a list of users identified by their usernames."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unshare_network(network_id, usernames,**kwargs):\n\n user_id = kwargs.get('user_id')\n net_i = _get_network(network_id)\n 
net_i.check_share_permission(user_id)\n\n for username in usernames:\n user_i = _get_user(username)\n #Unset the ownership on the network itself\n\n net_i.unset_owner(user_i.id)\n db.DBSession.flush()", "response": "Unshare a network with a list of users identified by their usernames."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef share_project(project_id, usernames, read_only, share,**kwargs):\n user_id = kwargs.get('user_id')\n\n proj_i = _get_project(project_id)\n\n #Is the sharing user allowed to share this project?\n proj_i.check_share_permission(int(user_id))\n\n user_id = int(user_id)\n\n for owner in proj_i.owners:\n if user_id == owner.user_id:\n break\n else:\n raise HydraError(\"Permission Denied. Cannot share project.\")\n\n if read_only == 'Y':\n write = 'N'\n share = 'N'\n else:\n write = 'Y'\n\n if proj_i.created_by != user_id and share == 'Y':\n raise HydraError(\"Cannot share the 'sharing' ability as user %s is not\"\n \" the owner of project %s\"%\n (user_id, project_id))\n\n for username in usernames:\n user_i = _get_user(username)\n\n proj_i.set_owner(user_i.id, write=write, share=share)\n\n for net_i in proj_i.networks:\n net_i.set_owner(user_i.id, write=write, share=share)\n db.DBSession.flush()", "response": "Share an entire project with a list of users identified by usernames."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting permissions on a project to a list of users identified by usernames.", "response": "def set_project_permission(project_id, usernames, read, write, share,**kwargs):\n \"\"\"\n Set permissions on a project to a list of users, identified by\n their usernames.\n\n The read flag ('Y' or 'N') sets read access, the write\n flag sets write access. 
If the read flag is 'N', then there is\n automatically no write access or share access.\n \"\"\"\n user_id = kwargs.get('user_id')\n\n proj_i = _get_project(project_id)\n\n #Is the sharing user allowed to share this project?\n proj_i.check_share_permission(user_id)\n\n #You cannot edit something you cannot see.\n if read == 'N':\n write = 'N'\n share = 'N'\n\n for username in usernames:\n user_i = _get_user(username)\n\n #The creator of a project must always have read and write access\n #to their project\n if proj_i.created_by == user_i.id:\n raise HydraError(\"Cannot set permissions on project %s\"\n \" for user %s as this user is the creator.\" %\n (project_id, username))\n\n proj_i.set_owner(user_i.id, read=read, write=write)\n\n for net_i in proj_i.networks:\n net_i.set_owner(user_i.id, read=read, write=write, share=share)\n db.DBSession.flush()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_network_permission(network_id, usernames, read, write, share,**kwargs):\n\n user_id = kwargs.get('user_id')\n\n net_i = _get_network(network_id)\n\n #Check if the user is allowed to share this network.\n net_i.check_share_permission(user_id)\n\n #You cannot edit something you cannot see.\n if read == 'N':\n write = 'N'\n share = 'N'\n\n for username in usernames:\n\n user_i = _get_user(username)\n\n #The creator of a network must always have read and write access\n #to their network\n if net_i.created_by == user_i.id:\n raise HydraError(\"Cannot set permissions on network %s\"\n \" for user %s as this user is the creator.\" %\n (network_id, username))\n\n net_i.set_owner(user_i.id, read=read, write=write, share=share)\n db.DBSession.flush()", "response": "Set permissions on a network to a list of users identified by usernames."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nhide a particular piece of data by the specified user.", "response": "def hide_dataset(dataset_id, exceptions, read, 
write, share,**kwargs):\n \"\"\"\n Hide a particular piece of data so it can only be seen by its owner.\n Only an owner can hide (and unhide) data.\n Data with no owner cannot be hidden.\n\n The exceptions parameter lists the usernames of those with permission to view the data;\n read, write and share indicate whether these users can read, edit and share this data.\n \"\"\"\n\n user_id = kwargs.get('user_id')\n dataset_i = _get_dataset(dataset_id)\n #check that I can hide the dataset\n if dataset_i.created_by != int(user_id):\n raise HydraError('Permission denied. '\n 'User %s is not the owner of dataset %s'\n %(user_id, dataset_i.name))\n\n dataset_i.hidden = 'Y'\n if exceptions is not None:\n for username in exceptions:\n user_i = _get_user(username)\n dataset_i.set_owner(user_i.id, read=read, write=write, share=share)\n db.DBSession.flush()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef unhide_dataset(dataset_id,**kwargs):\n\n user_id = kwargs.get('user_id')\n dataset_i = _get_dataset(dataset_id)\n #check that I can unhide the dataset\n if dataset_i.created_by != int(user_id):\n raise HydraError('Permission denied. 
'\n 'User %s is not the owner of dataset %s'\n %(user_id, dataset_i.name))\n\n dataset_i.hidden = 'N'\n db.DBSession.flush()", "response": "Unhide a particular piece of data from a dataset."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the project owner entries for all the requested projects.", "response": "def get_all_project_owners(project_ids=None, **kwargs):\n \"\"\"\n Get the project owner entries for all the requested projects.\n If the project_ids argument is None, return all the owner entries\n for ALL projects\n \"\"\"\n\n\n projowner_qry = db.DBSession.query(ProjectOwner)\n\n if project_ids is not None:\n projowner_qry = projowner_qry.filter(ProjectOwner.project_id.in_(project_ids))\n\n project_owners_i = projowner_qry.all()\n\n return [JSONObject(project_owner_i) for project_owner_i in project_owners_i]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_all_network_owners(network_ids=None, **kwargs):\n\n\n networkowner_qry = db.DBSession.query(NetworkOwner)\n\n if network_ids is not None:\n networkowner_qry = networkowner_qry.filter(NetworkOwner.network_id.in_(network_ids))\n\n network_owners_i = networkowner_qry.all()\n\n return [JSONObject(network_owner_i) for network_owner_i in network_owners_i]", "response": "Get the network owner entries for all the requested networks."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbulking set the project owner of multiple projects at once.", "response": "def bulk_set_project_owners(project_owners, **kwargs):\n \"\"\"\n Set the project owner of multiple projects at once.\n Accepts a list of JSONObjects which look like:\n {\n 'project_id': XX,\n 'user_id' : YY,\n 'view' : 'Y'/ 'N'\n 'edit' : 'Y'/ 'N'\n 'share' : 'Y'/ 'N'\n }\n \"\"\"\n \n project_ids = [po.project_id for po in project_owners]\n\n existing_projowners = 
db.DBSession.query(ProjectOwner).filter(ProjectOwner.project_id.in_(project_ids)).all()\n\n #Create a lookup based on the unique key for this table (project_id, user_id)\n po_lookup = {}\n \n for po in existing_projowners:\n po_lookup[(po.project_id, po.user_id)] = po\n\n for project_owner in project_owners:\n #check if the entry is already there\n if po_lookup.get((project_owner.project_id, project_owner.user_id)):\n continue\n\n new_po = ProjectOwner()\n new_po.project_id = project_owner.project_id\n new_po.user_id = project_owner.user_id\n new_po.view = project_owner.view\n new_po.edit = project_owner.edit\n new_po.share = project_owner.share\n\n db.DBSession.add(new_po)\n\n db.DBSession.flush()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbulk set the network owner of multiple networks at once.", "response": "def bulk_set_network_owners(network_owners, **kwargs):\n \"\"\"\n Set the network owner of multiple networks at once.\n Accepts a list of JSONObjects which look like:\n {\n 'network_id': XX,\n 'user_id' : YY,\n 'view' : 'Y'/ 'N'\n 'edit' : 'Y'/ 'N'\n 'share' : 'Y'/ 'N'\n }\n \"\"\"\n \n network_ids = [no.network_id for no in network_owners]\n\n existing_projowners = db.DBSession.query(NetworkOwner).filter(NetworkOwner.network_id.in_(network_ids)).all()\n\n #Create a lookup based on the unique key for this table (network_id, user_id)\n no_lookup = {}\n \n for no in existing_projowners:\n no_lookup[(no.network_id, no.user_id)] = no\n\n for network_owner in network_owners:\n #check if the entry is already there\n if no_lookup.get((network_owner.network_id, network_owner.user_id)):\n continue\n\n new_no = NetworkOwner()\n new_no.network_id = network_owner.network_id\n new_no.user_id = network_owner.user_id\n new_no.view = network_owner.view\n new_no.edit = network_owner.edit\n new_no.share = network_owner.share\n\n db.DBSession.add(new_no)\n\n db.DBSession.flush()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a 
Python 3 function that\ngets a single dataset by ID.", "response": "def get_dataset(dataset_id,**kwargs):\n \"\"\"\n Get a single dataset, by ID\n \"\"\"\n\n user_id = int(kwargs.get('user_id'))\n\n if dataset_id is None:\n return None\n try:\n dataset_rs = db.DBSession.query(Dataset.id,\n Dataset.type,\n Dataset.unit_id,\n Dataset.name,\n Dataset.hidden,\n Dataset.cr_date,\n Dataset.created_by,\n DatasetOwner.user_id,\n null().label('metadata'),\n case([(and_(Dataset.hidden=='Y', DatasetOwner.user_id == None), None)],\n else_=Dataset.value).label('value')).filter(\n Dataset.id==dataset_id).outerjoin(DatasetOwner,\n and_(DatasetOwner.dataset_id==Dataset.id,\n DatasetOwner.user_id==user_id)).one()\n\n rs_dict = dataset_rs._asdict()\n\n #convert the value row into a string as it is returned as a binary\n if dataset_rs.value is not None:\n rs_dict['value'] = str(dataset_rs.value)\n\n if dataset_rs.hidden == 'N' or (dataset_rs.hidden == 'Y' and dataset_rs.user_id is not None):\n metadata = db.DBSession.query(Metadata).filter(Metadata.dataset_id==dataset_id).all()\n rs_dict['metadata'] = metadata\n else:\n rs_dict['metadata'] = []\n\n except NoResultFound:\n raise HydraError(\"Dataset %s does not exist.\"%(dataset_id))\n\n\n dataset = namedtuple('Dataset', rs_dict.keys())(**rs_dict)\n\n return dataset"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nclones a dataset, identified by its ID.", "response": "def clone_dataset(dataset_id,**kwargs):\n \"\"\"\n Clone a single dataset, by ID\n \"\"\"\n\n user_id = int(kwargs.get('user_id'))\n\n if dataset_id is None:\n return None\n\n dataset = db.DBSession.query(Dataset).filter(\n Dataset.id==dataset_id).options(joinedload_all('metadata')).first()\n\n if dataset is None:\n raise HydraError(\"Dataset %s does not exist.\"%(dataset_id))\n\n if dataset is not None and dataset.created_by != user_id:\n owner = db.DBSession.query(DatasetOwner).filter(\n DatasetOwner.dataset_id==dataset_id,\n 
DatasetOwner.user_id==user_id).first()\n if owner is None:\n raise PermissionError(\"User %s is not an owner of dataset %s and therefore cannot clone it.\"%(user_id, dataset_id))\n\n db.DBSession.expunge(dataset)\n\n make_transient(dataset)\n\n dataset.name = dataset.name + \"(Clone)\"\n dataset.id = None\n dataset.cr_date = None\n\n #Try to avoid duplicate metadata entries if the entry has been cloned previously\n dataset.metadata = [m for m in dataset.metadata if m.key not in (\"clone_of\", \"cloned_by\")]\n\n cloned_meta = Metadata()\n cloned_meta.key = \"clone_of\"\n cloned_meta.value = str(dataset_id)\n dataset.metadata.append(cloned_meta)\n cloned_meta = Metadata()\n cloned_meta.key = \"cloned_by\"\n cloned_meta.value = str(user_id)\n dataset.metadata.append(cloned_meta)\n\n dataset.set_hash()\n db.DBSession.add(dataset)\n db.DBSession.flush()\n\n cloned_dataset = db.DBSession.query(Dataset).filter(\n Dataset.id==dataset.id).first()\n\n return cloned_dataset"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a list of datasets by ID.", "response": "def get_datasets(dataset_ids,**kwargs):\n \"\"\"\n Get multiple datasets, by ID\n \"\"\"\n\n user_id = int(kwargs.get('user_id'))\n datasets = []\n if len(dataset_ids) == 0:\n return []\n try:\n dataset_rs = db.DBSession.query(Dataset.id,\n Dataset.type,\n Dataset.unit_id,\n Dataset.name,\n Dataset.hidden,\n Dataset.cr_date,\n Dataset.created_by,\n DatasetOwner.user_id,\n null().label('metadata'),\n case([(and_(Dataset.hidden=='Y', DatasetOwner.user_id == None), None)],\n else_=Dataset.value).label('value')).filter(\n Dataset.id.in_(dataset_ids)).outerjoin(DatasetOwner,\n and_(DatasetOwner.dataset_id==Dataset.id,\n DatasetOwner.user_id==user_id)).all()\n\n #convert the value row into a string as it is returned as a binary\n for dataset_row in dataset_rs:\n dataset_dict = dataset_row._asdict()\n\n if 
dataset_row.hidden == 'N' or (dataset_row.hidden == 'Y' and dataset_row.user_id is not None):\n metadata = db.DBSession.query(Metadata).filter(Metadata.dataset_id == dataset_row.id).all()\n dataset_dict['metadata'] = metadata\n else:\n dataset_dict['metadata'] = []\n\n datasets.append(namedtuple('Dataset', dataset_dict.keys())(**dataset_dict))\n\n\n except NoResultFound:\n raise ResourceNotFoundError(\"Datasets not found.\")\n\n return datasets"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef search_datasets(dataset_id=None,\n dataset_name=None,\n collection_name=None,\n data_type=None,\n unit_id=None,\n scenario_id=None,\n metadata_key=None,\n metadata_val=None,\n attr_id = None,\n type_id = None,\n unconnected = None,\n inc_metadata='N',\n inc_val = 'N',\n page_start = 0,\n page_size = 2000,\n **kwargs):\n \"\"\"\n Get multiple datasets, based on several\n filters. If all filters are set to None, all\n datasets in the DB (that the user is allowed to see)\n will be returned.\n \"\"\"\n\n\n log.info(\"Searching datasets: \\ndataset_id: %s,\\n\"\n \"dataset_name: %s,\\n\"\n \"collection_name: %s,\\n\"\n \"data_type: %s,\\n\"\n \"unit_id: %s,\\n\"\n \"scenario_id: %s,\\n\"\n \"metadata_key: %s,\\n\"\n \"metadata_val: %s,\\n\"\n \"attr_id: %s,\\n\"\n \"type_id: %s,\\n\"\n \"unconnected: %s,\\n\"\n \"inc_metadata: %s,\\n\"\n \"inc_val: %s,\\n\"\n \"page_start: %s,\\n\"\n \"page_size: %s\" % (dataset_id,\n dataset_name,\n collection_name,\n data_type,\n unit_id,\n scenario_id,\n metadata_key,\n metadata_val,\n attr_id,\n type_id,\n unconnected,\n inc_metadata,\n inc_val,\n page_start,\n page_size))\n\n if page_size is None:\n page_size = config.get('SEARCH', 'page_size', 2000)\n\n user_id = int(kwargs.get('user_id'))\n\n dataset_qry = db.DBSession.query(Dataset.id,\n Dataset.type,\n Dataset.unit_id,\n Dataset.name,\n Dataset.hidden,\n Dataset.cr_date,\n Dataset.created_by,\n DatasetOwner.user_id,\n 
null().label('metadata'),\n Dataset.value\n )\n\n #Dataset ID is unique, so there's no point using the other filters.\n #Only use other filters if the datset ID is not specified.\n if dataset_id is not None:\n dataset_qry = dataset_qry.filter(\n Dataset.id==dataset_id)\n\n else:\n if dataset_name is not None:\n dataset_qry = dataset_qry.filter(\n func.lower(Dataset.name).like(\"%%%s%%\"%dataset_name.lower())\n )\n if collection_name is not None:\n dc = aliased(DatasetCollection)\n dci = aliased(DatasetCollectionItem)\n dataset_qry = dataset_qry.join(dc,\n func.lower(dc.name).like(\"%%%s%%\"%collection_name.lower())\n ).join(dci,and_(\n dci.collection_id == dc.id,\n dci.dataset_id == Dataset.id))\n\n if data_type is not None:\n dataset_qry = dataset_qry.filter(\n func.lower(Dataset.type) == data_type.lower())\n\n #null is a valid unit, so we need a way for the searcher\n #to specify that they want to search for datasets with a null unit\n #rather than ignoring the unit. We use 'null' to do this.\n\n if unit_id is not None:\n dataset_qry = dataset_qry.filter(\n Dataset.unit_id == unit_id)\n\n if scenario_id is not None:\n dataset_qry = dataset_qry.join(ResourceScenario,\n and_(ResourceScenario.dataset_id == Dataset.id,\n ResourceScenario.scenario_id == scenario_id))\n\n if attr_id is not None:\n dataset_qry = dataset_qry.join(\n ResourceScenario, ResourceScenario.dataset_id == Dataset.id).join(\n ResourceAttr, and_(ResourceAttr.id==ResourceScenario.resource_attr_id,\n ResourceAttr.attr_id==attr_id))\n\n if type_id is not None:\n dataset_qry = dataset_qry.join(\n ResourceScenario, ResourceScenario.dataset_id == Dataset.id).join(\n ResourceAttr, ResourceAttr.id==ResourceScenario.resource_attr_id).join(\n TypeAttr, and_(TypeAttr.attr_id==ResourceAttr.attr_id, TypeAttr.type_id==type_id))\n\n if unconnected == 'Y':\n stmt = db.DBSession.query(distinct(ResourceScenario.dataset_id).label('dataset_id'),\n literal_column(\"0\").label('col')).subquery()\n dataset_qry = 
dataset_qry.outerjoin(\n stmt, stmt.c.dataset_id == Dataset.id)\n dataset_qry = dataset_qry.filter(stmt.c.col == None)\n elif unconnected == 'N':\n #The dataset has to be connected to something\n stmt = db.DBSession.query(distinct(ResourceScenario.dataset_id).label('dataset_id'),\n literal_column(\"0\").label('col')).subquery()\n dataset_qry = dataset_qry.join(\n stmt, stmt.c.dataset_id == Dataset.id)\n if metadata_key is not None and metadata_val is not None:\n dataset_qry = dataset_qry.join(Metadata,\n and_(Metadata.dataset_id == Dataset.id,\n func.lower(Metadata.key).like(\"%%%s%%\"%metadata_key.lower()),\n func.lower(Metadata.value).like(\"%%%s%%\"%metadata_val.lower())))\n elif metadata_key is not None and metadata_val is None:\n dataset_qry = dataset_qry.join(Metadata,\n and_(Metadata.dataset_id == Dataset.id,\n func.lower(Metadata.key).like(\"%%%s%%\"%metadata_key.lower())))\n elif metadata_key is None and metadata_val is not None:\n dataset_qry = dataset_qry.join(Metadata,\n and_(Metadata.dataset_id == Dataset.id,\n func.lower(Metadata.value).like(\"%%%s%%\"%metadata_val.lower())))\n\n #All datasets must be joined on dataset owner so only datasets that the\n #user can see are retrieved.\n dataset_qry = dataset_qry.outerjoin(DatasetOwner,\n and_(DatasetOwner.dataset_id==Dataset.id,\n DatasetOwner.user_id==user_id))\n\n dataset_qry = dataset_qry.filter(or_(Dataset.hidden=='N', and_(DatasetOwner.user_id is not None, Dataset.hidden=='Y')))\n\n log.info(str(dataset_qry))\n\n datasets = dataset_qry.all()\n\n log.info(\"Retrieved %s datasets\", len(datasets))\n\n #page the datasets:\n if page_start + page_size > len(datasets):\n page_end = None\n else:\n page_end = page_start + page_size\n\n datasets = datasets[page_start:page_end]\n\n log.info(\"Datasets paged from result %s to %s\", page_start, page_end)\n\n datasets_to_return = []\n for dataset_row in datasets:\n\n dataset_dict = dataset_row._asdict()\n\n\n\n if inc_val == 'N':\n dataset_dict['value'] = None\n 
else:\n #convert the value row into a string as it is returned as a binary\n if dataset_row.value is not None:\n dataset_dict['value'] = str(dataset_row.value)\n\n if inc_metadata=='Y':\n metadata = db.DBSession.query(Metadata).filter(Metadata.dataset_id==dataset_row.dataset_id).all()\n dataset_dict['metadata'] = metadata\n else:\n dataset_dict['metadata'] = []\n\n dataset = namedtuple('Dataset', dataset_dict.keys())(**dataset_dict)\n\n datasets_to_return.append(dataset)\n\n return datasets_to_return", "response": "Search the datasets in the DB."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_dataset(dataset_id, name, data_type, val, unit_id, metadata={}, flush=True, **kwargs):\n\n if dataset_id is None:\n raise HydraError(\"Dataset must have an ID to be updated.\")\n\n user_id = kwargs.get('user_id')\n\n dataset = db.DBSession.query(Dataset).filter(Dataset.id==dataset_id).one()\n #This dataset been seen before, so it may be attached\n #to other scenarios, which may be locked. 
If they are locked, we must\n #not change their data, so new data must be created for the unlocked scenarios\n locked_scenarios = []\n unlocked_scenarios = []\n for dataset_rs in dataset.resourcescenarios:\n if dataset_rs.scenario.locked == 'Y':\n locked_scenarios.append(dataset_rs)\n else:\n unlocked_scenarios.append(dataset_rs)\n\n #Are any of these scenarios locked?\n if len(locked_scenarios) > 0:\n #If so, create a new dataset and assign to all unlocked datasets.\n dataset = add_dataset(data_type,\n val,\n unit_id,\n metadata=metadata,\n name=name,\n user_id=kwargs['user_id'])\n for unlocked_rs in unlocked_scenarios:\n unlocked_rs.dataset = dataset\n\n else:\n\n dataset.type = data_type\n dataset.value = val\n dataset.set_metadata(metadata)\n\n dataset.unit_id = unit_id\n dataset.name = name\n dataset.created_by = kwargs['user_id']\n dataset.hash = dataset.set_hash()\n\n #Is there a dataset in the DB already which is identical to the updated dataset?\n existing_dataset = db.DBSession.query(Dataset).filter(Dataset.hash==dataset.hash, Dataset.id != dataset.id).first()\n if existing_dataset is not None and existing_dataset.check_user(user_id):\n log.warning(\"An identical dataset %s has been found to dataset %s.\"\n \" Deleting dataset and returning dataset %s\",\n existing_dataset.id, dataset.id, existing_dataset.id)\n db.DBSession.delete(dataset)\n dataset = existing_dataset\n if flush==True:\n db.DBSession.flush()\n\n return dataset", "response": "Update an existing dataset."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a dataset to the database.", "response": "def add_dataset(data_type, val, unit_id=None, metadata={}, name=\"\", user_id=None, flush=False):\n \"\"\"\n Data can exist without scenarios. 
This is the mechanism whereby\n single pieces of data can be added without doing it through a scenario.\n\n A typical use of this would be for setting default values on types.\n \"\"\"\n\n d = Dataset()\n\n d.type = data_type\n d.value = val\n d.set_metadata(metadata)\n\n d.unit_id = unit_id\n d.name = name\n d.created_by = user_id\n d.hash = d.set_hash()\n\n try:\n existing_dataset = db.DBSession.query(Dataset).filter(Dataset.hash==d.hash).one()\n if existing_dataset.check_user(user_id):\n d = existing_dataset\n else:\n d.set_metadata({'created_at': datetime.datetime.now()})\n d.set_hash()\n db.DBSession.add(d)\n except NoResultFound:\n db.DBSession.add(d)\n\n if flush == True:\n db.DBSession.flush()\n return d"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninserts data into the DB.", "response": "def _bulk_insert_data(bulk_data, user_id=None, source=None):\n \"\"\"\n Insert lots of datasets at once to reduce the number of DB interactions.\n user_id indicates the user adding the data\n source indicates the name of the app adding the data\n both user_id and source are added as metadata\n \"\"\"\n get_timing = lambda x: datetime.datetime.now() - x\n start_time=datetime.datetime.now()\n\n new_data = _process_incoming_data(bulk_data, user_id, source)\n log.info(\"Incoming data processed in %s\", (get_timing(start_time)))\n\n existing_data = _get_existing_data(new_data.keys())\n\n log.info(\"Existing data retrieved.\")\n\n #The list of dataset IDS to be returned.\n hash_id_map = {}\n new_datasets = []\n metadata = {}\n #This is what gets returned.\n for d in bulk_data:\n dataset_dict = new_data[d.hash]\n current_hash = d.hash\n\n #if this piece of data is already in the DB, then\n #there is no need to insert it!\n if existing_data.get(current_hash) is not None:\n\n dataset = existing_data.get(current_hash)\n #Is this user allowed to use this dataset?\n if dataset.check_user(user_id) == False:\n new_dataset = 
_make_new_dataset(dataset_dict)\n new_datasets.append(new_dataset)\n metadata[new_dataset['hash']] = dataset_dict['metadata']\n else:\n hash_id_map[current_hash] = dataset#existing_data[current_hash]\n elif current_hash in hash_id_map:\n new_datasets.append(dataset_dict)\n else:\n #set a placeholder for a dataset_id we don't know yet.\n #The placeholder is the hash, which is unique to this object and\n #therefore easily identifiable.\n new_datasets.append(dataset_dict)\n hash_id_map[current_hash] = dataset_dict\n metadata[current_hash] = dataset_dict['metadata']\n\n log.debug(\"Isolating new data %s\", get_timing(start_time))\n #Isolate only the new datasets and insert them\n new_data_for_insert = []\n #keep track of the datasets that are to be inserted to avoid duplicate\n #inserts\n new_data_hashes = []\n for d in new_datasets:\n if d['hash'] not in new_data_hashes:\n new_data_for_insert.append(d)\n new_data_hashes.append(d['hash'])\n\n if len(new_data_for_insert) > 0:\n \t#If we're working with mysql, we have to lock the table..\n \t#For sqlite, this is not possible. Hence the try: except\n #try:\n # db.DBSession.execute(\"LOCK TABLES tDataset WRITE, tMetadata WRITE\")\n #except OperationalError:\n # pass\n\n log.debug(\"Inserting new data %s\", get_timing(start_time))\n db.DBSession.bulk_insert_mappings(Dataset, new_data_for_insert)\n log.debug(\"New data Inserted %s\", get_timing(start_time))\n\n #try:\n # db.DBSession.execute(\"UNLOCK TABLES\")\n #except OperationalError:\n # pass\n\n\n new_data = _get_existing_data(new_data_hashes)\n log.debug(\"New data retrieved %s\", get_timing(start_time))\n\n for k, v in new_data.items():\n hash_id_map[k] = v\n\n _insert_metadata(metadata, hash_id_map)\n log.debug(\"Metadata inserted %s\", get_timing(start_time))\n\n returned_ids = []\n for d in bulk_data:\n returned_ids.append(hash_id_map[d.hash])\n\n log.info(\"Done bulk inserting data. 
%s datasets\", len(returned_ids))\n\n return returned_ids"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_metadata(dataset_ids):\n metadata = []\n if len(dataset_ids) == 0:\n return []\n if len(dataset_ids) > qry_in_threshold:\n idx = 0\n extent = qry_in_threshold\n while idx < len(dataset_ids):\n log.info(\"Querying %s metadatas\", len(dataset_ids[idx:extent]))\n rs = db.DBSession.query(Metadata).filter(Metadata.dataset_id.in_(dataset_ids[idx:extent])).all()\n metadata.extend(rs)\n idx = idx + qry_in_threshold\n\n if idx + qry_in_threshold > len(dataset_ids):\n extent = len(dataset_ids)\n else:\n extent = extent +qry_in_threshold\n else:\n metadata_qry = db.DBSession.query(Metadata).filter(Metadata.dataset_id.in_(dataset_ids))\n for m in metadata_qry:\n metadata.append(m)\n\n return metadata", "response": "Get all the metadata for a given list of datasets\n "} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets all the datasets in a list of dataset IDS.", "response": "def _get_datasets(dataset_ids):\n \"\"\"\n Get all the datasets in a list of dataset IDS. 
This must be done in chunks of 999,\n as sqlite can only handle 'in' with < 1000 elements.\n \"\"\"\n\n dataset_dict = {}\n\n datasets = []\n if len(dataset_ids) > qry_in_threshold:\n idx = 0\n extent =qry_in_threshold\n while idx < len(dataset_ids):\n log.info(\"Querying %s datasets\", len(dataset_ids[idx:extent]))\n rs = db.DBSession.query(Dataset).filter(Dataset.id.in_(dataset_ids[idx:extent])).all()\n datasets.extend(rs)\n idx = idx + qry_in_threshold\n\n if idx + qry_in_threshold > len(dataset_ids):\n extent = len(dataset_ids)\n else:\n extent = extent + qry_in_threshold\n else:\n datasets = db.DBSession.query(Dataset).filter(Dataset.id.in_(dataset_ids))\n\n\n for r in datasets:\n dataset_dict[r.id] = r\n\n log.info(\"Retrieved %s datasets\", len(dataset_dict))\n\n return dataset_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_collection(collection_id):\n try:\n collection = db.DBSession.query(DatasetCollection).filter(DatasetCollection.id==collection_id).one()\n return collection\n except NoResultFound:\n raise ResourceNotFoundError(\"No dataset collection found with id %s\"%collection_id)", "response": "Get a dataset collection by ID."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_collection_item(collection_id, dataset_id):\n collection_item = db.DBSession.query(DatasetCollectionItem).\\\n filter(DatasetCollectionItem.collection_id==collection_id,\n DatasetCollectionItem.dataset_id==dataset_id).first()\n return collection_item", "response": "Get a single dataset collection entry by collection ID and dataset ID"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_dataset_to_collection(dataset_id, collection_id, **kwargs):\n collection_i = _get_collection(collection_id)\n collection_item = _get_collection_item(collection_id, dataset_id)\n if 
collection_item is not None:\n raise HydraError(\"Dataset Collection %s already contains dataset %s\", collection_id, dataset_id)\n\n new_item = DatasetCollectionItem()\n new_item.dataset_id=dataset_id\n new_item.collection_id=collection_id\n\n collection_i.items.append(new_item)\n\n db.DBSession.flush()\n\n return 'OK'", "response": "Add a single dataset to a dataset collection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef remove_dataset_from_collection(dataset_id, collection_id, **kwargs):\n collection_i = _get_collection(collection_id)\n collection_item = _get_collection_item(collection_id, dataset_id)\n if collection_item is None:\n raise HydraError(\"Dataset %s is not in collection %s.\",\n dataset_id,\n collection_id)\n db.DBSession.delete(collection_item)\n db.DBSession.flush()\n\n db.DBSession.expunge_all()\n\n return 'OK'", "response": "Remove a single dataset from a dataset collection."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef check_dataset_in_collection(dataset_id, collection_id, **kwargs):\n\n _get_collection(collection_id)\n collection_item = _get_collection_item(collection_id, dataset_id)\n if collection_item is None:\n return 'N'\n else:\n return 'Y'", "response": "Check whether a dataset is contained inside a collection."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget all the datasets from the collection with the specified name", "response": "def get_collections_like_name(collection_name,**kwargs):\n \"\"\"\n Get all the datasets from the collection with the specified name\n \"\"\"\n try:\n collections = db.DBSession.query(DatasetCollection).filter(DatasetCollection.name.like(\"%%%s%%\"%collection_name.lower())).all()\n except NoResultFound:\n raise ResourceNotFoundError(\"No dataset collection found with name %s\"%collection_name)\n\n return collections"} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function for\ngetting all the datasets from the collection with the specified name", "response": "def get_collection_datasets(collection_id,**kwargs):\n \"\"\"\n Get all the datasets from the collection with the specified name\n \"\"\"\n collection_datasets = db.DBSession.query(Dataset).filter(Dataset.id==DatasetCollectionItem.dataset_id,\n DatasetCollectionItem.collection_id==DatasetCollection.id,\n DatasetCollection.id==collection_id).all()\n return collection_datasets"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngiving a timestamp or list of timestamps return the values appropriate to the requested times.", "response": "def get_val_at_time(dataset_id, timestamps,**kwargs):\n \"\"\"\n Given a timestamp (or list of timestamps) and some timeseries data,\n return the values appropriate to the requested times.\n\n If the timestamp is before the start of the timeseries data, return\n None If the timestamp is after the end of the timeseries data, return\n the last value. 
\"\"\"\n t = []\n for time in timestamps:\n t.append(get_datetime(time))\n dataset_i = db.DBSession.query(Dataset).filter(Dataset.id==dataset_id).one()\n #for time in t:\n # data.append(td.get_val(timestamp=time))\n\n data = dataset_i.get_val(timestamp=t)\n if data is not None:\n dataset = JSONObject({'data': json.dumps(data)})\n else:\n dataset = JSONObject({'data': None})\n\n return dataset"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_multiple_vals_at_time(dataset_ids, timestamps,**kwargs):\n datasets = _get_datasets(dataset_ids)\n datetimes = []\n for time in timestamps:\n datetimes.append(get_datetime(time))\n\n return_vals = {}\n for dataset_i in datasets.values():\n data = dataset_i.get_val(timestamp=datetimes)\n ret_data = {}\n if type(data) is list:\n for i, t in enumerate(timestamps):\n ret_data[t] = data[i]\n return_vals['dataset_%s'%dataset_i.id] = ret_data\n\n return return_vals", "response": "Given a list of timestamps and a list of datasets return the values appropriate to the requested times."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_vals_between_times(dataset_id, start_time, end_time, timestep,increment,**kwargs):\n try:\n server_start_time = get_datetime(start_time)\n server_end_time = get_datetime(end_time)\n times = [server_start_time]\n\n next_time = server_start_time\n while next_time < server_end_time:\n if int(increment) == 0:\n raise HydraError(\"%s is not a valid increment for this search.\"%increment)\n next_time = next_time + datetime.timedelta(**{timestep:int(increment)})\n times.append(next_time)\n except ValueError:\n try:\n server_start_time = Decimal(start_time)\n server_end_time = Decimal(end_time)\n times = [float(server_start_time)]\n\n next_time = server_start_time\n while next_time < server_end_time:\n next_time = float(next_time) + increment\n times.append(next_time)\n except:\n raise 
HydraError(\"Unable to get times. Please check to and from times.\")\n\n td = db.DBSession.query(Dataset).filter(Dataset.id==dataset_id).one()\n log.debug(\"Number of times to fetch: %s\", len(times))\n\n data = td.get_val(timestamp=times)\n\n data_to_return = []\n if type(data) is list:\n for d in data:\n if d is not None:\n data_to_return.append(list(d))\n elif data is None:\n data_to_return = []\n else:\n data_to_return.append(data)\n\n dataset = JSONObject({'data' : json.dumps(data_to_return)})\n\n return dataset", "response": "This function returns the values of a dataset between two specified times within a timeseries."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete a dataset from the DB.", "response": "def delete_dataset(dataset_id,**kwargs):\n \"\"\"\n Removes a piece of data from the DB.\n CAUTION! Use with care, as this cannot be undone easily.\n \"\"\"\n try:\n d = db.DBSession.query(Dataset).filter(Dataset.id==dataset_id).one()\n except NoResultFound:\n raise HydraError(\"Dataset %s does not exist.\"%dataset_id)\n\n dataset_rs = db.DBSession.query(ResourceScenario).filter(ResourceScenario.dataset_id==dataset_id).all()\n if len(dataset_rs) > 0:\n raise HydraError(\"Cannot delete %s. 
Dataset is used by one or more resource scenarios.\"%dataset_id)\n\n db.DBSession.delete(d)\n\n db.DBSession.flush()\n\n #Remove ORM references to children of this dataset (metadata, collection items)\n db.DBSession.expunge_all()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_notes(ref_key, ref_id, **kwargs):\n notes = db.DBSession.query(Note).filter(Note.ref_key==ref_key)\n if ref_key == 'NETWORK':\n notes = notes.filter(Note.network_id == ref_id)\n elif ref_key == 'NODE':\n notes = notes.filter(Note.node_id == ref_id)\n elif ref_key == 'LINK':\n notes = notes.filter(Note.link_id == ref_id)\n elif ref_key == 'GROUP':\n notes = notes.filter(Note.group_id == ref_id)\n elif ref_key == 'PROJECT':\n notes = notes.filter(Note.project_id == ref_id)\n elif ref_key == 'SCENARIO':\n notes = notes.filter(Note.scenario_id == ref_id)\n else:\n raise HydraError(\"Ref Key %s not recognised.\")\n\n note_rs = notes.all()\n\n return note_rs", "response": "Get all the notes for a resource identified by ref_key and ref_id."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_note(note, **kwargs):\n note_i = Note()\n note_i.ref_key = note.ref_key\n\n note_i.set_ref(note.ref_key, note.ref_id)\n\n note_i.value = note.value\n\n note_i.created_by = kwargs.get('user_id')\n\n db.DBSession.add(note_i)\n db.DBSession.flush()\n\n return note_i", "response": "Add a note to the\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nupdating a note in the database", "response": "def update_note(note, **kwargs):\n \"\"\"\n Update a note\n \"\"\"\n note_i = _get_note(note.id)\n\n if note.ref_key != note_i.ref_key:\n raise HydraError(\"Cannot convert a %s note to a %s note. 
Please create a new note instead.\"%(note_i.ref_key, note.ref_key))\n\n note_i.set_ref(note.ref_key, note.ref_id)\n\n note_i.value = note.value\n\n db.DBSession.flush()\n\n return note_i"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef purge_note(note_id, **kwargs):\n note_i = _get_note(note_id)\n\n db.DBSession.delete(note_i)\n\n db.DBSession.flush()", "response": "Remove a note from the DB permanently\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef login(username, password, **kwargs):\n\n user_id = util.hdb.login_user(username, password)\n\n hydra_session = session.Session({}, #This is normally a request object, but in this case is empty\n validate_key=config.get('COOKIES', 'VALIDATE_KEY', 'YxaDbzUUSo08b+'),\n type='file',\n cookie_expires=True,\n data_dir=config.get('COOKIES', 'DATA_DIR', '/tmp'),\n file_dir=config.get('COOKIES', 'FILE_DIR', '/tmp/auth'))\n\n hydra_session['user_id'] = user_id\n hydra_session['username'] = username\n hydra_session.save()\n\n return (user_id, hydra_session.id)", "response": "Login a user and return a tuple containing the user_id and session_id."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_session_user(session_id, **kwargs):\n\n hydra_session_object = session.SessionObject({}, #This is normally a request object, but in this case is empty\n validate_key=config.get('COOKIES', 'VALIDATE_KEY', 'YxaDbzUUSo08b+'),\n type='file',\n cookie_expires=True,\n data_dir=config.get('COOKIES', 'DATA_DIR', '/tmp'),\n file_dir=config.get('COOKIES', 'FILE_DIR', '/tmp/auth'))\n\n hydra_session = hydra_session_object.get_by_id(session_id)\n\n if hydra_session is not None:\n return hydra_session['user_id']\n\n return None", "response": "Given a session ID get the user ID that it is associated with\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 
function, write the documentation\ndef array_dim(arr):\n dim = []\n while True:\n try:\n dim.append(len(arr))\n arr = arr[0]\n except TypeError:\n return dim", "response": "Return the size of a multidimensional array."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef check_array_struct(array):\n\n #If a list is transformed into a numpy array and the sub elements\n #of this array are still lists, then numpy failed to fully convert\n #the list, meaning it is not symmetrical.\n try:\n arr = np.array(array)\n except:\n raise HydraError(\"Array %s is not valid.\"%(array,))\n if type(arr[0]) is list:\n raise HydraError(\"Array %s is not valid.\"%(array,))", "response": "Check to ensure that the given array is symmetrical."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef arr_to_vector(arr):\n dim = array_dim(arr)\n tmp_arr = []\n for n in range(len(dim) - 1):\n for inner in arr:\n for i in inner:\n tmp_arr.append(i)\n arr = tmp_arr\n tmp_arr = []\n return arr", "response": "Reshape a multidimensional array to a vector."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the value of a dataset as a single value or as a 1-d list of all the values.", "response": "def _get_val(val, full=False):\n \"\"\"\n Get the value(s) of a dataset as a single value or as 1-d list of\n values. 
In the special case of timeseries, when a check is for time-based\n criteria, you can return the entire timeseries.\n \"\"\"\n try:\n val = val.strip()\n except:\n pass\n\n logging.debug(\"%s, type=%s\", val, type(val))\n\n if isinstance(val, float):\n return val\n\n if isinstance(val, int):\n return val\n\n\n if isinstance(val, np.ndarray):\n return list(val)\n\n try:\n val = float(val)\n return val\n except:\n pass\n\n try:\n val = int(val)\n return val\n except:\n pass\n\n if type(val) == pd.DataFrame:\n\n if full:\n return val\n\n newval = []\n values = val.values\n for v in values:\n newv = _get_val(v)\n if type(newv) == list:\n newval.extend(newv)\n else:\n newval.append(newv)\n val = newval\n\n elif type(val) == dict:\n\n if full:\n return val\n\n newval = []\n for v in val.values():\n newv = _get_val(v)\n if type(newv) == list:\n newval.extend(newv)\n else:\n newval.append(newv)\n val = newval\n\n elif type(val) == list or type(val) == np.ndarray:\n newval = []\n for arr_val in val:\n v = _get_val(arr_val)\n newval.append(v)\n val = newval\n return val"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing the XML element of a resource restriction element and returns a dictionary of the types and values.", "response": "def get_restriction_as_dict(restriction_xml):\n \"\"\"\n turn:\n ::\n\n \n \n MAXLEN\n 3\n \n \n VALUERANGE\n 110\n \n \n\n into:\n ::\n\n {\n 'MAXLEN' : 3,\n 'VALUERANGE' : [1, 10]\n }\n\n \"\"\"\n restriction_dict = {}\n if restriction_xml is None:\n return restriction_dict\n\n if restriction_xml.find('restriction') is not None:\n restrictions = restriction_xml.findall('restriction')\n for restriction in restrictions:\n restriction_type = restriction.find('type').text\n restriction_val = restriction.find('value')\n val = None\n if restriction_val is not None:\n if restriction_val.text.strip() != \"\":\n val = _get_val(restriction_val.text)\n else:\n items = restriction_val.findall('item')\n val = []\n for item in 
items:\n val.append(_get_val(item.text))\n restriction_dict[restriction_type] = val\n return restriction_dict"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntesting to ensure that the given value is contained in the ENUM structure.", "response": "def validate_ENUM(in_value, restriction):\n \"\"\"\n Test to ensure that the given value is contained in the provided list.\n the value parameter must be either a single value or a 1-dimensional list.\n All the values in this list must satisfy the ENUM\n \"\"\"\n value = _get_val(in_value)\n if type(value) is list:\n for subval in value:\n if type(subval) is tuple:\n subval = subval[1]\n validate_ENUM(subval, restriction)\n else:\n if value not in restriction:\n raise ValidationError(\"ENUM : %s\"%(restriction))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nvalidates the NUMPLACES parameter of the resource.", "response": "def validate_NUMPLACES(in_value, restriction):\n \"\"\"\n the value parameter must be either a single value or a 1-dimensional list.\n All the values in this list must satisfy the condition\n \"\"\"\n #Sometimes restriction values can accidentally be put in the template 100,\n #Making them a list, not a number. 
Rather than blowing up, just get value 1 from the list.\n if type(restriction) is list:\n restriction = restriction[0]\n\n value = _get_val(in_value)\n if type(value) is list:\n for subval in value:\n if type(subval) is tuple:\n subval = subval[1]\n validate_NUMPLACES(subval, restriction)\n else:\n restriction = int(restriction) # Just in case..\n dec_val = Decimal(str(value))\n num_places = dec_val.as_tuple().exponent * -1 #exponent returns a negative num\n if restriction != num_places:\n raise ValidationError(\"NUMPLACES: %s\"%(restriction))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef validate_VALUERANGE(in_value, restriction):\n if len(restriction) != 2:\n raise ValidationError(\"Template ERROR: Only two values can be specified in a date range.\")\n value = _get_val(in_value)\n if type(value) is list:\n for subval in value:\n if type(subval) is tuple:\n subval = subval[1]\n validate_VALUERANGE(subval, restriction)\n else:\n min_val = Decimal(restriction[0])\n max_val = Decimal(restriction[1])\n val = Decimal(value)\n if val < min_val or val > max_val:\n raise ValidationError(\"VALUERANGE: %s, %s\"%(min_val, max_val))", "response": "Test to ensure that a value sits between a lower and upper bound."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate_DATERANGE(value, restriction):\n if len(restriction) != 2:\n raise ValidationError(\"Template ERROR: Only two values can be specified in a date range.\")\n\n if type(value) == pd.DataFrame:\n dates = [get_datetime(v) for v in list(value.index)]\n else:\n dates = value\n\n if type(dates) is list:\n for date in dates:\n validate_DATERANGE(date, restriction)\n return\n\n min_date = get_datetime(restriction[0])\n max_date = get_datetime(restriction[1])\n if value < min_date or value > max_date:\n raise ValidationError(\"DATERANGE: %s <%s> %s\"%(min_date,value,max_date))", "response": "Test to ensure that 
the times in a timeseries fall between a lower and upper bound."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntests to ensure that a list has the prescribed length.", "response": "def validate_MAXLEN(value, restriction):\n \"\"\"\n Test to ensure that a list has the prescribed length.\n Parameters: A list and an integer, which defines the required length of\n the list.\n \"\"\"\n #Sometimes restriction values can accidentally be put in the template 100,\n #Making them a list, not a number. Rather than blowing up, just get value 1 from the list.\n if type(restriction) is list:\n restriction = restriction[0]\n else:\n return\n\n if len(value) > restriction:\n raise ValidationError(\"MAXLEN: %s\"%(restriction))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntest to ensure that a value is less than a prescribed value.", "response": "def validate_LESSTHAN(in_value, restriction):\n \"\"\"\n Test to ensure that a value is less than a prescribed value.\n Parameter: Two values, which will be compared for the difference..\n \"\"\"\n #Sometimes restriction values can accidentally be put in the template 100,\n #Making them a list, not a number. 
Rather than blowing up, just get value 1 from the list.\n if type(restriction) is list:\n restriction = restriction[0]\n\n value = _get_val(in_value)\n if type(value) is list:\n for subval in value:\n if type(subval) is tuple:\n subval = subval[1]\n validate_LESSTHAN(subval, restriction)\n else:\n try:\n if value >= restriction:\n raise ValidationError(\"LESSTHAN: %s\"%(restriction))\n except TypeError:\n # Incompatible types for comparison.\n raise ValidationError(\"LESSTHAN: Incompatible types %s\"%(restriction))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntesting to ensure that a value is less than or equal to a prescribed value.", "response": "def validate_LESSTHANEQ(value, restriction):\n \"\"\"\n Test to ensure that a value is less than or equal to a prescribed value.\n Parameter: Two values, which will be compared for the difference..\n \"\"\"\n #Sometimes restriction values can accidentally be put in the template 100,\n #Making them a list, not a number. 
Rather than blowing up, just get value 1 from the list.\n if type(restriction) is list:\n restriction = restriction[0]\n\n value = _get_val(value)\n if type(value) is list:\n for subval in value:\n if type(subval) is tuple:\n subval = subval[1]\n validate_LESSTHANEQ(subval, restriction)\n else:\n try:\n if value > restriction:\n raise ValidationError(\"LESSTHANEQ: %s\" % (restriction))\n except TypeError:\n # Incompatible types for comparison.\n raise ValidationError(\"LESSTHANEQ: Incompatible types %s\"%(restriction))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntest to ensure that the values of a list sum to a specified value", "response": "def validate_SUMTO(in_value, restriction):\n \"\"\"\n Test to ensure the values of a list sum to a specified value:\n Parameters: a list of numeric values and a target to which the values\n in the list must sum\n \"\"\"\n #Sometimes restriction values can accidentally be put in the template 100,\n #Making them a list, not a number. 
Rather than blowing up, just get value 1 from the list.\n if type(restriction) is list:\n restriction = restriction[0]\n\n value = _get_val(in_value, full=True)\n\n if len(value) == 0:\n return\n\n flat_list = _flatten_value(value)\n\n try:\n total = sum(flat_list)\n except TypeError:\n raise ValidationError(\"List cannot be summed: %s\"%(flat_list,))\n\n if total != restriction:\n raise ValidationError(\"SUMTO: %s\"%(restriction))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate_INCREASING(in_value, restriction):\n\n flat_list = _flatten_value(in_value)\n\n previous = None\n for a in flat_list:\n if previous is None:\n previous = a\n continue\n try:\n if a < previous:\n raise ValidationError(\"INCREASING\")\n except TypeError:\n raise ValidationError(\"INCREASING: Incompatible types\")\n previous = a", "response": "Test to ensure the values in a list are increasing."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef validate_EQUALTIMESTEPS(value, restriction):\n if len(value) == 0:\n return\n\n if type(value) == pd.DataFrame:\n if str(value.index[0]).startswith('9999'):\n tmp_val = value.to_json().replace('9999', '1900')\n value = pd.read_json(tmp_val)\n\n\n #If the timeseries is not datetime-based, check for a consistent timestep\n if type(value.index) == pd.Int64Index:\n timesteps = list(value.index)\n timestep = timesteps[1] - timesteps[0]\n for i, t in enumerate(timesteps[1:]):\n if t - timesteps[i] != timestep:\n raise ValidationError(\"Timesteps not equal: %s\"%(list(value.index)))\n\n\n if not hasattr(value.index, 'inferred_freq'):\n raise ValidationError(\"Timesteps not equal: %s\"%(list(value.index),))\n\n if restriction is None:\n if value.index.inferred_freq is None:\n raise ValidationError(\"Timesteps not equal: %s\"%(list(value.index),))\n else:\n if value.index.inferred_freq != restriction:\n raise 
ValidationError(\"Timesteps not equal: %s\"%(list(value.index),))", "response": "Validate that the timesteps in a timeseries are equal to the specified restriction."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nflatten a value into a 1-dimensional list.", "response": "def _flatten_value(value):\n \"\"\"\n 1: Turn a multi-dimensional array into a 1-dimensional array\n 2: Turn a timeseries of values into a single 1-dimensional array\n \"\"\"\n\n if type(value) == pd.DataFrame:\n value = value.values.tolist()\n\n if type(value) != list:\n raise ValidationError(\"Value %s cannot be processed.\"%(value))\n\n if len(value) == 0:\n return\n\n flat_list = _flatten_list(value)\n\n return flat_list"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef count_levels(value):\n if not isinstance(value, dict) or len(value) == 0:\n return 0 #A non-dict or an empty dict has 0 levels\n else:\n nextval = list(value.values())[0]\n return 1 + count_levels(nextval)", "response": "Count how many levels are in a dict"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef flatten_dict(value, target_depth=1, depth=None):\n\n #failsafe in case someone specified null\n if target_depth is None:\n target_depth = 1\n\n values = list(value.values())\n if len(values) == 0:\n return {}\n else:\n if depth is None:\n depth = count_levels(value)\n\n if isinstance(values[0], dict) and len(values[0]) > 0:\n subval = list(values[0].values())[0]\n if not isinstance(subval, dict):\n return value\n\n if target_depth >= depth:\n return value\n\n flatval = {}\n for k in value.keys():\n subval = flatten_dict(value[k], target_depth, depth-1)\n for k1 in subval.keys():\n flatval[str(k)+\"_\"+str(k1)] = subval[k1]\n return flatval\n else:\n return value", "response": "Flatten a hashtable with multiple nested dicts and return a\nflattened dict where the 
keys are a concatenation of each sub-key."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_named_tuple(dbobject):\n\n values = [dbobject.__dict__[key] for key in dbobject.keys()]\n\n tuple_object = namedtuple('DBObject', dbobject.keys())\n\n tuple_instance = tuple_object._make(values)\n\n return tuple_instance", "response": "Convert a sqlalchemy object into a named tuple"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_val(dataset, timestamp=None):\n if dataset.type == 'array':\n #TODO: design a mechanism to retrieve this data if it's stored externally\n return json.loads(dataset.value)\n\n elif dataset.type == 'descriptor':\n return str(dataset.value)\n elif dataset.type == 'scalar':\n return Decimal(str(dataset.value))\n elif dataset.type == 'timeseries':\n #TODO: design a mechanism to retrieve this data if it's stored externally\n val = dataset.value\n\n seasonal_year = config.get('DEFAULT','seasonal_year', '1678')\n seasonal_key = config.get('DEFAULT', 'seasonal_key', '9999')\n val = dataset.value.replace(seasonal_key, seasonal_year)\n\n timeseries = pd.read_json(val, convert_axes=True)\n\n if timestamp is None:\n return timeseries\n else:\n try:\n idx = timeseries.index\n #Seasonal timeseries are stored in the year\n #1678 (the lowest year pandas allows for valid times).\n #Therefore if the timeseries is seasonal,\n #the request must be a seasonal request, not a\n #standard request\n\n if type(idx) == pd.DatetimeIndex:\n if set(idx.year) == set([int(seasonal_year)]):\n if isinstance(timestamp, list):\n seasonal_timestamp = []\n for t in timestamp:\n t_1900 = t.replace(year=int(seasonal_year))\n seasonal_timestamp.append(t_1900)\n timestamp = seasonal_timestamp\n else:\n timestamp = [timestamp.replace(year=int(seasonal_year))]\n\n pandas_ts = timeseries.reindex(timestamp, method='ffill')\n\n #If there are no values at 
all, just return None\n if len(pandas_ts.dropna()) == 0:\n return None\n\n #Replace all numpy NAN values with None\n pandas_ts = pandas_ts.where(pandas_ts.notnull(), None)\n\n val_is_array = False\n if len(pandas_ts.columns) > 1:\n val_is_array = True\n\n if val_is_array:\n if type(timestamp) is list and len(timestamp) == 1:\n ret_val = pandas_ts.loc[timestamp[0]].values.tolist()\n else:\n ret_val = pandas_ts.loc[timestamp].values.tolist()\n else:\n col_name = pandas_ts.loc[timestamp].columns[0]\n if type(timestamp) is list and len(timestamp) == 1:\n ret_val = pandas_ts.loc[timestamp[0]].loc[col_name]\n else:\n ret_val = pandas_ts.loc[timestamp][col_name].values.tolist()\n\n return ret_val\n\n except Exception as e:\n log.critical(\"Unable to retrive data. Check timestamps.\")\n log.critical(e)", "response": "Returns the value of a dataset in an appropriate language."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_layout_as_string(layout):\n\n if isinstance(layout, dict):\n return json.dumps(layout)\n\n if(isinstance(layout, six.string_types)):\n try:\n return get_layout_as_string(json.loads(layout))\n except:\n return layout", "response": "Take a dict or string and return a string."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntakes a dict or string and return a dict.", "response": "def get_layout_as_dict(layout):\n \"\"\"\n Take a dict or string and return a dict if the data is json-encoded.\n The string will json parsed to check for json validity. 
In order to deal\n with strings which have been json encoded multiple times, keep json decoding\n until a dict is retrieved or until a non-json structure is identified.\n \"\"\"\n\n if isinstance(layout, dict):\n return layout\n\n if(isinstance(layout, six.string_types)):\n try:\n return get_layout_as_dict(json.loads(layout))\n except:\n return layout"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_username(uid,**kwargs):\n rs = db.DBSession.query(User.username).filter(User.id==uid).one()\n\n if rs is None:\n raise ResourceNotFoundError(\"User with ID %s not found\"%uid)\n\n return rs.username", "response": "Get the username of a given user_id"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_usernames_like(username,**kwargs):\n checkname = \"%%%s%%\"%username\n rs = db.DBSession.query(User.username).filter(User.username.like(checkname)).all()\n return [r.username for r in rs]", "response": "Return a list of usernames like the given string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd a user to the user list", "response": "def add_user(user, **kwargs):\n \"\"\"\n Add a user\n \"\"\"\n #check_perm(kwargs.get('user_id'), 'add_user')\n u = User()\n\n u.username = user.username\n u.display_name = user.display_name\n\n user_id = _get_user_id(u.username)\n\n #If the user is already there, cannot add another with\n #the same username.\n if user_id is not None:\n raise HydraError(\"User %s already exists!\"%user.username)\n\n u.password = bcrypt.hashpw(str(user.password).encode('utf-8'), bcrypt.gensalt())\n\n db.DBSession.add(u)\n db.DBSession.flush()\n\n return u"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates a user s display name", "response": "def update_user_display_name(user,**kwargs):\n \"\"\"\n Update a user's display name\n \"\"\"\n 
#check_perm(kwargs.get('user_id'), 'edit_user')\n try:\n user_i = db.DBSession.query(User).filter(User.id==user.id).one()\n user_i.display_name = user.display_name\n return user_i\n except NoResultFound:\n raise ResourceNotFoundError(\"User (id=%s) not found\"%(user.id))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_user_password(new_pwd_user_id, new_password,**kwargs):\n #check_perm(kwargs.get('user_id'), 'edit_user')\n try:\n user_i = db.DBSession.query(User).filter(User.id==new_pwd_user_id).one()\n user_i.password = bcrypt.hashpw(str(new_password).encode('utf-8'), bcrypt.gensalt())\n return user_i\n except NoResultFound:\n raise ResourceNotFoundError(\"User (id=%s) not found\"%(new_pwd_user_id))", "response": "Update a user's password"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a user by ID", "response": "def get_user(uid, **kwargs):\n \"\"\"\n Get a user by ID\n \"\"\"\n user_id = kwargs.get('user_id')\n if uid is None:\n uid = user_id\n user_i = _get_user(uid)\n return user_i"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a user by username", "response": "def get_user_by_name(uname,**kwargs):\n \"\"\"\n Get a user by username\n \"\"\"\n try:\n user_i = db.DBSession.query(User).filter(User.username==uname).one()\n return user_i\n except NoResultFound:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a user by id", "response": "def get_user_by_id(uid,**kwargs):\n \"\"\"\n Get a user by id\n \"\"\"\n user_id = kwargs.get('user_id')\n try:\n user_i = _get_user(uid)\n return user_i\n except NoResultFound:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a new role.", "response": "def add_role(role,**kwargs):\n \"\"\"\n Add a new role\n \"\"\"\n #check_perm(kwargs.get('user_id'), 
'add_role')\n role_i = Role(name=role.name, code=role.code)\n db.DBSession.add(role_i)\n db.DBSession.flush()\n\n return role_i"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_perm(perm,**kwargs):\n #check_perm(kwargs.get('user_id'), 'add_perm')\n perm_i = Perm(name=perm.name, code=perm.code)\n db.DBSession.add(perm_i)\n db.DBSession.flush()\n\n return perm_i", "response": "Add a permission to a node"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\napplies role_id to new_user_id", "response": "def set_user_role(new_user_id, role_id, **kwargs):\n \"\"\"\n Apply `role_id` to `new_user_id`\n\n Note this function returns the `Role` instance associated with `role_id`\n \"\"\"\n #check_perm(kwargs.get('user_id'), 'edit_role')\n try:\n _get_user(new_user_id)\n role_i = _get_role(role_id)\n roleuser_i = RoleUser(user_id=new_user_id, role_id=role_id)\n role_i.roleusers.append(roleuser_i)\n db.DBSession.flush()\n except Exception as e: # Will occur if the foreign keys do not exist\n log.exception(e)\n raise ResourceNotFoundError(\"User or Role does not exist\")\n\n return role_i"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_user_role(deleted_user_id, role_id,**kwargs):\n #check_perm(kwargs.get('user_id'), 'edit_role')\n try:\n _get_user(deleted_user_id)\n _get_role(role_id)\n roleuser_i = db.DBSession.query(RoleUser).filter(RoleUser.user_id==deleted_user_id, RoleUser.role_id==role_id).one()\n db.DBSession.delete(roleuser_i)\n except NoResultFound:\n raise ResourceNotFoundError(\"User Role does not exist\")\n\n return 'OK'", "response": "Remove a user from a role"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_role_perm(role_id, perm_id,**kwargs):\n #check_perm(kwargs.get('user_id'), 'edit_perm')\n\n _get_perm(perm_id)\n role_i = 
_get_role(role_id)\n roleperm_i = RolePerm(role_id=role_id, perm_id=perm_id)\n\n role_i.roleperms.append(roleperm_i)\n\n db.DBSession.flush()\n\n return role_i", "response": "Insert a permission into a role"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves a permission from a role", "response": "def delete_role_perm(role_id, perm_id,**kwargs):\n \"\"\"\n Remove a permission from a role\n \"\"\"\n #check_perm(kwargs.get('user_id'), 'edit_perm')\n _get_perm(perm_id)\n _get_role(role_id)\n\n try:\n roleperm_i = db.DBSession.query(RolePerm).filter(RolePerm.role_id==role_id, RolePerm.perm_id==perm_id).one()\n db.DBSession.delete(roleperm_i)\n except NoResultFound:\n raise ResourceNotFoundError(\"Role Perm does not exist\")\n\n return 'OK'"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update_role(role,**kwargs):\n #check_perm(kwargs.get('user_id'), 'edit_role')\n try:\n role_i = db.DBSession.query(Role).filter(Role.id==role.id).one()\n role_i.name = role.name\n role_i.code = role.code\n except NoResultFound:\n raise ResourceNotFoundError(\"Role (role_id=%s) does not exist\"%(role.id))\n\n for perm in role.permissions:\n _get_perm(perm.id)\n roleperm_i = RolePerm(role_id=role.id,\n perm_id=perm.id\n )\n\n db.DBSession.add(roleperm_i)\n\n for user in role.users:\n _get_user(user.id)\n roleuser_i = RoleUser(user_id=user.id,\n role_id=role.id\n )\n\n db.DBSession.add(roleuser_i)\n\n db.DBSession.flush()\n return role_i", "response": "Update the role.\n Used to add permissions and users to a role."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the username and ID of all users.", "response": "def get_all_users(**kwargs):\n \"\"\"\n Get the username & ID of all users.\n Use the filter if it has been provided\n The filter has to be a list of values\n \"\"\"\n users_qry = db.DBSession.query(User)\n\n filter_type = 
kwargs.get('filter_type')\n filter_value = kwargs.get('filter_value')\n\n if filter_type is not None:\n # Filtering the search of users\n if filter_type == \"id\":\n if isinstance(filter_value, str):\n # Trying to read a csv string (avoid eval on user input)\n log.info(\"[HB.users] Getting user by Filter ID : %s\", filter_value)\n filter_value = [int(uid) for uid in filter_value.split(\",\")]\n if type(filter_value) is int:\n users_qry = users_qry.filter(User.id==filter_value)\n else:\n users_qry = users_qry.filter(User.id.in_(filter_value))\n elif filter_type == \"username\":\n if isinstance(filter_value, str):\n # Trying to read a csv string\n log.info(\"[HB.users] Getting user by Filter Username : %s\", filter_value)\n filter_value = filter_value.split(\",\")\n for i, em in enumerate(filter_value):\n log.info(\"[HB.users] >>> Getting user by single Username : %s\", em)\n filter_value[i] = em.strip()\n if isinstance(filter_value, str):\n users_qry = users_qry.filter(User.username==filter_value)\n else:\n users_qry = users_qry.filter(User.username.in_(filter_value))\n else:\n raise Exception(\"Filter type '{}' not allowed\".format(filter_type))\n\n else:\n log.info('[HB.users] Getting All Users')\n\n rs = users_qry.all()\n\n return rs"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_role(role_id,**kwargs):\n try:\n role = db.DBSession.query(Role).filter(Role.id==role_id).one()\n return role\n except NoResultFound:\n raise HydraError(\"Role not found (role_id={})\".format(role_id))", "response": "Get a role by its ID."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the roles for a user.", "response": "def get_user_roles(uid,**kwargs):\n \"\"\"\n Get the roles for a user.\n @param user_id\n \"\"\"\n try:\n user_roles = db.DBSession.query(Role).filter(Role.id==RoleUser.role_id,\n RoleUser.user_id==uid).all()\n return user_roles\n except NoResultFound:\n raise HydraError(\"Roles not found for user 
(user_id={})\".format(uid))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the permissions for a user.", "response": "def get_user_permissions(uid, **kwargs):\n \"\"\"\n Get the permissions for a user.\n @param user_id\n \"\"\"\n try:\n _get_user(uid)\n\n user_perms = db.DBSession.query(Perm).filter(Perm.id==RolePerm.perm_id,\n RolePerm.role_id==Role.id,\n Role.id==RoleUser.role_id,\n RoleUser.user_id==uid).all()\n return user_perms\n except:\n raise HydraError(\"Permissions not found for user (user_id={})\".format(uid))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets a role by its code", "response": "def get_role_by_code(role_code,**kwargs):\n \"\"\"\n Get a role by its code\n \"\"\"\n try:\n role = db.DBSession.query(Role).filter(Role.code==role_code).one()\n return role\n except NoResultFound:\n raise ResourceNotFoundError(\"Role not found (role_code={})\".format(role_code))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a permission by its ID.", "response": "def get_perm(perm_id,**kwargs):\n \"\"\"\n Get a permission by its ID\n \"\"\"\n\n try:\n perm = db.DBSession.query(Perm).filter(Perm.id==perm_id).one()\n return perm\n except NoResultFound:\n raise ResourceNotFoundError(\"Permission not found (perm_id={})\".format(perm_id))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget a permission by its code", "response": "def get_perm_by_code(perm_code,**kwargs):\n \"\"\"\n Get a permission by its code\n \"\"\"\n\n try:\n perm = db.DBSession.query(Perm).filter(Perm.code==perm_code).one()\n return perm\n except NoResultFound:\n raise ResourceNotFoundError(\"Permission not found (perm_code={})\".format(perm_code))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _create_dataframe(cls, value):\n try:\n\n ordered_jo = json.loads(six.text_type(value), 
object_pairs_hook=collections.OrderedDict)\n\n #Pandas does not maintain the order of dicts, so we must break the dict\n #up and put it into the dataframe manually to maintain the order.\n\n cols = list(ordered_jo.keys())\n\n if len(cols) == 0:\n raise ValueError(\"Dataframe has no columns\")\n\n #Assume all sub-dicts have the same set of keys\n if isinstance(ordered_jo[cols[0]], list):\n index = range(len(ordered_jo[cols[0]]))\n else:\n index = list(ordered_jo[cols[0]].keys())\n data = []\n for c in cols:\n if isinstance(ordered_jo[c], list):\n data.append(ordered_jo[c])\n else:\n data.append(list(ordered_jo[c].values()))\n\n # This goes in 'sideways' (cols=index, index=cols), so it needs to be transposed after to keep\n # the correct structure\n # We also try to coerce the data to a regular numpy array first. If the shape is correct\n # this is a much faster way of creating the DataFrame instance.\n try:\n np_data = np.array(data)\n except ValueError:\n np_data = None\n\n if np_data is not None and np_data.shape == (len(cols), len(index)):\n df = pd.DataFrame(np_data, columns=index, index=cols).transpose()\n else:\n # TODO should these heterogenous structure be supported?\n # See https://github.com/hydraplatform/hydra-base/issues/72\n df = pd.DataFrame(data, columns=index, index=cols).transpose()\n\n\n except ValueError as e:\n \"\"\" Raised on scalar types used as pd.DataFrame values\n in absence of index arg\n \"\"\"\n raise HydraError(str(e))\n\n except AssertionError as e:\n log.warning(\"An error occurred creating the new data frame: %s. Defaulting to a simple read_json\"%(e))\n df = pd.read_json(value).fillna(0)\n\n return df", "response": "Builds a dataframe from the value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_value(self):\n try:\n if self.value is None:\n log.warning(\"Cannot parse dataset. 
No value specified.\")\n return None\n\n # attr_data.value is a dictionary but the keys have namespaces which must be stripped\n data = six.text_type(self.value)\n\n if data.upper().strip() in (\"NULL\", \"\"):\n return \"NULL\"\n\n data = data[0:100]\n log.info(\"[Dataset.parse_value] Parsing %s (%s)\", data, type(data))\n\n return HydraObjectFactory.valueFromDataset(self.type, self.value, self.get_metadata_as_dict())\n\n except Exception as e:\n log.exception(e)\n raise HydraError(\"Error parsing value %s: %s\"%(self.value, e))", "response": "Turn the value of an incoming dataset into a hydra - friendly value."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_metadata_as_dict(self, user_id=None, source=None):\n\n if self.metadata is None or self.metadata == \"\":\n return {}\n\n metadata_dict = self.metadata if isinstance(self.metadata, dict) else json.loads(self.metadata)\n\n # These should be set on all datasets by default, but we don't enforce this rigidly\n metadata_keys = [m.lower() for m in metadata_dict]\n if user_id is not None and 'user_id' not in metadata_keys:\n metadata_dict['user_id'] = six.text_type(user_id)\n\n if source is not None and 'source' not in metadata_keys:\n metadata_dict['source'] = six.text_type(source)\n\n return { k : six.text_type(v) for k, v in metadata_dict.items() }", "response": "Convert a metadata json string into a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a new resource group to a network.", "response": "def add_resourcegroup(group, network_id,**kwargs):\n \"\"\"\n Add a new group to a network.\n \"\"\"\n group_i = ResourceGroup()\n group_i.name = group.name\n group_i.description = group.description\n group_i.status = group.status\n group_i.network_id = network_id\n db.DBSession.add(group_i)\n db.DBSession.flush()\n return group_i"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where 
it\ndeletes a group from the database.", "response": "def delete_resourcegroup(group_id,**kwargs):\n \"\"\"\n Delete a resource group.\n \"\"\"\n group_i = _get_group(group_id)\n #This should cascade to delete all the group items.\n db.DBSession.delete(group_i)\n db.DBSession.flush()\n\n return 'OK'"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update_resourcegroup(group,**kwargs):\n\n group_i = _get_group(group.id)\n group_i.name = group.name\n group_i.description = group.description\n group_i.status = group.status\n\n db.DBSession.flush()\n\n return group_i", "response": "Update the name, description and status of a group in the network."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_sqlite_backup_db(audit_tables):\n #we always want to create a whole new DB, so delete the old one first\n #if it exists.\n try:\n Popen(\"rm %s\"%(config.get('sqlite', 'backup_url')), shell=True)\n logging.warn(\"Old sqlite backup DB removed\")\n except Exception as e:\n logging.warn(e)\n\n try:\n aux_dir = config.get('DEFAULT', 'hydra_aux_dir')\n os.mkdir(aux_dir)\n logging.warn(\"%s created\", aux_dir)\n except Exception as e:\n logging.warn(e)\n\n try:\n backup_dir = config.get('db', 'export_target')\n os.mkdir(backup_dir)\n logging.warn(\"%s created\", backup_dir)\n except Exception as e:\n logging.warn(e)\n\n db = create_engine(sqlite_engine, echo=True)\n db.connect()\n metadata = MetaData(db)\n\n for main_audit_table in audit_tables:\n cols = []\n for c in main_audit_table.columns:\n col = c.copy()\n if col.type.python_type == Decimal:\n col.type = DECIMAL()\n\n cols.append(col)\n Table(main_audit_table.name, metadata, *cols, sqlite_autoincrement=True)\n\n metadata.create_all(db)", "response": "create a sqlite backup DB"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef 
_is_admin(user_id):\n user = get_session().query(User).filter(User.id==user_id).one()\n\n if user.is_admin():\n return True\n else:\n return False", "response": "Is the specified user an admin?"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_metadata(self, metadata_dict):\n if metadata_dict is None:\n return\n\n existing_metadata = []\n for m in self.metadata:\n existing_metadata.append(m.key)\n if m.key in metadata_dict:\n if m.value != metadata_dict[m.key]:\n m.value = metadata_dict[m.key]\n\n\n for k, v in metadata_dict.items():\n if k not in existing_metadata:\n m_i = Metadata(key=str(k),value=str(v))\n self.metadata.append(m_i)\n\n metadata_to_delete = set(existing_metadata).difference(set(metadata_dict.keys()))\n for m in self.metadata:\n if m.key in metadata_to_delete:\n get_session().delete(m)", "response": "Sets the metadata on a dataset"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck whether this user can read this dataset", "response": "def check_user(self, user_id):\n \"\"\"\n Check whether this user can read this dataset\n \"\"\"\n\n if self.hidden == 'N':\n return True\n\n for owner in self.owners:\n if int(owner.user_id) == int(user_id):\n if owner.view == 'Y':\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_network(self):\n ref_key = self.ref_key\n if ref_key == 'NETWORK':\n return self.network\n elif ref_key == 'NODE':\n return self.node.network\n elif ref_key == 'LINK':\n return self.link.network\n elif ref_key == 'GROUP':\n return self.group.network\n elif ref_key == 'PROJECT':\n return None", "response": "Get the network that this resource attribute is in."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef check_read_permission(self, user_id, do_raise=True):\n return 
self.get_resource().check_read_permission(user_id, do_raise=do_raise)", "response": "Check whether this user can read this attribute"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_write_permission(self, user_id, do_raise=True):\n return self.get_resource().check_write_permission(user_id, do_raise=do_raise)", "response": "Check whether this user can write this node."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_link(self, name, desc, layout, node_1, node_2):\n\n existing_link = get_session().query(Link).filter(Link.name==name, Link.network_id==self.id).first()\n if existing_link is not None:\n raise HydraError(\"A link with name %s is already in network %s\"%(name, self.id))\n\n l = Link()\n l.name = name\n l.description = desc\n l.layout = json.dumps(layout) if layout is not None else None\n l.node_a = node_1\n l.node_b = node_2\n\n get_session().add(l)\n\n self.links.append(l)\n\n return l", "response": "Add a link to a network."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_node(self, name, desc, layout, node_x, node_y):\n existing_node = get_session().query(Node).filter(Node.name==name, Node.network_id==self.id).first()\n if existing_node is not None:\n raise HydraError(\"A node with name %s is already in network %s\"%(name, self.id))\n\n node = Node()\n node.name = name\n node.description = desc\n node.layout = str(layout) if layout is not None else None\n node.x = node_x\n node.y = node_y\n\n #Do not call save here because it is likely that we may want\n #to bulk insert nodes, not one at a time.\n\n get_session().add(node)\n\n self.nodes.append(node)\n\n return node", "response": "Add a node to a network."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a new resource group to a network.", "response": "def add_group(self, name, 
desc, status):\n \"\"\"\n Add a new group to a network.\n \"\"\"\n\n existing_group = get_session().query(ResourceGroup).filter(ResourceGroup.name==name, ResourceGroup.network_id==self.id).first()\n if existing_group is not None:\n raise HydraError(\"A resource group with name %s is already in network %s\"%(name, self.id))\n\n group_i = ResourceGroup()\n group_i.name = name\n group_i.description = desc\n group_i.status = status\n\n get_session().add(group_i)\n\n self.resourcegroups.append(group_i)\n\n\n return group_i"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks whether this user can read this network", "response": "def check_read_permission(self, user_id, do_raise=True):\n \"\"\"\n Check whether this user can read this network\n \"\"\"\n if _is_admin(user_id):\n return True\n\n if int(self.created_by) == int(user_id):\n return True\n\n for owner in self.owners:\n if int(owner.user_id) == int(user_id):\n if owner.view == 'Y':\n break\n else:\n if do_raise is True:\n raise PermissionError(\"Permission denied. User %s does not have read\"\n \" access on network %s\" %\n (user_id, self.id))\n else:\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking whether this user can write this project", "response": "def check_share_permission(self, user_id):\n \"\"\"\n Check whether this user can write this project\n \"\"\"\n\n if _is_admin(user_id):\n return\n\n if int(self.created_by) == int(user_id):\n return\n\n for owner in self.owners:\n if owner.user_id == int(user_id):\n if owner.view == 'Y' and owner.share == 'Y':\n break\n else:\n raise PermissionError(\"Permission denied. 
User %s does not have share\"\n \" access on network %s\" %\n (user_id, self.id))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef check_read_permission(self, user_id, do_raise=True):\n return self.network.check_read_permission(user_id, do_raise=do_raise)", "response": "Check whether this user can read this link."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_write_permission(self, user_id, do_raise=True):\n\n return self.network.check_write_permission(user_id, do_raise=do_raise)", "response": "Check whether this user can write this link."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_items(self, scenario_id):\n items = get_session().query(ResourceGroupItem)\\\n .filter(ResourceGroupItem.group_id==self.id).\\\n filter(ResourceGroupItem.scenario_id==scenario_id).all()\n\n return items", "response": "Get all the items in this group in the given scenario"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the reference to the appropriate resource type.", "response": "def set_ref(self, ref_key, ref_id):\n \"\"\"\n Using a ref key and ref id set the\n reference to the appropriate resource type.\n \"\"\"\n if ref_key == 'NETWORK':\n self.network_id = ref_id\n elif ref_key == 'NODE':\n self.node_id = ref_id\n elif ref_key == 'LINK':\n self.link_id = ref_id\n elif ref_key == 'GROUP':\n self.group_id = ref_id\n elif ref_key == 'SCENARIO':\n self.scenario_id = ref_id\n elif ref_key == 'PROJECT':\n self.project_id = ref_id\n\n else:\n raise HydraError(\"Ref Key %s not recognised.\"%ref_key)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the ID of the resource to which this not is attached to the resource.", "response": "def get_ref_id(self):\n\n \"\"\"\n Return the ID of 
the resource to which this note is attached\n        \"\"\"\n        if self.ref_key == 'NETWORK':\n            return self.network_id\n        elif self.ref_key == 'NODE':\n            return self.node_id\n        elif self.ref_key == 'LINK':\n            return self.link_id\n        elif self.ref_key == 'GROUP':\n            return self.group_id\n        elif self.ref_key == 'SCENARIO':\n            return self.scenario_id\n        elif self.ref_key == 'PROJECT':\n            return self.project_id"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the resource to which this note is attached.", "response": "def get_ref(self):\n        \"\"\"\n            Return the resource to which this note is attached\n        \"\"\"\n        if self.ref_key == 'NETWORK':\n            return self.network\n        elif self.ref_key == 'NODE':\n            return self.node\n        elif self.ref_key == 'LINK':\n            return self.link\n        elif self.ref_key == 'GROUP':\n            return self.group\n        elif self.ref_key == 'SCENARIO':\n            return self.scenario\n        elif self.ref_key == 'PROJECT':\n            return self.project"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a set with all permissions granted to the user.", "response": "def permissions(self):\n        \"\"\"Return a set with all permissions granted to the user.\"\"\"\n        perms = set()\n        for r in self.roles:\n            perms = perms | set(r.permissions)\n        return perms"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a set with all roles granted to the user.", "response": "def roles(self):\n        \"\"\"Return a set with all roles granted to the user.\"\"\"\n        roles = []\n        for ur in self.roleusers:\n            roles.append(ur.role)\n        return set(roles)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck that the user is an admin", "response": "def is_admin(self):\n        \"\"\"\n            Check that the user has a role with the code 'admin'\n        \"\"\"\n        for ur in self.roleusers:\n            if ur.role.code == 'admin':\n                return True\n\n        return False"} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function to\ncheck that the unit and dimension on a type attribute match.", "response": "def _check_dimension(typeattr, unit_id=None):\n \"\"\"\n Check that the unit and dimension on a type attribute match.\n Alternatively, pass in a unit manually to check against the dimension\n of the type attribute\n \"\"\"\n if unit_id is None:\n unit_id = typeattr.unit_id\n\n dimension_id = _get_attr(typeattr.attr_id).dimension_id\n\n if unit_id is not None and dimension_id is None:\n # First error case\n unit_dimension_id = units.get_dimension_by_unit_id(unit_id).id\n raise HydraError(\"Unit %s (abbreviation=%s) has dimension_id %s(name=%s), but attribute has no dimension\"%\n (unit_id, units.get_unit(unit_id).abbreviation,\n unit_dimension_id, units.get_dimension(unit_dimension_id, do_accept_dimension_id_none=True).name))\n elif unit_id is not None and dimension_id is not None:\n unit_dimension_id = units.get_dimension_by_unit_id(unit_id).id\n if unit_dimension_id != dimension_id:\n # Only error case\n raise HydraError(\"Unit %s (abbreviation=%s) has dimension_id %s(name=%s), but attribute has dimension_id %s(name=%s)\"%\n (unit_id, units.get_unit(unit_id).abbreviation,\n unit_dimension_id, units.get_dimension(unit_dimension_id, do_accept_dimension_id_none=True).name,\n dimension_id, units.get_dimension(dimension_id, do_accept_dimension_id_none=True).name))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_types_by_attr(resource, template_id=None):\n\n resource_type_templates = []\n\n #Create a list of all of this resources attributes.\n attr_ids = []\n for res_attr in resource.attributes:\n attr_ids.append(res_attr.attr_id)\n all_resource_attr_ids = set(attr_ids)\n\n all_types = db.DBSession.query(TemplateType).options(joinedload_all('typeattrs')).filter(TemplateType.resource_type==resource.ref_key)\n if template_id is not None:\n all_types = 
all_types.filter(TemplateType.template_id==template_id)\n\n    all_types = all_types.all()\n\n    #tmpl type attrs must be a subset of the resource's attrs\n    for ttype in all_types:\n        type_attr_ids = []\n        for typeattr in ttype.typeattrs:\n            type_attr_ids.append(typeattr.attr_id)\n        if set(type_attr_ids).issubset(all_resource_attr_ids):\n            resource_type_templates.append(ttype)\n\n    return resource_type_templates", "response": "Get all the template types whose attributes are a subset of this resource's attributes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget an attribute by name and dimension_id.", "response": "def _get_attr_by_name_and_dimension(name, dimension_id):\n    \"\"\"\n        Search for an attribute with the given name and dimension_id.\n        If such an attribute does not exist, create one.\n    \"\"\"\n\n    attr = db.DBSession.query(Attr).filter(Attr.name==name, Attr.dimension_id==dimension_id).first()\n\n    if attr is None:\n        # In this case the attr does not exist so we must create it\n        attr = Attr()\n        attr.dimension_id = dimension_id\n        attr.name = name\n\n        log.debug(\"Attribute not found, creating new attribute: name:%s, dimen:%s\",\n                    attr.name, attr.dimension_id)\n\n        db.DBSession.add(attr)\n\n    return attr"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a template as a JSON string.", "response": "def get_template_as_json(template_id, **kwargs):\n    \"\"\"\n        Get a template (including attribute and dataset definitions) as a JSON\n        string. 
This is just a wrapper around the get_template_as_dict function.\n    \"\"\"\n    user_id = kwargs['user_id']\n    return json.dumps(get_template_as_dict(template_id, user_id=user_id))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_template_as_xml(template_id,**kwargs):\n    template_xml = etree.Element(\"template_definition\")\n\n    template_i = db.DBSession.query(Template).filter(\n        Template.id==template_id).options(\n            #joinedload_all('templatetypes.typeattrs.default_dataset.metadata')\n            joinedload('templatetypes').joinedload('typeattrs').joinedload('default_dataset').joinedload('metadata')\n\n        ).one()\n\n    template_name = etree.SubElement(template_xml, \"template_name\")\n    template_name.text = template_i.name\n    template_description = etree.SubElement(template_xml, \"template_description\")\n    template_description.text = template_i.description\n    resources = etree.SubElement(template_xml, \"resources\")\n\n    for type_i in template_i.templatetypes:\n        xml_resource = etree.SubElement(resources, \"resource\")\n\n        resource_type = etree.SubElement(xml_resource, \"type\")\n        resource_type.text = type_i.resource_type\n\n        name = etree.SubElement(xml_resource, \"name\")\n        name.text = type_i.name\n\n        description = etree.SubElement(xml_resource, \"description\")\n        description.text = type_i.description\n\n        alias = etree.SubElement(xml_resource, \"alias\")\n        alias.text = type_i.alias\n\n        if type_i.layout is not None and type_i.layout != \"\":\n            layout = _get_layout_as_etree(type_i.layout)\n            xml_resource.append(layout)\n\n        for type_attr in type_i.typeattrs:\n            attr = _make_attr_element_from_typeattr(xml_resource, type_attr)\n\n        resources.append(xml_resource)\n\n    xml_string = etree.tostring(template_xml, encoding=\"unicode\")\n\n    return xml_string", "response": "Serialize a template, including its types and type attributes, to an XML string."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nimporting a template from a JSON string.", 
"response": "def import_template_json(template_json_string,allow_update=True, **kwargs):\n \"\"\"\n Add the template, type and typeattrs described\n in a JSON file.\n\n Delete type, typeattr entries in the DB that are not in the XML file\n The assumption is that they have been deleted and are no longer required.\n\n The allow_update indicates whether an existing template of the same name should\n be updated, or whether it should throw an 'existing name' error.\n \"\"\"\n\n user_id = kwargs.get('user_id')\n\n try:\n template_dict = json.loads(template_json_string)\n except:\n raise HydraError(\"Unable to parse JSON string. Plese ensure it is JSON compatible.\")\n\n return import_template_dict(template_dict, allow_update=allow_update, user_id=user_id)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nimports a template XML file into the database.", "response": "def import_template_xml(template_xml, allow_update=True, **kwargs):\n \"\"\"\n Add the template, type and typeattrs described\n in an XML file.\n\n Delete type, typeattr entries in the DB that are not in the XML file\n The assumption is that they have been deleted and are no longer required.\n \"\"\"\n\n template_xsd_path = config.get('templates', 'template_xsd_path')\n xmlschema_doc = etree.parse(template_xsd_path)\n\n xmlschema = etree.XMLSchema(xmlschema_doc)\n\n xml_tree = etree.fromstring(template_xml)\n\n xmlschema.assertValid(xml_tree)\n\n template_name = xml_tree.find('template_name').text\n template_description = xml_tree.find('template_description').text\n\n template_layout = None\n if xml_tree.find('layout') is not None and \\\n xml_tree.find('layout').text is not None:\n layout = xml_tree.find('layout')\n layout_string = get_etree_layout_as_dict(layout)\n template_layout = json.dumps(layout_string)\n\n try:\n tmpl_i = db.DBSession.query(Template).filter(Template.name==template_name).options(joinedload_all('templatetypes.typeattrs.attr')).one()\n\n if 
allow_update == False:\n raise HydraError(\"Existing Template Found with name %s\"%(template_name,))\n else:\n log.debug(\"Existing template found. name=%s\", template_name)\n tmpl_i.layout = template_layout\n tmpl_i.description = template_description\n except NoResultFound:\n log.debug(\"Template not found. Creating new one. name=%s\", template_name)\n tmpl_i = Template(name=template_name, description=template_description, layout=template_layout)\n db.DBSession.add(tmpl_i)\n\n types = xml_tree.find('resources')\n #Delete any types which are in the DB but no longer in the XML file\n type_name_map = {r.name:r.id for r in tmpl_i.templatetypes}\n attr_name_map = {}\n for type_i in tmpl_i.templatetypes:\n for attr in type_i.typeattrs:\n attr_name_map[attr.attr.name] = (attr.id, attr.type_id)\n\n existing_types = set([r.name for r in tmpl_i.templatetypes])\n\n new_types = set([r.find('name').text for r in types.findall('resource')])\n\n types_to_delete = existing_types - new_types\n\n for type_to_delete in types_to_delete:\n type_id = type_name_map[type_to_delete]\n try:\n type_i = db.DBSession.query(TemplateType).filter(TemplateType.id==type_id).one()\n log.debug(\"Deleting type %s\", type_i.name)\n db.DBSession.delete(type_i)\n except NoResultFound:\n pass\n\n #Add or update types.\n for resource in types.findall('resource'):\n type_name = resource.find('name').text\n #check if the type is already in the DB. 
If not, create a new one.\n type_is_new = False\n if type_name in existing_types:\n type_id = type_name_map[type_name]\n type_i = db.DBSession.query(TemplateType).filter(TemplateType.id==type_id).options(joinedload_all('typeattrs.attr')).one()\n\n else:\n log.debug(\"Type %s not found, creating new one.\", type_name)\n type_i = TemplateType()\n type_i.name = type_name\n tmpl_i.templatetypes.append(type_i)\n type_is_new = True\n\n if resource.find('alias') is not None:\n type_i.alias = resource.find('alias').text\n\n if resource.find('description') is not None:\n type_i.description = resource.find('description').text\n\n if resource.find('type') is not None:\n type_i.resource_type = resource.find('type').text\n\n if resource.find('layout') is not None and \\\n resource.find('layout').text is not None:\n layout = resource.find('layout')\n layout_string = get_etree_layout_as_dict(layout)\n type_i.layout = json.dumps(layout_string)\n\n #delete any TypeAttrs which are in the DB but not in the XML file\n existing_attrs = []\n if not type_is_new:\n for r in tmpl_i.templatetypes:\n if r.name == type_name:\n for typeattr in r.typeattrs:\n existing_attrs.append(typeattr.attr.name)\n\n existing_attrs = set(existing_attrs)\n\n template_attrs = set([r.find('name').text for r in resource.findall('attribute')])\n\n attrs_to_delete = existing_attrs - template_attrs\n for attr_to_delete in attrs_to_delete:\n attr_id, type_id = attr_name_map[attr_to_delete]\n try:\n attr_i = db.DBSession.query(TypeAttr).filter(TypeAttr.attr_id==attr_id, TypeAttr.type_id==type_id).options(joinedload_all('attr')).one()\n db.DBSession.delete(attr_i)\n log.debug(\"Attr %s in type %s deleted\",attr_i.attr.name, attr_i.templatetype.name)\n except NoResultFound:\n log.debug(\"Attr %s not found in type %s\"%(attr_id, type_id))\n continue\n\n #Add or update type typeattrs\n for attribute in resource.findall('attribute'):\n new_typeattr = parse_xml_typeattr(type_i, attribute)\n\n db.DBSession.flush()\n\n 
return tmpl_i"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\napplying a template to a network.", "response": "def apply_template_to_network(template_id, network_id, **kwargs):\n \"\"\"\n For each node and link in a network, check whether it matches\n a type in a given template. If so, assign the type to the node / link.\n \"\"\"\n\n net_i = db.DBSession.query(Network).filter(Network.id==network_id).one()\n #There should only ever be one matching type, but if there are more,\n #all we can do is pick the first one.\n try:\n network_type_id = db.DBSession.query(TemplateType.id).filter(TemplateType.template_id==template_id,\n TemplateType.resource_type=='NETWORK').one()\n assign_type_to_resource(network_type_id.id, 'NETWORK', network_id,**kwargs)\n except NoResultFound:\n log.debug(\"No network type to set.\")\n pass\n\n for node_i in net_i.nodes:\n templates = get_types_by_attr(node_i, template_id)\n if len(templates) > 0:\n assign_type_to_resource(templates[0].id, 'NODE', node_i.id,**kwargs)\n for link_i in net_i.links:\n templates = get_types_by_attr(link_i, template_id)\n if len(templates) > 0:\n assign_type_to_resource(templates[0].id, 'LINK', link_i.id,**kwargs)\n\n for group_i in net_i.resourcegroups:\n templates = get_types_by_attr(group_i, template_id)\n if len(templates) > 0:\n assign_type_to_resource(templates[0].id, 'GROUP', group_i.id,**kwargs)\n\n db.DBSession.flush()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_network_template(template_id, network_id, **kwargs):\n\n resource_types = []\n\n #There should only ever be one matching type, but if there are more,\n #all we can do is pick the first one.\n try:\n network_type = db.DBSession.query(ResourceType).filter(ResourceType.ref_key=='NETWORK',\n ResourceType.network_id==network_id,\n ResourceType.type_id==TemplateType.type_id,\n TemplateType.template_id==template_id).one()\n resource_types.append(network_type)\n\n except 
NoResultFound:\n log.debug(\"No network type to set.\")\n pass\n\n node_types = db.DBSession.query(ResourceType).filter(ResourceType.ref_key=='NODE',\n ResourceType.node_id==Node.node_id,\n Node.network_id==network_id,\n ResourceType.type_id==TemplateType.type_id,\n TemplateType.template_id==template_id).all()\n link_types = db.DBSession.query(ResourceType).filter(ResourceType.ref_key=='LINK',\n ResourceType.link_id==Link.link_id,\n Link.network_id==network_id,\n ResourceType.type_id==TemplateType.type_id,\n TemplateType.template_id==template_id).all()\n group_types = db.DBSession.query(ResourceType).filter(ResourceType.ref_key=='GROUP',\n ResourceType.group_id==ResourceGroup.group_id,\n ResourceGroup.network_id==network_id,\n ResourceType.type_id==TemplateType.type_id,\n TemplateType.template_id==template_id).all()\n\n resource_types.extend(node_types)\n resource_types.extend(link_types)\n resource_types.extend(group_types)\n\n assign_types_to_resources(resource_types)\n\n log.debug(\"Finished setting network template\")", "response": "This function is used to set a template to a network."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef remove_template_from_network(network_id, template_id, remove_attrs, **kwargs):\n\n try:\n network = db.DBSession.query(Network).filter(Network.id==network_id).one()\n except NoResultFound:\n raise HydraError(\"Network %s not found\"%network_id)\n\n try:\n template = db.DBSession.query(Template).filter(Template.id==template_id).one()\n except NoResultFound:\n raise HydraError(\"Template %s not found\"%template_id)\n\n type_ids = [tmpltype.id for tmpltype in template.templatetypes]\n\n node_ids = [n.id for n in network.nodes]\n link_ids = [l.id for l in network.links]\n group_ids = [g.id for g in network.resourcegroups]\n\n if remove_attrs == 'Y':\n #find the attributes to remove\n resource_attrs_to_remove = _get_resources_to_remove(network, template)\n for n in 
network.nodes:\n            resource_attrs_to_remove.extend(_get_resources_to_remove(n, template))\n        for l in network.links:\n            resource_attrs_to_remove.extend(_get_resources_to_remove(l, template))\n        for g in network.resourcegroups:\n            resource_attrs_to_remove.extend(_get_resources_to_remove(g, template))\n\n        for ra in resource_attrs_to_remove:\n            db.DBSession.delete(ra)\n\n    resource_types = db.DBSession.query(ResourceType).filter(\n        and_(or_(\n            ResourceType.network_id==network_id,\n            ResourceType.node_id.in_(node_ids),\n            ResourceType.link_id.in_(link_ids),\n            ResourceType.group_id.in_(group_ids),\n        ), ResourceType.type_id.in_(type_ids))).all()\n\n    for resource_type in resource_types:\n        db.DBSession.delete(resource_type)\n\n    db.DBSession.flush()", "response": "Removes a template from a network."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_resources_to_remove(resource, template):\n    type_ids = [tmpltype.id for tmpltype in template.templatetypes]\n\n    node_attr_ids = dict([(ra.attr_id, ra) for ra in resource.attributes])\n    attrs_to_remove = []\n    attrs_to_keep = []\n    for nt in resource.types:\n        if nt.templatetype.id in type_ids:\n            for ta in nt.templatetype.typeattrs:\n                if node_attr_ids.get(ta.attr_id):\n                    attrs_to_remove.append(node_attr_ids[ta.attr_id])\n        else:\n            for ta in nt.templatetype.typeattrs:\n                if node_attr_ids.get(ta.attr_id):\n                    attrs_to_keep.append(node_attr_ids[ta.attr_id])\n    #Do not remove attributes marked for deletion if they are also\n    #marked for keeping because they appear in another type.\n    final_attrs_to_remove = set(attrs_to_remove) - set(attrs_to_keep)\n\n    return list(final_attrs_to_remove)", "response": "Identify the resource attributes which can be removed from a resource when a template is removed."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_matching_resource_types(resource_type, resource_id,**kwargs):\n    resource_i = None\n    if resource_type == 
'NETWORK':\n resource_i = db.DBSession.query(Network).filter(Network.id==resource_id).one()\n elif resource_type == 'NODE':\n resource_i = db.DBSession.query(Node).filter(Node.id==resource_id).one()\n elif resource_type == 'LINK':\n resource_i = db.DBSession.query(Link).filter(Link.id==resource_id).one()\n elif resource_type == 'GROUP':\n resource_i = db.DBSession.query(ResourceGroup).filter(ResourceGroup.id==resource_id).one()\n\n matching_types = get_types_by_attr(resource_i)\n return matching_types", "response": "Get the possible types of a resource by checking its attributes\n against all available types."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nassigns new types to resources.", "response": "def assign_types_to_resources(resource_types,**kwargs):\n \"\"\"\n Assign new types to list of resources.\n This function checks if the necessary\n attributes are present and adds them if needed. Non existing attributes\n are also added when the type is already assigned. 
This means that this\n function can also be used to update resources, when a resource type has\n changed.\n \"\"\"\n #Remove duplicate values from types by turning it into a set\n type_ids = list(set([rt.type_id for rt in resource_types]))\n\n db_types = db.DBSession.query(TemplateType).filter(TemplateType.id.in_(type_ids)).options(joinedload_all('typeattrs')).all()\n\n types = {}\n for db_type in db_types:\n if types.get(db_type.id) is None:\n types[db_type.id] = db_type\n log.debug(\"Retrieved all the appropriate template types\")\n res_types = []\n res_attrs = []\n res_scenarios = []\n\n net_id = None\n node_ids = []\n link_ids = []\n grp_ids = []\n for resource_type in resource_types:\n ref_id = resource_type.ref_id\n ref_key = resource_type.ref_key\n if resource_type.ref_key == 'NETWORK':\n net_id = ref_id\n elif resource_type.ref_key == 'NODE':\n node_ids.append(ref_id)\n elif resource_type.ref_key == 'LINK':\n link_ids.append(ref_id)\n elif resource_type.ref_key == 'GROUP':\n grp_ids.append(ref_id)\n if net_id:\n net = db.DBSession.query(Network).filter(Network.id==net_id).one()\n nodes = _get_nodes(node_ids)\n links = _get_links(link_ids)\n groups = _get_groups(grp_ids)\n for resource_type in resource_types:\n ref_id = resource_type.ref_id\n ref_key = resource_type.ref_key\n type_id = resource_type.type_id\n if ref_key == 'NETWORK':\n resource = net\n elif ref_key == 'NODE':\n resource = nodes[ref_id]\n elif ref_key == 'LINK':\n resource = links[ref_id]\n elif ref_key == 'GROUP':\n resource = groups[ref_id]\n\n ra, rt, rs= set_resource_type(resource, type_id, types)\n if rt is not None:\n res_types.append(rt)\n if len(ra) > 0:\n res_attrs.extend(ra)\n if len(rs) > 0:\n res_scenarios.extend(rs)\n\n log.debug(\"Retrieved all the appropriate resources\")\n if len(res_types) > 0:\n new_types = db.DBSession.execute(ResourceType.__table__.insert(), res_types)\n if len(res_attrs) > 0:\n new_res_attrs = db.DBSession.execute(ResourceAttr.__table__.insert(), 
res_attrs)\n        new_ras = db.DBSession.query(ResourceAttr).filter(and_(ResourceAttr.id>=new_res_attrs.lastrowid, ResourceAttr.id<(new_res_attrs.lastrowid+len(res_attrs)))).all()\n\n    ra_map = {}\n    for ra in new_ras:\n        ra_map[(ra.ref_key, ra.attr_id, ra.node_id, ra.link_id, ra.group_id, ra.network_id)] = ra.id\n\n    for rs in res_scenarios:\n        rs['resource_attr_id'] = ra_map[(rs['ref_key'], rs['attr_id'], rs['node_id'], rs['link_id'], rs['group_id'], rs['network_id'])]\n\n    if len(res_scenarios) > 0:\n        new_scenarios = db.DBSession.execute(ResourceScenario.__table__.insert(), res_scenarios)\n    #Make DBsession 'dirty' to pick up the inserts by doing a fake delete.\n\n    db.DBSession.query(ResourceAttr).filter(ResourceAttr.attr_id==None).delete()\n\n    ret_val = [t for t in types.values()]\n    return ret_val"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks whether the types identified by type_1_id and type_2_id are compatible with each other.", "response": "def check_type_compatibility(type_1_id, type_2_id):\n    \"\"\"\n        When applying a type to a resource, it may be the case that the resource already\n        has an attribute specified in the new type, but the template which defines this\n        pre-existing attribute has a different unit specification to the new template.\n\n        This function checks for any situations where different types specify the same\n        attributes, but with different units.\n    \"\"\"\n    errors = []\n\n    type_1 = db.DBSession.query(TemplateType).filter(TemplateType.id==type_1_id).options(joinedload_all('typeattrs')).one()\n    type_2 = db.DBSession.query(TemplateType).filter(TemplateType.id==type_2_id).options(joinedload_all('typeattrs')).one()\n    template_1_name = type_1.template.name\n    template_2_name = type_2.template.name\n\n    type_1_attrs=set([t.attr_id for t in type_1.typeattrs])\n    type_2_attrs=set([t.attr_id for t in type_2.typeattrs])\n\n    shared_attrs = type_1_attrs.intersection(type_2_attrs)\n\n    if len(shared_attrs) == 0:\n        return []\n\n    
type_1_dict = {}\n for t in type_1.typeattrs:\n if t.attr_id in shared_attrs:\n type_1_dict[t.attr_id]=t\n\n for ta in type_2.typeattrs:\n type_2_unit_id = ta.unit_id\n type_1_unit_id = type_1_dict[ta.attr_id].unit_id\n\n fmt_dict = {\n 'template_1_name': template_1_name,\n 'template_2_name': template_2_name,\n 'attr_name': ta.attr.name,\n 'type_1_unit_id': type_1_unit_id,\n 'type_2_unit_id': type_2_unit_id,\n 'type_name' : type_1.name\n }\n\n if type_1_unit_id is None and type_2_unit_id is not None:\n errors.append(\"Type %(type_name)s in template %(template_1_name)s\"\n \" stores %(attr_name)s with no units, while template\"\n \"%(template_2_name)s stores it with unit %(type_2_unit_id)s\"%fmt_dict)\n elif type_1_unit_id is not None and type_2_unit_id is None:\n errors.append(\"Type %(type_name)s in template %(template_1_name)s\"\n \" stores %(attr_name)s in %(type_1_unit_id)s.\"\n \" Template %(template_2_name)s stores it with no unit.\"%fmt_dict)\n elif type_1_unit_id != type_2_unit_id:\n errors.append(\"Type %(type_name)s in template %(template_1_name)s\"\n \" stores %(attr_name)s in %(type_1_unit_id)s, while\"\n \" template %(template_2_name)s stores it in %(type_2_unit_id)s\"%fmt_dict)\n return errors"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nassigning a new type to a resource.", "response": "def assign_type_to_resource(type_id, resource_type, resource_id,**kwargs):\n \"\"\"Assign new type to a resource. This function checks if the necessary\n attributes are present and adds them if needed. Non existing attributes\n are also added when the type is already assigned. 
This means that this\n function can also be used to update resources, when a resource type has\n changed.\n \"\"\"\n\n if resource_type == 'NETWORK':\n resource = db.DBSession.query(Network).filter(Network.id==resource_id).one()\n elif resource_type == 'NODE':\n resource = db.DBSession.query(Node).filter(Node.id==resource_id).one()\n elif resource_type == 'LINK':\n resource = db.DBSession.query(Link).filter(Link.id==resource_id).one()\n elif resource_type == 'GROUP':\n resource = db.DBSession.query(ResourceGroup).filter(ResourceGroup.id==resource_id).one()\n\n res_attrs, res_type, res_scenarios = set_resource_type(resource, type_id, **kwargs)\n\n type_i = db.DBSession.query(TemplateType).filter(TemplateType.id==type_id).one()\n if resource_type != type_i.resource_type:\n raise HydraError(\"Cannot assign a %s type to a %s\"%\n (type_i.resource_type,resource_type))\n\n if res_type is not None:\n db.DBSession.bulk_insert_mappings(ResourceType, [res_type])\n\n if len(res_attrs) > 0:\n db.DBSession.bulk_insert_mappings(ResourceAttr, res_attrs)\n\n if len(res_scenarios) > 0:\n db.DBSession.bulk_insert_mappings(ResourceScenario, res_scenarios)\n\n #Make DBsession 'dirty' to pick up the inserts by doing a fake delete.\n db.DBSession.query(Attr).filter(Attr.id==None).delete()\n\n db.DBSession.flush()\n\n return db.DBSession.query(TemplateType).filter(TemplateType.id==type_id).one()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_resource_type(resource, type_id, types={}, **kwargs):\n\n ref_key = resource.ref_key\n\n existing_attr_ids = []\n for res_attr in resource.attributes:\n existing_attr_ids.append(res_attr.attr_id)\n\n if type_id in types:\n type_i = types[type_id]\n else:\n type_i = db.DBSession.query(TemplateType).filter(TemplateType.id==type_id).options(joinedload_all('typeattrs')).one()\n\n type_attrs = dict()\n for typeattr in type_i.typeattrs:\n type_attrs[typeattr.attr_id]={\n 'is_var':typeattr.attr_is_var,\n 
'default_dataset_id': typeattr.default_dataset.id if typeattr.default_dataset else None}\n\n    # check if attributes exist\n    missing_attr_ids = set(type_attrs.keys()) - set(existing_attr_ids)\n\n    # add attributes if necessary\n    new_res_attrs = []\n\n    #This is a dict as the length of the list may not match the new_res_attrs\n    #Keyed on attr_id, as resource_attr_id doesn't exist yet, and there should only\n    #be one attr_id per template.\n    new_res_scenarios = {}\n    for attr_id in missing_attr_ids:\n        ra_dict = dict(\n            ref_key = ref_key,\n            attr_id = attr_id,\n            attr_is_var = type_attrs[attr_id]['is_var'],\n            node_id = resource.id if ref_key == 'NODE' else None,\n            link_id = resource.id if ref_key == 'LINK' else None,\n            group_id = resource.id if ref_key == 'GROUP' else None,\n            network_id = resource.id if ref_key == 'NETWORK' else None,\n\n        )\n        new_res_attrs.append(ra_dict)\n\n\n\n        if type_attrs[attr_id]['default_dataset_id'] is not None:\n            if hasattr(resource, 'network'):\n                for s in resource.network.scenarios:\n\n                    if new_res_scenarios.get(attr_id) is None:\n                        new_res_scenarios[attr_id] = {}\n\n                    new_res_scenarios[attr_id][s.id] = dict(\n                        dataset_id = type_attrs[attr_id]['default_dataset_id'],\n                        scenario_id = s.id,\n                        #Not stored in the DB, but needed to connect the RA ID later.\n                        attr_id = attr_id,\n                        ref_key = ref_key,\n                        node_id = ra_dict['node_id'],\n                        link_id = ra_dict['link_id'],\n                        group_id = ra_dict['group_id'],\n                        network_id = ra_dict['network_id'],\n                    )\n\n\n    resource_type = None\n    for rt in resource.types:\n        if rt.type_id == type_i.id:\n            break\n        else:\n            errors = check_type_compatibility(rt.type_id, type_i.id)\n            if len(errors) > 0:\n                raise HydraError(\"Cannot apply type %s to resource %s as it \"\n                                 \"conflicts with type %s. 
Errors are: %s\"\n                                 %(type_i.name, resource.get_name(),\n                                   rt.templatetype.name, ','.join(errors)))\n    else:\n        # add type to tResourceType if it doesn't exist already\n        resource_type = dict(\n            node_id = resource.id if ref_key == 'NODE' else None,\n            link_id = resource.id if ref_key == 'LINK' else None,\n            group_id = resource.id if ref_key == 'GROUP' else None,\n            network_id = resource.id if ref_key == 'NETWORK' else None,\n            ref_key = ref_key,\n            type_id = type_id,\n        )\n\n    return new_res_attrs, resource_type, new_res_scenarios", "response": "Set this resource to a certain type."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nremove a resource type from a resource", "response": "def remove_type_from_resource( type_id, resource_type, resource_id,**kwargs):\n    \"\"\"\n        Remove a resource type from a resource\n    \"\"\"\n    node_id = resource_id if resource_type == 'NODE' else None\n    link_id = resource_id if resource_type == 'LINK' else None\n    group_id = resource_id if resource_type == 'GROUP' else None\n\n    resourcetype = db.DBSession.query(ResourceType).filter(\n        ResourceType.type_id==type_id,\n        ResourceType.ref_key==resource_type,\n        ResourceType.node_id == node_id,\n        ResourceType.link_id == link_id,\n        ResourceType.group_id == group_id).one()\n\n    db.DBSession.delete(resourcetype)\n    db.DBSession.flush()\n\n    return 'OK'"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_template(template, **kwargs):\n    tmpl = Template()\n    tmpl.name = template.name\n    if template.description:\n        tmpl.description = template.description\n    if template.layout:\n        tmpl.layout = get_layout_as_string(template.layout)\n\n    db.DBSession.add(tmpl)\n\n    if template.templatetypes is not None:\n        types = template.templatetypes\n        for templatetype in types:\n            ttype = _update_templatetype(templatetype)\n            tmpl.templatetypes.append(ttype)\n\n    db.DBSession.flush()\n    return tmpl", "response": "Add a template and a type and 
typeattrs."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the template and a type and typeattrs.", "response": "def update_template(template,**kwargs):\n \"\"\"\n Update template and a type and typeattrs.\n \"\"\"\n tmpl = db.DBSession.query(Template).filter(Template.id==template.id).one()\n tmpl.name = template.name\n if template.description:\n tmpl.description = template.description\n\n #Lazy load the rest of the template\n for tt in tmpl.templatetypes:\n for ta in tt.typeattrs:\n ta.attr\n\n if template.layout:\n tmpl.layout = get_layout_as_string(template.layout)\n\n type_dict = dict([(t.id, t) for t in tmpl.templatetypes])\n existing_templatetypes = []\n\n if template.types is not None or template.templatetypes is not None:\n types = template.types if template.types is not None else template.templatetypes\n for templatetype in types:\n if templatetype.id is not None:\n type_i = type_dict[templatetype.id]\n _update_templatetype(templatetype, type_i)\n existing_templatetypes.append(type_i.id)\n else:\n #Give it a template ID if it doesn't have one\n templatetype.template_id = template.id\n new_templatetype_i = _update_templatetype(templatetype)\n existing_templatetypes.append(new_templatetype_i.id)\n\n for tt in tmpl.templatetypes:\n if tt.id not in existing_templatetypes:\n delete_templatetype(tt.id)\n\n db.DBSession.flush()\n\n return tmpl"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_template(template_id,**kwargs):\n try:\n tmpl = db.DBSession.query(Template).filter(Template.id==template_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Template %s not found\"%(template_id,))\n db.DBSession.delete(tmpl)\n db.DBSession.flush()\n return 'OK'", "response": "Delete a template and its type and typeattrs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef 
get_templates(load_all=True, **kwargs):\n if load_all is False:\n templates = db.DBSession.query(Template).all()\n else:\n templates = db.DBSession.query(Template).options(joinedload_all('templatetypes.typeattrs')).all()\n\n return templates", "response": "Get all templates.\n Args:\n load_all Boolean: Returns just the template entry or the full template structure (template types and type attrs)\n Returns:\n List of Template objects"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting a specific resource template by ID.", "response": "def get_template(template_id,**kwargs):\n \"\"\"\n Get a specific resource template, by ID.\n \"\"\"\n try:\n tmpl_i = db.DBSession.query(Template).filter(Template.id==template_id).options(joinedload_all('templatetypes.typeattrs.default_dataset.metadata')).one()\n\n #Load the attributes.\n for tmpltype_i in tmpl_i.templatetypes:\n for typeattr_i in tmpltype_i.typeattrs:\n typeattr_i.attr\n\n return tmpl_i\n except NoResultFound:\n raise HydraError(\"Template %s not found\"%template_id)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a specific resource template by name.", "response": "def get_template_by_name(name,**kwargs):\n \"\"\"\n Get a specific resource template, by name.\n \"\"\"\n try:\n tmpl_i = db.DBSession.query(Template).filter(Template.name == name).options(joinedload_all('templatetypes.typeattrs.default_dataset.metadata')).one()\n return tmpl_i\n except NoResultFound:\n log.info(\"%s is not a valid identifier for a template\",name)\n raise HydraError('Template \"%s\" not found'%name)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a template type to the hierarchy.", "response": "def add_templatetype(templatetype,**kwargs):\n \"\"\"\n Add a template type with typeattrs.\n \"\"\"\n\n type_i = _update_templatetype(templatetype)\n\n db.DBSession.flush()\n\n return type_i"} {"SOURCE": 
"codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates a resource type and its typeattrs.", "response": "def update_templatetype(templatetype,**kwargs):\n \"\"\"\n Update a resource type and its typeattrs.\n New typeattrs will be added. typeattrs not sent will be ignored.\n To delete typeattrs, call delete_typeattr\n \"\"\"\n\n tmpltype_i = db.DBSession.query(TemplateType).filter(TemplateType.id == templatetype.id).one()\n\n _update_templatetype(templatetype, tmpltype_i)\n\n db.DBSession.flush()\n\n return tmpltype_i"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _set_typeattr(typeattr, existing_ta = None):\n if existing_ta is None:\n ta = TypeAttr(attr_id=typeattr.attr_id)\n else:\n ta = existing_ta\n\n ta.unit_id = typeattr.unit_id\n ta.type_id = typeattr.type_id\n ta.data_type = typeattr.data_type\n\n if hasattr(typeattr, 'default_dataset_id') and typeattr.default_dataset_id is not None:\n ta.default_dataset_id = typeattr.default_dataset_id\n\n ta.description = typeattr.description\n\n ta.properties = typeattr.get_properties()\n\n ta.attr_is_var = typeattr.is_var if typeattr.is_var is not None else 'N'\n\n ta.data_restriction = _parse_data_restriction(typeattr.data_restriction)\n\n if typeattr.dimension_id is None:\n # All right. Check passed\n pass\n else:\n\n if typeattr.attr_id is not None and typeattr.attr_id > 0:\n # Getting the passed attribute, so we need to check consistency between attr dimension id and typeattr dimension id\n attr = ta.attr\n\n if attr is not None and attr.dimension_id is not None and attr.dimension_id != typeattr.dimension_id or \\\n attr is not None and attr.dimension_id is None:\n # In this case there is an inconsistency between attr.dimension_id and typeattr.dimension_id\n raise HydraError(\"Cannot set a dimension on type attribute which \"\n \"does not match its attribute. 
Create a new attribute if \"\n \"you want to use attribute %s with dimension_id %s\"%\n (attr.name, typeattr.dimension_id))\n elif typeattr.attr_id is None and typeattr.name is not None:\n # Getting/creating the attribute by typeattr dimension id and typeattr name\n # In this case the dimension_id \"null\"/\"not null\" status is irrelevant\n attr = _get_attr_by_name_and_dimension(typeattr.name, typeattr.dimension_id)\n\n ta.attr_id = attr.id\n ta.attr = attr\n\n\n _check_dimension(ta)\n\n if existing_ta is None:\n log.debug(\"Adding ta to DB\")\n db.DBSession.add(ta)\n\n return ta", "response": "Add or update a single type attribute (TypeAttr) record."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating a template type in the database.", "response": "def _update_templatetype(templatetype, existing_tt=None):\n \"\"\"\n Add or update a templatetype. If an existing template type is passed in,\n update that one. Otherwise search for an existing one. If not found, add.\n \"\"\"\n if existing_tt is None:\n if \"id\" in templatetype and templatetype.id is not None:\n tmpltype_i = db.DBSession.query(TemplateType).filter(TemplateType.id == templatetype.id).one()\n else:\n tmpltype_i = TemplateType()\n else:\n tmpltype_i = existing_tt\n\n tmpltype_i.template_id = templatetype.template_id\n tmpltype_i.name = templatetype.name\n tmpltype_i.description = templatetype.description\n tmpltype_i.alias = templatetype.alias\n\n if templatetype.layout is not None:\n tmpltype_i.layout = get_layout_as_string(templatetype.layout)\n\n tmpltype_i.resource_type = templatetype.resource_type\n\n ta_dict = {}\n for t in tmpltype_i.typeattrs:\n ta_dict[t.attr_id] = t\n\n existing_attrs = []\n\n if templatetype.typeattrs is not None:\n for typeattr in templatetype.typeattrs:\n if typeattr.attr_id in ta_dict:\n ta = _set_typeattr(typeattr, ta_dict[typeattr.attr_id])\n existing_attrs.append(ta.attr_id)\n else:\n ta = _set_typeattr(typeattr)\n 
tmpltype_i.typeattrs.append(ta)\n existing_attrs.append(ta.attr_id)\n\n log.debug(\"Deleting any type attrs not sent\")\n for ta in ta_dict.values():\n if ta.attr_id not in existing_attrs:\n delete_typeattr(ta)\n\n if existing_tt is None:\n db.DBSession.add(tmpltype_i)\n\n return tmpltype_i"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndelete a template type and its typeattrs.", "response": "def delete_templatetype(type_id,template_i=None, **kwargs):\n \"\"\"\n Delete a template type and its typeattrs.\n \"\"\"\n try:\n tmpltype_i = db.DBSession.query(TemplateType).filter(TemplateType.id == type_id).one()\n except NoResultFound:\n raise ResourceNotFoundError(\"Template Type %s not found\"%(type_id,))\n\n if template_i is None:\n template_i = db.DBSession.query(Template).filter(Template.id==tmpltype_i.template_id).one()\n\n template_i.templatetypes.remove(tmpltype_i)\n\n db.DBSession.delete(tmpltype_i)\n db.DBSession.flush()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_templatetype(type_id,**kwargs):\n\n templatetype = db.DBSession.query(TemplateType).filter(\n TemplateType.id==type_id).options(\n joinedload_all(\"typeattrs\")).one()\n\n return templatetype", "response": "Get a specific resource type by ID."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_templatetype_by_name(template_id, type_name,**kwargs):\n\n try:\n templatetype = db.DBSession.query(TemplateType).filter(TemplateType.template_id==template_id, TemplateType.name==type_name).one()\n except NoResultFound:\n raise HydraError(\"%s is not a valid identifier for a type\"%(type_name))\n\n return templatetype", "response": "Get a specific resource type by name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd a typeattr to an existing type.", "response": "def add_typeattr(typeattr,**kwargs):\n \"\"\"\n Add a typeattr to an existing 
type.\n \"\"\"\n\n tmpltype = get_templatetype(typeattr.type_id, user_id=kwargs.get('user_id'))\n\n ta = _set_typeattr(typeattr)\n\n tmpltype.typeattrs.append(ta)\n\n db.DBSession.flush()\n\n return ta"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_typeattr(typeattr,**kwargs):\n\n tmpltype = get_templatetype(typeattr.type_id, user_id=kwargs.get('user_id'))\n\n ta = db.DBSession.query(TypeAttr).filter(TypeAttr.type_id == typeattr.type_id,\n TypeAttr.attr_id == typeattr.attr_id).one()\n\n tmpltype.typeattrs.remove(ta)\n\n db.DBSession.flush()\n\n return 'OK'", "response": "Remove a typeattr from an existing type\n "} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nvalidates a resource attribute against the resource scenario.", "response": "def validate_attr(resource_attr_id, scenario_id, template_id=None):\n \"\"\"\n Check that a resource attribute satisfies the requirements of all the types of the\n resource.\n \"\"\"\n rs = db.DBSession.query(ResourceScenario).\\\n filter(ResourceScenario.resource_attr_id==resource_attr_id,\n ResourceScenario.scenario_id==scenario_id).options(\n joinedload_all(\"resourceattr\")).options(\n joinedload_all(\"dataset\")\n ).one()\n\n error = None\n\n try:\n _do_validate_resourcescenario(rs, template_id)\n except HydraError as e:\n\n error = JSONObject(dict(\n ref_key = rs.resourceattr.ref_key,\n ref_id = rs.resourceattr.get_resource_id(),\n ref_name = rs.resourceattr.get_resource().get_name(),\n resource_attr_id = rs.resource_attr_id,\n attr_id = rs.resourceattr.attr.id,\n attr_name = rs.resourceattr.attr.name,\n dataset_id = rs.dataset_id,\n scenario_id=scenario_id,\n template_id=template_id,\n error_text=e.args[0]))\n return error"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef validate_attrs(resource_attr_ids, scenario_id, template_id=None):\n multi_rs = 
db.DBSession.query(ResourceScenario).\\\n filter(ResourceScenario.resource_attr_id.in_(resource_attr_ids),\\\n ResourceScenario.scenario_id==scenario_id).\\\n options(joinedload_all(\"resourceattr\")).\\\n options(joinedload_all(\"dataset\")).all()\n\n errors = []\n for rs in multi_rs:\n try:\n _do_validate_resourcescenario(rs, template_id)\n except HydraError as e:\n\n error = dict(\n ref_key = rs.resourceattr.ref_key,\n ref_id = rs.resourceattr.get_resource_id(),\n ref_name = rs.resourceattr.get_resource().get_name(),\n resource_attr_id = rs.resource_attr_id,\n attr_id = rs.resourceattr.attr.id,\n attr_name = rs.resourceattr.attr.name,\n dataset_id = rs.dataset_id,\n scenario_id = scenario_id,\n template_id = template_id,\n error_text = e.args[0])\n\n errors.append(error)\n\n return errors", "response": "Check that multiple resource attributes satisfy the requirements of the types of their resources in the given scenario."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _do_validate_resourcescenario(resourcescenario, template_id=None):\n res = resourcescenario.resourceattr.get_resource()\n\n types = res.types\n\n dataset = resourcescenario.dataset\n\n if len(types) == 0:\n return\n\n if template_id is not None:\n if template_id not in [r.templatetype.template_id for r in res.types]:\n raise HydraError(\"Template %s is not used for resource attribute %s in scenario %s\"%\\\n (template_id, resourcescenario.resourceattr.attr.name,\n resourcescenario.scenario.name))\n\n #Validate against all the types for the resource\n for resourcetype in types:\n #If a specific type has been specified, then only validate\n #against that type and ignore all the others\n if template_id is not None:\n if resourcetype.templatetype.template_id != template_id:\n continue\n #Identify the template types for the template\n tmpltype = resourcetype.templatetype\n for ta in tmpltype.typeattrs:\n #If we find a template type which matches the current 
attribute.\n #we can do some validation.\n if ta.attr_id == resourcescenario.resourceattr.attr_id:\n if ta.data_restriction:\n log.debug(\"Validating against %s\", ta.data_restriction)\n validation_dict = eval(ta.data_restriction)\n dataset_util.validate_value(validation_dict, dataset.get_val())", "response": "Perform a check to ensure a resource scenario's datasets are valid against the data restrictions defined on the resource's types."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngives a network, scenario and template, ensure that all the nodes, links & groups in the network have the correct resource attributes as defined by the types in the template. Also ensure valid entries in tresourcetype. This validation will not fail if a resource has more than the required type, but will fail if it has fewer or if any attribute has a conflicting dimension or unit.", "response": "def validate_network(network_id, template_id, scenario_id=None):\n \"\"\"\n Given a network, scenario and template, ensure that all the nodes, links & groups\n in the network have the correct resource attributes as defined by the types in the template.\n Also ensure valid entries in tresourcetype.\n This validation will not fail if a resource has more than the required type, but will fail if\n it has fewer or if any attribute has a conflicting dimension or unit.\n \"\"\"\n\n network = db.DBSession.query(Network).filter(Network.id==network_id).options(noload('scenarios')).first()\n\n if network is None:\n raise HydraError(\"Could not find network %s\"%(network_id))\n\n resource_scenario_dict = {}\n if scenario_id is not None:\n scenario = db.DBSession.query(Scenario).filter(Scenario.id==scenario_id).first()\n\n if scenario is None:\n raise HydraError(\"Could not find scenario %s\"%(scenario_id,))\n\n for rs in scenario.resourcescenarios:\n resource_scenario_dict[rs.resource_attr_id] = rs\n\n template = db.DBSession.query(Template).filter(Template.id == 
template_id).options(joinedload_all('templatetypes')).first()\n\n if template is None:\n raise HydraError(\"Could not find template %s\"%(template_id,))\n\n resource_type_defs = {\n 'NETWORK' : {},\n 'NODE' : {},\n 'LINK' : {},\n 'GROUP' : {},\n }\n for tt in template.templatetypes:\n resource_type_defs[tt.resource_type][tt.id] = tt\n\n errors = []\n #Only check if there are type definitions for a network in the template.\n if resource_type_defs.get('NETWORK'):\n net_types = resource_type_defs['NETWORK']\n errors.extend(_validate_resource(network, net_types, resource_scenario_dict))\n\n #check all nodes\n if resource_type_defs.get('NODE'):\n node_types = resource_type_defs['NODE']\n for node in network.nodes:\n errors.extend(_validate_resource(node, node_types, resource_scenario_dict))\n\n #check all links\n if resource_type_defs.get('LINK'):\n link_types = resource_type_defs['LINK']\n for link in network.links:\n errors.extend(_validate_resource(link, link_types, resource_scenario_dict))\n\n #check all groups\n if resource_type_defs.get('GROUP'):\n group_types = resource_type_defs['GROUP']\n for group in network.resourcegroups:\n errors.extend(_validate_resource(group, group_types, resource_scenario_dict))\n\n return errors"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget an existing network into an xml template.", "response": "def get_network_as_xml_template(network_id,**kwargs):\n \"\"\"\n Turn an existing network into an xml template\n using its attributes.\n If an optional scenario ID is passed in, default\n values will be populated from that scenario.\n \"\"\"\n template_xml = etree.Element(\"template_definition\")\n\n net_i = db.DBSession.query(Network).filter(Network.id==network_id).one()\n\n template_name = etree.SubElement(template_xml, \"template_name\")\n template_name.text = \"TemplateType from Network %s\"%(net_i.name)\n layout = _get_layout_as_etree(net_i.layout)\n\n resources = etree.SubElement(template_xml, 
\"resources\")\n if net_i.attributes:\n net_resource = etree.SubElement(resources, \"resource\")\n\n resource_type = etree.SubElement(net_resource, \"type\")\n resource_type.text = \"NETWORK\"\n\n resource_name = etree.SubElement(net_resource, \"name\")\n resource_name.text = net_i.name\n\n layout = _get_layout_as_etree(net_i.layout)\n if layout is not None:\n net_resource.append(layout)\n\n for net_attr in net_i.attributes:\n _make_attr_element_from_resourceattr(net_resource, net_attr)\n\n resources.append(net_resource)\n\n existing_types = {'NODE': [], 'LINK': [], 'GROUP': []}\n for node_i in net_i.nodes:\n node_attributes = node_i.attributes\n attr_ids = [res_attr.attr_id for res_attr in node_attributes]\n if len(attr_ids) > 0 and attr_ids not in existing_types['NODE']:\n\n node_resource = etree.Element(\"resource\")\n\n resource_type = etree.SubElement(node_resource, \"type\")\n resource_type.text = \"NODE\"\n\n resource_name = etree.SubElement(node_resource, \"name\")\n resource_name.text = node_i.node_name\n\n layout = _get_layout_as_etree(node_i.layout)\n\n if layout is not None:\n node_resource.append(layout)\n\n for node_attr in node_attributes:\n _make_attr_element_from_resourceattr(node_resource, node_attr)\n\n existing_types['NODE'].append(attr_ids)\n resources.append(node_resource)\n\n for link_i in net_i.links:\n link_attributes = link_i.attributes\n attr_ids = [link_attr.attr_id for link_attr in link_attributes]\n if len(attr_ids) > 0 and attr_ids not in existing_types['LINK']:\n link_resource = etree.Element(\"resource\")\n\n resource_type = etree.SubElement(link_resource, \"type\")\n resource_type.text = \"LINK\"\n\n resource_name = etree.SubElement(link_resource, \"name\")\n resource_name.text = link_i.link_name\n\n layout = _get_layout_as_etree(link_i.layout)\n\n if layout is not None:\n link_resource.append(layout)\n\n for link_attr in link_attributes:\n _make_attr_element_from_resourceattr(link_resource, link_attr)\n\n 
existing_types['LINK'].append(attr_ids)\n resources.append(link_resource)\n\n for group_i in net_i.resourcegroups:\n group_attributes = group_i.attributes\n attr_ids = [group_attr.attr_id for group_attr in group_attributes]\n if len(attr_ids) > 0 and attr_ids not in existing_types['GROUP']:\n group_resource = etree.Element(\"resource\")\n\n resource_type = etree.SubElement(group_resource, \"type\")\n resource_type.text = \"GROUP\"\n\n resource_name = etree.SubElement(group_resource, \"name\")\n resource_name.text = group_i.group_name\n\n\n for group_attr in group_attributes:\n _make_attr_element_from_resourceattr(group_resource, group_attr)\n\n existing_types['GROUP'].append(attr_ids)\n resources.append(group_resource)\n\n xml_string = etree.tostring(template_xml, encoding=\"unicode\")\n\n return xml_string"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _make_attr_element_from_typeattr(parent, type_attr_i):\n\n attr = _make_attr_element(parent, type_attr_i.attr)\n\n if type_attr_i.unit_id is not None:\n attr_unit = etree.SubElement(attr, 'unit')\n attr_unit.text = units.get_unit(type_attr_i.unit_id).abbreviation\n\n attr_is_var = etree.SubElement(attr, 'is_var')\n attr_is_var.text = type_attr_i.attr_is_var\n\n if type_attr_i.data_type is not None:\n attr_data_type = etree.SubElement(attr, 'data_type')\n attr_data_type.text = type_attr_i.data_type\n\n if type_attr_i.data_restriction is not None:\n attr_data_restriction = etree.SubElement(attr, 'restrictions')\n attr_data_restriction.text = type_attr_i.data_restriction\n\n return attr", "response": "This function creates an attribute element from a type_attr element."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _make_attr_element_from_resourceattr(parent, resource_attr_i):\n\n attr = _make_attr_element(parent, resource_attr_i.attr)\n\n attr_is_var = etree.SubElement(attr, 
'is_var')\n attr_is_var.text = resource_attr_i.attr_is_var\n\n return attr", "response": "This function creates an attribute element from a resource attribute."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _make_attr_element(parent, attr_i):\n attr = etree.SubElement(parent, \"attribute\")\n\n attr_name = etree.SubElement(attr, 'name')\n attr_name.text = attr_i.name\n\n attr_desc = etree.SubElement(attr, 'description')\n attr_desc.text = attr_i.description\n\n attr_dimension = etree.SubElement(attr, 'dimension')\n attr_dimension.text = units.get_dimension(attr_i.dimension_id, do_accept_dimension_id_none=True).name\n\n return attr", "response": "Create an attribute element from an attribute DB object"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_etree_layout_as_dict(layout_tree):\n layout_dict = dict()\n\n for item in layout_tree.findall('item'):\n name = item.find('name').text\n val_element = item.find('value')\n value = val_element.text.strip()\n if value == '':\n children = val_element.getchildren()\n value = etree.tostring(children[0], pretty_print=True, encoding=\"unicode\")\n layout_dict[name] = value\n return layout_dict", "response": "Convert a layout tree to a dict."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert a dict to an etree. 
Element", "response": "def _get_layout_as_etree(layout_dict):\n \"\"\"\n Convert something that looks like this:\n {\n 'color' : ['red'],\n 'shapefile' : ['blah.shp']\n }\n\n Into something that looks like this:\n <layout>\n <item>\n <name>color</name>\n <value>red</value>\n </item>\n <item>\n <name>shapefile</name>\n <value>blah.shp</value>\n </item>\n </layout>\n \"\"\"\n if layout_dict is None:\n return None\n\n layout = etree.Element(\"layout\")\n layout_dict = eval(layout_dict)\n for k, v in layout_dict.items():\n item = etree.SubElement(layout, \"item\")\n name = etree.SubElement(item, \"name\")\n name.text = k\n value = etree.SubElement(item, \"value\")\n value.text = str(v)\n\n return layout"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef valueFromDataset(cls, datatype, value, metadata=None, tmap=None):\n if tmap is None:\n tmap = typemap\n obj = cls.fromDataset(datatype, value, metadata=metadata, tmap=tmap)\n return obj.value", "response": "Return the value contained by dataset argument after casting to\n correct type and performing type-specific validation."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns an instance of the class corresponding to the specified datatype and value.", "response": "def fromDataset(datatype, value, metadata=None, tmap=None):\n \"\"\"\n Return a representation of dataset argument as an instance\n of the class corresponding to its datatype\n \"\"\"\n if tmap is None:\n tmap = typemap\n return tmap[datatype.upper()].fromDataset(value, metadata=metadata)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef exists_dimension(dimension_name,**kwargs):\n try:\n dimension = db.DBSession.query(Dimension).filter(Dimension.name==dimension_name).one()\n # At this point the dimension exists\n return True\n except NoResultFound:\n # The dimension does not exist\n return False", "response": "Returns True if the dimension exists, False otherwise."} {"SOURCE": 
"codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts a value from one unit to another unit.", "response": "def convert_units(values, source_measure_or_unit_abbreviation, target_measure_or_unit_abbreviation,**kwargs):\n \"\"\"\n Convert a value from one unit to another one.\n\n Example::\n\n >>> cli = PluginLib.connect()\n >>> cli.service.convert_units(20.0, 'm', 'km')\n 0.02\n Parameters:\n values: single measure or an array of measures\n source_measure_or_unit_abbreviation: A measure in the source unit, or just the abbreviation of the source unit, from which convert the provided measure value/values\n target_measure_or_unit_abbreviation: A measure in the target unit, or just the abbreviation of the target unit, into which convert the provided measure value/values\n\n Returns:\n Always a list\n \"\"\"\n if numpy.isscalar(values):\n # If it is a scalar, converts to an array\n values = [values]\n float_values = [float(value) for value in values]\n values_to_return = convert(float_values, source_measure_or_unit_abbreviation, target_measure_or_unit_abbreviation)\n\n return values_to_return"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef convert(values, source_measure_or_unit_abbreviation, target_measure_or_unit_abbreviation):\n\n source_dimension = get_dimension_by_unit_measure_or_abbreviation(source_measure_or_unit_abbreviation)\n target_dimension = get_dimension_by_unit_measure_or_abbreviation(target_measure_or_unit_abbreviation)\n\n if source_dimension == target_dimension:\n source=JSONObject({})\n target=JSONObject({})\n source.unit_abbreviation, source.factor = _parse_unit(source_measure_or_unit_abbreviation)\n target.unit_abbreviation, target.factor = _parse_unit(target_measure_or_unit_abbreviation)\n\n source.unit_data = get_unit_by_abbreviation(source.unit_abbreviation)\n target.unit_data = get_unit_by_abbreviation(target.unit_abbreviation)\n\n 
source.conv_factor = JSONObject({'lf': source.unit_data.lf, 'cf': source.unit_data.cf})\n target.conv_factor = JSONObject({'lf': target.unit_data.lf, 'cf': target.unit_data.cf})\n\n if isinstance(values, float):\n # If values is a float => returns a float\n return (source.conv_factor.lf / target.conv_factor.lf * (source.factor * values)\n + (source.conv_factor.cf - target.conv_factor.cf)\n / target.conv_factor.lf) / target.factor\n elif isinstance(values, list):\n # If values is a list of floats => returns a list of floats\n return [(source.conv_factor.lf / target.conv_factor.lf * (source.factor * value)\n + (source.conv_factor.cf - target.conv_factor.cf)\n / target.conv_factor.lf) / target.factor for value in values]\n else:\n raise HydraError(\"Unit conversion: dimensions are not consistent.\")", "response": "Convert a value or a list of values from an unit to another one."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_empty_dimension(**kwargs):\n dimension = JSONObject(Dimension())\n dimension.id = None\n dimension.name = ''\n dimension.description = ''\n dimension.project_id = None\n dimension.units = []\n return dimension", "response": "Returns an empty dimension object initialized with empty values"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_dimension(dimension_id, do_accept_dimension_id_none=False,**kwargs):\n if do_accept_dimension_id_none == True and dimension_id is None:\n # In this special case, the method returns a dimension with id None\n return get_empty_dimension()\n\n try:\n dimension = db.DBSession.query(Dimension).filter(Dimension.id==dimension_id).one()\n\n #lazy load units\n dimension.units\n\n return JSONObject(dimension)\n except NoResultFound:\n # The dimension does not exist\n raise ResourceNotFoundError(\"Dimension %s not found\"%(dimension_id))", "response": "Given a dimension id returns all its 
data\n "} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a list of objects describing all the dimensions with all the units.", "response": "def get_dimensions(**kwargs):\n \"\"\"\n Returns a list of objects describing all the dimensions with all the units.\n \"\"\"\n dimensions_list = db.DBSession.query(Dimension).options(load_only(\"id\")).all()\n return_list = []\n for dimension in dimensions_list:\n return_list.append(get_dimension(dimension.id))\n\n\n return return_list"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_dimension_by_name(dimension_name,**kwargs):\n try:\n if dimension_name is None:\n dimension_name = ''\n dimension = db.DBSession.query(Dimension).filter(func.lower(Dimension.name)==func.lower(dimension_name.strip())).one()\n\n return get_dimension(dimension.id)\n\n except NoResultFound:\n # The dimension does not exist\n raise ResourceNotFoundError(\"Dimension %s not found\"%(dimension_name))", "response": "Returns all the data for a given dimension name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_unit(unit_id, **kwargs):\n try:\n unit = db.DBSession.query(Unit).filter(Unit.id==unit_id).one()\n return JSONObject(unit)\n except NoResultFound:\n # The dimension does not exist\n raise ResourceNotFoundError(\"Unit %s not found\"%(unit_id))", "response": "Returns a single unit"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_units(**kwargs):\n units_list = db.DBSession.query(Unit).all()\n units = []\n for unit in units_list:\n new_unit = JSONObject(unit)\n units.append(new_unit)\n\n return units", "response": "Returns all the units in the database"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the physical dimension a given unit 
abbreviation of a measure or the measure itself refers to.", "response": "def get_dimension_by_unit_measure_or_abbreviation(measure_or_unit_abbreviation,**kwargs):\n \"\"\"\n Return the physical dimension a given unit abbreviation of a measure, or the measure itself, refers to.\n The search key is the abbreviation or the full measure\n \"\"\"\n\n unit_abbreviation, factor = _parse_unit(measure_or_unit_abbreviation)\n\n units = db.DBSession.query(Unit).filter(Unit.abbreviation==unit_abbreviation).all()\n\n if len(units) == 0:\n raise HydraError('Unit %s not found.'%(unit_abbreviation))\n elif len(units) > 1:\n raise HydraError('Unit %s matches multiple dimensions.'%(unit_abbreviation))\n else:\n dimension = db.DBSession.query(Dimension).filter(Dimension.id==units[0].dimension_id).one()\n return str(dimension.name)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_dimension_by_unit_id(unit_id, do_accept_unit_id_none=False, **kwargs):\n if do_accept_unit_id_none == True and unit_id is None:\n # In this special case, the method returns a dimension with id None\n return get_empty_dimension()\n\n try:\n dimension = db.DBSession.query(Dimension).join(Unit).filter(Unit.id==unit_id).one()\n return get_dimension(dimension.id)\n except NoResultFound:\n # The unit does not exist\n raise ResourceNotFoundError(\"Unit %s not found\"%(unit_id))", "response": "Returns the Dimension a given unit id refers to."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_unit_by_abbreviation(unit_abbreviation, **kwargs):\n try:\n if unit_abbreviation is None:\n unit_abbreviation = ''\n unit_i = db.DBSession.query(Unit).filter(Unit.abbreviation==unit_abbreviation.strip()).one()\n return JSONObject(unit_i)\n except NoResultFound:\n # The unit does not exist\n raise ResourceNotFoundError(\"Unit '%s' not found\"%(unit_abbreviation))", "response": 
"Returns a single unit by abbreviation. Used as utility function to resolve string to id\n "} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a dimension to the DB", "response": "def add_dimension(dimension,**kwargs):\n \"\"\"\n Add the dimension defined in the object \"dimension\" to the DB\n If dimension[\"project_id\"] is None it means that the dimension is global, otherwise it is the property of a project\n If the dimension already exists an exception is raised\n \"\"\"\n if numpy.isscalar(dimension):\n # If it is a scalar, converts to an Object\n dimension = {'name': dimension}\n\n new_dimension = Dimension()\n new_dimension.name = dimension[\"name\"]\n\n if \"description\" in dimension and dimension[\"description\"] is not None:\n new_dimension.description = dimension[\"description\"]\n if \"project_id\" in dimension and dimension[\"project_id\"] is not None:\n new_dimension.project_id = dimension[\"project_id\"]\n\n # Save on DB\n db.DBSession.add(new_dimension)\n db.DBSession.flush()\n\n # Reload the full record\n db_dimension = db.DBSession.query(Dimension).filter(Dimension.id==new_dimension.id).one()\n\n return JSONObject(db_dimension)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_dimension(dimension,**kwargs):\n db_dimension = None\n dimension = JSONObject(dimension)\n try:\n db_dimension = db.DBSession.query(Dimension).filter(Dimension.id==dimension.id).one()\n\n if \"description\" in dimension and dimension[\"description\"] is not None:\n db_dimension.description = dimension[\"description\"]\n if \"project_id\" in dimension and dimension[\"project_id\"] is not None and dimension[\"project_id\"] != \"\" and dimension[\"project_id\"].isdigit():\n db_dimension.project_id = dimension[\"project_id\"]\n except NoResultFound:\n raise ResourceNotFoundError(\"Dimension (ID=%s) does not exist\"%(dimension.id))\n\n\n db.DBSession.flush()\n 
return JSONObject(db_dimension)", "response": "Update a dimension in the DB."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delete_dimension(dimension_id,**kwargs):\n try:\n dimension = db.DBSession.query(Dimension).filter(Dimension.id==dimension_id).one()\n\n db.DBSession.query(Unit).filter(Unit.dimension_id==dimension.id).delete()\n\n db.DBSession.delete(dimension)\n db.DBSession.flush()\n return True\n except NoResultFound:\n raise ResourceNotFoundError(\"Dimension (dimension_id=%s) does not exist\"%(dimension_id))", "response": "Delete a dimension from the DB. Raises an exception if the dimension does not exist."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds all the dimensions contained in the passed list to the current node.", "response": "def bulk_add_dimensions(dimension_list, **kwargs):\n \"\"\"\n Save all the dimensions contained in the passed list.\n \"\"\"\n added_dimensions = []\n for dimension in dimension_list:\n added_dimensions.append(add_dimension(dimension, **kwargs))\n\n return JSONObject({\"dimensions\": added_dimensions})"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a unit to the DB.", "response": "def add_unit(unit,**kwargs):\n \"\"\"\n Add the unit defined into the object \"unit\" to the DB\n If unit[\"project_id\"] is None it means that the unit is global, otherwise is property of a project\n If the unit exists emits an exception\n\n\n A minimal example:\n\n .. 
code-block:: python\n\n new_unit = dict(\n\n name = 'Teaspoons per second',\n abbreviation = 'tsp s^-1',\n cf = 0, # Constant conversion factor\n lf = 1.47867648e-05, # Linear conversion factor\n dimension_id = 2,\n description = 'A flow of one teaspoon per second.',\n )\n add_unit(new_unit)\n\n\n \"\"\"\n\n new_unit = Unit()\n new_unit.dimension_id = unit[\"dimension_id\"]\n new_unit.name = unit['name']\n\n # Needed to uniform abbr to abbreviation\n new_unit.abbreviation = unit['abbreviation']\n\n # Needed to uniform into to description\n new_unit.description = unit['description']\n\n new_unit.lf = unit['lf']\n new_unit.cf = unit['cf']\n\n if ('project_id' in unit) and (unit['project_id'] is not None):\n # Adding dimension to the \"user\" dimensions list\n new_unit.project_id = unit['project_id']\n\n # Save on DB\n db.DBSession.add(new_unit)\n db.DBSession.flush()\n\n return JSONObject(new_unit)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef bulk_add_units(unit_list, **kwargs):\n # for unit in unit_list:\n # add_unit(unit, **kwargs)\n\n added_units = []\n for unit in unit_list:\n added_units.append(add_unit(unit, **kwargs))\n\n return JSONObject({\"units\": added_units})", "response": "Add all the units contained in the passed list to the current page."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete a unit from the DB. 
Raises an exception if the unit does not exist", "response": "def delete_unit(unit_id, **kwargs):\n \"\"\"\n Delete a unit from the DB.\n Raises an exception if the unit does not exist\n \"\"\"\n\n try:\n db_unit = db.DBSession.query(Unit).filter(Unit.id==unit_id).one()\n\n db.DBSession.delete(db_unit)\n db.DBSession.flush()\n return True\n except NoResultFound:\n raise ResourceNotFoundError(\"Unit (ID=%s) does not exist\"%(unit_id))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_unit(unit, **kwargs):\n try:\n\n db_unit = db.DBSession.query(Unit).join(Dimension).filter(Unit.id==unit[\"id\"]).one()\n\n db_unit.name = unit[\"name\"]\n\n # Needed to uniform info to description\n db_unit.abbreviation = unit.abbreviation\n db_unit.description = unit.description\n\n db_unit.lf = unit[\"lf\"]\n db_unit.cf = unit[\"cf\"]\n if \"project_id\" in unit and unit['project_id'] is not None and unit['project_id'] != \"\":\n db_unit.project_id = unit[\"project_id\"]\n except NoResultFound:\n raise ResourceNotFoundError(\"Unit (ID=%s) does not exist\"%(unit[\"id\"]))\n\n\n db.DBSession.flush()\n return JSONObject(db_unit)", "response": "Update a unit in the DB."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef convert_dataset(dataset_id, target_unit_abbreviation,**kwargs):\n\n ds_i = db.DBSession.query(Dataset).filter(Dataset.id==dataset_id).one()\n\n dataset_type = ds_i.type\n\n dsval = ds_i.get_val()\n source_unit_abbreviation = get_unit(ds_i.unit_id).abbreviation\n\n if source_unit_abbreviation is not None:\n if dataset_type == 'scalar':\n new_val = convert(float(dsval), source_unit_abbreviation, target_unit_abbreviation)\n elif dataset_type == 'array':\n dim = array_dim(dsval)\n vecdata = arr_to_vector(dsval)\n newvec = convert(vecdata, source_unit_abbreviation, target_unit_abbreviation)\n new_val = vector_to_arr(newvec, 
dim)\n elif dataset_type == 'timeseries':\n new_val = []\n for ts_time, ts_val in dsval.items():\n dim = array_dim(ts_val)\n vecdata = arr_to_vector(ts_val)\n newvec = convert(vecdata, source_unit_abbreviation, target_unit_abbreviation)\n newarr = vector_to_arr(newvec, dim)\n new_val.append((ts_time, newarr))\n elif dataset_type == 'descriptor':\n raise HydraError('Cannot convert descriptor.')\n\n new_dataset = Dataset()\n new_dataset.type = ds_i.type\n new_dataset.value = str(new_val) # The data type is TEXT!!!\n new_dataset.name = ds_i.name\n\n new_dataset.unit_id = get_unit_by_abbreviation(target_unit_abbreviation).id\n new_dataset.hidden = 'N'\n new_dataset.set_metadata(ds_i.get_metadata_as_dict())\n new_dataset.set_hash()\n\n existing_ds = db.DBSession.query(Dataset).filter(Dataset.hash==new_dataset.hash).first()\n\n if existing_ds is not None:\n db.DBSession.expunge_all()\n return existing_ds.id\n\n db.DBSession.add(new_dataset)\n db.DBSession.flush()\n\n return new_dataset.id\n\n else:\n raise HydraError('Dataset has no units.')", "response": "Convert a whole dataset to a new unit."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nvalidating that the resource and attributes in the template are in the data.", "response": "def validate_resource_attributes(resource, attributes, template, check_unit=True, exact_match=False,**kwargs):\n \"\"\"\n Validate that the resource provided matches the template.\n Only passes if the resource contains ONLY the attributes specified\n in the template.\n\n The template should take the form of a dictionary, as should the\n resources.\n\n *check_unit*: Makes sure that if a unit is specified in the template, it\n is the same in the data\n *exact_match*: Ensures that all the attributes in the template are in\n the data also. 
By default this is false, meaning a subset\n of the template attributes may be specified in the data.\n An attribute specified in the data *must* be defined in\n the template.\n\n @returns a list of error messages. An empty list indicates no\n errors were found.\n \"\"\"\n errors = []\n #is it a node or link?\n res_type = 'GROUP'\n if resource.get('x') is not None:\n res_type = 'NODE'\n elif resource.get('node_1_id') is not None:\n res_type = 'LINK'\n elif resource.get('nodes') is not None:\n res_type = 'NETWORK'\n\n #Find all the node/link/network definitions in the template\n tmpl_res = template['resources'][res_type]\n\n #the user specified type of the resource\n res_user_type = resource.get('type')\n\n #Check the user specified type is in the template\n if res_user_type is None:\n errors.append(\"No type specified on resource %s\"%(resource['name']))\n\n elif tmpl_res.get(res_user_type) is None:\n errors.append(\"Resource %s is defined as having type %s but \"\n \"this type is not specified in the template.\"%\n (resource['name'], res_user_type))\n\n #It is in the template. 
Now check all the attributes are correct.\n tmpl_attrs = tmpl_res.get(res_user_type)['attributes']\n\n attrs = {}\n for a in attributes.values():\n attrs[a['id']] = a\n\n for a in tmpl_attrs.values():\n if a.get('id') is not None:\n attrs[a['id']] = {'name':a['name'], 'unit':a.get('unit'), 'dimen':a.get('dimension')}\n\n if exact_match is True:\n #Check that all the attributes in the template are in the data.\n #get all the attribute names from the template\n tmpl_attr_names = set(tmpl_attrs.keys())\n #get all the attribute names from the data for this resource\n resource_attr_names = []\n for ra in resource['attributes']:\n attr_name = attrs[ra['attr_id']]['name']\n resource_attr_names.append(attr_name)\n resource_attr_names = set(resource_attr_names)\n\n #Compare the two lists to ensure they are the same (using sets is easier)\n in_tmpl_not_in_resource = tmpl_attr_names - resource_attr_names\n in_resource_not_in_tmpl = resource_attr_names - tmpl_attr_names\n\n if len(in_tmpl_not_in_resource) > 0:\n errors.append(\"Template has defined attributes %s for type %s but they are not\"\n \" specified in the Data.\"%(','.join(in_tmpl_not_in_resource),\n res_user_type ))\n\n if len(in_resource_not_in_tmpl) > 0:\n errors.append(\"Resource %s (type %s) has defined attributes %s but this is not\"\n \" specified in the Template.\"%(resource['name'],\n res_user_type,\n ','.join(in_resource_not_in_tmpl)))\n\n #Check that each of the attributes specified on the resource are valid.\n for res_attr in resource['attributes']:\n\n attr = attrs.get(res_attr['attr_id'])\n\n if attr is None:\n errors.append(\"An attribute mismatch has occurred. 
Attr %s is not \"\n \"defined in the data but is present on resource %s\"\n %(res_attr['attr_id'], resource['name']))\n continue\n\n #If an attribute is not specified in the template, then throw an error\n if tmpl_attrs.get(attr['name']) is None:\n errors.append(\"Resource %s has defined attribute %s but this is not\"\n \" specified in the Template.\"%(resource['name'], attr['name']))\n else:\n #If the dimensions or units don't match, throw an error\n\n tmpl_attr = tmpl_attrs[attr['name']]\n\n if tmpl_attr.get('data_type') is not None:\n if res_attr.get('type') is not None:\n if tmpl_attr.get('data_type') != res_attr.get('type'):\n errors.append(\"Error in data. Template says that %s on %s is a %s, but data suggests it is a %s\"%\n (attr['name'], resource['name'], tmpl_attr.get('data_type'), res_attr.get('type')))\n\n attr_dimension = 'dimensionless' if attr.get('dimension') is None else attr.get('dimension')\n tmpl_attr_dimension = 'dimensionless' if tmpl_attr.get('dimension') is None else tmpl_attr.get('dimension')\n\n if attr_dimension.lower() != tmpl_attr_dimension.lower():\n errors.append(\"Dimension mismatch on resource %s for attribute %s\"\n \" (template says %s on type %s, data says %s)\"%\n (resource['name'], attr.get('name'),\n tmpl_attr.get('dimension'), res_user_type, attr_dimension))\n\n if check_unit is True:\n if tmpl_attr.get('unit') is not None:\n if attr.get('unit') != tmpl_attr.get('unit'):\n errors.append(\"Unit mismatch for resource %s with unit %s \"\n \"(template says %s for type %s)\"\n %(resource['name'], attr.get('unit'),\n tmpl_attr.get('unit'), res_user_type))\n\n return errors"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef encode(encoding, data):\n data = ensure_bytes(data, 'utf8')\n try:\n return ENCODINGS_LOOKUP[encoding].code + ENCODINGS_LOOKUP[encoding].converter.encode(data)\n except KeyError:\n raise ValueError('Encoding {} not supported.'.format(encoding))", "response": "Encodes 
the given data using the specified encoding."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the codec used to encode the given data.", "response": "def get_codec(data):\n \"\"\"\n Returns the codec used to encode the given data\n\n :param data: multibase encoded data\n :type data: str or bytes\n :return: the :py:obj:`multibase.Encoding` object for the data's codec\n :raises ValueError: if the codec is not supported\n \"\"\"\n try:\n key = ensure_bytes(data[:CODE_LENGTH], 'utf8')\n codec = ENCODINGS_LOOKUP[key]\n except KeyError:\n raise ValueError('Can not determine encoding for {}'.format(data))\n else:\n return codec"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndecodes the multibase encoded data", "response": "def decode(data):\n \"\"\"\n Decode the multibase encoded data\n :param data: multibase encoded data\n :type data: str or bytes\n :return: decoded data\n :rtype: str\n :raises ValueError: if the data is not multibase encoded\n \"\"\"\n data = ensure_bytes(data, 'utf8')\n codec = get_codec(data)\n return codec.converter.decode(data[CODE_LENGTH:])"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates a new key pair from a secret.", "response": "def ed25519_generate_key_pair_from_secret(secret):\n \"\"\"\n Generate a new key pair.\n Args:\n secret (:class:`string`): A secret that serves as a seed\n Returns:\n A tuple of (private_key, public_key) encoded in base58.\n \"\"\"\n\n # if you want to do this correctly, use a key derivation function!\n if not isinstance(secret, bytes):\n secret = secret.encode()\n\n hash_bytes = sha3.keccak_256(secret).digest()\n sk = Ed25519SigningKeyFromHash.generate(hash_bytes=hash_bytes)\n # Private key\n private_value_base58 = sk.encode(encoding='base58')\n\n # Public key\n public_value_compressed_base58 = sk.get_verifying_key().encode(encoding='base58')\n\n return 
private_value_base58, public_value_compressed_base58"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating a cryptographic key pair.", "response": "def generate_key_pair(secret=None):\n \"\"\"Generates a cryptographic key pair.\n Args:\n secret (:class:`string`): A secret that serves as a seed\n Returns:\n :class:`~bigchaindb.common.crypto.CryptoKeypair`: A\n :obj:`collections.namedtuple` with named fields\n :attr:`~bigchaindb.common.crypto.CryptoKeypair.private_key` and\n :attr:`~bigchaindb.common.crypto.CryptoKeypair.public_key`.\n \"\"\"\n if secret:\n keypair_raw = ed25519_generate_key_pair_from_secret(secret)\n return CryptoKeypair(\n *(k.decode() for k in keypair_raw))\n else:\n return generate_keypair()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the role arn from X - Role - ARN header and REMOTE -ADDR.", "response": "def _get_role_arn():\n \"\"\"\n Return role arn from X-Role-ARN header,\n lookup role arn from source IP,\n or fall back to command line default.\n \"\"\"\n role_arn = bottle.request.headers.get('X-Role-ARN')\n if not role_arn:\n role_arn = _lookup_ip_role_arn(bottle.request.environ.get('REMOTE_ADDR'))\n if not role_arn:\n role_arn = _role_arn\n return role_arn"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncollect data into fixed - length chunks or blocks", "response": "def _chunk_with_padding(self, iterable, n, fillvalue=None):\n \"Collect data into fixed-length chunks or blocks\"\n # _chunk_with_padding('ABCDEFG', 3, 'x') --> ABC DEF Gxx\"\n args = [iter(iterable)] * n\n return zip_longest(*args, fillvalue=fillvalue)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef marshall(values):\n serialized = {}\n for key in values:\n serialized[key] = _marshall_value(values[key])\n return serialized", "response": "Marshall a dict into something DynamoDB likes."} {"SOURCE": "codesearchnet", 
"instruction": "Can you implement a function in Python 3 that\ntransforms a response payload from DynamoDB to a native dict", "response": "def unmarshall(values):\n \"\"\"\n Transform a response payload from DynamoDB to a native dict\n\n :param dict values: The response payload from DynamoDB\n :rtype: dict\n :raises ValueError: if an unsupported type code is encountered\n\n \"\"\"\n unmarshalled = {}\n for key in values:\n unmarshalled[key] = _unmarshall_dict(values[key])\n return unmarshalled"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntransforms a value into a AttributeValue dict", "response": "def _marshall_value(value):\n \"\"\"\n Recursively transform `value` into an AttributeValue `dict`\n\n :param mixed value: The value to encode\n :rtype: dict\n :raises ValueError: for unsupported types\n\n Return the value as dict indicating the data type and transform or\n recursively process the value if required.\n\n \"\"\"\n if PYTHON3 and isinstance(value, bytes):\n return {'B': base64.b64encode(value).decode('ascii')}\n elif PYTHON3 and isinstance(value, str):\n return {'S': value}\n elif not PYTHON3 and isinstance(value, str):\n if is_binary(value):\n return {'B': base64.b64encode(value).decode('ascii')}\n return {'S': value}\n elif not PYTHON3 and isinstance(value, unicode):\n return {'S': value.encode('utf-8')}\n elif isinstance(value, dict):\n return {'M': marshall(value)}\n elif isinstance(value, bool):\n return {'BOOL': value}\n elif isinstance(value, (int, float)):\n return {'N': str(value)}\n elif isinstance(value, datetime.datetime):\n return {'S': value.isoformat()}\n elif isinstance(value, uuid.UUID):\n return {'S': str(value)}\n elif isinstance(value, list):\n return {'L': [_marshall_value(v) for v in value]}\n elif isinstance(value, set):\n if PYTHON3 and all([isinstance(v, bytes) for v in value]):\n return {'BS': _encode_binary_set(value)}\n elif PYTHON3 and all([isinstance(v, str) for v in value]):\n 
return {'SS': sorted(list(value))}\n elif all([isinstance(v, (int, float)) for v in value]):\n return {'NS': sorted([str(v) for v in value])}\n elif not PYTHON3 and all([isinstance(v, str) for v in value]) and \\\n all([is_binary(v) for v in value]):\n return {'BS': _encode_binary_set(value)}\n elif not PYTHON3 and all([isinstance(v, str) for v in value]) and \\\n all([is_binary(v) is False for v in value]):\n return {'SS': sorted(list(value))}\n else:\n raise ValueError('Can not mix types in a set')\n elif value is None:\n return {'NULL': True}\n raise ValueError('Unsupported type: %s' % type(value))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _unwrap_result(action, result):\n if not result:\n return\n elif action in {'DeleteItem', 'PutItem', 'UpdateItem'}:\n return _unwrap_delete_put_update_item(result)\n elif action == 'GetItem':\n return _unwrap_get_item(result)\n elif action == 'Query' or action == 'Scan':\n return _unwrap_query_scan(result)\n elif action == 'CreateTable':\n return _unwrap_create_table(result)\n elif action == 'DescribeTable':\n return _unwrap_describe_table(result)\n return result", "response": "Unwrap a request response and return only the response data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninvoke the `ListTables`_ function. Returns an array of table names associated with the current account and endpoint. The output from *ListTables* is paginated, with each page returning a maximum of ``100`` table names. :param str exclusive_start_table_name: The first table name that this operation will evaluate. Use the value that was returned for ``LastEvaluatedTableName`` in a previous operation, so that you can obtain the next page of results. :param int limit: A maximum number of table names to return. If this parameter is not specified, the limit is ``100``. .. 
_ListTables: http://docs.aws.amazon.com/amazondynamodb/ latest/APIReference/API_ListTables.html", "response": "def list_tables(self, exclusive_start_table_name=None, limit=None):\n \"\"\"\n Invoke the `ListTables`_ function.\n\n Returns an array of table names associated with the current account\n and endpoint. The output from *ListTables* is paginated, with each page\n returning a maximum of ``100`` table names.\n\n :param str exclusive_start_table_name: The first table name that this\n operation will evaluate. Use the value that was returned for\n ``LastEvaluatedTableName`` in a previous operation, so that you can\n obtain the next page of results.\n :param int limit: A maximum number of table names to return. If this\n parameter is not specified, the limit is ``100``.\n\n .. _ListTables: http://docs.aws.amazon.com/amazondynamodb/\n latest/APIReference/API_ListTables.html\n\n \"\"\"\n payload = {}\n if exclusive_start_table_name:\n payload['ExclusiveStartTableName'] = exclusive_start_table_name\n if limit:\n payload['Limit'] = limit\n return self.execute('ListTables', payload)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef put_item(self, table_name, item,\n condition_expression=None,\n expression_attribute_names=None,\n expression_attribute_values=None,\n return_consumed_capacity=None,\n return_item_collection_metrics=None,\n return_values=None):\n \"\"\"Invoke the `PutItem`_ function, creating a new item, or replaces an\n old item with a new item. If an item that has the same primary key as\n the new item already exists in the specified table, the new item\n completely replaces the existing item. 
You can perform a conditional\n put operation (add a new item if one with the specified primary key\n doesn't exist), or replace an existing item if it has certain attribute\n values.\n\n For more information about using this API, see Working with Items in\n the Amazon DynamoDB Developer Guide.\n\n :param str table_name: The table to put the item to\n :param dict item: A map of attribute name/value pairs, one for each\n attribute. Only the primary key attributes are required; you can\n optionally provide other attribute name-value pairs for the item.\n\n You must provide all of the attributes for the primary key. For\n example, with a simple primary key, you only need to provide a\n value for the partition key. For a composite primary key, you must\n provide both values for both the partition key and the sort key.\n\n If you specify any attributes that are part of an index key, then\n the data types for those attributes must match those of the schema\n in the table's attribute definition.\n :param str condition_expression: A condition that must be satisfied in\n order for a conditional *PutItem* operation to succeed. See the\n `AWS documentation for ConditionExpression `_ for more information.\n :param dict expression_attribute_names: One or more substitution tokens\n for attribute names in an expression. See the `AWS documentation\n for ExpressionAttributeNames `_ for more information.\n :param dict expression_attribute_values: One or more values that can be\n substituted in an expression. See the `AWS documentation\n for ExpressionAttributeValues `_ for more information.\n :param str return_consumed_capacity: Determines the level of detail\n about provisioned throughput consumption that is returned in the\n response. 
Should be ``None`` or one of ``INDEXES`` or ``TOTAL``\n :param str return_item_collection_metrics: Determines whether item\n collection metrics are returned.\n :param str return_values: Use ``ReturnValues`` if you want to get the\n item attributes as they appeared before they were updated with the\n ``PutItem`` request.\n :rtype: tornado.concurrent.Future\n\n .. _PutItem: http://docs.aws.amazon.com/amazondynamodb/\n latest/APIReference/API_PutItem.html\n\n \"\"\"\n payload = {'TableName': table_name, 'Item': utils.marshall(item)}\n if condition_expression:\n payload['ConditionExpression'] = condition_expression\n if expression_attribute_names:\n payload['ExpressionAttributeNames'] = expression_attribute_names\n if expression_attribute_values:\n payload['ExpressionAttributeValues'] = expression_attribute_values\n if return_consumed_capacity:\n payload['ReturnConsumedCapacity'] = return_consumed_capacity\n if return_item_collection_metrics:\n payload['ReturnItemCollectionMetrics'] = 'SIZE'\n if return_values:\n _validate_return_values(return_values)\n payload['ReturnValues'] = return_values\n return self.execute('PutItem', payload)", "response": "Invoke the PutItem method in DynamoDB to add or replace an existing item."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninvoke the `GetItem`_ function. :param str table_name: table to retrieve the item from :param dict key_dict: key to use for retrieval. This will be marshalled for you so a native :class:`dict` works. :param bool consistent_read: Determines the read consistency model: If set to :py:data`True`, then the operation uses strongly consistent reads; otherwise, the operation uses eventually consistent reads. :param dict expression_attribute_names: One or more substitution tokens for attribute names in an expression. :param str projection_expression: A string that identifies one or more attributes to retrieve from the table. 
These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas. If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result. :param str return_consumed_capacity: Determines the level of detail about provisioned throughput consumption that is returned in the response: - INDEXES: The response includes the aggregate consumed capacity for the operation, together with consumed capacity for each table and secondary index that was accessed. Note that some operations, such as *GetItem* and *BatchGetItem*, do not access any indexes at all. In these cases, specifying INDEXES will only return consumed capacity information for table(s). - TOTAL: The response includes only the aggregate consumed capacity for the operation. - NONE: No consumed capacity details are included in the response. :rtype: tornado.concurrent.Future .. _GetItem: http://docs.aws.amazon.com/amazondynamodb/ latest/APIReference/API_GetItem.html", "response": "def get_item(self, table_name, key_dict,\n consistent_read=False,\n expression_attribute_names=None,\n projection_expression=None,\n return_consumed_capacity=None):\n \"\"\"\n Invoke the `GetItem`_ function.\n\n :param str table_name: table to retrieve the item from\n :param dict key_dict: key to use for retrieval. This will\n be marshalled for you so a native :class:`dict` works.\n :param bool consistent_read: Determines the read consistency model: If\n set to :py:data`True`, then the operation uses strongly consistent\n reads; otherwise, the operation uses eventually consistent reads.\n :param dict expression_attribute_names: One or more substitution tokens\n for attribute names in an expression.\n :param str projection_expression: A string that identifies one or more\n attributes to retrieve from the table. 
These attributes can include\n scalars, sets, or elements of a JSON document. The attributes in\n the expression must be separated by commas. If no attribute names\n are specified, then all attributes will be returned. If any of the\n requested attributes are not found, they will not appear in the\n result.\n :param str return_consumed_capacity: Determines the level of detail\n about provisioned throughput consumption that is returned in the\n response:\n\n - INDEXES: The response includes the aggregate consumed\n capacity for the operation, together with consumed capacity for\n each table and secondary index that was accessed. Note that\n some operations, such as *GetItem* and *BatchGetItem*, do not\n access any indexes at all. In these cases, specifying INDEXES\n will only return consumed capacity information for table(s).\n - TOTAL: The response includes only the aggregate consumed\n capacity for the operation.\n - NONE: No consumed capacity details are included in the\n response.\n :rtype: tornado.concurrent.Future\n\n .. _GetItem: http://docs.aws.amazon.com/amazondynamodb/\n latest/APIReference/API_GetItem.html\n\n \"\"\"\n payload = {'TableName': table_name,\n 'Key': utils.marshall(key_dict),\n 'ConsistentRead': consistent_read}\n if expression_attribute_names:\n payload['ExpressionAttributeNames'] = expression_attribute_names\n if projection_expression:\n payload['ProjectionExpression'] = projection_expression\n if return_consumed_capacity:\n _validate_return_consumed_capacity(return_consumed_capacity)\n payload['ReturnConsumedCapacity'] = return_consumed_capacity\n return self.execute('GetItem', payload)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ninvoking the `UpdateItem`_ function. Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. 
You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values). :param str table_name: The name of the table that contains the item to update :param dict key_dict: A dictionary of key/value pairs that are used to define the primary key values for the item. For the primary key, you must provide all of the attributes. For example, with a simple primary key, you only need to provide a value for the partition key. For a composite primary key, you must provide values for both the partition key and the sort key. :param str condition_expression: A condition that must be satisfied in order for a conditional *UpdateItem* operation to succeed. One of: ``attribute_exists``, ``attribute_not_exists``, ``attribute_type``, ``contains``, ``begins_with``, ``size``, ``=``, ``<>``, ``<``, ``>``, ``<=``, ``>=``, ``BETWEEN``, ``IN``, ``AND``, ``OR``, or ``NOT``. :param str update_expression: An expression that defines one or more attributes to be updated, the action to be performed on them, and new value(s) for them. :param dict expression_attribute_names: One or more substitution tokens for attribute names in an expression. :param dict expression_attribute_values: One or more values that can be substituted in an expression. :param str return_consumed_capacity: Determines the level of detail about provisioned throughput consumption that is returned in the response. See the `AWS documentation for ReturnConsumedCapacity `_ for more information. :param str return_item_collection_metrics: Determines whether item collection metrics are returned. :param str return_values: Use ReturnValues if you want to get the item attributes as they appeared either before or after they were updated. See the `AWS documentation for ReturnValues `_ :rtype: tornado.concurrent.Future .. 
_UpdateItem: http://docs.aws.amazon.com/amazondynamodb/ latest/APIReference/API_UpdateItem.html", "response": "def update_item(self, table_name, key_dict,\n condition_expression=None,\n update_expression=None,\n expression_attribute_names=None,\n expression_attribute_values=None,\n return_consumed_capacity=None,\n return_item_collection_metrics=None,\n return_values=None):\n \"\"\"Invoke the `UpdateItem`_ function.\n\n Edits an existing item's attributes, or adds a new item to the table\n if it does not already exist. You can put, delete, or add attribute\n values. You can also perform a conditional update on an existing item\n (insert a new attribute name-value pair if it doesn't exist, or replace\n an existing name-value pair if it has certain expected attribute\n values).\n\n :param str table_name: The name of the table that contains the item to\n update\n :param dict key_dict: A dictionary of key/value pairs that are used to\n define the primary key values for the item. For the primary key,\n you must provide all of the attributes. For example, with a simple\n primary key, you only need to provide a value for the partition\n key. For a composite primary key, you must provide values for both\n the partition key and the sort key.\n :param str condition_expression: A condition that must be satisfied in\n order for a conditional *UpdateItem* operation to succeed. 
One of:\n ``attribute_exists``, ``attribute_not_exists``, ``attribute_type``,\n ``contains``, ``begins_with``, ``size``, ``=``, ``<>``, ``<``,\n ``>``, ``<=``, ``>=``, ``BETWEEN``, ``IN``, ``AND``, ``OR``, or\n ``NOT``.\n :param str update_expression: An expression that defines one or more\n attributes to be updated, the action to be performed on them, and\n new value(s) for them.\n :param dict expression_attribute_names: One or more substitution tokens\n for attribute names in an expression.\n :param dict expression_attribute_values: One or more values that can be\n substituted in an expression.\n :param str return_consumed_capacity: Determines the level of detail\n about provisioned throughput consumption that is returned in the\n response. See the `AWS documentation\n for ReturnConsumedCapacity `_ for more information.\n :param str return_item_collection_metrics: Determines whether item\n collection metrics are returned.\n :param str return_values: Use ReturnValues if you want to get the item\n attributes as they appeared either before or after they were\n updated. See the `AWS documentation for ReturnValues `_\n :rtype: tornado.concurrent.Future\n\n .. 
_UpdateItem: http://docs.aws.amazon.com/amazondynamodb/\n latest/APIReference/API_UpdateItem.html\n\n \"\"\"\n payload = {'TableName': table_name,\n 'Key': utils.marshall(key_dict),\n 'UpdateExpression': update_expression}\n if condition_expression:\n payload['ConditionExpression'] = condition_expression\n if expression_attribute_names:\n payload['ExpressionAttributeNames'] = expression_attribute_names\n if expression_attribute_values:\n payload['ExpressionAttributeValues'] = \\\n utils.marshall(expression_attribute_values)\n if return_consumed_capacity:\n _validate_return_consumed_capacity(return_consumed_capacity)\n payload['ReturnConsumedCapacity'] = return_consumed_capacity\n if return_item_collection_metrics:\n _validate_return_item_collection_metrics(\n return_item_collection_metrics)\n payload['ReturnItemCollectionMetrics'] = \\\n return_item_collection_metrics\n if return_values:\n _validate_return_values(return_values)\n payload['ReturnValues'] = return_values\n return self.execute('UpdateItem', payload)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef query(self, table_name,\n index_name=None,\n consistent_read=None,\n key_condition_expression=None,\n filter_expression=None,\n expression_attribute_names=None,\n expression_attribute_values=None,\n projection_expression=None,\n select=None,\n exclusive_start_key=None,\n limit=None,\n scan_index_forward=True,\n return_consumed_capacity=None):\n \"\"\"A `Query`_ operation uses the primary key of a table or a secondary\n index to directly access items from that table or index.\n\n :param str table_name: The name of the table containing the requested\n items.\n :param bool consistent_read: Determines the read consistency model: If\n set to ``True``, then the operation uses strongly consistent reads;\n otherwise, the operation uses eventually consistent reads. Strongly\n consistent reads are not supported on global secondary indexes. 
If\n you query a global secondary index with ``consistent_read`` set to\n ``True``, you will receive a\n :exc:`~sprockets_dynamodb.exceptions.ValidationException`.\n :param dict exclusive_start_key: The primary key of the first\n item that this operation will evaluate. Use the value that was\n returned for ``LastEvaluatedKey`` in the previous operation. In a\n parallel scan, a *Scan* request that includes\n ``exclusive_start_key`` must specify the same segment whose\n previous *Scan* returned the corresponding value of\n ``LastEvaluatedKey``.\n :param dict expression_attribute_names: One or more substitution tokens\n for attribute names in an expression.\n :param dict expression_attribute_values: One or more values that can be\n substituted in an expression.\n :param str key_condition_expression: The condition that specifies the\n key value(s) for items to be retrieved by the *Query* action. The\n condition must perform an equality test on a single partition key\n value, but can optionally perform one of several comparison tests\n on a single sort key value. The partition key equality test is\n required. For examples see `KeyConditionExpression\n .\n :param str filter_expression: A string that contains conditions that\n DynamoDB applies after the *Query* operation, but before the data\n is returned to you. Items that do not satisfy the criteria are not\n returned. Note that a filter expression is applied after the items\n have already been read; the process of filtering does not consume\n any additional read capacity units. For more information, see\n `Filter Expressions `_ in the\n Amazon DynamoDB Developer Guide.\n :param str projection_expression:\n :param str index_name: The name of a secondary index to query. 
This\n index can be any local secondary index or global secondary index.\n Note that if you use this parameter, you must also provide\n ``table_name``.\n :param int limit: The maximum number of items to evaluate (not\n necessarily the number of matching items). If DynamoDB processes\n the number of items up to the limit while processing the results,\n it stops the operation and returns the matching values up to that\n point, and a key in ``LastEvaluatedKey`` to apply in a subsequent\n operation, so that you can pick up where you left off. Also, if the\n processed data set size exceeds 1 MB before DynamoDB reaches this\n limit, it stops the operation and returns the matching values up to\n the limit, and a key in ``LastEvaluatedKey`` to apply in a\n subsequent operation to continue the operation. For more\n information, see `Query and Scan `_ in the Amazon\n DynamoDB Developer Guide.\n :param str return_consumed_capacity: Determines the level of detail\n about provisioned throughput consumption that is returned in the\n response:\n\n - ``INDEXES``: The response includes the aggregate consumed\n capacity for the operation, together with consumed capacity for\n each table and secondary index that was accessed. Note that\n some operations, such as *GetItem* and *BatchGetItem*, do not\n access any indexes at all. In these cases, specifying\n ``INDEXES`` will only return consumed capacity information for\n table(s).\n - ``TOTAL``: The response includes only the aggregate consumed\n capacity for the operation.\n - ``NONE``: No consumed capacity details are included in the\n response.\n :param bool scan_index_forward: Specifies the order for index\n traversal: If ``True`` (default), the traversal is performed in\n ascending order; if ``False``, the traversal is performed in\n descending order. Items with the same partition key value are\n stored in sorted order by sort key. If the sort key data type is\n *Number*, the results are stored in numeric order. 
For type\n *String*, the results are stored in order of ASCII character code\n values. For type *Binary*, DynamoDB treats each byte of the binary\n data as unsigned. If set to ``True``, DynamoDB returns the results\n in the order in which they are stored (by sort key value). This is\n the default behavior. If set to ``False``, DynamoDB reads the\n results in reverse order by sort key value, and then returns the\n results to the client.\n :param str select: The attributes to be returned in the result. You can\n retrieve all item attributes, specific item attributes, the count\n of matching items, or in the case of an index, some or all of the\n attributes projected into the index. Possible values are:\n\n - ``ALL_ATTRIBUTES``: Returns all of the item attributes from the\n specified table or index. If you query a local secondary index,\n then for each matching item in the index DynamoDB will fetch\n the entire item from the parent table. If the index is\n configured to project all item attributes, then all of the data\n can be obtained from the local secondary index, and no fetching\n is required.\n - ``ALL_PROJECTED_ATTRIBUTES``: Allowed only when querying an\n index. Retrieves all attributes that have been projected into\n the index. If the index is configured to project all\n attributes, this return value is equivalent to specifying\n ``ALL_ATTRIBUTES``.\n - ``COUNT``: Returns the number of matching items, rather than\n the matching items themselves.\n :rtype: dict\n\n .. 
_Query: http://docs.aws.amazon.com/amazondynamodb/\n latest/APIReference/API_Query.html\n\n \"\"\"\n payload = {'TableName': table_name,\n 'ScanIndexForward': scan_index_forward}\n if index_name:\n payload['IndexName'] = index_name\n if consistent_read is not None:\n payload['ConsistentRead'] = consistent_read\n if key_condition_expression:\n payload['KeyConditionExpression'] = key_condition_expression\n if filter_expression:\n payload['FilterExpression'] = filter_expression\n if expression_attribute_names:\n payload['ExpressionAttributeNames'] = expression_attribute_names\n if expression_attribute_values:\n payload['ExpressionAttributeValues'] = \\\n utils.marshall(expression_attribute_values)\n if projection_expression:\n payload['ProjectionExpression'] = projection_expression\n if select:\n _validate_select(select)\n payload['Select'] = select\n if exclusive_start_key:\n payload['ExclusiveStartKey'] = utils.marshall(exclusive_start_key)\n if limit:\n payload['Limit'] = limit\n if return_consumed_capacity:\n _validate_return_consumed_capacity(return_consumed_capacity)\n payload['ReturnConsumedCapacity'] = return_consumed_capacity\n return self.execute('Query', payload)", "response": "A Query operation uses the primary key of a table or secondary index to directly access items from that table or index."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef scan(self,\n table_name,\n index_name=None,\n consistent_read=None,\n projection_expression=None,\n filter_expression=None,\n expression_attribute_names=None,\n expression_attribute_values=None,\n segment=None,\n total_segments=None,\n select=None,\n limit=None,\n exclusive_start_key=None,\n return_consumed_capacity=None):\n \"\"\"The `Scan`_ operation returns one or more items and item attributes\n by accessing every item in a table or a secondary index.\n\n If the total number of scanned items exceeds the maximum data set size\n limit of 1 MB, the scan 
stops and results are returned to the user as a\n ``LastEvaluatedKey`` value to continue the scan in a subsequent\n operation. The results also include the number of items exceeding the\n limit. A scan can result in no table data meeting the filter criteria.\n\n By default, Scan operations proceed sequentially; however, for faster\n performance on a large table or secondary index, applications can\n request a parallel *Scan* operation by providing the ``segment`` and\n ``total_segments`` parameters. For more information, see\n `Parallel Scan `_ in the\n Amazon DynamoDB Developer Guide.\n\n By default, *Scan* uses eventually consistent reads when accessing the\n data in a table; therefore, the result set might not include the\n changes to data in the table immediately before the operation began. If\n you need a consistent copy of the data, as of the time that the *Scan*\n begins, you can set the ``consistent_read`` parameter to ``True``.\n\n :rtype: dict\n\n .. _Scan: http://docs.aws.amazon.com/amazondynamodb/\n latest/APIReference/API_Scan.html\n\n \"\"\"\n payload = {'TableName': table_name}\n if index_name:\n payload['IndexName'] = index_name\n if consistent_read is not None:\n payload['ConsistentRead'] = consistent_read\n if filter_expression:\n payload['FilterExpression'] = filter_expression\n if expression_attribute_names:\n payload['ExpressionAttributeNames'] = expression_attribute_names\n if expression_attribute_values:\n payload['ExpressionAttributeValues'] = \\\n utils.marshall(expression_attribute_values)\n if projection_expression:\n payload['ProjectionExpression'] = projection_expression\n if segment:\n payload['Segment'] = segment\n if total_segments:\n payload['TotalSegments'] = total_segments\n if select:\n _validate_select(select)\n payload['Select'] = select\n if exclusive_start_key:\n payload['ExclusiveStartKey'] = utils.marshall(exclusive_start_key)\n if limit:\n payload['Limit'] = limit\n if return_consumed_capacity:\n 
_validate_return_consumed_capacity(return_consumed_capacity)\n payload['ReturnConsumedCapacity'] = return_consumed_capacity\n return self.execute('Scan', payload)", "response": "This operation scans the table and returns one or more items and item attributes in a secondary index."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nexecute a DynamoDB action with the given parameters.", "response": "def execute(self, action, parameters):\n \"\"\"\n Execute a DynamoDB action with the given parameters. The method will\n retry requests that failed due to OS level errors or when being\n throttled by DynamoDB.\n\n :param str action: DynamoDB action to invoke\n :param dict parameters: parameters to send into the action\n :rtype: tornado.concurrent.Future\n\n This method creates a future that will resolve to the result\n of calling the specified DynamoDB function. It does its best\n to unwrap the response from the function to make life a little\n easier for you. It does this for the ``GetItem`` and ``Query``\n functions currently.\n\n :raises:\n :exc:`~sprockets_dynamodb.exceptions.DynamoDBException`\n :exc:`~sprockets_dynamodb.exceptions.ConfigNotFound`\n :exc:`~sprockets_dynamodb.exceptions.NoCredentialsError`\n :exc:`~sprockets_dynamodb.exceptions.NoProfileError`\n :exc:`~sprockets_dynamodb.exceptions.TimeoutException`\n :exc:`~sprockets_dynamodb.exceptions.RequestException`\n :exc:`~sprockets_dynamodb.exceptions.InternalFailure`\n :exc:`~sprockets_dynamodb.exceptions.LimitExceeded`\n :exc:`~sprockets_dynamodb.exceptions.MissingParameter`\n :exc:`~sprockets_dynamodb.exceptions.OptInRequired`\n :exc:`~sprockets_dynamodb.exceptions.ResourceInUse`\n :exc:`~sprockets_dynamodb.exceptions.RequestExpired`\n :exc:`~sprockets_dynamodb.exceptions.ResourceNotFound`\n :exc:`~sprockets_dynamodb.exceptions.ServiceUnavailable`\n :exc:`~sprockets_dynamodb.exceptions.ThroughputExceeded`\n :exc:`~sprockets_dynamodb.exceptions.ValidationException`\n\n \"\"\"\n
measurements = collections.deque([], self._max_retries)\n for attempt in range(1, self._max_retries + 1):\n try:\n result = yield self._execute(\n action, parameters, attempt, measurements)\n except (exceptions.InternalServerError,\n exceptions.RequestException,\n exceptions.ThrottlingException,\n exceptions.ThroughputExceeded,\n exceptions.ServiceUnavailable) as error:\n if attempt == self._max_retries:\n if self._instrumentation_callback:\n self._instrumentation_callback(measurements)\n self._on_exception(error)\n duration = self._sleep_duration(attempt)\n self.logger.warning('%r on attempt %i, sleeping %.2f seconds',\n error, attempt, duration)\n yield gen.sleep(duration)\n except exceptions.DynamoDBException as error:\n if self._instrumentation_callback:\n self._instrumentation_callback(measurements)\n self._on_exception(error)\n else:\n if self._instrumentation_callback:\n self._instrumentation_callback(measurements)\n self.logger.debug('%s result: %r', action, result)\n raise gen.Return(_unwrap_result(action, result))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_error_callback(self, callback):\n self.logger.debug('Setting error callback: %r', callback)\n self._on_error = callback", "response": "Assign a method to invoke when a request has encountered an ancillary unrecoverable error in an action execution."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nassigns a method to invoke when a request has completed gathering measurements.", "response": "def set_instrumentation_callback(self, callback):\n \"\"\"Assign a method to invoke when a request has completed gathering\n measurements.\n\n :param method callback: The method to invoke\n\n \"\"\"\n self.logger.debug('Setting instrumentation callback: %r', callback)\n self._instrumentation_callback = callback"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the 
following Python 3 function\ndef _execute(self, action, parameters, attempt, measurements):\n future = concurrent.Future()\n start = time.time()\n\n def handle_response(request):\n \"\"\"Invoked by the IOLoop when fetch has a response to process.\n\n :param tornado.concurrent.Future request: The request future\n\n \"\"\"\n self._on_response(\n action, parameters.get('TableName', 'Unknown'), attempt,\n start, request, future, measurements)\n\n ioloop.IOLoop.current().add_future(self._client.fetch(\n 'POST', '/',\n body=json.dumps(parameters).encode('utf-8'),\n headers={\n 'x-amz-target': 'DynamoDB_20120810.{}'.format(action),\n 'Content-Type': 'application/x-amz-json-1.0',\n }), handle_response)\n return future", "response": "Invoke a DynamoDB action and return a concurrent. Future that will be completed when the action completes."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ninvokes when the HTTP request to the DynamoDB has returned and sets the future result or exception based upon the response.", "response": "def _on_response(self, action, table, attempt, start, response, future,\n measurements):\n \"\"\"Invoked when the HTTP request to the DynamoDB has returned and\n is responsible for setting the future result or exception based upon\n the HTTP response provided.\n\n :param str action: The action that was taken\n :param str table: The table name the action was made against\n :param int attempt: The attempt number for the action\n :param float start: When the request was submitted\n :param tornado.concurrent.Future response: The HTTP request future\n :param tornado.concurrent.Future future: The action execution future\n :param list measurements: The measurement accumulator\n\n \"\"\"\n self.logger.debug('%s on %s request #%i = %r',\n action, table, attempt, response)\n now, exception = time.time(), None\n try:\n future.set_result(self._process_response(response))\n except aws_exceptions.ConfigNotFound as error:\n 
exception = exceptions.ConfigNotFound(str(error))\n except aws_exceptions.ConfigParserError as error:\n exception = exceptions.ConfigParserError(str(error))\n except aws_exceptions.NoCredentialsError as error:\n exception = exceptions.NoCredentialsError(str(error))\n except aws_exceptions.NoProfileError as error:\n exception = exceptions.NoProfileError(str(error))\n except aws_exceptions.AWSError as error:\n exception = exceptions.DynamoDBException(error)\n except (ConnectionError, ConnectionResetError, OSError,\n aws_exceptions.RequestException, ssl.SSLError,\n _select.error, ssl.socket_error, socket.gaierror) as error:\n exception = exceptions.RequestException(str(error))\n except TimeoutError:\n exception = exceptions.TimeoutException()\n except httpclient.HTTPError as error:\n if error.code == 599:\n exception = exceptions.TimeoutException()\n else:\n exception = exceptions.RequestException(\n getattr(getattr(error, 'response', error),\n 'body', str(error.code)))\n except Exception as error:\n exception = error\n\n if exception:\n future.set_exception(exception)\n\n measurements.append(\n Measurement(now, action, table, attempt, max(now, start) - start,\n exception.__class__.__name__\n if exception else exception))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _process_response(response):\n error = response.exception()\n if error:\n if isinstance(error, aws_exceptions.AWSError):\n if error.args[1]['type'] in exceptions.MAP:\n raise exceptions.MAP[error.args[1]['type']](\n error.args[1]['message'])\n raise error\n http_response = response.result()\n if not http_response or not http_response.body:\n raise exceptions.DynamoDBException('empty response')\n return json.loads(http_response.body.decode('utf-8'))", "response": "Process the response from DynamoDB returning either the mapped exception\nor the deserialized response."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of
the following Python 3 code\ndef write(self, obj, resource_id=None):\n if resource_id is not None:\n if self.read(resource_id):\n raise ValueError(\"There is already an object with this id.\")\n obj['_id'] = resource_id\n prepared_creation_tx = self.driver.instance.transactions.prepare(\n operation='CREATE',\n signers=self.user.public_key,\n asset={\n 'namespace': self.namespace,\n 'data': obj\n },\n metadata={\n 'namespace': self.namespace,\n 'data': obj\n }\n )\n\n signed_tx = self.driver.instance.transactions.fulfill(\n prepared_creation_tx,\n private_keys=self.user.private_key\n )\n self.logger.debug('bdb::write::{}'.format(signed_tx['id']))\n self.driver.instance.transactions.send_commit(signed_tx)\n return signed_tx", "response": "Write and return the id of the transaction that created the object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get(self, tx_id):\n # tx_id=self._find_tx_id(resource_id)\n value = [\n {\n 'data': transaction['metadata'],\n 'id': transaction['id']\n }\n for transaction in self.driver.instance.transactions.get(asset_id=self.get_asset_id(tx_id))\n ][-1]\n if value['data']['data']:\n self.logger.debug('bdb::read::{}'.format(value['data']))\n return value\n else:\n return False", "response": "Read an obj in bdb using the tx_id."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _update(self, metadata, tx_id, resource_id):\n try:\n if not tx_id:\n sent_tx = self.write(metadata, resource_id)\n self.logger.debug('bdb::put::{}'.format(sent_tx['id']))\n return sent_tx\n else:\n txs = self.driver.instance.transactions.get(asset_id=self.get_asset_id(tx_id))\n unspent = txs[-1]\n sent_tx = self._put(metadata, unspent, resource_id)\n self.logger.debug('bdb::put::{}'.format(sent_tx))\n return sent_tx\n\n except BadRequest as e:\n logging.error(e)", "response": "Update an obj in bdb using the tx_id."} {"SOURCE": "codesearchnet",
"instruction": "Implement a Python 3 function for\nlisting all the objects saved in the namespace.", "response": "def list(self, search_from=None, search_to=None, limit=None):\n \"\"\"List all the objects saved in the namespace.\n\n :param search_from: TBI\n :param search_to: TBI\n :param offset: TBI\n :param limit: max number of values to be shows.\n :return: list with transactions.\n \"\"\"\n l = []\n for i in self._list():\n l.append(i['data']['data'])\n return l[0:limit]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _list(self):\n all = self.driver.instance.metadata.get(search=self.namespace)\n list = []\n for id in all:\n try:\n if not self._get(id['id']) in list:\n list.append(self._get(id['id']))\n except Exception:\n pass\n\n return list", "response": "List all the objects saved in the namespace."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef query(self, search_model: QueryModel):\n self.logger.debug('bdb::get::{}'.format(search_model.query))\n assets = json.loads(requests.post(\"http://localhost:4000/query\", data=search_model.query).content)['data']\n self.logger.debug('bdb::result::len {}'.format(len(assets)))\n assets_metadata = []\n for i in assets:\n try:\n assets_metadata.append(self._get(i['id'])['data']['data'])\n except:\n pass\n return assets_metadata", "response": "Query to bdb namespace."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _delete(self, tx_id):\n txs = self.driver.instance.transactions.get(asset_id=self.get_asset_id(tx_id))\n unspent = txs[-1]\n output_index = 0\n output = unspent['outputs'][output_index]\n\n transfer_input = {\n 'fulfillment': output['condition']['details'],\n 'fulfills': {\n 'output_index': output_index,\n 'transaction_id': unspent['id']\n },\n 'owners_before': output['public_keys']\n }\n\n 
prepared_transfer_tx = self.driver.instance.transactions.prepare(\n operation='TRANSFER',\n asset=unspent['asset'] if 'id' in unspent['asset'] else {'id': unspent['id']},\n inputs=transfer_input,\n recipients=self.BURN_ADDRESS,\n metadata={\n 'namespace': 'burned',\n\n }\n )\n signed_tx = self.driver.instance.transactions.fulfill(\n prepared_transfer_tx,\n private_keys=self.user.private_key,\n )\n self.driver.instance.transactions.send_commit(signed_tx)", "response": "Delete a transaction. Read documentation about CRAB model in https://blog.bigchaindb.com/crab-create-retrieve-append-burn-b9f6d111f460.\n\n :param tx_id: transaction id\n :return:"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_asset_id(self, tx_id):\n tx = self.driver.instance.transactions.retrieve(txid=tx_id)\n assert tx is not None\n return tx['id'] if tx['operation'] == 'CREATE' else tx['asset']['id']", "response": "Return the asset id of the first transaction."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef hdr(data, filename):\n hdrobj = data if isinstance(data, HDRobject) else HDRobject(data)\n hdrobj.write(filename)", "response": "Writes ENVI header files"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef __hdr2dict(self):\n with open(self.filename, 'r') as infile:\n lines = infile.readlines()\n i = 0\n out = dict()\n while i < len(lines):\n line = lines[i].strip('\\r\\n')\n if '=' in line:\n if '{' in line and '}' not in line:\n while '}' not in line:\n i += 1\n line += lines[i].strip('\\n').lstrip()\n line = list(filter(None, re.split(r'\\s+=\\s+', line)))\n line[1] = re.split(',[ ]*', line[1].strip('{}'))\n key = line[0].replace(' ', '_')\n val = line[1] if len(line[1]) > 1 else line[1][0]\n out[key] = parse_literal(val)\n i += 1\n if 'band_names' in out.keys() and not isinstance(out['band_names'], 
list):\n out['band_names'] = [out['band_names']]\n return out", "response": "read a HDR file into a dictionary"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwriting object to an ENVI header file", "response": "def write(self, filename='same'):\n \"\"\"\n write object to an ENVI header file\n \"\"\"\n if filename == 'same':\n filename = self.filename\n if not filename.endswith('.hdr'):\n filename += '.hdr'\n with open(filename, 'w') as out:\n out.write(self.__str__())"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngoes to Zero position", "response": "def GoZero(self, speed):\n ' Go to Zero position '\n self.ReleaseSW()\n\n spi.SPI_write_byte(self.CS, 0x82 | (self.Dir & 1)) # Go to Zero\n spi.SPI_write_byte(self.CS, 0x00)\n spi.SPI_write_byte(self.CS, speed) \n while self.IsBusy():\n pass\n time.sleep(0.3)\n self.ReleaseSW()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef Move(self, units):\n ' Move some distance units from current position '\n steps = units * self.SPU # translate units to steps \n if steps > 0: # look for direction\n spi.SPI_write_byte(self.CS, 0x40 | (~self.Dir & 1)) \n else:\n spi.SPI_write_byte(self.CS, 0x40 | (self.Dir & 1)) \n steps = int(abs(steps)) \n spi.SPI_write_byte(self.CS, (steps >> 16) & 0xFF)\n spi.SPI_write_byte(self.CS, (steps >> 8) & 0xFF)\n spi.SPI_write_byte(self.CS, steps & 0xFF)", "response": "Move some distance units from current position"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ReadStatusBit(self, bit):\n ' Report given status bit '\n spi.SPI_write_byte(self.CS, 0x39) # Read from address 0x19 (STATUS)\n spi.SPI_write_byte(self.CS, 0x00)\n data0 = spi.SPI_read_byte() # 1st byte\n spi.SPI_write_byte(self.CS, 0x00)\n data1 = spi.SPI_read_byte() # 2nd byte\n #print hex(data0), hex(data1)\n if bit > 7: # extract 
requested bit\n OutputBit = (data0 >> (bit - 8)) & 1\n else:\n OutputBit = (data1 >> bit) & 1 \n return OutputBit", "response": "Report given status bit"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef SPI_write_byte(self, chip_select, data):\n 'Writes a data to a SPI device selected by chipselect bit. '\n self.bus.write_byte_data(self.address, chip_select, data)", "response": "Writes a data to a SPI device selected by chipselect bit."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef SPI_write(self, chip_select, data):\n 'Writes data to SPI device selected by chipselect bit. '\n dat = list(data)\n dat.insert(0, chip_select)\n return self.bus.write_i2c_block(self.address, dat);", "response": "Writes data to SPI device selected by chipselect bit."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef SPI_config(self,config):\n 'Configure SPI interface parameters.'\n self.bus.write_byte_data(self.address, 0xF0, config)\n return self.bus.read_byte_data(self.address, 0xF0)", "response": "Configure SPI interface parameters."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread logic state on GPIO enabled slave - selects pins.", "response": "def GPIO_read(self):\n 'Reads logic state on GPIO enabled slave-selects pins.'\n self.bus.write_byte_data(self.address, 0xF5, 0x0f)\n status = self.bus.read_byte(self.address)\n bits_values = dict([('SS0',status & 0x01 == 0x01),('SS1',status & 0x02 == 0x02),('SS2',status & 0x04 == 0x04),('SS3',status & 0x08 == 0x08)])\n return bits_values"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef GPIO_config(self, gpio_enable, gpio_config):\n 'Enable or disable slave-select pins as gpio.'\n self.bus.write_byte_data(self.address, 0xF6, gpio_enable)\n self.bus.write_byte_data(self.address, 0xF7, 
gpio_config)\n return", "response": "Enable or disable slave-select pins as gpio."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_address(self):\n LOGGER.debug(\"Reading RPS01A sensor's address.\",)\n return self.bus.read_byte_data(self.address, self.address_reg)", "response": "Returns the sensor's I2C address."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_zero_position(self):\n\n LSB = self.bus.read_byte_data(self.address, self.zero_position_LSB)\n MSB = self.bus.read_byte_data(self.address, self.zero_position_MSB)\n DATA = (MSB << 6) + LSB\n return DATA", "response": "Returns the programmed zero position in OTP memory."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_agc_value(self):\n LOGGER.debug(\"Reading RPS01A sensor's AGC settings\",)\n return self.bus.read_byte_data(self.address, self.AGC_reg)", "response": "Returns the sensor's Automatic Gain Control actual value."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads the diagnostic data from the sensor and returns a dictionary of the values.", "response": "def get_diagnostics(self):\n \"\"\"\n Reads diagnostic data from the sensor.\n OCF (Offset Compensation Finished) - logic high indicates the finished Offset Compensation Algorithm. After power up the flag always remains at logic high.\n\n COF (Cordic Overflow) - logic high indicates an out of range error in the CORDIC part. When this bit is set, the angle and magnitude data is invalid. \n The absolute output maintains the last valid angular value.\n\n COMP low, indicates a high magnetic field. It is recommended to monitor in addition the magnitude value.\n COMP high, indicates a weak magnetic field.
It is recommended to monitor the magnitude value.\n \"\"\"\n\n status = self.bus.read_byte_data(self.address, self.diagnostics_reg)\n bits_values = dict([('OCF',status & 0x01 == 0x01),\n ('COF',status & 0x02 == 0x02),\n ('Comp_Low',status & 0x04 == 0x04),\n ('Comp_High',status & 0x08 == 0x08)])\n return bits_values"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns measured angle in degrees in range 0 - 360.", "response": "def get_angle(self, verify = False):\n \"\"\"\n Returns measured angle in degrees in range 0-360.\n \"\"\"\n LSB = self.bus.read_byte_data(self.address, self.angle_LSB)\n MSB = self.bus.read_byte_data(self.address, self.angle_MSB)\n DATA = (MSB << 6) + LSB\n if not verify:\n return (360.0 / 2**14) * DATA\n else:\n status = self.get_diagnostics()\n if not (status['Comp_Low']) and not(status['Comp_High']) and not(status['COF']):\n return (360.0 / 2**14) * DATA\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef main(input_bed, output_file, output_features=False, genome=None,\n only_canonical=False, short=False, extended=False, high_confidence=False,\n ambiguities_method=False, coding_only=False, collapse_exons=False, work_dir=False, is_debug=False):\n \"\"\" Annotating BED file based on reference features annotations.\n \"\"\"\n logger.init(is_debug_=is_debug)\n\n if not genome:\n raise click.BadParameter('Error: please, specify genome build name with -g (e.g.
`-g hg19`)', param='genome')\n\n    if short:\n        if extended: raise click.BadParameter('--short and --extended can\'t be set both', param='extended')\n        if output_features: raise click.BadParameter('--short and --output-features can\'t be set both', param='output_features')\n    elif output_features or extended:\n        extended = True\n        short = False\n\n    if not verify_file(input_bed):\n        raise click.BadParameter(f'Usage: {__file__} Input_BED_file -g hg19 -o Annotated_BED_file [options]', param='input_bed')\n    input_bed = verify_file(input_bed, is_critical=True, description=f'Input BED file for {__file__}')\n\n    if work_dir:\n        work_dir = join(adjust_path(work_dir), os.path.splitext(basename(input_bed))[0])\n        safe_mkdir(work_dir)\n        info(f'Created work directory {work_dir}')\n    else:\n        work_dir = mkdtemp('bed_annotate')\n        debug(f'Created temporary work directory {work_dir}')\n\n    input_bed = clean_bed(input_bed, work_dir)\n    input_bed = verify_bed(input_bed, is_critical=True, description=f'Input BED file for {__file__} after cleaning')\n\n    output_file = adjust_path(output_file)\n\n    output_file = annotate(\n        input_bed, output_file, work_dir, genome=genome,\n        only_canonical=only_canonical, short=short, extended=extended,\n        high_confidence=high_confidence, collapse_exons=collapse_exons,\n        output_features=output_features,\n        ambiguities_method=ambiguities_method, coding_only=coding_only,\n        is_debug=is_debug)\n\n    if not work_dir:\n        debug(f'Removing work directory {work_dir}')\n        shutil.rmtree(work_dir)\n\n    info(f'Done, saved to {output_file}')", "response": "Annotates a single BED file based on reference features annotations."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the state of charge as a percentage of full charge.", "response": "def StateOfCharge(self):\n    \"\"\" % of Full Charge \"\"\"\n    return (self.bus.read_byte_data(self.address, 0x02) + self.bus.read_byte_data(self.address, 0x03) * 256)"} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate the documentation for the following Python 3 function\ndef backpropagate_2d(uSin, angles, res, nm, lD=0, coords=None,\n weight_angles=True,\n onlyreal=False, padding=True, padval=0,\n count=None, max_count=None, verbose=0):\n r\"\"\"2D backpropagation with the Fourier diffraction theorem\n\n Two-dimensional diffraction tomography reconstruction\n algorithm for scattering of a plane wave\n :math:`u_0(\\mathbf{r}) = u_0(x,z)`\n by a dielectric object with refractive index\n :math:`n(x,z)`.\n\n This method implements the 2D backpropagation algorithm\n :cite:`Mueller2015arxiv`.\n\n .. math::\n f(\\mathbf{r}) =\n -\\frac{i k_\\mathrm{m}}{2\\pi}\n \\sum_{j=1}^{N} \\! \\Delta \\phi_0 D_{-\\phi_j} \\!\\!\n \\left \\{\n \\text{FFT}^{-1}_{\\mathrm{1D}}\n \\left \\{\n \\left| k_\\mathrm{Dx} \\right|\n \\frac{\\text{FFT}_{\\mathrm{1D}} \\left \\{\n u_{\\mathrm{B},\\phi_j}(x_\\mathrm{D}) \\right \\}\n }{u_0(l_\\mathrm{D})}\n \\exp \\! \\left[i k_\\mathrm{m}(M - 1) \\cdot\n (z_{\\phi_j}-l_\\mathrm{D}) \\right]\n \\right \\}\n \\right \\}\n\n with the forward :math:`\\text{FFT}_{\\mathrm{1D}}` and inverse\n :math:`\\text{FFT}^{-1}_{\\mathrm{1D}}` 1D fast Fourier transform, the\n rotational operator :math:`D_{-\\phi_j}`, the angular distance between the\n projections :math:`\\Delta \\phi_0`, the ramp filter in Fourier space\n :math:`|k_\\mathrm{Dx}|`, and the propagation distance\n :math:`(z_{\\phi_j}-l_\\mathrm{D})`.\n\n Parameters\n ----------\n uSin: (A,N) ndarray\n Two-dimensional sinogram of line recordings\n :math:`u_{\\mathrm{B}, \\phi_j}(x_\\mathrm{D})`\n divided by the incident plane wave :math:`u_0(l_\\mathrm{D})`\n measured at the detector.\n angles: (A,) ndarray\n Angular positions :math:`\\phi_j` of `uSin` in radians.\n res: float\n Vacuum wavelength of the light :math:`\\lambda` in pixels.\n nm: float\n Refractive index of the surrounding medium :math:`n_\\mathrm{m}`.\n lD: float\n Distance from center of rotation to detector plane\n :math:`l_\\mathrm{D}` in 
pixels.\n coords: None [(2,M) ndarray]\n Computes only the output image at these coordinates. This\n keyword is reserved for future versions and is not\n implemented yet.\n weight_angles: bool\n If `True`, weights each backpropagated projection with a factor\n proportional to the angular distance between the neighboring\n projections.\n\n .. math::\n \\Delta \\phi_0 \\longmapsto \\Delta \\phi_j =\n \\frac{\\phi_{j+1} - \\phi_{j-1}}{2}\n\n .. versionadded:: 0.1.1\n onlyreal: bool\n If `True`, only the real part of the reconstructed image\n will be returned. This saves computation time.\n padding: bool\n Pad the input data to the second next power of 2 before\n Fourier transforming. This reduces artifacts and speeds up\n the process for input image sizes that are not powers of 2.\n padval: float\n The value used for padding. This is important for the Rytov\n approximation, where an approximate zero in the phase might\n translate to 2\u03c0i due to the unwrapping algorithm. In that\n case, this value should be a multiple of 2\u03c0i.\n If `padval` is `None`, then the edge values are used for\n padding (see documentation of :func:`numpy.pad`).\n count, max_count: multiprocessing.Value or `None`\n Can be used to monitor the progress of the algorithm.\n Initially, the value of `max_count.value` is incremented\n by the total number of steps. 
At each step, the value\n        of `count.value` is incremented.\n    verbose: int\n        Increment to increase verbosity.\n\n\n    Returns\n    -------\n    f: ndarray of shape (N,N), complex if `onlyreal` is `False`\n        Reconstructed object function :math:`f(\\mathbf{r})` as defined\n        by the Helmholtz equation.\n        :math:`f(x,z) =\n        k_m^2 \\left(\\left(\\frac{n(x,z)}{n_m}\\right)^2 -1\\right)`\n\n\n    See Also\n    --------\n    odt_to_ri: conversion of the object function :math:`f(\\mathbf{r})`\n        to refractive index :math:`n(\\mathbf{r})`\n\n    radontea.backproject: backprojection based on the Fourier slice\n        theorem\n\n    Notes\n    -----\n    Do not use the parameter `lD` in combination with the Rytov\n    approximation - the propagation is not correctly described.\n    Instead, numerically refocus the sinogram prior to converting\n    it to Rytov data (using e.g. :func:`odtbrain.sinogram_as_rytov`)\n    with a numerical focusing algorithm (available in the Python\n    package :py:mod:`nrefocus`).\n    \"\"\"\n    ##\n    ##\n    # TODO:\n    # - combine the 2nd filter and the rotation in the for loop\n    # to save memory. However, memory is not a big issue in 2D.\n    ##\n    ##\n    A = angles.shape[0]\n    if max_count is not None:\n        max_count.value += A + 2\n    # Check input data\n    assert len(uSin.shape) == 2, \"Input data `uB` must have shape (A,N)!\"\n    assert len(uSin) == A, \"`len(angles)` must be equal to `len(uSin)`!\"\n\n    if coords is not None:\n        raise NotImplementedError(\"Output coordinates cannot yet be set \" +\n                                  \"for the 2D backpropagation algorithm.\")\n    # Cut-Off frequency\n    # km [1/px]\n    km = (2 * np.pi * nm) / res\n    # Here, the notation defines\n    # a wave propagating to the right as:\n    #\n    #    u0(x) = exp(ikx)\n    #\n    # However, in physics usually we use the other sign convention:\n    #\n    #    u0(x) = exp(-ikx)\n    #\n    # In order to be consistent with programs like Meep or our\n    # scattering script for a dielectric cylinder, we want to use the\n    # latter sign convention.\n    # This is not a big problem. 
We only need to multiply the imaginary\n # part of the scattered wave by -1.\n\n # Perform weighting\n if weight_angles:\n weights = util.compute_angle_weights_1d(angles).reshape(-1, 1)\n sinogram = uSin * weights\n else:\n sinogram = uSin\n\n # Size of the input data\n ln = sinogram.shape[1]\n\n # We perform padding before performing the Fourier transform.\n # This gets rid of artifacts due to false periodicity and also\n # speeds up Fourier transforms of the input image size is not\n # a power of 2.\n order = max(64., 2**np.ceil(np.log(ln * 2.1) / np.log(2)))\n\n if padding:\n pad = order - ln\n else:\n pad = 0\n\n padl = np.int(np.ceil(pad / 2))\n padr = np.int(pad - padl)\n\n if padval is None:\n sino = np.pad(sinogram, ((0, 0), (padl, padr)),\n mode=\"edge\")\n if verbose > 0:\n print(\"......Padding with edge values.\")\n else:\n sino = np.pad(sinogram, ((0, 0), (padl, padr)),\n mode=\"linear_ramp\",\n end_values=(padval,))\n if verbose > 0:\n print(\"......Verifying padding value: {}\".format(padval))\n\n # zero-padded length of sinogram.\n lN = sino.shape[1]\n\n # Ask for the filter. Do not include zero (first element).\n #\n # Integrals over \u03d5\u2080 [0,2\u03c0]; kx [-k\u2098,k\u2098]\n # - double coverage factor 1/2 already included\n # - unitary angular frequency to unitary ordinary frequency\n # conversion performed in calculation of UB=FT(uB).\n #\n # f(r) = -i k\u2098 / ((2\u03c0)^(3/2) a\u2080) (prefactor)\n # * iint d\u03d5\u2080 dkx (prefactor)\n # * |kx| (prefactor)\n # * exp(-i k\u2098 M lD ) (prefactor)\n # * UB\u03d5\u2080(kx) (dependent on \u03d5\u2080)\n # * exp( i (kx t\u22a5 + k\u2098 (M - 1) s\u2080) r ) (dependent on \u03d5\u2080 and r)\n #\n # (r and s\u2080 are vectors. 
In the last term we perform the dot-product)\n #\n # k\u2098M = sqrt( k\u2098\u00b2 - kx\u00b2 )\n # t\u22a5 = ( cos(\u03d5\u2080), sin(\u03d5\u2080) )\n # s\u2080 = ( -sin(\u03d5\u2080), cos(\u03d5\u2080) )\n #\n # The filter can be split into two parts\n #\n # 1) part without dependence on the z-coordinate\n #\n # -i k\u2098 / ((2\u03c0)^(3/2) a\u2080)\n # * iint d\u03d5\u2080 dkx\n # * |kx|\n # * exp(-i k\u2098 M lD )\n #\n # 2) part with dependence of the z-coordinate\n #\n # exp( i (kx t\u22a5 + k\u2098 (M - 1) s\u2080) r )\n #\n # The filter (1) can be performed using the classical filter process\n # as in the backprojection algorithm.\n #\n #\n if count is not None:\n count.value += 1\n\n # Corresponding sample frequencies\n fx = np.fft.fftfreq(lN) # 1D array\n # kx is a 1D array.\n kx = 2 * np.pi * fx\n # Differentials for integral\n dphi0 = 2 * np.pi / A\n # We will later multiply with phi0.\n # a, x\n kx = kx.reshape(1, -1)\n # Low-pass filter:\n # less-than-or-equal would give us zero division error.\n filter_klp = (kx**2 < km**2)\n\n # Filter M so there are no nans from the root\n M = 1. 
/ km * np.sqrt((km**2 - kx**2) * filter_klp)\n\n prefactor = -1j * km / (2 * np.pi)\n prefactor *= dphi0\n prefactor *= np.abs(kx) * filter_klp\n # new in version 0.1.4:\n # We multiply by the factor (M-1) instead of just (M)\n # to take into account that we have a scattered\n # wave that is normalized by u0.\n prefactor *= np.exp(-1j * km * (M-1) * lD)\n # Perform filtering of the sinogram\n projection = np.fft.fft(sino, axis=-1) * prefactor\n\n #\n # filter (2) must be applied before rotation as well\n # exp( i (kx t\u22a5 + k\u2098 (M - 1) s\u2080) r )\n #\n # t\u22a5 = ( cos(\u03d5\u2080), sin(\u03d5\u2080) )\n # s\u2080 = ( -sin(\u03d5\u2080), cos(\u03d5\u2080) )\n #\n # This filter is effectively an inverse Fourier transform\n #\n # exp(i kx xD) exp(i k\u2098 (M - 1) yD )\n #\n # xD = x cos(\u03d5\u2080) + y sin(\u03d5\u2080)\n # yD = - x sin(\u03d5\u2080) + y cos(\u03d5\u2080)\n\n # Everything is in pixels\n center = ln / 2.0\n x = np.arange(lN) - center + .5\n # Meshgrid for output array\n yv = x.reshape(-1, 1)\n\n Mp = M.reshape(1, -1)\n filter2 = np.exp(1j * yv * km * (Mp - 1)) # .reshape(1,lN,lN)\n\n projection = projection.reshape(A, 1, lN) # * filter2\n\n # Prepare complex output image\n if onlyreal:\n outarr = np.zeros((ln, ln))\n else:\n outarr = np.zeros((ln, ln), dtype=np.dtype(complex))\n\n if count is not None:\n count.value += 1\n\n # Calculate backpropagations\n for i in np.arange(A):\n # Create an interpolation object of the projection.\n\n # interpolation of the rotated fourier transformed projection\n # this is already tiled onto the entire image.\n sino_filtered = np.fft.ifft(projection[i] * filter2, axis=-1)\n\n # Resize filtered sinogram back to original size\n sino = sino_filtered[:ln, padl:padl + ln]\n\n rotated_projr = scipy.ndimage.interpolation.rotate(\n sino.real, -angles[i] * 180 / np.pi,\n reshape=False, mode=\"constant\", cval=0)\n # Append results\n\n outarr += rotated_projr\n\n if not onlyreal:\n outarr += 1j * 
scipy.ndimage.interpolation.rotate(\n                sino.imag, -angles[i] * 180 / np.pi,\n                reshape=False, mode=\"constant\", cval=0)\n\n        if count is not None:\n            count.value += 1\n\n    return outarr", "response": "2D backpropagation of a sinogram with the Fourier diffraction theorem."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef color(x, y):\n\n    if (x-4) > (y-4) and -(y-4) <= (x-4):\n        # right\n        return \"#CDB95B\"\n    elif (x-4) > (y-4) and -(y-4) > (x-4):\n        # top\n        return \"#CD845B\"\n    elif (x-4) <= (y-4) and -(y-4) <= (x-4):\n        # bottom\n        return \"#57488E\"\n    elif (x-4) <= (y-4) and -(y-4) > (x-4):\n        # left\n        return \"#3B8772\"\n\n    # should not happen\n    return \"black\"", "response": "Returns the quadrant color (right, top, bottom or left) for the given point."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef single_read(self, register):\n    '''\n    Reads data from desired register only once.\n    '''\n\n    comm_reg = (0b00010 << 3) + register\n\n    if register == self.AD7730_STATUS_REG:\n        bytes_num = 1\n    elif register == self.AD7730_DATA_REG:\n        bytes_num = 3\n    elif register == self.AD7730_MODE_REG:\n        bytes_num = 2\n    elif register == self.AD7730_FILTER_REG:\n        bytes_num = 3\n    elif register == self.AD7730_DAC_REG:\n        bytes_num = 1\n    elif register == self.AD7730_OFFSET_REG:\n        bytes_num = 3\n    elif register == self.AD7730_GAIN_REG:\n        bytes_num = 3\n    elif register == self.AD7730_TEST_REG:\n        bytes_num = 3\n\n    command = [comm_reg] + ([0x00] * bytes_num)\n    spi.SPI_write(self.CS, command)\n    data = spi.SPI_read(bytes_num + 1)\n    return data[1:]", "response": "Reads data from the specified register only once."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getStatus(self):\n\n    \"\"\"\n    RDY - Ready Bit. This bit provides the status of the RDY flag from the part. The status and function of this bit is the same as the RDY output pin. 
A number of events set the RDY bit high as indicated in Table XVIII in datasheet\n\n STDY - Steady Bit. This bit is updated when the filter writes a result to the Data Register. If the filter is\n in FASTStep mode (see Filter Register section) and responding to a step input, the STDY bit\n remains high as the initial conversion results become available. The RDY output and bit are set\n low on these initial conversions to indicate that a result is available. If the STDY is high, however,\n it indicates that the result being provided is not from a fully settled second-stage FIR filter. When the\n FIR filter has fully settled, the STDY bit will go low coincident with RDY. If the part is never placed\n into its FASTStep mode, the STDY bit will go low at the first Data Register read and it is\n not cleared by subsequent Data Register reads. A number of events set the STDY bit high as indicated in Table XVIII. STDY is set high along with RDY by all events in the table except a Data Register read.\n\n STBY - Standby Bit. This bit indicates whether the AD7730 is in its Standby Mode or normal mode of\n operation. The part can be placed in its standby mode using the STANDBY input pin or by\n writing 011 to the MD2 to MD0 bits of the Mode Register. The power-on/reset status of this bit\n is 0 assuming the STANDBY pin is high.\n\n\n\n\n\nNOREF - No Reference Bit. If the voltage between the REF IN(+) and REF IN(-) pins is below 0.3 V, or either of these inputs is open-circuit, the NOREF bit goes to 1. If NOREF is active on completion of a conversion, the Data Register is loaded with all 1s. 
If NOREF is active on completion of a calibration, updating of the calibration registers is inhibited.\"\"\"\n\n    status = self.single_read(self.AD7730_STATUS_REG)\n    bits_values = dict([('NOREF',status[0] & 0x10 == 0x10),\n                        ('STBY',status[0] & 0x20 == 0x20),\n                        ('STDY',status[0] & 0x40 == 0x40),\n                        ('RDY',status[0] & 0x80 == 0x80)])\n    return bits_values", "response": "Returns the AD7730 status register flags (RDY, STDY, STBY, NOREF) as a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset the mode of the specified channel.", "response": "def setMode(self\n            ,mode \n            ,polarity \n            ,den \n            ,iovalue \n            ,data_length \n            ,reference \n            ,input_range \n            ,clock_enable \n            ,burn_out \n            ,channel):\n    '''\n    def setMode(self\n                ,mode = self.AD7730_IDLE_MODE \n                ,polarity = self.AD7730_UNIPOLAR_MODE\n                ,den = self.AD7730_IODISABLE_MODE\n                ,iovalue = 0b00\n                ,data_length = self.AD7730_24bitDATA_MODE\n                ,reference = self.AD7730_REFERENCE_5V\n                ,input_range = self.AD7730_40mVIR_MODE\n                ,clock_enable = self.AD7730_MCLK_ENABLE_MODE\n                ,burn_out = self.AD7730_BURNOUT_DISABLE\n                ,channel = self.AD7730_AIN1P_AIN1N\n                ):\n    '''\n    mode_MSB = (mode << 5) + (polarity << 4) + (den << 3) + (iovalue << 1) + data_length\n    mode_LSB = (reference << 7) + (0b0 << 6) + (input_range << 4) + (clock_enable << 3) + (burn_out << 2) + channel\n\n    self.single_write(self.AD7730_MODE_REG, [mode_MSB, mode_LSB])"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef downsample(work_dir, sample_name, fastq_left_fpath, fastq_right_fpath, downsample_to, num_pairs=None):\n    sample_name = sample_name or splitext(''.join(lc if lc == rc else '' for lc, rc in zip(fastq_left_fpath, fastq_right_fpath)))[0]\n\n    l_out_fpath = make_downsampled_fpath(work_dir, fastq_left_fpath)\n    r_out_fpath = make_downsampled_fpath(work_dir, fastq_right_fpath)\n    if can_reuse(l_out_fpath, [fastq_left_fpath, fastq_right_fpath]):\n        return l_out_fpath, r_out_fpath\n\n    info('Processing ' + 
sample_name)\n if num_pairs is None:\n info(sample_name + ': counting number of reads in fastq...')\n num_pairs = _count_records_in_fastq(fastq_left_fpath)\n if num_pairs > LIMIT:\n info(sample_name + ' the number of reads is higher than ' + str(LIMIT) +\n ', sampling from only first ' + str(LIMIT))\n num_pairs = LIMIT\n info(sample_name + ': ' + str(num_pairs) + ' reads')\n num_downsample_pairs = int(downsample_to * num_pairs) if isinstance(downsample_to, float) else downsample_to\n if num_pairs <= num_downsample_pairs:\n info(sample_name + ': and it is less than ' + str(num_downsample_pairs) + ', so no downsampling.')\n return fastq_left_fpath, fastq_right_fpath\n else:\n info(sample_name + ': downsampling to ' + str(num_downsample_pairs))\n rand_records = sorted(random.sample(range(num_pairs), num_downsample_pairs))\n\n info('Opening ' + fastq_left_fpath)\n fh1 = open_gzipsafe(fastq_left_fpath)\n info('Opening ' + fastq_right_fpath)\n fh2 = open_gzipsafe(fastq_right_fpath) if fastq_right_fpath else None\n\n out_files = (l_out_fpath, r_out_fpath) if r_out_fpath else (l_out_fpath,)\n\n written_records = 0\n with file_transaction(work_dir, out_files) as tx_out_files:\n if isinstance(tx_out_files, six.string_types):\n tx_out_f1 = tx_out_files\n else:\n tx_out_f1, tx_out_f2 = tx_out_files\n info('Opening ' + str(tx_out_f1) + ' to write')\n sub1 = open_gzipsafe(tx_out_f1, \"w\")\n info('Opening ' + str(tx_out_f2) + ' to write')\n sub2 = open_gzipsafe(tx_out_f2, \"w\") if r_out_fpath else None\n rec_no = -1\n for rr in rand_records:\n while rec_no < rr:\n rec_no += 1\n for i in range(4): fh1.readline()\n if fh2:\n for i in range(4): fh2.readline()\n for i in range(4):\n sub1.write(fh1.readline())\n if sub2:\n sub2.write(fh2.readline())\n written_records += 1\n if written_records % 10000 == 0:\n info(sample_name + ': written ' + str(written_records) + ', rec_no ' + str(rec_no + 1))\n if rec_no > num_pairs:\n info(sample_name + ' reached the limit of ' + str(num_pairs), 
' read lines, stopping.')\n                break\n        info(sample_name + ': done, written ' + str(written_records) + ', rec_no ' + str(rec_no))\n    fh1.close()\n    sub1.close()\n    if fastq_right_fpath:\n        fh2.close()\n        sub2.close()\n\n    info(sample_name + ': done downsampling, saved to ' + l_out_fpath + ' and ' + r_out_fpath + ', total ' + str(written_records) + ' paired reads written')\n    return l_out_fpath, r_out_fpath", "response": "Downsample a pair of FASTQ files to the requested number of read pairs and return the paths to the downsampled files."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _calculate_checksum(value):\n    # CRC\n    polynomial = 0x131  # //P(x)=x^8+x^5+x^4+1 = 100110001\n    crc = 0xFF\n\n    # calculates 8-Bit checksum with given polynomial\n    for byteCtr in [ord(x) for x in struct.pack(\">H\", value)]:\n        crc ^= byteCtr\n        for bit in range(8, 0, -1):\n            if crc & 0x80:\n                crc = (crc << 1) ^ polynomial\n            else:\n                crc = (crc << 1)\n    return crc", "response": "4.12 Checksum Calculation: computes the 8-bit CRC checksum of an unsigned short input."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run(self):\n\n    inside = 0\n    for draws in range(1, self.data['samples']):\n        # generate points and check whether they are inside the unit circle\n        r1, r2 = (random(), random())\n        if r1 ** 2 + r2 ** 2 < 1.0:\n            inside += 1\n\n        if draws % 1000 != 0:\n            continue\n\n        # debug\n        yield self.emit('log', {'draws': draws, 'inside': inside})\n\n        # calculate pi and its uncertainty given the current draws\n        p = inside / draws\n        pi = {\n            'estimate': 4.0 * inside / draws,\n            'uncertainty': 4.0 * math.sqrt(draws * p * (1.0 - p)) / draws,\n        }\n\n        # send status to frontend\n        yield self.set_state(pi=pi)\n\n    yield self.emit('log', {'action': 'done'})", "response": "Estimates pi by Monte Carlo sampling; runs when the button is pressed."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef on_action(action):\n    @wrapt.decorator\n    @tornado.gen.coroutine\n    def _execute(wrapped, instance, 
args, kwargs):\n return wrapped(*args, **kwargs)\n\n _execute.action = action\n return _execute", "response": "Decorator for action handlers."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef init_datastores(self):\n self.data = Datastore(self.id_)\n self.data.subscribe(lambda data: self.emit('data', data))\n self.class_data = Datastore(type(self).__name__)\n self.class_data.subscribe(lambda data: self.emit('class_data', data))", "response": "Initialize datastores for this analysis instance."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nemits a signal to the frontend.", "response": "def emit(self, signal, message='__nomessagetoken__'):\n \"\"\"Emit a signal to the frontend.\n\n :param str signal: name of the signal\n :param message: message to send\n :returns: return value from frontend emit function\n :rtype: tornado.concurrent.Future\n \"\"\"\n # call pre-emit hooks\n if signal == 'log':\n self.log_backend.info(message)\n elif signal == 'warn':\n self.log_backend.warn(message)\n elif signal == 'error':\n self.log_backend.error(message)\n\n return self.emit_to_frontend(signal, message)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set(self, i, value):\n value_encoded = encode(value, self.get_change_trigger(i))\n\n if i in self.data and self.data[i] == value_encoded:\n return self\n\n self.data[i] = value_encoded\n return self.trigger_changed(i)", "response": "Set value at position i and return a Future."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting a value at key and return a Future.", "response": "def set(self, key, value):\n \"\"\"Set a value at key and return a Future.\n\n :rtype: tornado.concurrent.Future\n \"\"\"\n value_encoded = encode(value, self.get_change_trigger(key))\n\n if key in self.data and self.data[key] == value_encoded:\n return self\n\n 
self.data[key] = value_encoded\n return self.trigger_changed(key)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update(self, new_data):\n for k, v in new_data.items():\n self[k] = v\n\n return self", "response": "Update the dictionary with the new data."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntriggering all callbacks that were set with on_change.", "response": "def trigger_all_change_callbacks(self):\n \"\"\"Trigger all callbacks that were set with on_change().\"\"\"\n return [\n ret\n for key in DatastoreLegacy.store[self.domain].keys()\n for ret in self.trigger_change_callbacks(key)\n ]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets value at key and return a Future", "response": "def set(self, key, value):\n \"\"\"Set value at key and return a Future\n\n :rtype: tornado.concurrent.Future\n \"\"\"\n return DatastoreLegacy.store[self.domain].set(key, value)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef init(self, key_value_pairs):\n for k, v in key_value_pairs.items():\n if k not in DatastoreLegacy.store[self.domain]:\n DatastoreLegacy.store[self.domain][k] = v", "response": "Initialize datastore.\n\n Only sets values for keys that are not in the datastore already.\n\n :param dict key_value_pairs:\n A set of key value pairs to use to initialize the datastore."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncloses and delete instance.", "response": "def close(self):\n \"\"\"Close and delete instance.\"\"\"\n\n # remove callbacks\n DatastoreLegacy.datastores[self.domain].remove(self)\n\n # delete data after the last instance is gone\n if self.release_storage and \\\n not DatastoreLegacy.datastores[self.domain]:\n del DatastoreLegacy.store[self.domain]\n\n del self"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script to\ngenerate html and dirhtml output.", "response": "def gen(skipdirhtml=False):\n    \"\"\"Generate html and dirhtml output.\"\"\"\n    docs_changelog = 'docs/changelog.rst'\n    check_git_unchanged(docs_changelog)\n    pandoc('--from=markdown', '--to=rst', '--output=' + docs_changelog, 'CHANGELOG.md')\n    if not skipdirhtml:\n        sphinx_build['-b', 'dirhtml', '-W', '-E', 'docs', 'docs/_build/dirhtml'] & FG\n    sphinx_build['-b', 'html', '-W', '-E', 'docs', 'docs/_build/html'] & FG"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef __reset_crosshair(self):\n    self.lhor.set_ydata(self.y_coord)\n    self.lver.set_xdata(self.x_coord)", "response": "Redraw the cross-hair on the horizontal slice plot."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting up the vertical profile plot.", "response": "def __init_vertical_plot(self):\n    \"\"\"\n    set up the vertical profile plot\n\n    Returns\n    -------\n    \"\"\"\n    # clear the plot if lines have already been drawn on it\n    if len(self.ax2.lines) > 0:\n        self.ax2.cla()\n    # set up the vertical profile plot\n    self.ax2.set_ylabel(self.datalabel, fontsize=self.fontsize)\n    self.ax2.set_xlabel(self.spectrumlabel, fontsize=self.fontsize)\n    self.ax2.set_title('vertical point profiles', fontsize=self.fontsize)\n    self.ax2.set_xlim([1, self.bands])\n    # plot vertical line at the slider position\n    self.vline = self.ax2.axvline(self.slider.value, color='black')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef __onclick(self, event):\n    # only do something if the first plot has been clicked on\n    if event.inaxes == self.ax1:\n\n        # retrieve the click coordinates\n        self.x_coord = event.xdata\n        self.y_coord = event.ydata\n\n        # redraw the cross-hair\n        self.__reset_crosshair()\n\n        x, y = self.__map2img(self.x_coord, self.y_coord)\n        subset_vertical = 
self.__read_timeseries(x, y)\n\n        # redraw/clear the vertical profile plot in case stacking is disabled\n        if not self.checkbox.value:\n            self.__init_vertical_plot()\n\n        # plot the vertical profile\n        label = 'x: {0:03}; y: {1:03}'.format(x, y)\n        self.ax2.plot(self.timestamps, subset_vertical, label=label)\n        self.ax2_legend = self.ax2.legend(loc=0, prop={'size': 7}, markerscale=1)", "response": "This function is called when the mouse is clicked on the image plot. It redraws the cross-hair and updates the vertical profile plot."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a function suitable for passing as the more_formatters argument to Template.", "response": "def LookupChain(lookup_func_list):\n    \"\"\"Returns a *function* suitable for passing as the more_formatters argument\n    to Template.\n\n    NOTE: In Java, this would be implemented using the 'Composite' pattern. A\n    *list* of formatter lookup functions behaves the same as a *single* formatter\n    lookup function.\n\n    Note the distinction between formatter *lookup* functions and formatter\n    functions here.\n    \"\"\"\n    def MoreFormatters(formatter_name):\n        for lookup_func in lookup_func_list:\n            formatter_func = lookup_func(formatter_name)\n            if formatter_func is not None:\n                return formatter_func\n\n    return MoreFormatters"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nusing Python % format strings as template format specifiers.", "response": "def PythonPercentFormat(format_str):\n    \"\"\"Use Python % format strings as template format specifiers.\"\"\"\n\n    if format_str.startswith('printf '):\n        fmt = format_str[len('printf '):]\n        return lambda value: fmt % value\n    else:\n        return None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef Plural(format_str):\n    if format_str.startswith('plural?'):\n        i = len('plural?')\n\n        try:\n            splitchar = format_str[i]  # Usually a space, but could be something else\n            _, plural_val, 
singular_val = format_str.split(splitchar)\n        except IndexError:\n            raise Error('plural? must have exactly 2 arguments')\n\n        def Formatter(value):\n            plural = False\n            if isinstance(value, int) and value > 1:\n                plural = True\n            if isinstance(value, list) and len(value) > 1:\n                plural = True\n\n            if plural:\n                return plural_val\n            else:\n                return singular_val\n\n        return Formatter\n\n    else:\n        return None", "response": "Returns a formatter that yields the plural or singular string depending on whether the value counts as plural."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning all merged CDSs for a genome", "response": "def get_merged_cds(genome):\n    \"\"\"\n    Returns all CDS merged, used:\n    - for TargQC general reports CDS coverage statistics for WGS\n    - for Seq2C CNV calling when no capture BED available\n    \"\"\"\n    bed = get_all_features(genome)\n    debug('Filtering BEDTool for high confidence CDS and stop codons')\n    return bed\\\n        .filter(lambda r: r.fields[BedCols.FEATURE] in ['CDS', 'stop_codon'])\\\n        .filter(high_confidence_filter)\\\n        .merge()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get(relative_path, genome=None):\n    chrom = None\n    if genome:\n        if '-chr' in genome:\n            genome, chrom = genome.split('-')\n        check_genome(genome)\n        relative_path = relative_path.format(genome=genome)\n\n    path = abspath(join(dirname(__file__), relative_path))\n    if not isfile(path) and isfile(path + '.gz'):\n        path += '.gz'\n\n    if path.endswith('.bed') or path.endswith('.bed.gz'):\n        if path.endswith('.bed.gz'):\n            bedtools = which('bedtools')\n            if not bedtools:\n                critical('bedtools not found in PATH: ' + str(os.environ['PATH']))\n            debug('BED is compressed, creating BedTool')\n            bed = BedTool(path)\n        else:\n            debug('BED is uncompressed, creating BedTool')\n            bed = BedTool(path)\n\n        if chrom:\n            debug('Filtering BEDTool for chrom ' + chrom)\n            bed = bed.filter(lambda r: r.chrom == chrom)\n        return bed\n    else:\n        return path", "response": "Returns the 
BedTools object for the given path."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run(cmd, output_fpath=None, input_fpath=None, checks=None, stdout_to_outputfile=True,\n stdout_tx=True, reuse=False, env_vars=None):\n \"\"\"Run the provided command, logging details and checking for errors.\n \"\"\"\n if output_fpath and reuse:\n if verify_file(output_fpath, silent=True):\n info(output_fpath + ' exists, reusing')\n return output_fpath\n if not output_fpath.endswith('.gz') and verify_file(output_fpath + '.gz', silent=True):\n info(output_fpath + '.gz exists, reusing')\n return output_fpath\n\n env = os.environ.copy()\n if env_vars:\n for k, v in env_vars.items():\n if v is None:\n if k in env:\n del env[k]\n else:\n env[k] = v\n\n if checks is None:\n checks = [file_nonempty_check]\n\n def _try_run(_cmd, _output_fpath, _input_fpath):\n try:\n info(' '.join(str(x) for x in _cmd) if not isinstance(_cmd, six.string_types) else _cmd)\n _do_run(_cmd, checks, env, _output_fpath, _input_fpath)\n except:\n raise\n\n if output_fpath:\n if isfile(output_fpath):\n os.remove(output_fpath)\n if output_fpath:\n if stdout_tx:\n with file_transaction(None, output_fpath) as tx_out_file:\n if stdout_to_outputfile:\n cmd += ' > ' + tx_out_file\n else:\n cmd += '\\n'\n cmd = cmd.replace(' ' + output_fpath + ' ', ' ' + tx_out_file + ' ') \\\n .replace(' \"' + output_fpath + '\" ', ' ' + tx_out_file + '\" ') \\\n .replace(' \\'' + output_fpath + '\\' ', ' ' + tx_out_file + '\\' ') \\\n .replace(' ' + output_fpath + '\\n', ' ' + tx_out_file) \\\n .replace(' \"' + output_fpath + '\"\\n', ' ' + tx_out_file + '\"') \\\n .replace(' \\'' + output_fpath + '\\'\\n', ' ' + tx_out_file + '\\'') \\\n .replace('\\n', '')\n _try_run(cmd, tx_out_file, input_fpath)\n else:\n _try_run(cmd, output_fpath, input_fpath)\n\n else:\n _try_run(cmd, None, input_fpath)", "response": "Run the provided command and return the path to the output file."} 
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _correct_qualimap_genome_results(samples):\n for s in samples:\n if verify_file(s.qualimap_genome_results_fpath):\n correction_is_needed = False\n with open(s.qualimap_genome_results_fpath, 'r') as f:\n content = f.readlines()\n metrics_started = False\n for line in content:\n if \">> Reference\" in line:\n metrics_started = True\n if metrics_started:\n if line.find(',') != -1:\n correction_is_needed = True\n break\n if correction_is_needed:\n with open(s.qualimap_genome_results_fpath, 'w') as f:\n metrics_started = False\n for line in content:\n if \">> Reference\" in line:\n metrics_started = True\n if metrics_started:\n if line.find(',') != -1:\n line = line.replace(',', '')\n f.write(line)", "response": "fix java. lang. Double. parseDouble error on entries like 6082. 49"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _correct_qualimap_insert_size_histogram(work_dir, samples):\n for s in samples:\n qualimap1_dirname = dirname(s.qualimap_ins_size_hist_fpath).replace('raw_data_qualimapReport', 'raw_data')\n qualimap2_dirname = dirname(s.qualimap_ins_size_hist_fpath)\n if exists(qualimap1_dirname):\n if not exists(qualimap2_dirname):\n shutil.move(qualimap1_dirname, qualimap2_dirname)\n else:\n shutil.rmtree(qualimap1_dirname)\n elif not exists(qualimap2_dirname):\n continue # no data from both Qualimap v.1 and Qualimap v.2\n\n # if qualimap histogram exits and reuse_intermediate, skip\n if verify_file(s.qualimap_ins_size_hist_fpath, silent=True) and tc.reuse_intermediate:\n pass\n else:\n if verify_file(s.picard_ins_size_hist_txt_fpath):\n with open(s.picard_ins_size_hist_txt_fpath, 'r') as picard_f:\n one_line_to_stop = False\n for line in picard_f:\n if one_line_to_stop:\n break\n if line.startswith('## HISTOGRAM'):\n one_line_to_stop = True\n\n with file_transaction(work_dir, 
s.qualimap_ins_size_hist_fpath) as tx:\n with open(tx, 'w') as qualimap_f:\n for line in picard_f:\n qualimap_f.write(line)", "response": "This function takes a list of samples and corrects the insert size histogram."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef norm_vec(vector):\n assert len(vector) == 3\n v = np.array(vector)\n return v/np.sqrt(np.sum(v**2))", "response": "Normalize the length of a vector to one"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rotate_points_to_axis(points, axis):\n axis = norm_vec(axis)\n u, v, w = axis\n points = np.array(points)\n # Determine the rotational angle in the x-z plane\n phi = np.arctan2(u, w)\n # Determine the tilt angle w.r.t. the y-axis\n theta = np.arccos(v)\n\n # Negative rotation about y-axis\n Rphi = np.array([\n [np.cos(phi), 0, -np.sin(phi)],\n [0, 1, 0],\n [np.sin(phi), 0, np.cos(phi)],\n ])\n\n # Negative rotation about x-axis\n Rtheta = np.array([\n [1, 0, 0],\n [0, np.cos(theta), np.sin(theta)],\n [0, -np.sin(theta), np.cos(theta)],\n ])\n\n DR1 = np.dot(Rtheta, Rphi)\n # Rotate back by -phi such that effective rotation was only\n # towards [0,1,0].\n DR = np.dot(Rphi.T, DR1)\n rotpoints = np.zeros((len(points), 3))\n for ii, pnt in enumerate(points):\n rotpoints[ii] = np.dot(DR, pnt)\n\n # For visualization:\n # import matplotlib.pylab as plt\n # from mpl_toolkits.mplot3d import Axes3D\n # from matplotlib.patches import FancyArrowPatch\n # from mpl_toolkits.mplot3d import proj3d\n #\n # class Arrow3D(FancyArrowPatch):\n # def __init__(self, xs, ys, zs, *args, **kwargs):\n # FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)\n # self._verts3d = xs, ys, zs\n #\n # def draw(self, renderer):\n # xs3d, ys3d, zs3d = self._verts3d\n # xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)\n # self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))\n # 
FancyArrowPatch.draw(self, renderer)\n #\n # fig = plt.figure(figsize=(10,10))\n # ax = fig.add_subplot(111, projection='3d')\n # for vec in rotpoints:\n # u,v,w = vec\n # a = Arrow3D([0,u],[0,v],[0,w],\n # mutation_scale=20, lw=1, arrowstyle=\"-|>\")\n # ax.add_artist(a)\n #\n # radius=1\n # ax.set_xlabel('X')\n # ax.set_ylabel('Y')\n # ax.set_zlabel('Z')\n # ax.set_xlim(-radius*1.5, radius*1.5)\n # ax.set_ylim(-radius*1.5, radius*1.5)\n # ax.set_zlim(-radius*1.5, radius*1.5)\n # plt.tight_layout()\n # plt.show()\n\n return rotpoints", "response": "Rotates a list of points along a reference axis."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncompute the rotation matrix that rotates from [0,0,1] to point.", "response": "def rotation_matrix_from_point(point, ret_inv=False):\n \"\"\"Compute rotation matrix to go from [0,0,1] to `point`.\n\n First, the matrix rotates to in the polar direction. Then,\n a rotation about the y-axis is performed to match the\n azimuthal angle in the x-z-plane.\n\n This rotation matrix is required for the correct 3D orientation\n of the backpropagated projections.\n\n Parameters\n ----------\n points: list-like, length 3\n The coordinates of the point in 3D.\n ret_inv: bool\n Also return the inverse of the rotation matrix. 
The inverse\n is required for :func:`scipy.ndimage.interpolation.affine_transform`\n which maps the output coordinates to the input coordinates.\n\n Returns\n -------\n Rmat [, Rmat_inv]: 3x3 ndarrays\n The rotation matrix that rotates [0,0,1] to `point` and\n optionally its inverse.\n \"\"\"\n x, y, z = point\n # azimuthal angle\n phi = np.arctan2(x, z)\n # angle in polar direction (negative)\n theta = -np.arctan2(y, np.sqrt(x**2+z**2))\n\n # Rotation in polar direction\n Rtheta = np.array([\n [1, 0, 0],\n [0, np.cos(theta), -np.sin(theta)],\n [0, np.sin(theta), np.cos(theta)],\n ])\n\n # rotation in x-z-plane\n Rphi = np.array([\n [np.cos(phi), 0, -np.sin(phi)],\n [0, 1, 0],\n [np.sin(phi), 0, np.cos(phi)],\n ])\n\n D = np.dot(Rphi, Rtheta)\n # The inverse of D\n Dinv = np.dot(Rtheta.T, Rphi.T)\n\n if ret_inv:\n return D, Dinv\n else:\n return D"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rotation_matrix_from_point_planerot(point, plane_angle, ret_inv=False):\n # These matrices are correct if there is no tilt of the\n # rotational axis within the detector plane (x-y).\n D, Dinv = rotation_matrix_from_point(point, ret_inv=True)\n\n # We need an additional rotation about the z-axis to correct\n # for the tilt for all the the other cases.\n angz = plane_angle\n\n Rz = np.array([\n [np.cos(angz), -np.sin(angz), 0],\n [np.sin(angz), np.cos(angz), 0],\n [0, 0, 1],\n ])\n\n DR = np.dot(D, Rz)\n DRinv = np.dot(Rz.T, Dinv)\n\n if ret_inv:\n return DR, DRinv\n else:\n return DR", "response": "Compute the rotation matrix that rotates to a point on a 3D plane."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sphere_points_from_angles_and_tilt(angles, tilted_axis):\n assert len(angles.shape) == 1\n # Normalize tilted axis.\n tilted_axis = norm_vec(tilted_axis)\n [u, v, w] = tilted_axis\n # Initial distribution of points about great circle (x-z).\n 
newang = np.zeros((angles.shape[0], 3), dtype=float)\n # We subtract angles[0], because in step (a) we want that\n # newang[0]==[0,0,1]. This only works if we actually start\n # at that point.\n newang[:, 0] = np.sin(angles-angles[0])\n newang[:, 2] = np.cos(angles-angles[0])\n\n # Compute rotational angles w.r.t. [0,1,0].\n # - Draw a unit sphere with the y-axis pointing up and the\n # z-axis pointing right\n # - The rotation of `tilted_axis` can be described by two\n # separate rotations. We will use these two angles:\n # (a) Rotation from y=1 within the y-z plane: theta\n # This is the rotation that is critical for data\n # reconstruction. If this angle is zero, then we\n # have a rotational axis in the imaging plane. If\n # this angle is PI/2, then our sinogram consists\n # of a rotating image and 3D reconstruction is\n # impossible. This angle is counted from the y-axis\n # onto the x-z plane.\n # (b) Rotation in the x-z plane: phi\n # This angle is responsible for matching up the angles\n # with the correct sinogram images. If this angle is zero,\n # then the projection of the rotational axis onto the\n # x-y plane is aligned with the y-axis. If this angle is\n # PI/2, then the axis and its projection onto the x-y\n # plane are identical. This angle is counted from the\n # positive z-axis towards the positive x-axis. By default,\n # angles[0] is the point that touches the great circle\n # that lies in the x-z plane. 
angles[1] is the next point\n # towards the x-axis if phi==0.\n\n # (a) This angle is the azimuthal angle theta measured from the\n # y-axis.\n theta = np.arccos(v)\n\n # (b) This is the polar angle measured in the x-z plane starting\n # at the x-axis and measured towards the positive z-axis.\n if np.allclose(u, 0) and np.allclose(w, 0):\n # Avoid flipping the axis of rotation due to numerical\n # errors during its computation.\n phi = 0\n else:\n phi = np.arctan2(u, w)\n\n # Determine the projection points on the unit sphere.\n # The resulting circle meets the x-z-plane at phi, and\n # is tilted by theta w.r.t. the y-axis.\n\n # (a) Create a tilted data set. This is achieved in 3 steps.\n\n # a1) Determine radius of tilted circle and get the centered\n # circle with a smaller radius.\n rtilt = np.cos(theta)\n newang *= rtilt\n\n # a2) Rotate this circle about the x-axis by theta\n # (right-handed/counter-clockwise/basic/elemental rotation)\n Rx = np.array([\n [1, 0, 0],\n [0, np.cos(theta), -np.sin(theta)],\n [0, np.sin(theta), np.cos(theta)]\n ])\n for ii in range(newang.shape[0]):\n newang[ii] = np.dot(Rx, newang[ii])\n\n # a3) Shift newang such that newang[0] is located at (0,0,1)\n newang = newang - (newang[0] - np.array([0, 0, 1])).reshape(1, 3)\n\n # (b) Rotate the entire thing with phi about the y-axis\n # (right-handed/counter-clockwise/basic/elemental rotation)\n Ry = np.array([\n [+np.cos(phi), 0, np.sin(phi)],\n [0, 1, 0],\n [-np.sin(phi), 0, np.cos(phi)]\n ])\n\n for jj in range(newang.shape[0]):\n newang[jj] = np.dot(Ry, newang[jj])\n\n # For visualization:\n # import matplotlib.pylab as plt\n # from mpl_toolkits.mplot3d import Axes3D\n # from matplotlib.patches import FancyArrowPatch\n # from mpl_toolkits.mplot3d import proj3d\n #\n # class Arrow3D(FancyArrowPatch):\n # def __init__(self, xs, ys, zs, *args, **kwargs):\n # FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)\n # self._verts3d = xs, ys, zs\n #\n # def draw(self, renderer):\n # 
xs3d, ys3d, zs3d = self._verts3d\n # xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)\n # self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))\n # FancyArrowPatch.draw(self, renderer)\n #\n # fig = plt.figure(figsize=(10,10))\n # ax = fig.add_subplot(111, projection='3d')\n # for vec in newang:\n # u,v,w = vec\n # a = Arrow3D([0,u],[0,v],[0,w],\n # mutation_scale=20, lw=1, arrowstyle=\"-|>\")\n # ax.add_artist(a)\n #\n # radius=1\n # ax.set_xlabel('X')\n # ax.set_ylabel('Y')\n # ax.set_zlabel('Z')\n # ax.set_xlim(-radius*1.5, radius*1.5)\n # ax.set_ylim(-radius*1.5, radius*1.5)\n # ax.set_zlim(-radius*1.5, radius*1.5)\n # plt.tight_layout()\n # plt.show()\n\n return newang", "response": "Compute the points on a unit sphere that correspond to the distribution of points about the given angles and tilt."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngiving a set of file names produce a list of names consisting of the uniq parts of the names. This works from the end of the name.", "response": "def filenames_to_uniq(names,new_delim='.'):\n '''\n Given a set of file names, produce a list of names consisting of the\n uniq parts of the names. This works from the end of the name. Chunks of\n the name are split on '.' 
and '-'.\n \n For example:\n A.foo.bar.txt\n B.foo.bar.txt\n returns: ['A','B']\n \n AA.BB.foo.txt\n CC.foo.txt\n returns: ['AA.BB','CC']\n \n '''\n name_words = []\n maxlen = 0\n for name in names:\n name_words.append(name.replace('.',' ').replace('-',' ').strip().split())\n name_words[-1].reverse()\n if len(name_words[-1]) > maxlen:\n maxlen = len(name_words[-1])\n\n common = [False,] * maxlen\n for i in range(maxlen):\n last = None\n same = True\n for nameword in name_words:\n if i >= len(nameword):\n same = False\n break\n if not last:\n last = nameword[i]\n elif nameword[i] != last:\n same = False\n break\n common[i] = same\n\n newnames = []\n for nameword in name_words:\n nn = []\n for (i, val) in enumerate(common):\n if not val and i < len(nameword):\n nn.append(nameword[i])\n nn.reverse()\n newnames.append(new_delim.join(nn))\n \n return newnames"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run_process(self, analysis, action_name, message='__nomessagetoken__'):\n\n if action_name == 'connect':\n analysis.on_connect(self.executable, self.zmq_publish)\n\n while not analysis.zmq_handshake:\n yield tornado.gen.sleep(0.1)\n\n log.debug('sending action {}'.format(action_name))\n analysis.zmq_send({'signal': action_name, 'load': message})\n\n if action_name == 'disconnected':\n # Give kernel time to process disconnected message.\n yield tornado.gen.sleep(0.1)\n analysis.on_disconnected()", "response": "Executes an action in the analysis with the given message."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_git_unchanged(filename, yes=False):\n if check_staged(filename):\n s = 'There are staged changes in {}, overwrite? 
[y/n] '.format(filename)\n if yes or input(s) in ('y', 'yes'):\n return\n else:\n raise RuntimeError('There are staged changes in '\n '{}, aborting.'.format(filename))\n if check_unstaged(filename):\n s = 'There are unstaged changes in {}, overwrite? [y/n] '.format(filename)\n if yes or input(s) in ('y', 'yes'):\n return\n else:\n raise RuntimeError('There are unstaged changes in '\n '{}, aborting.'.format(filename))", "response": "Check git to avoid overwriting user changes."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_staged(filename=None):\n retcode, _, stdout = git['diff-index', '--quiet', '--cached', 'HEAD',\n filename].run(retcode=None)\n if retcode == 1:\n return True\n elif retcode == 0:\n return False\n else:\n raise RuntimeError(stdout)", "response": "Check if there are changes to be committed in the index."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nexecuting an action in the analysis with the given message.", "response": "def run_process(analysis, action_name, message='__nomessagetoken__'):\n \"\"\"Executes an action in the analysis with the given message.\n\n It also handles the start and stop signals in the case that message\n is a `dict` with a key ``__process_id``.\n\n :param str action_name: Name of the action to trigger.\n :param message: Message.\n :param callback:\n A callback function when done (e.g.\n :meth:`~tornado.testing.AsyncTestCase.stop` in tests).\n :rtype: tornado.concurrent.Future\n \"\"\"\n\n if analysis is None:\n return\n\n # detect process_id\n process_id = None\n if isinstance(message, dict) and '__process_id' in message:\n process_id = message['__process_id']\n del message['__process_id']\n\n if process_id:\n yield analysis.emit('__process',\n {'id': process_id, 'status': 'start'})\n\n fns = [\n functools.partial(handler, analysis)\n for handler in (analysis._action_handlers.get(action_name, []) +\n 
analysis._action_handlers.get('*', []))\n ]\n if fns:\n args, kwargs = [], {}\n\n # Check whether this is a list (positional arguments)\n # or a dictionary (keyword arguments).\n if isinstance(message, list):\n args = message\n elif isinstance(message, dict):\n kwargs = message\n elif message == '__nomessagetoken__':\n pass\n else:\n args = [message]\n\n for fn in fns:\n log.debug('calling {}'.format(fn))\n try:\n yield tornado.gen.maybe_future(fn(*args, **kwargs))\n except Exception as e:\n yield analysis.emit('error', 'an Exception occured')\n raise e\n else:\n yield analysis.emit('warn',\n 'no handler for {}'.format(action_name))\n\n if process_id:\n yield analysis.emit('__process',\n {'id': process_id, 'status': 'end'})"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndownloads url to dest", "response": "def dl_file(url, dest, chunk_size=6553):\n \"\"\"Download `url` to `dest`\"\"\"\n import urllib3\n http = urllib3.PoolManager()\n r = http.request('GET', url, preload_content=False)\n with dest.open('wb') as out:\n while True:\n data = r.read(chunk_size)\n if data is None or len(data) == 0:\n break\n out.write(data)\n r.release_conn()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nextract an lzma file and return the temporary file name", "response": "def extract_lzma(path):\n \"\"\"Extract an lzma file and return the temporary file name\"\"\"\n tlfile = pathlib.Path(path)\n # open lzma file\n with tlfile.open(\"rb\") as td:\n data = lzma.decompress(td.read())\n # write temporary tar file\n fd, tmpname = tempfile.mkstemp(prefix=\"odt_ex_\", suffix=\".tar\")\n with open(fd, \"wb\") as fo:\n fo.write(data)\n return tmpname"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndownloads a file from the ODTbrain GitHub repository.", "response": "def get_file(fname, datapath=datapath):\n \"\"\"Return path of an example data file\n\n Return the full path to 
an example data file name.\n If the file does not exist in the `datapath` directory,\n tries to download it from the ODTbrain GitHub repository.\n \"\"\"\n # download location\n datapath = pathlib.Path(datapath)\n datapath.mkdir(parents=True, exist_ok=True)\n\n dlfile = datapath / fname\n if not dlfile.exists():\n print(\"Attempting to download file {} from {} to {}.\".\n format(fname, webloc, datapath))\n try:\n dl_file(url=webloc+fname, dest=dlfile)\n except BaseException:\n warnings.warn(\"Download failed: {}\".format(fname))\n raise\n return dlfile"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload example sinogram data from a .tar.lzma file", "response": "def load_tar_lzma_data(tlfile):\n \"\"\"Load example sinogram data from a .tar.lzma file\"\"\"\n tmpname = extract_lzma(tlfile)\n\n # open tar file\n fields_real = []\n fields_imag = []\n phantom = []\n parms = {}\n\n with tarfile.open(tmpname, \"r\") as t:\n members = t.getmembers()\n members.sort(key=lambda x: x.name)\n\n for m in members:\n n = m.name\n f = t.extractfile(m)\n if n.startswith(\"fdtd_info\"):\n for ln in f.readlines():\n ln = ln.decode()\n if ln.count(\"=\") == 1:\n key, val = ln.split(\"=\")\n parms[key.strip()] = float(val.strip())\n elif n.startswith(\"phantom\"):\n phantom.append(np.loadtxt(f))\n elif n.startswith(\"field\"):\n if n.endswith(\"imag.txt\"):\n fields_imag.append(np.loadtxt(f))\n elif n.endswith(\"real.txt\"):\n fields_real.append(np.loadtxt(f))\n\n try:\n os.remove(tmpname)\n except OSError:\n pass\n\n phantom = np.array(phantom)\n sino = np.array(fields_real) + 1j * np.array(fields_imag)\n angles = np.linspace(0, 2 * np.pi, sino.shape[0], endpoint=False)\n\n return sino, angles, phantom, parms"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef load_zip_data(zipname, f_sino_real, f_sino_imag,\n f_angles=None, f_phantom=None, f_info=None):\n \"\"\"Load example sinogram data from 
a .zip file\"\"\"\n ret = []\n with zipfile.ZipFile(str(zipname)) as arc:\n sino_real = np.loadtxt(arc.open(f_sino_real))\n sino_imag = np.loadtxt(arc.open(f_sino_imag))\n sino = sino_real + 1j * sino_imag\n ret.append(sino)\n if f_angles:\n angles = np.loadtxt(arc.open(f_angles))\n ret.append(angles)\n if f_phantom:\n phantom = np.loadtxt(arc.open(f_phantom))\n ret.append(phantom)\n if f_info:\n with arc.open(f_info) as info:\n cfg = {}\n for li in info.readlines():\n li = li.decode()\n if li.count(\"=\") == 1:\n key, val = li.split(\"=\")\n cfg[key.strip()] = float(val.strip())\n ret.append(cfg)\n return ret", "response": "Load example sinogram data from a .zip file"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fourier_map_2d(uSin, angles, res, nm, lD=0, semi_coverage=False,\n coords=None, count=None, max_count=None, verbose=0):\n r\"\"\"2D Fourier mapping with the Fourier diffraction theorem\n\n Two-dimensional diffraction tomography reconstruction\n algorithm for scattering of a plane wave\n :math:`u_0(\\mathbf{r}) = u_0(x,z)`\n by a dielectric object with refractive index\n :math:`n(x,z)`.\n\n This function implements the solution by interpolation in\n Fourier space.\n\n Parameters\n ----------\n uSin: (A,N) ndarray\n Two-dimensional sinogram of line recordings\n :math:`u_{\\mathrm{B}, \\phi_j}(x_\\mathrm{D})`\n divided by the incident plane wave :math:`u_0(l_\\mathrm{D})`\n measured at the detector.\n angles: (A,) ndarray\n Angular positions :math:`\\phi_j` of `uSin` in radians.\n res: float\n Vacuum wavelength of the light :math:`\\lambda` in pixels.\n nm: float\n Refractive index of the surrounding medium :math:`n_\\mathrm{m}`.\n lD: float\n Distance from center of rotation to detector plane\n :math:`l_\\mathrm{D}` in pixels.\n semi_coverage: bool\n If set to `True`, it is assumed that the sinogram does not\n necessarily cover the full angular range from 0 to 2\u03c0, but an\n equidistant coverage over 
2\u03c0 can be achieved by inferring point\n (anti)symmetry of the (imaginary) real parts of the Fourier\n transform of f. Valid for any set of angles {X} that result in\n a 2\u03c0 coverage with the union set {X}U{X+\u03c0}.\n coords: None [(2,M) ndarray]\n Computes only the output image at these coordinates. This\n keyword is reserved for future versions and is not\n implemented yet.\n count, max_count: multiprocessing.Value or `None`\n Can be used to monitor the progress of the algorithm.\n Initially, the value of `max_count.value` is incremented\n by the total number of steps. At each step, the value\n of `count.value` is incremented.\n verbose: int\n Increment to increase verbosity.\n\n\n Returns\n -------\n f: ndarray of shape (N,N), complex if `onlyreal` is `False`\n Reconstructed object function :math:`f(\\mathbf{r})` as defined\n by the Helmholtz equation.\n :math:`f(x,z) =\n k_m^2 \\left(\\left(\\frac{n(x,z)}{n_m}\\right)^2 -1\\right)`\n\n\n See Also\n --------\n backpropagate_2d: implementation by backpropagation\n odt_to_ri: conversion of the object function :math:`f(\\mathbf{r})`\n to refractive index :math:`n(\\mathbf{r})`\n\n Notes\n -----\n Do not use the parameter `lD` in combination with the Rytov\n approximation - the propagation is not correctly described.\n Instead, numerically refocus the sinogram prior to converting\n it to Rytov data (using e.g. :func:`odtbrain.sinogram_as_rytov`)\n with a numerical focusing algorithm (available in the Python\n package :py:mod:`nrefocus`).\n\n The interpolation in Fourier space (which is done with\n :func:`scipy.interpolate.griddata`) may be unstable and lead to\n artifacts if the data to interpolate contains sharp spikes. 
This\n issue is not handled at all by this method (in fact, a test has\n been removed in version 0.2.6 because ``griddata`` gave different\n results on Windows and Linux).\n \"\"\"\n ##\n ##\n # TODO:\n # - zero-padding as for backpropagate_2D - However this is not\n # necessary as Fourier interpolation is not parallelizable with\n # multiprocessing and thus unattractive. Could be interesting for\n # specific environments without the Python GIL.\n # - Deal with oversampled data. Maybe issue a warning.\n ##\n ##\n A = angles.shape[0]\n if max_count is not None:\n max_count.value += 4\n # Check input data\n assert len(uSin.shape) == 2, \"Input data `uSin` must have shape (A,N)!\"\n assert len(uSin) == A, \"`len(angles)` must be equal to `len(uSin)`!\"\n\n if coords is not None:\n raise NotImplementedError(\"Output coordinates cannot yet be set \"\n + \"for the 2D backpropagation algorithm.\")\n # Cut-Off frequency\n # km [1/px]\n km = (2 * np.pi * nm) / res\n\n # Fourier transform of all uB's\n # In the script we used the unitary angular frequency (uaf) Fourier\n # Transform. 
The discrete Fourier transform is equivalent to the\n # unitary ordinary frequency (uof) Fourier transform.\n #\n # uof: f\u2081(\u03be) = int f(x) exp(-2\u03c0i x\u03be)\n #\n # uaf: f\u2083(\u03c9) = (2\u03c0)^(-n/2) int f(x) exp(-i \u03c9x)\n #\n # f\u2081(\u03c9/(2\u03c0)) = (2\u03c0)^(n/2) f\u2083(\u03c9)\n # \u03c9 = 2\u03c0\u03be\n #\n # Our Backpropagation Formula is with uaf convention of the Form\n #\n # F(k) = 1/sqrt(2\u03c0) U(kD)\n #\n # If we convert now to uof convention, we get\n #\n # F(k) = U(kD)\n #\n # This means that if we divide the Fourier transform of the input\n # data by sqrt(2\u03c0) to convert f\u2083(\u03c9) to f\u2081(\u03c9/(2\u03c0)), the resulting\n # value for F is off by a factor of 2\u03c0.\n #\n # Instead, we can just multiply *UB* by sqrt(2\u03c0) and calculate\n # everything in uof.\n # UB = np.fft.fft(np.fft.ifftshift(uSin, axes=-1))/np.sqrt(2*np.pi)\n #\n #\n # Furthermore, we define\n # a wave propagating to the right as:\n #\n # u0(x) = exp(ikx)\n #\n # However, in physics usually we use the other sign convention:\n #\n # u0(x) = exp(-ikx)\n #\n # In order to be consistent with programs like Meep or our\n # scattering script for a dielectric cylinder, we want to use the\n # latter sign convention.\n # This is not a big problem. We only need to multiply the imaginary\n # part of the scattered wave by -1.\n\n UB = np.fft.fft(np.fft.ifftshift(uSin, axes=-1)) * np.sqrt(2 * np.pi)\n\n # Corresponding sample frequencies\n fx = np.fft.fftfreq(len(uSin[0])) # 1D array\n\n # kx is a 1D array.\n kx = 2 * np.pi * fx\n\n if count is not None:\n count.value += 1\n\n # Undersampling/oversampling?\n # Determine if the resolution of the image is too low by looking\n # at the maximum value for kx. This is no comparison between\n # Nyquist and Rayleigh frequency.\n if verbose and np.max(kx**2) <= km**2:\n # Detector is not set up properly. 
Higher resolution\n # can be achieved.\n print(\"......Measurement data is undersampled.\")\n else:\n print(\"......Measurement data is oversampled.\")\n # raise NotImplementedError(\"Oversampled data not yet supported.\"+\n # \" Please rescale xD-axis of the input data.\")\n # DEAL WITH OVERSAMPLED DATA?\n # lenk = len(kx)\n # kx = np.fft.ifftshift(np.linspace(-np.sqrt(km),\n # np.sqrt(km),\n # len(fx), endpoint=False))\n\n #\n # F(kD-k\u2098s\u2080) = - i k\u2098 sqrt(2/\u03c0) / a\u2080 * M exp(-i k\u2098 M lD) * UB(kD)\n # k\u2098M = sqrt( k\u2098\u00b2 - kx\u00b2 )\n # s\u2080 = ( -sin(\u03d5\u2080), cos(\u03d5\u2080) )\n #\n # We create the 2D interpolation object F\n # - We compute the real coordinates (krx,kry) = kD-k\u2098s\u2080\n # - We set as grid points the right side of the equation\n #\n # The interpolated griddata may go up to sqrt(2)*k\u2098 for kx and ky.\n\n kx = kx.reshape(1, -1)\n # a0 should have same shape as kx and UB\n # a0 = np.atleast_1d(a0)\n # a0 = a0.reshape(1,-1)\n\n filter_klp = (kx**2 < km**2)\n M = 1. 
/ km * np.sqrt(km**2 - kx**2)\n # Fsin = -1j * km * np.sqrt(2/np.pi) / a0 * M * np.exp(-1j*km*M*lD)\n # new in version 0.1.4:\n # We multiply by the factor (M-1) instead of just (M)\n # to take into account that we have a scattered\n # wave that is normalized by u0.\n Fsin = -1j * km * np.sqrt(2 / np.pi) * M * np.exp(-1j * km * (M-1) * lD)\n\n # UB has same shape (len(angles), len(kx))\n Fsin = Fsin * UB * filter_klp\n\n ang = angles.reshape(-1, 1)\n\n if semi_coverage:\n Fsin = np.vstack((Fsin, np.conj(Fsin)))\n ang = np.vstack((ang, ang + np.pi))\n\n if count is not None:\n count.value += 1\n\n # Compute kxl and kyl (in rotated system \u03d5\u2080)\n kxl = kx\n kyl = np.sqrt((km**2 - kx**2) * filter_klp) - km\n # rotate kxl and kyl to where they belong\n krx = np.cos(ang) * kxl + np.sin(ang) * kyl\n kry = - np.sin(ang) * kxl + np.cos(ang) * kyl\n\n Xf = krx.flatten()\n Yf = kry.flatten()\n Zf = Fsin.flatten()\n\n # DEBUG: plot kry vs krx\n # from matplotlib import pylab as plt\n # plt.figure()\n # for i in range(len(krx)):\n # plt.plot(krx[i],kry[i],\"x\")\n # plt.axes().set_aspect('equal')\n # plt.show()\n\n # interpolation on grid with same resolution as input data\n kintp = np.fft.fftshift(kx.reshape(-1))\n\n Fcomp = intp.griddata((Xf, Yf), Zf, (kintp[None, :], kintp[:, None]))\n\n if count is not None:\n count.value += 1\n\n # removed nans\n Fcomp[np.where(np.isnan(Fcomp))] = 0\n\n # Filter data\n kinx, kiny = np.meshgrid(np.fft.fftshift(kx), np.fft.fftshift(kx))\n Fcomp[np.where((kinx**2 + kiny**2) > np.sqrt(2) * km)] = 0\n # Fcomp[np.where(kinx**2+kiny**2= getmtime(cmp_fname))\n except OSError:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\noverwrite a file in place with a short message to save disk.", "response": "def save_diskspace(fname, reason, config):\n \"\"\"Overwrite a file in place with a short message to save disk.\n\n This keeps files as a sanity check on processes working, but saves\n disk by replacing 
them with a short message.\n \"\"\"\n if config[\"algorithm\"].get(\"save_diskspace\", False):\n with open(fname, \"w\") as out_handle:\n out_handle.write(\"File removed to save disk space: %s\" % reason)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating relative symlinks and handle associated biological index files.", "response": "def symlink_plus(orig, new):\n \"\"\"Create relative symlinks and handle associated biological index files.\n \"\"\"\n for ext in [\"\", \".idx\", \".gbi\", \".tbi\", \".bai\"]:\n if os.path.exists(orig + ext) and not os.path.lexists(new + ext):\n with chdir(os.path.dirname(new)):\n os.symlink(os.path.relpath(orig + ext), os.path.basename(new + ext))\n orig_noext = splitext_plus(orig)[0]\n new_noext = splitext_plus(new)[0]\n for sub_ext in [\".bai\"]:\n if os.path.exists(orig_noext + sub_ext) and not os.path.lexists(new_noext + sub_ext):\n with chdir(os.path.dirname(new_noext)):\n os.symlink(os.path.relpath(orig_noext + sub_ext), os.path.basename(new_noext + sub_ext))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nusing a predicate to partition entries into false entries and true entries", "response": "def partition(pred, iterable):\n 'Use a predicate to partition entries into false entries and true entries'\n # partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9\n t1, t2 = itertools.tee(iterable)\n try:\n return itertools.ifilterfalse(pred, t1), itertools.ifilter(pred, t2)\n except:\n return itertools.filterfalse(pred, t1), filter(pred, t2)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nlooks up if you can get a tuple of values from a nested dictionary", "response": "def get_in(d, t, default=None):\n \"\"\"\n look up if you can get a tuple of values from a nested dictionary,\n each item in the tuple a deeper layer\n\n example: get_in({1: {2: 3}}, (1, 2)) -> 3\n example: get_in({1: {2: 3}}, (2, 3)) -> {}\n \"\"\"\n 
result = reduce(lambda d, t: d.get(t, {}), t, d)\n if not result:\n return default\n else:\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the path to an executable or None if it can t be found", "response": "def which(program):\n \"\"\"\n returns the path to an executable or None if it can't be found\n \"\"\"\n def is_exe(_fpath):\n return os.path.isfile(_fpath) and os.access(_fpath, os.X_OK)\n\n fpath, fname = os.path.split(program)\n if fpath:\n if is_exe(program):\n return program\n else:\n for path in os.environ[\"PATH\"].split(os.pathsep):\n exe_file = os.path.join(path, program)\n if is_exe(exe_file):\n return exe_file\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef expanduser(path):\n if path[:1] != '~':\n return path\n i, n = 1, len(path)\n while i < n and path[i] not in '/\\\\':\n i = i + 1\n\n if 'HOME' in os.environ:\n userhome = os.environ['HOME']\n elif 'USERPROFILE' in os.environ:\n userhome = os.environ['USERPROFILE']\n elif not 'HOMEPATH' in os.environ:\n return path\n else:\n try:\n drive = os.environ['HOMEDRIVE']\n except KeyError:\n drive = ''\n userhome = join(drive, os.environ['HOMEPATH'])\n\n if i != 1: # ~user\n userhome = join(dirname(userhome), path[1:i])\n\n return userhome + path[i:]", "response": "Expand ~ and ~user constructs."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef splitext_plus(fname):\n base, ext = splitext(fname)\n if ext in [\".gz\", \".bz2\", \".zip\"]:\n base, ext2 = splitext(base)\n ext = ext2 + ext\n return base, ext", "response": "Split on file extensions allowing for zipped extensions."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dots_to_empty_cells(config, tsv_fpath):\n def proc_line(l, i):\n while '\\t\\t' in l:\n l = l.replace('\\t\\t', 
'\\t.\\t')\n return l\n return iterate_file(config, tsv_fpath, proc_line, suffix='dots')", "response": "Put dots instead of empty cells in order to view the TSV with column -t."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef file_transaction(work_dir, *rollback_files):\n exts = {\".vcf\": \".idx\", \".bam\": \".bai\", \"vcf.gz\": \".tbi\"}\n safe_fpaths, orig_names = _flatten_plus_safe(work_dir, rollback_files)\n __remove_files(safe_fpaths) # remove any half-finished transactions\n try:\n if len(safe_fpaths) == 1:\n yield safe_fpaths[0]\n else:\n yield tuple(safe_fpaths)\n except: # failure -- delete any temporary files\n __remove_files(safe_fpaths)\n raise\n else: # worked -- move the temporary files to permanent location\n for safe, orig in zip(safe_fpaths, orig_names):\n if exists(safe):\n shutil.move(safe, orig)\n for check_ext, check_idx in exts.items():\n if safe.endswith(check_ext):\n safe_idx = safe + check_idx\n if exists(safe_idx):\n shutil.move(safe_idx, orig + check_idx)", "response": "Wrap file generation in a transaction, moving files to the output location when it finishes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _flatten_plus_safe(tmp_dir, rollback_files):\n tx_fpaths, orig_files = [], []\n for fnames in rollback_files:\n if isinstance(fnames, six.string_types):\n fnames = [fnames]\n for fname in fnames:\n tx_file = fname + '.tx'\n tx_fpath = join(tmp_dir, tx_file) if tmp_dir else tx_file\n tx_fpaths.append(tx_fpath)\n orig_files.append(fname)\n return tx_fpaths, orig_files", "response": "Flatten names of files and create temporary file names."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmerge bed file intervals to avoid overlapping regions. 
Overlapping regions (1:1-100, 1:90-100) cause issues with callers like FreeBayes that don't collapse BEDs prior to using them.", "response": "def merge_overlaps(work_dir, bed_fpath, distance=None):\n \"\"\"Merge bed file intervals to avoid overlapping regions.\n Overlapping regions (1:1-100, 1:90-100) cause issues with callers like FreeBayes\n that don't collapse BEDs prior to using them.\n \"\"\"\n output_fpath = intermediate_fname(work_dir, bed_fpath, 'merged')\n if isfile(output_fpath) and verify_file(output_fpath, cmp_f=bed_fpath):\n return output_fpath\n\n with file_transaction(work_dir, output_fpath) as tx:\n kwargs = dict(d=distance) if distance else dict()\n BedTool(bed_fpath).merge(**kwargs).saveas(tx)\n return output_fpath"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef checkformat(self):\n\n fd = open_gzipsafe(self.filename)\n\n line = fd.readline()\n while line.startswith('#'):\n line = fd.readline()\n \n fields = line.split('\\t')\n lc = 1\n error = ''\n\n # Checks that the two columns on the right contain integer values\n try:\n # Parses each line and checks that there are at least 3 fields, the two on the right containing integer values and being the right one\n # greater than the left one\n while line != '' and len(fields) > 2 and int(fields[1]) <= int(fields[2]):\n lc += 1\n line = fd.readline()\n fields = line.split('\\t')\n except ValueError:\n error += 'Incorrect start/end values at line ' + str(lc) + '\\n'\n error += 'Start/End coordinates must be indicated with integer values. 
The right value must be greater than the left value.\\n'\n error += 'Line found: ' + line\n fd.close()\n\n return error\n\n # Getting to this point means that either the file ended or there is a line with fewer than 3 fields\n if line != '':\n error += 'Incorrect line format at line ' + str(lc) + '\\n'\n error += 'At least three columns are expected in each line\\n'\n error += 'The right value must be greater than the left value.\\n'\n error += 'Line found: ' + line\n fd.close()\n\n return error", "response": "Checks that the BED file has the correct format."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ntriggering callbacks for all keys on all or a subset of subscribers.", "response": "def trigger_all_callbacks(self, callbacks=None):\n \"\"\"Trigger callbacks for all keys on all or a subset of subscribers.\n\n :param Iterable callbacks: list of callbacks or none for all subscribed\n :rtype: Iterable[tornado.concurrent.Future]\n \"\"\"\n return [ret\n for key in self\n for ret in self.trigger_callbacks(key, callbacks=callbacks)]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting a value at key and return a Future.", "response": "def set(self, key, value):\n \"\"\"Set a value at key and return a Future.\n\n :rtype: Iterable[tornado.concurrent.Future]\n \"\"\"\n value_encoded = encode(value)\n\n if key in self.data and self.data[key] == value_encoded:\n return []\n\n self.data[key] = value_encoded\n return self.trigger_callbacks(key)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_state(self, updater=None, **kwargs):\n if callable(updater):\n state_change = updater(self)\n elif updater is not None:\n state_change = updater\n else:\n state_change = kwargs\n\n return [callback_result\n for k, v in state_change.items()\n for callback_result in self.set(k, v)]", "response": "Update the datastore state."} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef init(self, key_value_pairs=None, **kwargs):\n if key_value_pairs is None:\n key_value_pairs = kwargs\n return [self.set(k, v)\n for k, v in key_value_pairs.items()\n if k not in self]", "response": "Initialize datastore.\n\n Only sets values for keys that are not in the datastore already.\n\n :param dict key_value_pairs:\n A set of key value pairs to use to initialize the datastore.\n\n :rtype: Iterable[tornado.concurrent.Future]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef close(self):\n\n # remove callbacks\n Datastore.stores[self.domain].remove(self)\n\n # delete data after the last instance is gone\n if self.release_storage and not Datastore.stores[self.domain]:\n del Datastore.global_data[self.domain]\n\n del self", "response": "Close and delete instance."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nruns Qualimap2 for multi-sample plots.", "response": "def run_multisample_qualimap(output_dir, work_dir, samples, targqc_full_report):\n \"\"\" 1. Generates Qualimap2 plots and puts them into plots_dirpath\n 2. 
Adds records to targqc_full_report.plots\n \"\"\"\n plots_dirpath = join(output_dir, 'plots')\n individual_report_fpaths = [s.qualimap_html_fpath for s in samples]\n if isdir(plots_dirpath) and not any(\n not can_reuse(join(plots_dirpath, f), individual_report_fpaths)\n for f in listdir(plots_dirpath) if not f.startswith('.')):\n debug('Qualimap multisample plots exist - ' + plots_dirpath + ', reusing...')\n else:\n # Qualimap2 run for multi-sample plots\n if len([s.qualimap_html_fpath for s in samples if s.qualimap_html_fpath]) > 0:\n if find_executable() is not None: # and get_qualimap_type(find_executable()) == 'full':\n qualimap_output_dir = join(work_dir, 'qualimap_multi_bamqc')\n\n _correct_qualimap_genome_results(samples)\n _correct_qualimap_insert_size_histogram(samples)\n\n safe_mkdir(qualimap_output_dir)\n rows = []\n for sample in samples:\n if sample.qualimap_html_fpath:\n rows += [[sample.name, sample.qualimap_html_fpath]]\n\n data_fpath = write_tsv_rows(([], rows), join(qualimap_output_dir, 'qualimap_results_by_sample.tsv'))\n qualimap_plots_dirpath = join(qualimap_output_dir, 'images_multisampleBamQcReport')\n cmdline = find_executable() + ' multi-bamqc --data {data_fpath} -outdir {qualimap_output_dir}'.format(**locals())\n run(cmdline, env_vars=dict(DISPLAY=None),\n checks=[lambda _1, _2: verify_dir(qualimap_output_dir)], reuse=cfg.reuse_intermediate)\n\n if not verify_dir(qualimap_plots_dirpath):\n warn('Warning: Qualimap for multi-sample analysis failed to finish. TargQC will not contain plots.')\n return None\n else:\n if exists(plots_dirpath):\n shutil.rmtree(plots_dirpath)\n shutil.move(qualimap_plots_dirpath, plots_dirpath)\n else:\n warn('Warning: Qualimap for multi-sample analysis was not found. 
TargQC will not contain plots.')\n return None\n\n targqc_full_report.plots = []\n for plot_fpath in listdir(plots_dirpath):\n plot_fpath = join(plots_dirpath, plot_fpath)\n if verify_file(plot_fpath) and plot_fpath.endswith('.png'):\n targqc_full_report.plots.append(relpath(plot_fpath, output_dir))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef rasterize(vectorobject, reference, outname=None, burn_values=1, expressions=None, nodata=0, append=False):\n if expressions is None:\n expressions = ['']\n if isinstance(burn_values, (int, float)):\n burn_values = [burn_values]\n if len(expressions) != len(burn_values):\n raise RuntimeError('expressions and burn_values of different length')\n \n failed = []\n for exp in expressions:\n try:\n vectorobject.layer.SetAttributeFilter(exp)\n except RuntimeError:\n failed.append(exp)\n if len(failed) > 0:\n raise RuntimeError('failed to set the following attribute filter(s): [\"{}\"]'.format('\", \"'.join(failed)))\n \n if append and outname is not None and os.path.isfile(outname):\n target_ds = gdal.Open(outname, GA_Update)\n else:\n if not isinstance(reference, Raster):\n raise RuntimeError(\"parameter 'reference' must be of type Raster\")\n if outname is not None:\n target_ds = gdal.GetDriverByName('GTiff').Create(outname, reference.cols, reference.rows, 1, gdal.GDT_Byte)\n else:\n target_ds = gdal.GetDriverByName('MEM').Create('', reference.cols, reference.rows, 1, gdal.GDT_Byte)\n target_ds.SetGeoTransform(reference.raster.GetGeoTransform())\n target_ds.SetProjection(reference.raster.GetProjection())\n band = target_ds.GetRasterBand(1)\n band.SetNoDataValue(nodata)\n band.FlushCache()\n band = None\n for expression, value in zip(expressions, burn_values):\n vectorobject.layer.SetAttributeFilter(expression)\n gdal.RasterizeLayer(target_ds, [1], vectorobject.layer, burn_values=[value])\n vectorobject.layer.SetAttributeFilter('')\n if outname is None:\n 
return Raster(target_ds)\n else:\n target_ds = None", "response": "Rasterize a vector object"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef reproject(rasterobject, reference, outname, targetres=None, resampling='bilinear', format='GTiff'):\n if isinstance(rasterobject, str):\n rasterobject = Raster(rasterobject)\n if not isinstance(rasterobject, Raster):\n raise RuntimeError('rasterobject must be of type Raster or str')\n if isinstance(reference, (Raster, Vector)):\n projection = reference.projection\n if targetres is not None:\n xres, yres = targetres\n elif hasattr(reference, 'res'):\n xres, yres = reference.res\n else:\n raise RuntimeError('parameter targetres is missing and cannot be read from the reference')\n elif isinstance(reference, (int, str, osr.SpatialReference)):\n try:\n projection = crsConvert(reference, 'proj4')\n except TypeError:\n raise RuntimeError('reference projection cannot be read')\n if targetres is None:\n raise RuntimeError('parameter targetres is missing and cannot be read from the reference')\n else:\n xres, yres = targetres\n else:\n raise TypeError('reference must be of type Raster, Vector, osr.SpatialReference, str or int')\n options = {'format': format,\n 'resampleAlg': resampling,\n 'xRes': xres,\n 'yRes': yres,\n 'srcNodata': rasterobject.nodata,\n 'dstNodata': rasterobject.nodata,\n 'dstSRS': projection}\n gdalwarp(rasterobject, outname, options)", "response": "Reproject a raster image to a new file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef stack(srcfiles, dstfile, resampling, targetres, dstnodata, srcnodata=None, shapefile=None, layernames=None,\n sortfun=None, separate=False, overwrite=False, compress=True, cores=4):\n \"\"\"\n function for mosaicking, resampling and stacking of multiple raster files into a 3D data cube\n\n Parameters\n ----------\n srcfiles: list\n a list of file names or a list of lists; each 
sub-list is treated as a task to mosaic its containing files\n dstfile: str\n the destination file or a directory (if `separate` is True)\n resampling: {near, bilinear, cubic, cubicspline, lanczos, average, mode, max, min, med, Q1, Q3}\n the resampling method; see `documentation of gdalwarp `_.\n targetres: tuple or list\n two entries for x and y spatial resolution in units of the source CRS\n srcnodata: int, float or None\n the nodata value of the source files; if left at the default (None), the nodata values are read from the files\n dstnodata: int or float\n the nodata value of the destination file(s)\n shapefile: str, Vector or None\n a shapefile for defining the spatial extent of the destination files\n layernames: list\n the names of the output layers; if `None`, the basenames of the input files are used; overrides sortfun\n sortfun: function\n a function for sorting the input files; not used if layernames is not None.\n This is first used for sorting the items in each sub-list of srcfiles;\n the basename of the first item in a sub-list will then be used as the name for the mosaic of this group.\n After mosaicing, the function is again used for sorting the names in the final output\n (only relevant if `separate` is False)\n separate: bool\n should the files be written to a single raster stack (ENVI format) or separate files (GTiff format)?\n overwrite: bool\n overwrite the file if it already exists?\n compress: bool\n compress the geotiff files?\n cores: int\n the number of CPU threads to use; this is only relevant if `separate` is True, in which case each\n mosaicing/resampling job is passed to a different CPU\n\n Returns\n -------\n\n Notes\n -----\n This function does not reproject any raster files. Thus, the CRS must be the same for all input raster files.\n This is checked prior to executing gdalwarp. In case a shapefile is defined, it is internally reprojected to the\n raster CRS prior to retrieving its extent.\n \n Examples\n --------\n \n .. 
code-block:: python\n \n from pyroSAR.ancillary import groupbyTime, find_datasets, seconds\n from spatialist.raster import stack\n \n # find pyroSAR files by metadata attributes\n archive_s1 = '/.../sentinel1/GRD/processed'\n scenes_s1 = find_datasets(archive_s1, sensor=('S1A', 'S1B'), acquisition_mode='IW')\n \n # group images by acquisition time\n groups = groupbyTime(images=scenes_s1, function=seconds, time=30)\n \n # mosaic individual groups and stack the mosaics to a single ENVI file\n # only files overlapping with the shapefile are selected and resampled to its extent\n stack(srcfiles=groups, dstfile='stack', resampling='bilinear', targetres=(20, 20),\n srcnodata=-99, dstnodata=-99, shapefile='site.shp', separate=False)\n \"\"\"\n # perform some checks on the input data\n \n if len(dissolve(srcfiles)) == 0:\n raise RuntimeError('no input files provided to function raster.stack')\n \n if layernames is not None:\n if len(layernames) != len(srcfiles):\n raise RuntimeError('mismatch between number of source file groups and layernames')\n \n if not isinstance(targetres, (list, tuple)) or len(targetres) != 2:\n raise RuntimeError('targetres must be a list or tuple with two entries for x and y resolution')\n \n if len(srcfiles) == 1 and not isinstance(srcfiles[0], list):\n raise RuntimeError('only one file specified; nothing to be done')\n \n if resampling not in ['near', 'bilinear', 'cubic', 'cubicspline', 'lanczos',\n 'average', 'mode', 'max', 'min', 'med', 'Q1', 'Q3']:\n raise RuntimeError('resampling method not supported')\n \n projections = list()\n for x in dissolve(srcfiles):\n try:\n projection = Raster(x).projection\n except RuntimeError as e:\n print('cannot read file: {}'.format(x))\n raise e\n projections.append(projection)\n \n projections = list(set(projections))\n if len(projections) > 1:\n raise RuntimeError('raster projection mismatch')\n elif projections[0] == '':\n raise RuntimeError('could not retrieve the projection from any of the {} input 
images'.format(len(srcfiles)))\n else:\n srs = projections[0]\n ##########################################################################################\n # read shapefile bounding coordinates and reduce list of rasters to those overlapping with the shapefile\n \n if shapefile is not None:\n shp = shapefile.clone() if isinstance(shapefile, Vector) else Vector(shapefile)\n shp.reproject(srs)\n ext = shp.extent\n arg_ext = (ext['xmin'], ext['ymin'], ext['xmax'], ext['ymax'])\n for i, item in enumerate(srcfiles):\n group = item if isinstance(item, list) else [item]\n if layernames is None and sortfun is not None:\n group = sorted(group, key=sortfun)\n group = [x for x in group if intersect(shp, Raster(x).bbox())]\n if len(group) > 1:\n srcfiles[i] = group\n elif len(group) == 1:\n srcfiles[i] = group[0]\n else:\n srcfiles[i] = None\n shp.close()\n srcfiles = list(filter(None, srcfiles))\n else:\n arg_ext = None\n ##########################################################################################\n # set general options and parametrization\n \n dst_base = os.path.splitext(dstfile)[0]\n \n options_warp = {'options': ['-q'],\n 'format': 'GTiff' if separate else 'ENVI',\n 'outputBounds': arg_ext, 'multithread': True,\n 'dstNodata': dstnodata,\n 'xRes': targetres[0], 'yRes': targetres[1],\n 'resampleAlg': resampling}\n \n if overwrite:\n options_warp['options'] += ['-overwrite']\n \n if separate and compress:\n options_warp['options'] += ['-co', 'COMPRESS=DEFLATE', '-co', 'PREDICTOR=2']\n \n options_buildvrt = {'outputBounds': arg_ext}\n \n if srcnodata is not None:\n options_warp['srcNodata'] = srcnodata\n options_buildvrt['srcNodata'] = srcnodata\n ##########################################################################################\n # create VRT files for mosaicing\n \n for i, group in enumerate(srcfiles):\n if isinstance(group, list):\n if len(group) > 1:\n base = group[0]\n # in-memory VRT files cannot be shared between multiple processes on Windows\n # 
this has to do with different process forking behaviour\n # see function spatialist.ancillary.multicore and this link:\n # https://stackoverflow.com/questions/38236211/why-multiprocessing-process-behave-differently-on-windows-and-linux-for-global-o\n vrt_base = os.path.splitext(os.path.basename(base))[0] + '.vrt'\n if platform.system() == 'Windows':\n vrt = os.path.join(tempfile.gettempdir(), vrt_base)\n else:\n vrt = '/vsimem/' + vrt_base\n gdalbuildvrt(group, vrt, options_buildvrt)\n srcfiles[i] = vrt\n else:\n srcfiles[i] = group[0]\n else:\n srcfiles[i] = group\n ##########################################################################################\n # define the output band names\n \n # if no specific layernames are defined, sort files by custom function\n if layernames is None and sortfun is not None:\n srcfiles = sorted(srcfiles, key=sortfun)\n \n # use the file basenames without extension as band names if none are defined\n bandnames = [os.path.splitext(os.path.basename(x))[0] for x in srcfiles] if layernames is None else layernames\n \n if len(list(set(bandnames))) != len(bandnames):\n raise RuntimeError('output bandnames are not unique')\n ##########################################################################################\n # create the actual image files\n \n if separate:\n if not os.path.isdir(dstfile):\n os.makedirs(dstfile)\n dstfiles = [os.path.join(dstfile, x) + '.tif' for x in bandnames]\n jobs = [x for x in zip(srcfiles, dstfiles)]\n if not overwrite:\n jobs = [x for x in jobs if not os.path.isfile(x[1])]\n if len(jobs) == 0:\n print('all target tiff files already exist, nothing to be done')\n return\n srcfiles, dstfiles = map(list, zip(*jobs))\n \n multicore(gdalwarp, cores=cores, multiargs={'src': srcfiles, 'dst': dstfiles}, options=options_warp)\n else:\n if len(srcfiles) == 1:\n options_warp['format'] = 'GTiff'\n if not dstfile.endswith('.tif'):\n dstfile = os.path.splitext(dstfile)[0] + '.tif'\n gdalwarp(srcfiles[0], dstfile, 
options_warp)\n else:\n # create VRT for stacking\n vrt = '/vsimem/' + os.path.basename(dst_base) + '.vrt'\n options_buildvrt['options'] = ['-separate']\n gdalbuildvrt(srcfiles, vrt, options_buildvrt)\n \n # warp files\n gdalwarp(vrt, dstfile, options_warp)\n \n # edit ENVI HDR files to contain specific layer names\n with envi.HDRobject(dstfile + '.hdr') as hdr:\n hdr.band_names = bandnames\n hdr.write()", "response": "Mosaic, resample and stack a set of raster files into a 3D data cube."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncomputes some basic raster statistics for each tile.", "response": "def allstats(self, approximate=False):\n \"\"\"\n Compute some basic raster statistics\n\n Parameters\n ----------\n approximate: bool\n approximate statistics from overviews or a subset of all tiles?\n\n Returns\n -------\n list of dicts\n a list with a dictionary of statistics for each band. Keys: `min`, `max`, `mean`, `sdev`.\n See :osgeo:meth:`gdal.Band.ComputeStatistics`.\n \"\"\"\n statcollect = []\n for x in self.layers():\n try:\n stats = x.ComputeStatistics(approximate)\n except RuntimeError:\n stats = None\n if stats is not None:\n # zipping None would raise a TypeError\n stats = dict(zip(['min', 'max', 'mean', 'sdev'], stats))\n statcollect.append(stats)\n return statcollect"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef array(self):\n if self.bands == 1:\n return self.matrix()\n else:\n arr = self.raster.ReadAsArray().transpose(1, 2, 0)\n if isinstance(self.nodata, list):\n for i in range(0, self.bands):\n arr[:, :, i][arr[:, :, i] == self.nodata[i]] = np.nan\n else:\n arr[arr == self.nodata] = np.nan\n return arr", "response": "read all raster bands into a numpy ndarray"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef bandnames(self, names):\n if not isinstance(names, list):\n raise TypeError('the names to be set must be of type 
list')\n if len(names) != self.bands:\n raise ValueError(\n 'length mismatch of names to be set ({}) and number of bands ({})'.format(len(names), self.bands))\n self.__bandnames = names", "response": "set the names of the raster bands"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef bbox(self, outname=None, format='ESRI Shapefile', overwrite=True):\n if outname is None:\n return bbox(self.geo, self.proj4)\n else:\n bbox(self.geo, self.proj4, outname=outname, format=format, overwrite=overwrite)", "response": "Returns the bounding box of the object in the specified format."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nextracts weighted average of pixels intersecting with a defined radius to a point", "response": "def extract(self, px, py, radius=1, nodata=None):\n \"\"\"\n extract weighted average of pixels intersecting with a defined radius to a point.\n\n Parameters\n ----------\n px: int or float\n the x coordinate in units of the Raster SRS\n py: int or float\n the y coordinate in units of the Raster SRS\n radius: int or float\n the radius around the point to extract pixel values from; defined as multiples of the pixel resolution\n nodata: int\n a value to ignore from the computations; If `None`, the nodata value of the Raster object is used\n\n Returns\n -------\n int or float\n the weighted average of all pixels within the defined radius\n\n \"\"\"\n if not self.geo['xmin'] <= px <= self.geo['xmax']:\n raise RuntimeError('px is out of bounds')\n \n if not self.geo['ymin'] <= py <= self.geo['ymax']:\n raise RuntimeError('py is out of bounds')\n \n if nodata is None:\n nodata = self.nodata\n \n xres, yres = self.res\n \n hx = xres / 2.0\n hy = yres / 2.0\n \n xlim = float(xres * radius)\n ylim = float(yres * radius)\n \n # compute minimum x and y pixel coordinates\n xmin = int(floor((px - self.geo['xmin'] - xlim) / xres))\n ymin = 
int(floor((self.geo['ymax'] - py - ylim) / yres))\n \n xmin = xmin if xmin >= 0 else 0\n ymin = ymin if ymin >= 0 else 0\n \n # compute maximum x and y pixel coordinates\n xmax = int(ceil((px - self.geo['xmin'] + xlim) / xres))\n ymax = int(ceil((self.geo['ymax'] - py + ylim) / yres))\n \n xmax = xmax if xmax <= self.cols else self.cols\n ymax = ymax if ymax <= self.rows else self.rows\n \n # load array subset\n if self.__data[0] is not None:\n array = self.__data[0][ymin:ymax, xmin:xmax]\n # print('using loaded array of size {}, '\n # 'indices [{}:{}, {}:{}] (row/y, col/x)'.format(array.shape, ymin, ymax, xmin, xmax))\n else:\n array = self.raster.GetRasterBand(1).ReadAsArray(xmin, ymin, xmax - xmin, ymax - ymin)\n # print('loading array of size {}, '\n # 'indices [{}:{}, {}:{}] (row/y, col/x)'.format(array.shape, ymin, ymax, xmin, xmax))\n \n sum = 0\n counter = 0\n weightsum = 0\n for x in range(xmin, xmax):\n for y in range(ymin, ymax):\n # check whether point is a valid image index\n val = array[y - ymin, x - xmin]\n if val != nodata:\n # compute distances of pixel center coordinate to requested point\n \n xc = x * xres + hx + self.geo['xmin']\n yc = self.geo['ymax'] - y * yres + hy\n \n dx = abs(xc - px)\n dy = abs(yc - py)\n \n # check whether point lies within ellipse: if ((dx ** 2) / xlim ** 2) + ((dy ** 2) / ylim ** 2) <= 1\n weight = sqrt(dx ** 2 + dy ** 2)\n sum += val * weight\n weightsum += weight\n counter += 1\n \n array = None\n \n if counter > 0:\n return sum / weightsum\n else:\n return nodata"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks image integrity. Tries to compute the checksum for each raster layer and returns False if this fails. See this forum entry: `How to check if image is valid? `_. 
Returns ------- bool is the file valid?", "response": "def is_valid(self):\n \"\"\"\n Check image integrity.\n Tries to compute the checksum for each raster layer and returns False if this fails.\n See this forum entry:\n `How to check if image is valid? `_.\n\n Returns\n -------\n bool\n is the file valid?\n \"\"\"\n for i in range(self.raster.RasterCount):\n try:\n checksum = self.raster.GetRasterBand(i + 1).Checksum()\n except RuntimeError:\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load(self):\n for i in range(1, self.bands + 1):\n self.__data[i - 1] = self.matrix(i)", "response": "Load all raster data to internal memory arrays."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef matrix(self, band=1, mask_nan=True):\n \n mat = self.__data[band - 1]\n if mat is None:\n mat = self.raster.GetRasterBand(band).ReadAsArray()\n if mask_nan:\n if isinstance(self.nodata, list):\n nodata = self.nodata[band - 1]\n else:\n nodata = self.nodata\n try:\n mat[mat == nodata] = np.nan\n except ValueError:\n mat = mat.astype('float32')\n mat[mat == nodata] = np.nan\n return mat", "response": "read a raster band into a numpy ndarray"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef res(self):\n return (abs(float(self.geo['xres'])), abs(float(self.geo['yres'])))", "response": "get the raster resolution in x and y direction"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nperform raster computations with custom functions and assign them to the existing raster object in memory", "response": "def rescale(self, fun):\n \"\"\"\n perform raster computations with custom functions and assign them to the existing raster object in memory\n\n Parameters\n ----------\n fun: function\n the custom function to compute on the data\n\n Examples\n --------\n >>> with 
Raster('filename') as ras:\n >>> ras.rescale(lambda x: 10 * x)\n\n \"\"\"\n if self.bands != 1:\n raise ValueError('only single band images are currently supported')\n \n # load array\n mat = self.matrix()\n \n # scale values\n scaled = fun(mat)\n \n # assign newly computed array to raster object\n self.assign(scaled, band=0)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write(self, outname, dtype='default', format='ENVI', nodata='default', compress_tif=False, overwrite=False):\n \n if os.path.isfile(outname) and not overwrite:\n raise RuntimeError('target file already exists')\n \n if format == 'GTiff' and not re.search(r'\\.tif[f]*$', outname):\n outname += '.tif'\n \n dtype = Dtype(self.dtype if dtype == 'default' else dtype).gdalint\n nodata = self.nodata if nodata == 'default' else nodata\n \n options = []\n if format == 'GTiff' and compress_tif:\n options += ['COMPRESS=DEFLATE', 'PREDICTOR=2']\n \n driver = gdal.GetDriverByName(format)\n outDataset = driver.Create(outname, self.cols, self.rows, self.bands, dtype, options)\n driver = None\n outDataset.SetMetadata(self.raster.GetMetadata())\n outDataset.SetGeoTransform([self.geo[x] for x in ['xmin', 'xres', 'rotation_x', 'ymax', 'rotation_y', 'yres']])\n if self.projection is not None:\n outDataset.SetProjection(self.projection)\n for i in range(1, self.bands + 1):\n outband = outDataset.GetRasterBand(i)\n if nodata is not None:\n outband.SetNoDataValue(nodata)\n mat = self.matrix(band=i)\n dtype_mat = str(mat.dtype)\n dtype_ras = Dtype(dtype).numpystr\n if not np.can_cast(dtype_mat, dtype_ras):\n warnings.warn(\"writing band {}: unsafe casting from type {} to {}\".format(i, dtype_mat, dtype_ras))\n outband.WriteArray(mat)\n del mat\n outband.FlushCache()\n outband = None\n if format == 'GTiff':\n outDataset.SetMetadataItem('TIFFTAG_DATETIME', strftime('%Y:%m:%d %H:%M:%S', gmtime()))\n outDataset = None\n if format == 'ENVI':\n hdrfile 
= os.path.splitext(outname)[0] + '.hdr'\n with HDRobject(hdrfile) as hdr:\n hdr.band_names = self.bandnames\n hdr.write()", "response": "Write the raster object to a file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a dictionary for mapping numpy data types to GDAL data type codes", "response": "def numpy2gdalint(self):\n \"\"\"\n create a dictionary for mapping numpy data types to GDAL data type codes\n\n Returns\n -------\n dict\n the type map\n \"\"\"\n # a single leading underscore avoids name mangling, which would make the hasattr cache check always fail\n if not hasattr(self, '_numpy2gdalint'):\n tmap = {}\n \n for group in ['int', 'uint', 'float', 'complex']:\n for dtype in np.sctypes[group]:\n code = gdal_array.NumericTypeCodeToGDALTypeCode(dtype)\n if code is not None:\n tmap[dtype().dtype.name] = code\n self._numpy2gdalint = tmap\n return self._numpy2gdalint"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef static_parser(static):\n if static is None:\n return\n\n if isinstance(static, dict):\n static = static.items()\n\n for group in static:\n if not isinstance(group, dict):\n yield group\n continue\n\n for item in group.items():\n yield item", "response": "Parse object describing static routes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding analyses from the analyses folder.", "response": "def analyses_info(self):\n \"\"\"Add analyses from the analyses folder.\"\"\"\n f_config = os.path.join(self.analyses_path, 'index.yaml')\n tornado.autoreload.watch(f_config)\n with io.open(f_config, 'r', encoding='utf8') as f:\n config = yaml.safe_load(f)\n self.info.update(config)\n if self.debug:\n self.info['version'] += '.debug-{:04X}'.format(\n int(random.random() * 0xffff))\n\n readme = Readme(self.analyses_path)\n if self.info['description'] is None:\n self.info['description'] = 
readme.text.strip()\n self.info['description_html'] = readme.html"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrun the build command specified in index. yaml.", "response": "def build(self):\n \"\"\"Run the build command specified in index.yaml.\"\"\"\n for cmd in self.build_cmds:\n log.info('building command: {}'.format(cmd))\n full_cmd = 'cd {}; {}'.format(self.analyses_path, cmd)\n log.debug('full command: {}'.format(full_cmd))\n subprocess.call(full_cmd, shell=True)\n log.info('build done')"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nrenders the List - of - Analyses overview page.", "response": "def get(self):\n \"\"\"Render the List-of-Analyses overview page.\"\"\"\n return self.render(\n 'index.html',\n databench_version=DATABENCH_VERSION,\n meta_infos=self.meta_infos(),\n **self.info\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nresetting Axis and set default parameters for H - bridge", "response": "def Reset(self):\n ' Reset Axis and set default parameters for H-bridge '\n spi.SPI_write_byte(self.CS, 0xC0) # reset\n# spi.SPI_write_byte(self.CS, 0x14) # Stall Treshold setup\n# spi.SPI_write_byte(self.CS, 0xFF) \n# spi.SPI_write_byte(self.CS, 0x13) # Over Current Treshold setup \n# spi.SPI_write_byte(self.CS, 0xFF) \n spi.SPI_write_byte(self.CS, 0x15) # Full Step speed \n spi.SPI_write_byte(self.CS, 0xFF)\n spi.SPI_write_byte(self.CS, 0xFF) \n spi.SPI_write_byte(self.CS, 0x05) # ACC \n spi.SPI_write_byte(self.CS, 0x00)\n spi.SPI_write_byte(self.CS, 0x20) \n spi.SPI_write_byte(self.CS, 0x06) # DEC \n spi.SPI_write_byte(self.CS, 0x00)\n spi.SPI_write_byte(self.CS, 0x20) \n spi.SPI_write_byte(self.CS, 0x0A) # KVAL_RUN\n spi.SPI_write_byte(self.CS, 0xd0)\n spi.SPI_write_byte(self.CS, 0x0B) # KVAL_ACC\n spi.SPI_write_byte(self.CS, 0xd0)\n spi.SPI_write_byte(self.CS, 0x0C) # KVAL_DEC\n spi.SPI_write_byte(self.CS, 0xd0)\n spi.SPI_write_byte(self.CS, 0x16) # 
STEPPER\n spi.SPI_write_byte(self.CS, 0b00000000)\n spi.SPI_write_byte(self.CS, 0x18) # CONFIG\n spi.SPI_write_byte(self.CS, 0b00111000)\n spi.SPI_write_byte(self.CS, 0b00000000)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ninitializes for pool for _mprotate", "response": "def _init_worker(X, X_shape, X_dtype):\n \"\"\"Initializer for pool for _mprotate\"\"\"\n # Using a dictionary is not strictly necessary. You can also\n # use global variables.\n mprotate_dict[\"X\"] = X\n mprotate_dict[\"X_shape\"] = X_shape\n mprotate_dict[\"X_dtype\"] = X_dtype"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _mprotate(ang, lny, pool, order):\n targ_args = list()\n\n slsize = np.int(np.floor(lny / ncores))\n\n for t in range(ncores):\n ymin = t * slsize\n ymax = (t + 1) * slsize\n if t == ncores - 1:\n ymax = lny\n targ_args.append((ymin, ymax, ang, order))\n\n pool.map(_rotate, targ_args)", "response": "Uses multiprocessing to rotate the data in - place on an intel i7 - 3820 CPU @ 3. 60GHz with 8 cores."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef backpropagate_3d(uSin, angles, res, nm, lD=0, coords=None,\n weight_angles=True, onlyreal=False,\n padding=(True, True), padfac=1.75, padval=None,\n intp_order=2, dtype=None,\n num_cores=ncores,\n save_memory=False,\n copy=True,\n count=None, max_count=None,\n verbose=0):\n r\"\"\"3D backpropagation\n\n Three-dimensional diffraction tomography reconstruction\n algorithm for scattering of a plane wave\n :math:`u_0(\\mathbf{r}) = u_0(x,y,z)`\n by a dielectric object with refractive index\n :math:`n(x,y,z)`.\n\n This method implements the 3D backpropagation algorithm\n :cite:`Mueller2015arxiv`.\n\n\n .. math::\n f(\\mathbf{r}) =\n -\\frac{i k_\\mathrm{m}}{2\\pi}\n \\sum_{j=1}^{N} \\! 
\\Delta \\phi_0 D_{-\\phi_j} \\!\\!\n \\left \\{\n \\text{FFT}^{-1}_{\\mathrm{2D}}\n \\left \\{\n \\left| k_\\mathrm{Dx} \\right|\n \\frac{\\text{FFT}_{\\mathrm{2D}} \\left \\{\n u_{\\mathrm{B},\\phi_j}(x_\\mathrm{D}, y_\\mathrm{D}) \\right \\}}\n {u_0(l_\\mathrm{D})}\n \\exp \\! \\left[i k_\\mathrm{m}(M - 1) \\cdot\n (z_{\\phi_j}-l_\\mathrm{D}) \\right]\n \\right \\}\n \\right \\}\n\n with the forward :math:`\\text{FFT}_{\\mathrm{2D}}` and inverse\n :math:`\\text{FFT}^{-1}_{\\mathrm{2D}}` 2D fast Fourier transform, the\n rotational operator :math:`D_{-\\phi_j}`, the angular distance between the\n projections :math:`\\Delta \\phi_0`, the ramp filter in Fourier space\n :math:`|k_\\mathrm{Dx}|`, and the propagation distance\n :math:`(z_{\\phi_j}-l_\\mathrm{D})`.\n\n Parameters\n ----------\n uSin: (A, Ny, Nx) ndarray\n Three-dimensional sinogram of plane recordings\n :math:`u_{\\mathrm{B}, \\phi_j}(x_\\mathrm{D}, y_\\mathrm{D})`\n divided by the incident plane wave :math:`u_0(l_\\mathrm{D})`\n measured at the detector.\n angles: (A,) ndarray\n Angular positions :math:`\\phi_j` of `uSin` in radians.\n res: float\n Vacuum wavelength of the light :math:`\\lambda` in pixels.\n nm: float\n Refractive index of the surrounding medium :math:`n_\\mathrm{m}`.\n lD: float\n Distance from center of rotation to detector plane\n :math:`l_\\mathrm{D}` in pixels.\n coords: None [(3, M) ndarray]\n Only compute the output image at these coordinates. This\n keyword is reserved for future versions and is not\n implemented yet.\n weight_angles: bool\n If `True`, weights each backpropagated projection with a factor\n proportional to the angular distance between the neighboring\n projections.\n\n .. math::\n \\Delta \\phi_0 \\longmapsto \\Delta \\phi_j =\n \\frac{\\phi_{j+1} - \\phi_{j-1}}{2}\n\n .. versionadded:: 0.1.1\n onlyreal: bool\n If `True`, only the real part of the reconstructed image\n will be returned. 
This saves computation time.\n padding: tuple of bool\n Pad the input data to the second next power of 2 before\n Fourier transforming. This reduces artifacts and speeds up\n the process for input image sizes that are not powers of 2.\n The default is padding in x and y: `padding=(True, True)`.\n For padding only in x-direction (e.g. for cylindrical\n symmetries), set `padding` to `(True, False)`. To turn off\n padding, set it to `(False, False)`.\n padfac: float\n Increase padding size of the input data. A value greater\n than one will trigger padding to the second-next power of\n two. For example, a value of 1.75 will lead to a padded\n size of 256 for an initial size of 144, whereas it will\n lead to a padded size of 512 for an initial size of 150.\n Values greater than 2 are allowed. This parameter may\n greatly increase memory usage!\n padval: float\n The value used for padding. This is important for the Rytov\n approximation, where an approximate zero in the phase might\n translate to 2\u03c0i due to the unwrapping algorithm. In that\n case, this value should be a multiple of 2\u03c0i.\n If `padval` is `None`, then the edge values are used for\n padding (see documentation of :func:`numpy.pad`).\n intp_order: int between 0 and 5\n Order of the interpolation for rotation.\n See :func:`scipy.ndimage.interpolation.rotate` for details.\n dtype: dtype object or argument for :func:`numpy.dtype`\n The data type that is used for calculations (float or double).\n Defaults to `numpy.float_`.\n num_cores: int\n The number of cores to use for parallel operations. This value\n defaults to the number of cores on the system.\n save_memory: bool\n Saves memory at the cost of longer computation time.\n\n .. versionadded:: 0.1.5\n\n copy: bool\n Copy input sinogram `uSin` for data processing. If `copy`\n is set to `False`, then `uSin` will be overridden.\n\n .. 
versionadded:: 0.1.5\n\n count, max_count: multiprocessing.Value or `None`\n Can be used to monitor the progress of the algorithm.\n Initially, the value of `max_count.value` is incremented\n by the total number of steps. At each step, the value\n of `count.value` is incremented.\n verbose: int\n Increment to increase verbosity.\n\n\n Returns\n -------\n f: ndarray of shape (Nx, Ny, Nx), complex if `onlyreal==False`\n Reconstructed object function :math:`f(\\mathbf{r})` as defined\n by the Helmholtz equation.\n :math:`f(x,z) =\n k_m^2 \\left(\\left(\\frac{n(x,z)}{n_m}\\right)^2 -1\\right)`\n\n\n See Also\n --------\n odt_to_ri: conversion of the object function :math:`f(\\mathbf{r})`\n to refractive index :math:`n(\\mathbf{r})`\n\n Notes\n -----\n Do not use the parameter `lD` in combination with the Rytov\n approximation - the propagation is not correctly described.\n Instead, numerically refocus the sinogram prior to converting\n it to Rytov data (using e.g. :func:`odtbrain.sinogram_as_rytov`)\n with a numerical focusing algorithm (available in the Python\n package :py:mod:`nrefocus`).\n \"\"\"\n A = angles.size\n\n if len(uSin.shape) != 3:\n raise ValueError(\"Input data `uSin` must have shape (A,Ny,Nx).\")\n if len(uSin) != A:\n raise ValueError(\"`len(angles)` must be equal to `len(uSin)`.\")\n if len(list(padding)) != 2:\n raise ValueError(\"`padding` must be boolean tuple of length 2!\")\n if np.array(padding).dtype is not np.dtype(bool):\n raise ValueError(\"Parameter `padding` must be boolean tuple.\")\n if coords is not None:\n raise NotImplementedError(\"Setting coordinates is not yet supported.\")\n if num_cores > ncores:\n raise ValueError(\"`num_cores` must not exceed number \"\n + \"of physical cores: {}\".format(ncores))\n\n # setup dtype\n if dtype is None:\n dtype = np.float_\n dtype = np.dtype(dtype)\n if dtype.name not in [\"float32\", \"float64\"]:\n raise ValueError(\"dtype must be float32 or float64!\")\n dtype_complex = 
np.dtype(\"complex{}\".format(\n 2 * np.int(dtype.name.strip(\"float\"))))\n # set ctype\n ct_dt_map = {np.dtype(np.float32): ctypes.c_float,\n np.dtype(np.float64): ctypes.c_double\n }\n\n # progress\n if max_count is not None:\n max_count.value += A + 2\n\n ne.set_num_threads(num_cores)\n\n uSin = np.array(uSin, copy=copy)\n\n # lengths of the input data\n lny, lnx = uSin.shape[1], uSin.shape[2]\n # The z-size of the output array must match the x-size.\n # The rotation is performed about the y-axis (lny).\n ln = lnx\n\n # We perform zero-padding before performing the Fourier transform.\n # This gets rid of artifacts due to false periodicity and also\n # speeds up Fourier transforms if the input image size is not\n # a power of 2.\n orderx = np.int(max(64., 2**np.ceil(np.log(lnx * padfac) / np.log(2))))\n ordery = np.int(max(64., 2**np.ceil(np.log(lny * padfac) / np.log(2))))\n\n if padding[0]:\n padx = orderx - lnx\n else:\n padx = 0\n if padding[1]:\n pady = ordery - lny\n else:\n pady = 0\n\n padyl = np.int(np.ceil(pady / 2))\n padyr = pady - padyl\n padxl = np.int(np.ceil(padx / 2))\n padxr = padx - padxl\n\n # zero-padded length of sinogram.\n lNx, lNy = lnx + padx, lny + pady\n lNz = ln\n\n if verbose > 0:\n print(\"......Image size (x,y): {}x{}, padded: {}x{}\".format(\n lnx, lny, lNx, lNy))\n\n # Perform weighting\n if weight_angles:\n weights = util.compute_angle_weights_1d(angles).reshape(-1, 1, 1)\n uSin *= weights\n\n # Cut-Off frequency\n # km [1/px]\n km = (2 * np.pi * nm) / res\n # Here, the notation for\n # a wave propagating to the right is:\n #\n # u0(x) = exp(ikx)\n #\n # However, in physics usually we use the other sign convention:\n #\n # u0(x) = exp(-ikx)\n #\n # In order to be consistent with programs like Meep or our\n # scattering script for a dielectric cylinder, we want to use the\n # latter sign convention.\n # This is not a big problem. 
We only need to multiply the imaginary\n # part of the scattered wave by -1.\n\n # Ask for the filter. Do not include zero (first element).\n #\n # Integrals over \u03d5\u2080 [0,2\u03c0]; kx [-k\u2098,k\u2098]\n # - double coverage factor 1/2 already included\n # - unitary angular frequency to unitary ordinary frequency\n # conversion performed in calculation of UB=FT(uB).\n #\n # f(r) = -i k\u2098 / ((2\u03c0)\u00b2 a\u2080) (prefactor)\n # * iiint d\u03d5\u2080 dkx dky (prefactor)\n # * |kx| (prefactor)\n # * exp(-i k\u2098 M lD ) (prefactor)\n # * UB\u03d5\u2080(kx) (dependent on \u03d5\u2080)\n # * exp( i (kx t\u22a5 + k\u2098 (M - 1) s\u2080) r ) (dependent on \u03d5\u2080 and r)\n # (r and s\u2080 are vectors. The last term contains a dot-product)\n #\n # k\u2098M = sqrt( k\u2098\u00b2 - kx\u00b2 - ky\u00b2 )\n # t\u22a5 = ( cos(\u03d5\u2080), ky/kx, sin(\u03d5\u2080) )\n # s\u2080 = ( -sin(\u03d5\u2080), 0 , cos(\u03d5\u2080) )\n #\n # The filter can be split into two parts\n #\n # 1) part without dependence on the z-coordinate\n #\n # -i k\u2098 / ((2\u03c0)\u00b2 a\u2080)\n # * iiint d\u03d5\u2080 dkx dky\n # * |kx|\n # * exp(-i k\u2098 M lD )\n #\n # 2) part with dependence of the z-coordinate\n #\n # exp( i (kx t\u22a5 + k\u2098 (M - 1) s\u2080) r )\n #\n # The filter (1) can be performed using the classical filter process\n # as in the backprojection algorithm.\n #\n #\n\n # Corresponding sample frequencies\n fx = np.fft.fftfreq(lNx) # 1D array\n fy = np.fft.fftfreq(lNy) # 1D array\n # kx is a 1D array.\n kx = 2 * np.pi * fx\n ky = 2 * np.pi * fy\n # Differentials for integral\n dphi0 = 2 * np.pi / A\n # We will later multiply with phi0.\n # y, x\n kx = kx.reshape(1, -1)\n ky = ky.reshape(-1, 1)\n # Low-pass filter:\n # less-than-or-equal would give us zero division error.\n filter_klp = (kx**2 + ky**2 < km**2)\n\n # Filter M so there are no nans from the root\n M = 1. 
/ km * np.sqrt((km**2 - kx**2 - ky**2) * filter_klp)\n\n prefactor = -1j * km / (2 * np.pi)\n prefactor *= dphi0\n # Also filter the prefactor, so nothing outside the required\n # low-pass contributes to the sum.\n prefactor *= np.abs(kx) * filter_klp\n # prefactor *= np.sqrt(((kx**2+ky**2)) * filter_klp )\n # new in version 0.1.4:\n # We multiply by the factor (M-1) instead of just (M)\n # to take into account that we have a scattered\n # wave that is normalized by u0.\n prefactor *= np.exp(-1j * km * (M-1) * lD)\n\n if count is not None:\n count.value += 1\n\n # filter (2) must be applied before rotation as well\n # exp( i (kx t\u22a5 + k\u2098 (M - 1) s\u2080) r )\n #\n # k\u2098M = sqrt( k\u2098\u00b2 - kx\u00b2 - ky\u00b2 )\n # t\u22a5 = ( cos(\u03d5\u2080), ky/kx, sin(\u03d5\u2080) )\n # s\u2080 = ( -sin(\u03d5\u2080), 0 , cos(\u03d5\u2080) )\n #\n # This filter is effectively an inverse Fourier transform\n #\n # exp(i kx xD) exp(i ky yD) exp(i k\u2098 (M - 1) zD )\n #\n # xD = x cos(\u03d5\u2080) + z sin(\u03d5\u2080)\n # zD = - x sin(\u03d5\u2080) + z cos(\u03d5\u2080)\n\n # Everything is in pixels\n center = lNz / 2.0\n\n z = np.linspace(-center, center, lNz, endpoint=False)\n zv = z.reshape(-1, 1, 1)\n\n # z, y, x\n Mp = M.reshape(lNy, lNx)\n\n # filter2 = np.exp(1j * zv * km * (Mp - 1))\n f2_exp_fac = 1j * km * (Mp - 1)\n if save_memory:\n # compute filter2 later\n pass\n else:\n # compute filter2 now\n filter2 = ne.evaluate(\"exp(factor * zv)\",\n local_dict={\"factor\": f2_exp_fac,\n \"zv\": zv})\n # occupies some amount of ram, but yields faster\n # computation later\n\n if count is not None:\n count.value += 1\n\n # Prepare complex output image\n if onlyreal:\n outarr = np.zeros((ln, lny, lnx), dtype=dtype)\n else:\n outarr = np.zeros((ln, lny, lnx), dtype=dtype_complex)\n\n # Create plan for FFTW\n # save memory by in-place operations\n # projection = np.fft.fft2(sino, axes=(-1,-2)) * prefactor\n # FFTW-flag is \"estimate\":\n # specifies that, 
instead of actual measurements of different\n # algorithms, a simple heuristic is used to pick a (probably\n # sub-optimal) plan quickly. With this flag, the input/output\n # arrays are not overwritten during planning.\n\n # Byte-aligned arrays\n oneslice = pyfftw.empty_aligned((lNy, lNx), dtype_complex)\n\n myfftw_plan = pyfftw.FFTW(oneslice, oneslice, threads=num_cores,\n flags=[\"FFTW_ESTIMATE\"], axes=(0, 1))\n\n # Create plan for IFFTW:\n inarr = pyfftw.empty_aligned((lNy, lNx), dtype_complex)\n # inarr[:] = (projection[0]*filter2)[0,:,:]\n # plan is \"patient\":\n # FFTW_PATIENT is like FFTW_MEASURE, but considers a wider range\n # of algorithms and often produces a \u201cmore optimal\u201d plan\n # (especially for large transforms), but at the expense of\n # several times longer planning time (especially for large\n # transforms).\n # print(inarr.flags)\n\n myifftw_plan = pyfftw.FFTW(inarr, inarr, threads=num_cores,\n axes=(0, 1),\n direction=\"FFTW_BACKWARD\",\n flags=[\"FFTW_MEASURE\"])\n\n # Setup a shared array\n shared_array = mp.RawArray(ct_dt_map[dtype], ln * lny * lnx)\n arr = np.frombuffer(shared_array, dtype=dtype).reshape(ln, lny, lnx)\n\n # Initialize the pool with the shared array\n pool4loop = mp.Pool(processes=num_cores,\n initializer=_init_worker,\n initargs=(shared_array, (ln, lny, lnx), dtype))\n\n # filtered projections in loop\n filtered_proj = np.zeros((ln, lny, lnx), dtype=dtype_complex)\n\n for aa in np.arange(A):\n if padval is None:\n oneslice[:] = np.pad(uSin[aa],\n ((padyl, padyr), (padxl, padxr)),\n mode=\"edge\")\n else:\n oneslice[:] = np.pad(uSin[aa],\n ((padyl, padyr), (padxl, padxr)),\n mode=\"linear_ramp\",\n end_values=(padval,))\n myfftw_plan.execute()\n # normalize to (lNx * lNy) for FFTW and multiply with prefactor\n oneslice *= prefactor / (lNx * lNy)\n # 14x Speedup with fftw3 compared to numpy fft and\n # memory reduction by a factor of 2!\n # ifft will be computed in-place\n\n for p in range(len(zv)):\n if 
save_memory:\n # compute filter2 here;\n # this is comparatively slower than the other case\n ne.evaluate(\"exp(factor * zvp) * projectioni\",\n local_dict={\"zvp\": zv[p],\n \"projectioni\": oneslice,\n \"factor\": f2_exp_fac},\n out=inarr)\n else:\n # use universal functions\n np.multiply(filter2[p], oneslice, out=inarr)\n myifftw_plan.execute()\n filtered_proj[p, :, :] = inarr[padyl:lny+padyl, padxl:lnx+padxl]\n\n # resize image to original size\n # The copy is necessary to prevent memory leakage.\n arr[:] = filtered_proj.real\n\n phi0 = np.rad2deg(angles[aa])\n\n if not onlyreal:\n filtered_proj_imag = filtered_proj.imag\n\n _mprotate(phi0, lny, pool4loop, intp_order)\n\n outarr.real += arr\n\n if not onlyreal:\n arr[:] = filtered_proj_imag\n _mprotate(phi0, lny, pool4loop, intp_order)\n outarr.imag += arr\n\n if count is not None:\n count.value += 1\n\n pool4loop.terminate()\n pool4loop.join()\n _cleanup_worker()\n\n return outarr", "response": "r This method backpropagates a single object in a 3D manner."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef compute_angle_weights_1d(angles):\n # copy and modulo np.pi\n # This is an array with values in [0, np.pi)\n angles = (angles.flatten() - angles.min()) % (np.pi)\n # sort the array\n sortargs = np.argsort(angles)\n sortangl = angles[sortargs]\n # compute weights for sorted angles\n da = (np.roll(sortangl, -1) - np.roll(sortangl, 1)) % (np.pi)\n weights = da/np.sum(da)*da.shape[0]\n\n unsortweights = np.zeros_like(weights)\n # Sort everything back where it belongs\n unsortweights[sortargs] = weights\n return unsortweights", "response": "Compute the weights for each angle in the\n ."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef initialize(self):\n Device.initialize(self)\n for child in iter(self.children.values()):\n child.initialize()", "response": "Calls initialize on all devices connected 
to the bus."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_byte(self, address, value):\n LOGGER.debug(\"Writing byte %s to device %s!\", bin(value), hex(address))\n return self.driver.write_byte(address, value)", "response": "Writes a byte to an unaddressed register in a device."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nread a single byte from a device.", "response": "def read_byte(self, address):\n \"\"\"Reads unaddressed byte from a device. \"\"\"\n LOGGER.debug(\"Reading byte from device %s!\", hex(address))\n return self.driver.read_byte(address)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwriting a byte value to a device's register.", "response": "def write_byte_data(self, address, register, value):\n \"\"\"Write a byte value to a device's register. \"\"\"\n LOGGER.debug(\"Writing byte data %s to register %s on device %s\",\n bin(value), hex(register), hex(address))\n return self.driver.write_byte_data(address, register, value)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite a word value to a device's register.", "response": "def write_wdata(self, address, register, value):\n \"\"\"Write a word (two bytes) value to a device's register. 
\"\"\"\n warnings.warn(\"write_wdata() is deprecated and will be removed in future versions; replace with write_word_data()\", DeprecationWarning)\n LOGGER.debug(\"Writing word data %s to register %s on device %s\",\n bin(value), hex(register), hex(address))\n return self.driver.write_word_data(address, register, value)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nraising an appropriate error for a given response.", "response": "async def _raise_for_status(response):\n \"\"\"Raise an appropriate error for a given response.\n\n Arguments:\n response (:py:class:`aiohttp.ClientResponse`): The API response.\n\n Raises:\n :py:class:`aiohttp.web_exceptions.HTTPException`: The appropriate\n error for the response's status.\n\n This function was taken from the aslack project and modified. The original\n copyright notice:\n\n Copyright (c) 2015, Jonathan Sharpe\n\n Permission to use, copy, modify, and/or distribute this software for any\n purpose with or without fee is hereby granted, provided that the above\n copyright notice and this permission notice appear in all copies.\n\n THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n \"\"\"\n\n try:\n response.raise_for_status()\n except aiohttp.ClientResponseError as exc:\n reason = response.reason\n\n spacetrack_error_msg = None\n\n try:\n json = await response.json()\n if isinstance(json, Mapping):\n spacetrack_error_msg = json['error']\n except (ValueError, KeyError, aiohttp.ClientResponseError):\n pass\n\n if not spacetrack_error_msg:\n spacetrack_error_msg = await response.text()\n\n if spacetrack_error_msg:\n reason += '\\nSpace-Track response:\\n' + spacetrack_error_msg\n\n payload = dict(\n code=response.status,\n message=reason,\n headers=response.headers,\n )\n\n # history attribute is only aiohttp >= 2.1\n try:\n payload['history'] = exc.history\n except AttributeError:\n pass\n\n raise aiohttp.ClientResponseError(**payload)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\nasync def generic_request(self, class_, iter_lines=False, iter_content=False,\n controller=None, parse_types=False, **kwargs):\n \"\"\"Generic Space-Track query coroutine.\n\n The request class methods use this method internally; the public\n API is as follows:\n\n .. code-block:: python\n\n st.tle_publish(*args, **st)\n st.basicspacedata.tle_publish(*args, **st)\n st.file(*args, **st)\n st.fileshare.file(*args, **st)\n st.spephemeris.file(*args, **st)\n\n They resolve to the following calls respectively:\n\n .. 
code-block:: python\n\n st.generic_request('tle_publish', *args, **st)\n st.generic_request('tle_publish', *args, controller='basicspacedata', **st)\n st.generic_request('file', *args, **st)\n st.generic_request('file', *args, controller='fileshare', **st)\n st.generic_request('file', *args, controller='spephemeris', **st)\n\n Parameters:\n class_: Space-Track request class name\n iter_lines: Yield result line by line\n iter_content: Yield result in 100 KiB chunks.\n controller: Optionally specify request controller to use.\n parse_types: Parse string values in response according to type given\n in predicate information, e.g. ``'2017-01-01'`` ->\n ``datetime.date(2017, 1, 1)``.\n **kwargs: These keywords must match the predicate fields on\n Space-Track. You may check valid keywords with the following\n snippet:\n\n .. code-block:: python\n\n spacetrack = AsyncSpaceTrackClient(...)\n await spacetrack.tle.get_predicates()\n # or\n await spacetrack.get_predicates('tle')\n\n See :func:`~spacetrack.operators._stringify_predicate_value` for\n which Python objects are converted appropriately.\n\n Yields:\n Lines\u2014stripped of newline characters\u2014if ``iter_lines=True``\n\n Yields:\n 100 KiB chunks if ``iter_content=True``\n\n Returns:\n Parsed JSON object, unless ``format`` keyword argument is passed.\n\n .. warning::\n\n Passing ``format='json'`` will return the JSON **unparsed**. 
Do\n not set ``format`` if you want the parsed JSON object returned!\n \"\"\"\n if iter_lines and iter_content:\n raise ValueError('iter_lines and iter_content cannot both be True')\n\n if 'format' in kwargs and parse_types:\n raise ValueError('parse_types can only be used if format is unset.')\n\n if controller is None:\n controller = self._find_controller(class_)\n else:\n classes = self.request_controllers.get(controller, None)\n if classes is None:\n raise ValueError(\n 'Unknown request controller {!r}'.format(controller))\n if class_ not in classes:\n raise ValueError(\n 'Unknown request class {!r} for controller {!r}'\n .format(class_, controller))\n\n # Decode unicode unless class == download, including conversion of\n # CRLF newlines to LF.\n decode = (class_ != 'download')\n if not decode and iter_lines:\n error = (\n 'iter_lines disabled for binary data, since CRLF newlines '\n 'split over chunk boundaries would yield extra blank lines. '\n 'Use iter_content=True instead.')\n raise ValueError(error)\n\n await self.authenticate()\n\n url = ('{0}{1}/query/class/{2}'\n .format(self.base_url, controller, class_))\n\n offline_check = (class_, controller) in self.offline_predicates\n valid_fields = {p.name for p in self.rest_predicates}\n predicates = None\n\n if not offline_check:\n predicates = await self.get_predicates(class_)\n predicate_fields = {p.name for p in predicates}\n valid_fields = predicate_fields | {p.name for p in self.rest_predicates}\n else:\n valid_fields |= self.offline_predicates[(class_, controller)]\n\n for key, value in kwargs.items():\n if key not in valid_fields:\n raise TypeError(\n \"'{class_}' got an unexpected argument '{key}'\"\n .format(class_=class_, key=key))\n\n value = _stringify_predicate_value(value)\n\n url += '/{key}/{value}'.format(key=key, value=value)\n\n logger.debug(url)\n\n resp = await self._ratelimited_get(url)\n\n await _raise_for_status(resp)\n\n if iter_lines:\n return _AsyncLineIterator(resp, 
decode_unicode=decode)\n elif iter_content:\n return _AsyncChunkIterator(resp, decode_unicode=decode)\n else:\n # If format is specified, return that format unparsed. Otherwise,\n # parse the default JSON response.\n if 'format' in kwargs:\n if decode:\n # Replace CRLF newlines with LF, Python will handle platform\n # specific newlines if written to file.\n data = await resp.text()\n data = data.replace('\\r', '')\n else:\n data = await resp.read()\n return data\n else:\n data = await resp.json()\n\n if predicates is None or not parse_types:\n return data\n else:\n return self._parse_types(data, predicates)", "response": "Generic Space-Track query coroutine.\n\n The request class methods use this method internally; the public\n API is as follows:\n\n .. code-block:: python\n\n st.tle_publish(*args, **st)\n st.basicspacedata.tle_publish(*args, **st)\n st.file(*args, **st)\n st.fileshare.file(*args, **st)\n st.spephemeris.file(*args, **st)\n\n They resolve to the following calls respectively:\n\n .. code-block:: python\n\n st.generic_request('tle_publish', *args, **st)\n st.generic_request('tle_publish', *args, controller='basicspacedata', **st)\n st.generic_request('file', *args, **st)\n st.generic_request('file', *args, controller='fileshare', **st)\n st.generic_request('file', *args, controller='spephemeris', **st)\n\n Parameters:\n class_: Space-Track request class name\n iter_lines: Yield result line by line\n iter_content: Yield result in 100 KiB chunks.\n controller: Optionally specify request controller to use.\n parse_types: Parse string values in response according to type given\n in predicate information, e.g. ``'2017-01-01'`` ->\n ``datetime.date(2017, 1, 1)``.\n **kwargs: These keywords must match the predicate fields on\n Space-Track. You may check valid keywords with the following\n snippet:\n\n .. 
code-block:: python\n\n spacetrack = AsyncSpaceTrackClient(...)\n await spacetrack.tle.get_predicates()\n # or\n await spacetrack.get_predicates('tle')\n\n See :func:`~spacetrack.operators._stringify_predicate_value` for\n which Python objects are converted appropriately.\n\n Yields:\n Lines\u2014stripped of newline characters\u2014if ``iter_lines=True``\n\n Yields:\n 100 KiB chunks if ``iter_content=True``\n\n Returns:\n Parsed JSON object, unless ``format`` keyword argument is passed.\n\n .. warning::\n\n Passing ``format='json'`` will return the JSON **unparsed**. Do\n not set ``format`` if you want the parsed JSON object returned!"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\nasync def _download_predicate_data(self, class_, controller):\n await self.authenticate()\n\n url = ('{0}{1}/modeldef/class/{2}'\n .format(self.base_url, controller, class_))\n\n resp = await self._ratelimited_get(url)\n\n await _raise_for_status(resp)\n\n resp_json = await resp.json()\n return resp_json['data']", "response": "Get raw predicate information for given request class and cache for\n subsequent calls."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_numeric_value(string_value):\n num_chars = ['.', '+', '-']\n number = ''\n for c in string_value:\n if c.isdigit() or c in num_chars:\n number += c\n return number", "response": "parses string_value and returns only numeric - like part\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nruns a single analysis.", "response": "def run(analysis, path=None, name=None, info=None, **kwargs):\n \"\"\"Run a single analysis.\n\n :param Analysis analysis: Analysis class to run.\n :param str path: Path of analysis. 
Can be `__file__`.\n :param str name: Name of the analysis.\n :param dict info: Optional entries are ``version``, ``title``,\n ``readme``, ...\n :param dict static: Map[url regex, root-folder] to serve static content.\n \"\"\"\n kwargs.update({\n 'analysis': analysis,\n 'path': path,\n 'name': name,\n 'info': info,\n })\n main(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert Python objects to Space - Track compatible strings", "response": "def _stringify_predicate_value(value):\n \"\"\"Convert Python objects to Space-Track compatible strings\n\n - Booleans (``True`` -> ``'true'``)\n - Sequences (``[25544, 34602]`` -> ``'25544,34602'``)\n - dates/datetimes (``date(2015, 12, 23)`` -> ``'2015-12-23'``)\n - ``None`` -> ``'null-val'``\n \"\"\"\n if isinstance(value, bool):\n return str(value).lower()\n elif isinstance(value, Sequence) and not isinstance(value, six.string_types):\n return ','.join(_stringify_predicate_value(x) for x in value)\n elif isinstance(value, datetime.datetime):\n return value.isoformat(sep=' ')\n elif isinstance(value, datetime.date):\n return value.isoformat()\n elif value is None:\n return 'null-val'\n else:\n return str(value)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a human - readable string representation of both positional and keyword arguments passed to the function.", "response": "def args_repr(*args, **kwargs):\n \"\"\"\n Returns human-readable string representation of both positional and\n keyword arguments passed to the function.\n\n This function uses the built-in :func:`repr()` function to convert\n individual arguments to string.\n\n >>> args_repr(\"a\", (1, 2), some_keyword = list(\"abc\"))\n \"'a', (1, 2), some_keyword = ['a', 'b', 'c']\"\n \"\"\"\n items = [repr(a) for a in args]\n items += [\"%s = %r\" % (k, v) for k, v in iter(kwargs.items())]\n return \", \".join(items)"} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate a brief explanation for the following Python 3 code\ndef obj_repr(obj, *args, **kwargs):\n cls_name = type(obj).__name__\n return \"%s(%s)\" % (cls_name, args_repr(*args, **kwargs), )", "response": "Returns a human - readable string representation of an object given that it has been created by calling constructor with the specified positional and keyword arguments."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndecoding a binary object to str and filter out non - printable characters", "response": "def decode_filter(text, encoding='utf-8'):\n \"\"\"\n decode a binary object to str and filter out non-printable characters\n \n Parameters\n ----------\n text: bytes\n the binary object to be decoded\n encoding: str\n the encoding to be used\n\n Returns\n -------\n str\n the decoded and filtered string\n \"\"\"\n if text is not None:\n text = text.decode(encoding, errors='ignore')\n printable = set(string.printable)\n text = filter(lambda x: x in printable, text)\n return ''.join(list(text))\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmerging two dictionaries together", "response": "def dictmerge(x, y):\n \"\"\"\n merge two dictionaries\n \"\"\"\n z = x.copy()\n z.update(y)\n return z"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dissolve(inlist):\n out = []\n for i in inlist:\n i = list(i) if isinstance(i, tuple) else i\n out.extend(dissolve(i)) if isinstance(i, list) else out.append(i)\n return out", "response": "Returns a list of lists with sub - lists or tuples that are not in list"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef finder(target, matchlist, foldermode=0, regex=False, recursive=True):\n if foldermode not in [0, 1, 2]:\n raise ValueError(\"'foldermode' must be either 0, 1 or 2\")\n \n # match patterns\n if isinstance(target, 
str):\n \n pattern = r'|'.join(matchlist if regex else [fnmatch.translate(x) for x in matchlist])\n \n if os.path.isdir(target):\n if recursive:\n out = dissolve([[os.path.join(root, x)\n for x in dirs + files\n if re.search(pattern, x)]\n for root, dirs, files in os.walk(target)])\n else:\n out = [os.path.join(target, x)\n for x in os.listdir(target)\n if re.search(pattern, x)]\n \n if foldermode == 0:\n out = [x for x in out if not os.path.isdir(x)]\n if foldermode == 2:\n out = [x for x in out if os.path.isdir(x)]\n \n return sorted(out)\n \n elif os.path.isfile(target):\n if zf.is_zipfile(target):\n with zf.ZipFile(target, 'r') as zip:\n out = [os.path.join(target, name)\n for name in zip.namelist()\n if re.search(pattern, os.path.basename(name.strip('/')))]\n \n if foldermode == 0:\n out = [x for x in out if not x.endswith('/')]\n elif foldermode == 1:\n out = [x.strip('/') for x in out]\n elif foldermode == 2:\n out = [x.strip('/') for x in out if x.endswith('/')]\n \n return sorted(out)\n \n elif tf.is_tarfile(target):\n tar = tf.open(target)\n out = [name for name in tar.getnames()\n if re.search(pattern, os.path.basename(name.strip('/')))]\n \n if foldermode == 0:\n out = [x for x in out if not tar.getmember(x).isdir()]\n elif foldermode == 2:\n out = [x for x in out if tar.getmember(x).isdir()]\n \n tar.close()\n \n out = [os.path.join(target, x) for x in out]\n \n return sorted(out)\n \n else:\n raise TypeError(\"if parameter 'target' is a file, \"\n \"it must be a zip or tar archive:\\n {}\"\n .format(target))\n else:\n raise TypeError(\"if parameter 'target' is of type str, \"\n \"it must be a directory or a file:\\n {}\"\n .format(target))\n \n elif isinstance(target, list):\n groups = [finder(x, matchlist, foldermode, regex, recursive) for x in target]\n return list(itertools.chain(*groups))\n \n else:\n raise TypeError(\"parameter 'target' must be of type str or list\")", "response": "function for finding files and folders in a directory and a list 
of directories and subdirectories."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef multicore(function, cores, multiargs, **singleargs):\n tblib.pickling_support.install()\n \n # compare the function arguments with the multi and single arguments and raise errors if mismatches occur\n if sys.version_info >= (3, 0):\n check = inspect.getfullargspec(function)\n varkw = check.varkw\n else:\n check = inspect.getargspec(function)\n varkw = check.keywords\n \n if not check.varargs and not varkw:\n multiargs_check = [x for x in multiargs if x not in check.args]\n singleargs_check = [x for x in singleargs if x not in check.args]\n if len(multiargs_check) > 0:\n raise AttributeError('incompatible multi arguments: {0}'.format(', '.join(multiargs_check)))\n if len(singleargs_check) > 0:\n raise AttributeError('incompatible single arguments: {0}'.format(', '.join(singleargs_check)))\n \n # compare the list lengths of the multi arguments and raise errors if they are of different length\n arglengths = list(set([len(multiargs[x]) for x in multiargs]))\n if len(arglengths) > 1:\n raise AttributeError('multi argument lists of different length')\n \n # prevent starting more threads than necessary\n cores = cores if arglengths[0] >= cores else arglengths[0]\n \n # create a list of dictionaries each containing the arguments for individual\n # function calls to be passed to the multicore processes\n processlist = [dictmerge(dict([(arg, multiargs[arg][i]) for arg in multiargs]), singleargs)\n for i in range(len(multiargs[list(multiargs.keys())[0]]))]\n \n if platform.system() == 'Windows':\n \n # in Windows parallel processing needs to strictly be in a \"if __name__ == '__main__':\" wrapper\n # it was thus necessary to outsource this to a different script and try to serialize all input for sharing objects\n # https://stackoverflow.com/questions/38236211/why-multiprocessing-process-behave-differently-on-windows-and-linux-for-global-o\n \n # 
a helper script to perform the parallel processing\n script = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'multicore_helper.py')\n \n # a temporary file to write the serialized function variables\n tmpfile = os.path.join(tempfile.gettempdir(), 'spatialist_dump')\n \n # check if everything can be serialized\n if not dill.pickles([function, cores, processlist]):\n raise RuntimeError('cannot fully serialize function arguments;\\n'\n ' see https://github.com/uqfoundation/dill for supported types')\n \n # write the serialized variables\n with open(tmpfile, 'wb') as tmp:\n dill.dump([function, cores, processlist], tmp, byref=False)\n \n # run the helper script\n proc = sp.Popen([sys.executable, script], stdin=sp.PIPE, stderr=sp.PIPE)\n out, err = proc.communicate()\n if proc.returncode != 0:\n raise RuntimeError(err.decode())\n \n # retrieve the serialized output of the processing which was written to the temporary file by the helper script\n with open(tmpfile, 'rb') as tmp:\n result = dill.load(tmp)\n return result\n else:\n results = None\n \n def wrapper(**kwargs):\n try:\n return function(**kwargs)\n except Exception as e:\n return ExceptionWrapper(e)\n \n # block printing of the executed function\n with HiddenPrints():\n # start pool of processes and do the work\n try:\n pool = mp.Pool(processes=cores)\n except NameError:\n raise ImportError(\"package 'pathos' could not be imported\")\n results = pool.imap(lambda x: wrapper(**x), processlist)\n pool.close()\n pool.join()\n \n i = 0\n out = []\n for item in results:\n if isinstance(item, ExceptionWrapper):\n item.ee = type(item.ee)(str(item.ee) +\n \"\\n(called function '{}' with args {})\"\n .format(function.__name__, processlist[i]))\n raise (item.re_raise())\n out.append(item)\n i += 1\n \n # evaluate the return of the processing function;\n # if any value is not None then the whole list of results is returned\n eval = [x for x in out if x is not None]\n if len(eval) == 0:\n return None\n else:\n 
return out", "response": "Wrapper for the multicore function function."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the smallest possible data type for a string or list of strings Parameters ---------- x: str or list a string to be parsed Returns ------- int, float or str the parsing result Examples -------- >>> isinstance(parse_literal('1.5'), float) True >>> isinstance(parse_literal('1'), int) True >>> isinstance(parse_literal('foobar'), str) True", "response": "def parse_literal(x):\n \"\"\"\n return the smallest possible data type for a string or list of strings\n\n Parameters\n ----------\n x: str or list\n a string to be parsed\n\n Returns\n -------\n int, float or str\n the parsing result\n \n Examples\n --------\n >>> isinstance(parse_literal('1.5'), float)\n True\n \n >>> isinstance(parse_literal('1'), int)\n True\n \n >>> isinstance(parse_literal('foobar'), str)\n True\n \"\"\"\n if isinstance(x, list):\n return [parse_literal(y) for y in x]\n elif isinstance(x, (bytes, str)):\n try:\n return int(x)\n except ValueError:\n try:\n return float(x)\n except ValueError:\n return x\n else:\n raise TypeError('input must be a string or a list of strings')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef rescale(inlist, newrange=(0, 1)):\n OldMax = max(inlist)\n OldMin = min(inlist)\n \n if OldMin == OldMax:\n raise RuntimeError('list contains of only one unique value')\n \n OldRange = OldMax - OldMin\n NewRange = newrange[1] - newrange[0]\n result = [(((float(x) - OldMin) * NewRange) / OldRange) + newrange[0] for x in inlist]\n return result", "response": "rescale the values in a list between the new minimum and maximum"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwrapping for subprocess execution including logfile writing and command prompt piping", "response": "def run(cmd, outdir=None, logfile=None, inlist=None, void=True, 
errorpass=False, env=None):\n \"\"\"\n | wrapper for subprocess execution including logfile writing and command prompt piping\n | this is a convenience wrapper around the :mod:`subprocess` module and calls\n its class :class:`~subprocess.Popen` internally.\n \n Parameters\n ----------\n cmd: list\n the command arguments\n outdir: str\n the directory to execute the command in\n logfile: str\n a file to write stdout to\n inlist: list\n a list of arguments passed to stdin, i.e. arguments passed to interactive input of the program\n void: bool\n return stdout and stderr?\n errorpass: bool\n if False, a :class:`subprocess.CalledProcessError` is raised if the command fails\n env: dict or None\n the environment to be passed to the subprocess\n\n Returns\n -------\n None or Tuple\n a tuple of (stdout, stderr) if `void` is False otherwise None\n \"\"\"\n cmd = [str(x) for x in dissolve(cmd)]\n if outdir is None:\n outdir = os.getcwd()\n log = sp.PIPE if logfile is None else open(logfile, 'a')\n proc = sp.Popen(cmd, stdin=sp.PIPE, stdout=log, stderr=sp.PIPE, cwd=outdir, env=env)\n instream = None if inlist is None \\\n else ''.join([str(x) + '\\n' for x in inlist]).encode('utf-8')\n out, err = proc.communicate(instream)\n out = decode_filter(out)\n err = decode_filter(err)\n if not errorpass and proc.returncode != 0:\n raise sp.CalledProcessError(proc.returncode, cmd, err)\n # add line for separating log entries of repeated function calls\n if logfile:\n log.write('#####################################################################\\n')\n log.close()\n if not void:\n return out, err"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nparse a url querydict", "response": "def urlQueryParser(url, querydict):\n \"\"\"\n parse a url query\n \"\"\"\n address_parse = urlparse(url)\n return urlunparse(address_parse._replace(query=urlencode(querydict)))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following 
Python 3 function\ndef push(self, x):\n if isinstance(x, list):\n for item in x:\n self.stack.append(item)\n else:\n self.stack.append(x)", "response": "push a new item to the stack"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef pop(self):\n if not self.empty():\n val = self.stack[-1]\n del self.stack[-1]\n return val", "response": "pop the last element from the stack and return it"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns measured value in miligauss", "response": "def axes(self, offset = False):\n \"\"\"returns measured value in miligauss\"\"\"\n reg, self._scale = self.SCALES[self._gauss]\n x = self.bus.read_int16_data(self.address, self.HMC5883L_DXRA) \n if x == -4096: x = OVERFLOW\n y = self.bus.read_int16_data(self.address, self.HMC5883L_DYRA)\n if y == -4096: y = OVERFLOW\n z = self.bus.read_int16_data(self.address, self.HMC5883L_DZRA)\n if z == -4096: z = OVERFLOW\n\n x*=self._scale\n y*=self._scale\n z*=self._scale\n\n if offset: (x, y, z) = self.__offset((x,y,z))\n\n return (x, y, z)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _Pairs(data):\n keys = sorted(data)\n return [{'@key': k, '@value': data[k]} for k in keys]", "response": "dictionary - > list of pairs"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts a timestamp in seconds to a string based on the format string. Returns GM time.", "response": "def _StrftimeGm(value, unused_context, args):\n \"\"\"Convert a timestamp in seconds to a string based on the format string.\n\n Returns GM time.\n \"\"\"\n time_tuple = time.gmtime(value)\n return _StrftimeHelper(args, time_tuple)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting a timestamp in seconds to a string based on the format string. 
Returns local time.", "response": "def _StrftimeLocal(value, unused_context, args):\n \"\"\"Convert a timestamp in seconds to a string based on the format string.\n\n Returns local time.\n \"\"\"\n time_tuple = time.localtime(value)\n return _StrftimeHelper(args, time_tuple)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _TemplateExists(unused_value, context, args):\n try:\n name = args[0]\n except IndexError:\n raise EvaluationError('The \"template\" predicate requires an argument.')\n return context.HasTemplate(name)", "response": "Returns whether the given name is in the current Template s template group."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef SplitMeta(meta):\n n = len(meta)\n if n % 2 == 1:\n raise ConfigurationError(\n '%r has an odd number of metacharacters' % meta)\n return meta[:n // 2], meta[n // 2:]", "response": "Split and validate metacharacters.\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef MakeTokenRegex(meta_left, meta_right):\n key = meta_left, meta_right\n if key not in _token_re_cache:\n # - Need () grouping for re.split\n # - The first character must be a non-space. This allows us to ignore\n # literals like function() { return 1; } when\n # - There must be at least one (non-space) character inside {}\n _token_re_cache[key] = re.compile(\n r'(' +\n re.escape(meta_left) +\n r'\\S.*?' 
+\n re.escape(meta_right) +\n r')')\n return _token_re_cache[key]", "response": "Returns a compiled regular expression for tokenization."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _MatchDirective(token):\n # Tokens below must start with '.'\n if token.startswith('.'):\n token = token[1:]\n else:\n return None, None\n \n if token == 'end':\n return END_TOKEN, None\n \n if token == 'alternates with':\n return ALTERNATES_TOKEN, token\n \n if token.startswith('or'):\n if token.strip() == 'or':\n return OR_TOKEN, None\n else:\n pred_str = token[2:].strip()\n return OR_TOKEN, pred_str\n \n match = _SECTION_RE.match(token)\n if match:\n repeated, section_name = match.groups()\n if repeated:\n return REPEATED_SECTION_TOKEN, section_name\n else:\n return SECTION_TOKEN, section_name\n \n if token.startswith('template '):\n return SUBST_TEMPLATE_TOKEN, token[9:].strip()\n if token.startswith('define '):\n return DEF_TOKEN, token[7:].strip()\n \n if token.startswith('if '):\n return IF_TOKEN, token[3:].strip()\n if token.endswith('?'):\n return PREDICATE_TOKEN, token\n \n return None, None", "response": "Helper function for matching certain directives."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nyielding tokens which are 2 - tuples ( TOKEN_TYPE token_string", "response": "def _Tokenize(template_str, meta_left, meta_right, whitespace):\n \"\"\"Yields tokens, which are 2-tuples (TOKEN_TYPE, token_string).\"\"\"\n \n trimlen = len(meta_left)\n token_re = MakeTokenRegex(meta_left, meta_right)\n do_strip = (whitespace == 'strip-line') # Do this outside loop\n do_strip_part = False\n \n for line in template_str.splitlines(True): # retain newlines\n if do_strip or do_strip_part:\n line = line.strip()\n \n tokens = token_re.split(line)\n \n # Check for a special case first. 
If a comment or \"block\" directive is on a\n # line by itself (with only space surrounding it), then the space is\n # omitted. For simplicity, we don't handle the case where we have 2\n # directives, say '{.end} # {#comment}' on a line.\n if len(tokens) == 3:\n # ''.isspace() == False, so work around that\n if (tokens[0].isspace() or not tokens[0]) and \\\n (tokens[2].isspace() or not tokens[2]):\n token = tokens[1][trimlen: -trimlen]\n \n # Check the ones that begin with ## before #\n if token == COMMENT_BEGIN:\n yield COMMENT_BEGIN_TOKEN, None\n continue\n if token == COMMENT_END:\n yield COMMENT_END_TOKEN, None\n continue\n if token == OPTION_STRIP_LINE:\n do_strip_part = True\n continue\n if token == OPTION_END:\n do_strip_part = False\n continue\n \n if token.startswith('#'):\n continue # The whole line is omitted\n \n token_type, token = _MatchDirective(token)\n if token_type is not None:\n yield token_type, token # Only yield the token, not space\n continue\n \n # The line isn't special; process it normally.\n for i, token in enumerate(tokens):\n if i % 2 == 0:\n yield LITERAL_TOKEN, token\n \n else: # It's a \"directive\" in metachracters\n assert token.startswith(meta_left), repr(token)\n assert token.endswith(meta_right), repr(token)\n token = token[trimlen: -trimlen]\n \n # Check the ones that begin with ## before #\n if token == COMMENT_BEGIN:\n yield COMMENT_BEGIN_TOKEN, None\n continue\n if token == COMMENT_END:\n yield COMMENT_END_TOKEN, None\n continue\n if token == OPTION_STRIP_LINE:\n do_strip_part = True\n continue\n if token == OPTION_END:\n do_strip_part = False\n continue\n \n # A single-line comment\n if token.startswith('#'):\n continue\n \n if token.startswith('.'):\n literal = {\n '.meta-left': meta_left,\n '.meta-right': meta_right,\n '.space': ' ',\n '.tab': '\\t',\n '.newline': '\\n',\n }.get(token)\n \n if literal is not None:\n yield META_LITERAL_TOKEN, literal\n continue\n \n token_type, token = _MatchDirective(token)\n if 
token_type is not None:\n yield token_type, token\n \n else: # Now we know the directive is a substitution.\n yield SUBST_TOKEN, token"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncompiles the template string into a new base object.", "response": "def _CompileTemplate(\n template_str, builder, meta='{}', format_char='|', default_formatter='str',\n whitespace='smart'):\n \"\"\"Compile the template string, calling methods on the 'program builder'.\n\n Args:\n template_str: The template string. It should not have any compilation\n options in the header -- those are parsed by FromString/FromFile\n\n builder: The interface of _ProgramBuilder isn't fixed. Use at your own\n risk.\n\n meta: The metacharacters to use, e.g. '{}', '[]'.\n\n default_formatter: The formatter to use for substitutions that are missing a\n formatter. The 'str' formatter the \"default default\" -- it just tries\n to convert the context value to a string in some unspecified manner.\n\n whitespace: 'smart' or 'strip-line'. In smart mode, if a directive is alone\n on a line, with only whitespace on either side, then the whitespace is\n removed. In 'strip-line' mode, every line is stripped of its\n leading and trailing whitespace.\n\n Returns:\n The compiled program (obtained from the builder)\n\n Raises:\n The various subclasses of CompilationError. For example, if\n default_formatter=None, and a variable is missing a formatter, then\n MissingFormatter is raised.\n\n This function is public so it can be used by other tools, e.g. a syntax\n checking tool run before submitting a template to source control.\n \"\"\"\n meta_left, meta_right = SplitMeta(meta)\n \n # : is meant to look like Python 3000 formatting {foo:.3f}. According to\n # PEP 3101, that's also what .NET uses.\n # | is more readable, but, more importantly, reminiscent of pipes, which is\n # useful for multiple formatters, e.g. 
{name|js-string|html}\n if format_char not in (':', '|'):\n raise ConfigurationError(\n 'Only format characters : and | are accepted (got %r)' % format_char)\n \n if whitespace not in ('smart', 'strip-line'):\n raise ConfigurationError('Invalid whitespace mode %r' % whitespace)\n \n # If we go to -1, then we got too many {end}. If end at 1, then we're missing\n # an {end}.\n balance_counter = 0\n comment_counter = 0 # ditto for ##BEGIN/##END\n \n has_defines = False\n \n for token_type, token in _Tokenize(template_str, meta_left, meta_right,\n whitespace):\n if token_type == COMMENT_BEGIN_TOKEN:\n comment_counter += 1\n continue\n if token_type == COMMENT_END_TOKEN:\n comment_counter -= 1\n if comment_counter < 0:\n raise CompilationError('Got too many ##END markers')\n continue\n # Don't process any tokens\n if comment_counter > 0:\n continue\n \n if token_type in (LITERAL_TOKEN, META_LITERAL_TOKEN):\n if token:\n builder.Append(token)\n continue\n \n if token_type in (SECTION_TOKEN, REPEATED_SECTION_TOKEN, DEF_TOKEN):\n parts = [p.strip() for p in token.split(format_char)]\n if len(parts) == 1:\n name = parts[0]\n formatters = []\n else:\n name = parts[0]\n formatters = parts[1:]\n builder.NewSection(token_type, name, formatters)\n balance_counter += 1\n if token_type == DEF_TOKEN:\n has_defines = True\n continue\n \n if token_type == PREDICATE_TOKEN:\n # {.attr?} lookups\n builder.NewPredicateSection(token, test_attr=True)\n balance_counter += 1\n continue\n \n if token_type == IF_TOKEN:\n builder.NewPredicateSection(token, test_attr=False)\n balance_counter += 1\n continue\n \n if token_type == OR_TOKEN:\n builder.NewOrClause(token)\n continue\n \n if token_type == ALTERNATES_TOKEN:\n builder.AlternatesWith()\n continue\n \n if token_type == END_TOKEN:\n balance_counter -= 1\n if balance_counter < 0:\n # TODO: Show some context for errors\n raise TemplateSyntaxError(\n 'Got too many %send%s statements. 
You may have mistyped an '\n \"earlier 'section' or 'repeated section' directive.\"\n % (meta_left, meta_right))\n builder.EndSection()\n continue\n \n if token_type == SUBST_TOKEN:\n parts = [p.strip() for p in token.split(format_char)]\n if len(parts) == 1:\n if default_formatter is None:\n raise MissingFormatter('This template requires explicit formatters.')\n # If no formatter is specified, the default is the 'str' formatter,\n # which the user can define however they desire.\n name = token\n formatters = [default_formatter]\n else:\n name = parts[0]\n formatters = parts[1:]\n \n builder.AppendSubstitution(name, formatters)\n continue\n \n if token_type == SUBST_TEMPLATE_TOKEN:\n # no formatters\n builder.AppendTemplateSubstitution(token)\n continue\n \n if balance_counter != 0:\n raise TemplateSyntaxError('Got too few %send%s statements' %\n (meta_left, meta_right))\n if comment_counter != 0:\n raise CompilationError('Got %d more {##BEGIN}s than {##END}s' % comment_counter)\n \n return builder.Root(), has_defines"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlikes FromFile but takes a string.", "response": "def FromString(s, **kwargs):\n \"\"\"Like FromFile, but takes a string.\"\"\"\n \n f = StringIO.StringIO(s)\n return FromFile(f, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nparse a template from a file.", "response": "def FromFile(f, more_formatters=lambda x: None, more_predicates=lambda x: None,\n _constructor=None):\n \"\"\"Parse a template from a file, using a simple file format.\n\n This is useful when you want to include template options in a data file,\n rather than in the source code.\n\n The format is similar to HTTP or E-mail headers. The first lines of the file\n can specify template options, such as the metacharacters to use. 
One blank\n line must separate the options from the template body.\n\n Example:\n\n default-formatter: none\n meta: {{}}\n format-char: :\n \n Template goes here: {{variable:html}}\n\n Args:\n f: A file handle to read from. Caller is responsible for opening and\n closing it.\n \"\"\"\n _constructor = _constructor or Template\n \n options = {}\n \n # Parse lines until the first one that doesn't look like an option\n while 1:\n line = f.readline()\n match = _OPTION_RE.match(line)\n if match:\n name, value = match.group(1), match.group(2)\n \n # Accept something like 'Default-Formatter: raw'. This syntax is like\n # HTTP/E-mail headers.\n name = name.lower()\n # In Python 2.4, kwargs must be plain strings\n name = name.encode('utf-8')\n \n if name in _OPTION_NAMES:\n name = name.replace('-', '_')\n value = value.strip()\n if name == 'default_formatter' and value.lower() == 'none':\n value = None\n options[name] = value\n else:\n break\n else:\n break\n \n if options:\n if line.strip():\n raise CompilationError(\n 'Must be one blank line between template options and body (got %r)'\n % line)\n body = f.read()\n else:\n # There were no options, so no blank line is necessary.\n body = line + f.read()\n \n return _constructor(body,\n more_formatters=more_formatters,\n more_predicates=more_predicates,\n **options)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _MakeGroupFromRootSection(root_section, undefined_str):\n group = {}\n for statement in root_section.Statements():\n if isinstance(statement, six.string_types):\n continue\n func, args = statement\n # here the function acts as ID for the block type\n if func is _DoDef and isinstance(args, _Section):\n section = args\n # Construct a Template instance from a this _Section subtree\n t = Template._FromSection(section, group, undefined_str)\n group[section.section_name] = t\n return group", "response": "Construct a dictinary template name - > Template instance 
from a parse tree."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef JoinTokens(tokens):\n try:\n return ''.join(tokens)\n except UnicodeDecodeError:\n # This can still raise UnicodeDecodeError if that data isn't utf-8.\n return ''.join(t.decode('utf-8') for t in tokens)", "response": "Join tokens into a single string."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nexecutes the repeated section.", "response": "def _DoRepeatedSection(args, context, callback, trace):\n \"\"\"{.repeated section foo}\"\"\"\n \n block = args\n \n items = context.PushSection(block.section_name, block.pre_formatters)\n if items:\n if not isinstance(items, list):\n raise EvaluationError('Expected a list; got %s' % type(items))\n \n last_index = len(items) - 1\n statements = block.Statements()\n alt_statements = block.Statements('alternates with')\n try:\n i = 0\n while True:\n context.Next()\n # Execute the statements in the block for every item in the list.\n # Execute the alternate block on every iteration except the last. Each\n # item could be an atom (string, integer, etc.) 
or a dictionary.\n _Execute(statements, context, callback, trace)\n if i != last_index:\n _Execute(alt_statements, context, callback, trace)\n i += 1\n except StopIteration:\n pass\n \n else:\n _Execute(block.Statements('or'), context, callback, trace)\n \n context.Pop()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nexecute a section statement.", "response": "def _DoSection(args, context, callback, trace):\n \"\"\"{.section foo}\"\"\"\n block = args\n # If a section present and \"true\", push the dictionary onto the stack as the\n # new context, and show it\n if context.PushSection(block.section_name, block.pre_formatters):\n _Execute(block.Statements(), context, callback, trace)\n context.Pop()\n else: # missing or \"false\" -- show the {.or} section\n context.Pop()\n _Execute(block.Statements('or'), context, callback, trace)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _DoPredicates(args, context, callback, trace):\n block = args\n value = context.Lookup('@')\n for (predicate, args, func_type), statements in block.clauses:\n if func_type == ENHANCED_FUNC:\n do_clause = predicate(value, context, args)\n else:\n do_clause = predicate(value)\n \n if do_clause:\n if trace: trace.Push(predicate)\n _Execute(statements, context, callback, trace)\n if trace: trace.Pop()\n break", "response": "Execute the predicate clauses."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _Execute(statements, context, callback, trace):\n # Every time we call _Execute, increase this depth\n if trace:\n trace.exec_depth += 1\n for i, statement in enumerate(statements):\n if isinstance(statement, six.string_types):\n callback(statement)\n else:\n # In the case of a substitution, args is a pair (name, formatters).\n # In the case of a section, it's a _Section instance.\n try:\n func, args = statement\n func(args, context, 
callback, trace)\n except UndefinedVariable as e:\n # Show context for statements\n start = max(0, i - 3)\n end = i + 3\n e.near = statements[start:end]\n e.trace = trace # Attach caller's trace (could be None)\n raise", "response": "Execute a bunch of template statements in a ScopedContext."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfree function to expand a template string with a data dictionary.", "response": "def expand(template_str, dictionary, **kwargs):\n \"\"\"Free function to expands a template string with a data dictionary.\n\n This is useful for cases where you don't care about saving the result of\n compilation (similar to re.match('.*', s) vs DOT_STAR.match(s))\n \"\"\"\n t = Template(template_str, **kwargs)\n return t.expand(dictionary)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _FlattenToCallback(tokens, callback):\n for t in tokens:\n if isinstance(t, six.string_types):\n callback(t)\n else:\n _FlattenToCallback(t, callback)", "response": "Flatten a nested list structure into a callback function."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef execute_with_style_LEGACY(template, style, data, callback, body_subtree='body'):\n try:\n body_data = data[body_subtree]\n except KeyError:\n raise EvaluationError('Data dictionary has no subtree %r' % body_subtree)\n tokens_body = []\n template.execute(body_data, tokens_body.append)\n data[body_subtree] = tokens_body\n tokens = []\n style.execute(data, tokens.append)\n _FlattenToCallback(tokens, callback)", "response": "Execute a template with a style."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef expand_with_style(template, style, data, body_subtree='body'):\n if template.has_defines:\n return template.expand(data, style=style)\n else:\n tokens = []\n 
execute_with_style_LEGACY(template, style, data, tokens.append,\n body_subtree=body_subtree)\n return JoinTokens(tokens)", "response": "Expand a data dictionary with a template AND a style."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef LookupWithType(self, user_str):\n prefix = 'template '\n ref = None # fail the lookup by default\n if user_str.startswith(prefix):\n name = user_str[len(prefix):]\n if name == 'SELF':\n # we can resolve this right away\n ref = _TemplateRef(template=self.owner) # special value\n else:\n ref = _TemplateRef(name)\n \n return ref, (), TEMPLATE_FORMATTER", "response": "Returns:\n ref: Either a template instance (itself) or _TemplateRef"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _GetFormatter(self, format_str):\n formatter, args, func_type = self.formatters.LookupWithType(format_str)\n if formatter:\n return formatter, args, func_type\n else:\n raise BadFormatter('%r is not a valid formatter' % format_str)", "response": "Get a formatter from a user s formatters list."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _GetPredicate(self, pred_str, test_attr=False):\n predicate, args, func_type = self.predicates.LookupWithType(pred_str)\n if predicate:\n pred = predicate, args, func_type\n else:\n # Nicer syntax, {.debug?} is shorthand for {.if test debug}.\n # Currently there is not if/elif chain; just use\n # {.if test debug} {.or test release} {.or} {.end}\n if test_attr:\n assert pred_str.endswith('?')\n # func, args, func_type\n pred = (_TestAttribute, (pred_str[:-1],), ENHANCED_FUNC)\n else:\n raise BadPredicate('%r is not a valid predicate' % pred_str)\n return pred", "response": "Get predicate from a string."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
NewSection(self, token_type, section_name, pre_formatters):\n pre_formatters = [self._GetFormatter(f) for f in pre_formatters]\n \n # TODO: Consider getting rid of this dispatching, and turn _Do* into methods\n if token_type == REPEATED_SECTION_TOKEN:\n new_block = _RepeatedSection(section_name, pre_formatters)\n func = _DoRepeatedSection\n elif token_type == SECTION_TOKEN:\n new_block = _Section(section_name, pre_formatters)\n func = _DoSection\n elif token_type == DEF_TOKEN:\n new_block = _Section(section_name, [])\n func = _DoDef\n else:\n raise AssertionError('Invalid token type %s' % token_type)\n \n self._NewSection(func, new_block)", "response": "For sections or repeated sections."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd an or clause to the current section.", "response": "def NewOrClause(self, pred_str):\n \"\"\"\n {.or ...} Can appear inside predicate blocks or section blocks, with\n slightly different meaning.\n \"\"\"\n if pred_str:\n pred = self._GetPredicate(pred_str, test_attr=False)\n else:\n pred = None\n self.current_section.NewOrClause(pred)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngiving a section name push it on the top of the stack.", "response": "def PushSection(self, name, pre_formatters):\n \"\"\"Given a section name, push it on the top of the stack.\n\n Returns:\n The new section, or None if there is no such section.\n \"\"\"\n if name == '@':\n value = self.stack[-1].context\n else:\n value = self.stack[-1].context.get(name)\n \n # Apply pre-formatters\n for i, (f, args, formatter_type) in enumerate(pre_formatters):\n if formatter_type == ENHANCED_FUNC:\n value = f(value, self, args)\n elif formatter_type == SIMPLE_FUNC:\n value = f(value)\n else:\n assert False, 'Invalid formatter type %r' % formatter_type\n \n self.stack.append(_Frame(value))\n return value"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write 
the documentation\ndef Next(self):\n stacktop = self.stack[-1]\n \n # Now we're iterating -- push a new mutable object onto the stack\n if stacktop.index == -1:\n stacktop = _Frame(None, index=0)\n self.stack.append(stacktop)\n \n context_array = self.stack[-2].context\n \n if stacktop.index == len(context_array):\n self.stack.pop()\n raise StopIteration\n \n stacktop.context = context_array[stacktop.index]\n stacktop.index += 1\n \n return True", "response": "Advance to the next item in a repeated section."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlooks up the stack for the given name.", "response": "def _LookUpStack(self, name):\n \"\"\"Look up the stack for the given name.\"\"\"\n i = len(self.stack) - 1\n while 1:\n frame = self.stack[i]\n if name == '@index':\n if frame.index != -1: # -1 is undefined\n return frame.index # @index is 1-based\n else:\n context = frame.context\n if hasattr(context, 'get'): # Can't look up names in a list or atom\n try:\n return context[name]\n except KeyError:\n pass\n \n i -= 1 # Next frame\n if i <= -1: # Couldn't find it anywhere\n return self._Undefined(name)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nlook up the value associated with a name in the current context.", "response": "def Lookup(self, name):\n \"\"\"Get the value associated with a name in the current context.\n\n The current context could be an dictionary in a list, or a dictionary\n outside a list.\n\n Args:\n name: name to lookup, e.g. 
'foo' or 'foo.bar.baz'\n\n Returns:\n The value, or self.undefined_str\n\n Raises:\n UndefinedVariable if self.undefined_str is not set\n \"\"\"\n if name == '@':\n return self.stack[-1].context\n \n parts = name.split('.')\n value = self._LookUpStack(parts[0])\n \n # Now do simple lookups of the rest of the parts\n for part in parts[1:]:\n try:\n value = value[part]\n except (KeyError, TypeError): # TypeError for non-dictionaries\n return self._Undefined(part)\n \n return value"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nexecuting the program and return the expanded token list.", "response": "def execute(self, data_dict, callback, group=None, trace=None):\n \"\"\"Low level method to expand the template piece by piece.\n\n Args:\n data_dict: The JSON data dictionary.\n callback: A callback which should be called with each expanded token.\n group: Dictionary of name -> Template instance (for styles)\n\n Example: You can pass 'f.write' as the callback to write directly to a file\n handle.\n \"\"\"\n # First try the passed in version, then the one set by _SetTemplateGroup. May\n # be None. 
Only one of these should be set.\n group = group or self.group\n context = _ScopedContext(data_dict, self.undefined_str, group=group)\n _Execute(self._program.Statements(), context, callback, trace)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef expand(self, *args, **kwargs):\n if args:\n if len(args) == 1:\n data_dict = args[0]\n trace = kwargs.get('trace')\n style = kwargs.get('style')\n else:\n raise TypeError(\n 'expand() only takes 1 positional argument (got %s)' % args)\n else:\n data_dict = kwargs\n trace = None # Can't use trace= with the kwargs style\n style = None\n \n tokens = []\n group = _MakeGroupFromRootSection(self._program, self.undefined_str)\n if style:\n style.execute(data_dict, tokens.append, group=group,\n trace=trace)\n else:\n # Needs a group to reference its OWN {.define}s\n self.execute(data_dict, tokens.append, group=group,\n trace=trace)\n \n return JoinTokens(tokens)", "response": "Expands the template with the given data dictionary returning a string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nyield a list of tokens resulting from expansion.", "response": "def tokenstream(self, data_dict):\n \"\"\"Yields a list of tokens resulting from expansion.\n\n This may be useful for WSGI apps. 
NOTE: In the current implementation, the\n entire expanded template must be stored in memory.\n\n NOTE: This is a generator, but JavaScript doesn't have generators.\n \"\"\"\n tokens = []\n self.execute(data_dict, tokens.append)\n for token in tokens:\n yield token"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nalign an unwrapped phase array to zero - phase array.", "response": "def align_unwrapped(sino):\n \"\"\"Align an unwrapped phase array to zero-phase\n\n All operations are performed in-place.\n \"\"\"\n samples = []\n if len(sino.shape) == 2:\n # 2D\n # take 1D samples at beginning and end of array\n samples.append(sino[:, 0])\n samples.append(sino[:, 1])\n samples.append(sino[:, 2])\n samples.append(sino[:, -1])\n samples.append(sino[:, -2])\n\n elif len(sino.shape) == 3:\n # 3D\n # take 1D samples at beginning and end of array\n samples.append(sino[:, 0, 0])\n samples.append(sino[:, 0, -1])\n samples.append(sino[:, -1, 0])\n samples.append(sino[:, -1, -1])\n samples.append(sino[:, 0, 1])\n\n # find discontinuities in the samples\n steps = np.zeros((len(samples), samples[0].shape[0]))\n for i in range(len(samples)):\n t = np.unwrap(samples[i])\n steps[i] = samples[i] - t\n\n # if the majority believes so, add a step of PI\n remove = mode(steps, axis=0)[0][0]\n\n # obtain divmod min\n twopi = 2*np.pi\n minimum = divmod_neg(np.min(sino), twopi)[0]\n remove += minimum*twopi\n\n for i in range(len(sino)):\n sino[i] -= remove[i]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning divmod with closest result to zero", "response": "def divmod_neg(a, b):\n \"\"\"Return divmod with closest result to zero\"\"\"\n q, r = divmod(a, b)\n # make sure r is close to zero\n sr = np.sign(r)\n if np.abs(r) > b/2:\n q += sr\n r -= b * sr\n return q, r"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef json_encoder_default(obj):\n if np is not None and 
hasattr(obj, 'size') and hasattr(obj, 'dtype'):\n if obj.size == 1:\n if np.issubdtype(obj.dtype, np.integer):\n return int(obj)\n elif np.issubdtype(obj.dtype, np.floating):\n return float(obj)\n\n if isinstance(obj, set):\n return list(obj)\n elif hasattr(obj, 'to_native'):\n # DatastoreList, DatastoreDict\n return obj.to_native()\n elif hasattr(obj, 'tolist') and hasattr(obj, '__iter__'):\n # for np.array\n return obj.tolist()\n\n return obj", "response": "Default JSON encoder for more data types than the default JSON encoder."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef fig_to_src(figure, image_format='png', dpi=80):\n if image_format == 'png':\n f = io.BytesIO()\n figure.savefig(f, format=image_format, dpi=dpi)\n f.seek(0)\n return png_to_src(f.read())\n\n elif image_format == 'svg':\n f = io.StringIO()\n figure.savefig(f, format=image_format, dpi=dpi)\n f.seek(0)\n return svg_to_src(f.read())", "response": "Convert a matplotlib figure to an inline HTML image."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef Reset(self):\n ' Reset Axis and set default parameters for H-bridge '\n spi.SPI_write(self.CS, [0xC0, 0x60]) # reset\n# spi.SPI_write(self.CS, [0x14, 0x14]) # Stall Treshold setup\n# spi.SPI_write(self.CS, [0xFF, 0xFF]) \n# spi.SPI_write(self.CS, [0x13, 0x13]) # Over Current Treshold setup \n# spi.SPI_write(self.CS, [0xFF, 0xFF]) \n spi.SPI_write(self.CS, [0x15, 0xFF]) # Full Step speed \n spi.SPI_write(self.CS, [0xFF, 0xFF])\n spi.SPI_write(self.CS, [0xFF, 0xFF]) \n spi.SPI_write(self.CS, [0x05, 0x05]) # ACC \n spi.SPI_write(self.CS, [0x01, 0x01])\n spi.SPI_write(self.CS, [0xF5, 0xF5]) \n spi.SPI_write(self.CS, [0x06, 0x06]) # DEC \n spi.SPI_write(self.CS, [0x01, 0x01])\n spi.SPI_write(self.CS, [0xF5, 0xF5]) \n spi.SPI_write(self.CS, [0x0A, 0x0A]) # KVAL_RUN\n spi.SPI_write(self.CS, [0x10, 0x10])\n spi.SPI_write(self.CS, 
[0x0B, 0x0B]) # KVAL_ACC\n spi.SPI_write(self.CS, [0x20, 0x20])\n spi.SPI_write(self.CS, [0x0C, 0x0C]) # KVAL_DEC\n spi.SPI_write(self.CS, [0x20, 0x20])\n spi.SPI_write(self.CS, [0x18, 0x18]) # CONFIG\n spi.SPI_write(self.CS, [0b00111000, 0b00111000])\n spi.SPI_write(self.CS, [0b00000000, 0b00000000])", "response": "Reset Axis and set default parameters for H - bridge"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef MaxSpeed(self, speed):\n ' Setup of maximum speed '\n spi.SPI_write(self.CS, [0x07, 0x07]) # Max Speed setup \n spi.SPI_write(self.CS, [0x00, 0x00])\n spi.SPI_write(self.CS, [speed, speed])", "response": "Setup of maximum speed"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngoes away from Limit Switch", "response": "def ReleaseSW(self):\n ' Go away from Limit Switch '\n while self.ReadStatusBit(2) == 1: # is Limit Switch ON ?\n spi.SPI_write(self.CS, [0x92 | (~self.Dir & 1), 0x92 | (~self.Dir & 1)]) # release SW \n while self.IsBusy():\n pass\n self.MoveWait(10)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmoving some distance units from current position", "response": "def Move(self, units):\n ' Move some distance units from current position '\n steps = units * self.SPU # translate units to steps \n if steps > 0: # look for direction\n spi.SPI_write(self.CS, [0x40 | (~self.Dir & 1), 0x40 | (~self.Dir & 1)]) \n else:\n spi.SPI_write(self.CS, [0x40 | (self.Dir & 1), 0x40 | (self.Dir & 1)]) \n steps = int(abs(steps)) \n spi.SPI_write(self.CS, [(steps >> 16) & 0xFF, (steps >> 16) & 0xFF])\n spi.SPI_write(self.CS, [(steps >> 8) & 0xFF, (steps >> 8) & 0xFF])\n spi.SPI_write(self.CS, [steps & 0xFF, steps & 0xFF])"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef bbox(coordinates, crs, outname=None, format='ESRI Shapefile', overwrite=True):\n srs = crsConvert(crs, 'osr')\n \n ring = 
ogr.Geometry(ogr.wkbLinearRing)\n \n ring.AddPoint(coordinates['xmin'], coordinates['ymin'])\n ring.AddPoint(coordinates['xmin'], coordinates['ymax'])\n ring.AddPoint(coordinates['xmax'], coordinates['ymax'])\n ring.AddPoint(coordinates['xmax'], coordinates['ymin'])\n ring.CloseRings()\n \n geom = ogr.Geometry(ogr.wkbPolygon)\n geom.AddGeometry(ring)\n \n geom.FlattenTo2D()\n \n bbox = Vector(driver='Memory')\n bbox.addlayer('bbox', srs, geom.GetGeometryType())\n bbox.addfield('area', ogr.OFTReal)\n bbox.addfeature(geom, fields={'area': geom.Area()})\n geom = None\n if outname is None:\n return bbox\n else:\n bbox.write(outname, format, overwrite)", "response": "Create a bounding box vector object from a dictionary of coordinates and coordinate reference system."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndissolve the polygons of a vector file by an attribute field Parameters ---------- infile: str the input vector file outfile: str the output shapefile field: str the field name to merge the polygons by layername: str the name of the output vector layer; If set to None the layername will be the basename of infile without extension Returns -------", "response": "def dissolve(infile, outfile, field, layername=None):\n \"\"\"\n dissolve the polygons of a vector file by an attribute field\n Parameters\n ----------\n infile: str\n the input vector file\n outfile: str\n the output shapefile\n field: str\n the field name to merge the polygons by\n layername: str\n the name of the output vector layer;\n If set to None the layername will be the basename of infile without extension\n\n Returns\n -------\n\n \"\"\"\n with Vector(infile) as vec:\n srs = vec.srs\n feat = vec.layer[0]\n d = feat.GetFieldDefnRef(field)\n width = d.width\n type = d.type\n feat = None\n \n layername = layername if layername is not None else os.path.splitext(os.path.basename(infile))[0]\n \n # the following can be used if GDAL was compiled with the spatialite 
extension\n # not tested, might need some additional/different lines\n # with Vector(infile) as vec:\n # vec.vector.ExecuteSQL('SELECT ST_Union(geometry), {0} FROM {1} GROUP BY {0}'.format(field, vec.layername),\n # dialect='SQLITE')\n # vec.write(outfile)\n \n conn = sqlite_setup(extensions=['spatialite', 'gdal'])\n conn.execute('CREATE VIRTUAL TABLE merge USING VirtualOGR(\"{}\");'.format(infile))\n select = conn.execute('SELECT {0},asText(ST_Union(geometry)) as geometry FROM merge GROUP BY {0};'.format(field))\n fetch = select.fetchall()\n with Vector(driver='Memory') as merge:\n merge.addlayer(layername, srs, ogr.wkbPolygon)\n merge.addfield(field, type=type, width=width)\n for i in range(len(fetch)):\n merge.addfeature(ogr.CreateGeometryFromWkt(fetch[i][1]), {field: fetch[i][0]})\n merge.write(outfile)\n conn.close()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a Vector object from ogr features", "response": "def feature2vector(feature, ref, layername=None):\n \"\"\"\n create a Vector object from ogr features\n\n Parameters\n ----------\n feature: list of :osgeo:class:`ogr.Feature` or :osgeo:class:`ogr.Feature`\n a single feature or a list of features\n ref: Vector\n a reference Vector object to retrieve geo information from\n layername: str or None\n the name of the output layer; retrieved from `ref` if `None`\n\n Returns\n -------\n Vector\n the new Vector object\n \"\"\"\n features = feature if isinstance(feature, list) else [feature]\n layername = layername if layername is not None else ref.layername\n vec = Vector(driver='Memory')\n vec.addlayer(layername, ref.srs, ref.geomType)\n feat_def = features[0].GetDefnRef()\n fields = [feat_def.GetFieldDefn(x) for x in range(0, feat_def.GetFieldCount())]\n vec.layer.CreateFields(fields)\n for feat in features:\n vec.layer.CreateFeature(feat)\n vec.init_features()\n return vec"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function 
for\nreturning the intersection of two Vector objects", "response": "def intersect(obj1, obj2):\n \"\"\"\n intersect two Vector objects\n\n Parameters\n ----------\n obj1: Vector\n the first vector object; this object is reprojected to the CRS of obj2 if necessary\n obj2: Vector\n the second vector object\n\n Returns\n -------\n Vector\n the intersect of obj1 and obj2\n \"\"\"\n if not isinstance(obj1, Vector) or not isinstance(obj2, Vector):\n raise RuntimeError('both objects must be of type Vector')\n \n obj1 = obj1.clone()\n obj2 = obj2.clone()\n \n obj1.reproject(obj2.srs)\n \n #######################################################\n # create basic overlap\n union1 = ogr.Geometry(ogr.wkbMultiPolygon)\n # union all the geometrical features of layer 1\n for feat in obj1.layer:\n union1.AddGeometry(feat.GetGeometryRef())\n obj1.layer.ResetReading()\n union1.Simplify(0)\n # same for layer2\n union2 = ogr.Geometry(ogr.wkbMultiPolygon)\n for feat in obj2.layer:\n union2.AddGeometry(feat.GetGeometryRef())\n obj2.layer.ResetReading()\n union2.Simplify(0)\n # intersection\n intersect_base = union1.Intersection(union2)\n union1 = None\n union2 = None\n #######################################################\n # compute detailed per-geometry overlaps\n if intersect_base.GetArea() > 0:\n intersection = Vector(driver='Memory')\n intersection.addlayer('intersect', obj1.srs, ogr.wkbPolygon)\n fieldmap = []\n for index, fielddef in enumerate([obj1.fieldDefs, obj2.fieldDefs]):\n for field in fielddef:\n name = field.GetName()\n i = 2\n while name in intersection.fieldnames:\n name = '{}_{}'.format(field.GetName(), i)\n i += 1\n fieldmap.append((index, field.GetName(), name))\n intersection.addfield(name, type=field.GetType(), width=field.GetWidth())\n \n for feature1 in obj1.layer:\n geom1 = feature1.GetGeometryRef()\n if geom1.Intersects(intersect_base):\n for feature2 in obj2.layer:\n geom2 = feature2.GetGeometryRef()\n # select only the intersections\n if 
geom2.Intersects(intersect_base):\n intersect = geom2.Intersection(geom1)\n fields = {}\n for item in fieldmap:\n if item[0] == 0:\n fields[item[2]] = feature1.GetField(item[1])\n else:\n fields[item[2]] = feature2.GetField(item[1])\n intersection.addfeature(intersect, fields)\n intersect_base = None\n return intersection"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef addfeature(self, geometry, fields=None):\n \n feature = ogr.Feature(self.layerdef)\n feature.SetGeometry(geometry)\n \n if fields is not None:\n for fieldname, value in fields.items():\n if fieldname not in self.fieldnames:\n raise IOError('field \"{}\" is missing'.format(fieldname))\n try:\n feature.SetField(fieldname, value)\n except NotImplementedError as e:\n fieldindex = self.fieldnames.index(fieldname)\n fieldtype = feature.GetFieldDefnRef(fieldindex).GetTypeName()\n message = str(e) + '\\ntrying to set field {} (type {}) to value {} (type {})'\n message = message.format(fieldname, fieldtype, value, type(value))\n raise(NotImplementedError(message))\n \n self.layer.CreateFeature(feature)\n feature = None\n self.init_features()", "response": "add a feature to the vector object from a geometry"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef addfield(self, name, type, width=10):\n fieldDefn = ogr.FieldDefn(name, type)\n if type == ogr.OFTString:\n fieldDefn.SetWidth(width)\n self.layer.CreateField(fieldDefn)", "response": "add a field to the vector layer"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a layer to the vector layer", "response": "def addlayer(self, name, srs, geomType):\n \"\"\"\n add a layer to the vector layer\n\n Parameters\n ----------\n name: str\n the layer name\n srs: int, str or :osgeo:class:`osr.SpatialReference`\n the spatial reference system. 
See :func:`spatialist.auxil.crsConvert` for options.\n geomType: int\n an OGR well-known binary data type.\n See `Module ogr `_.\n\n Returns\n -------\n\n \"\"\"\n self.vector.CreateLayer(name, srs, geomType)\n self.init_layer()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a vector object to the current Vector object", "response": "def addvector(self, vec):\n \"\"\"\n add a vector object to the layer of the current Vector object\n\n Parameters\n ----------\n vec: Vector\n the vector object to add\n merge: bool\n merge overlapping polygons?\n\n Returns\n -------\n\n \"\"\"\n vec.layer.ResetReading()\n for feature in vec.layer:\n self.layer.CreateFeature(feature)\n self.init_features()\n vec.layer.ResetReading()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a bounding box from the extent of the Vector object.", "response": "def bbox(self, outname=None, format='ESRI Shapefile', overwrite=True):\n \"\"\"\n create a bounding box from the extent of the Vector object\n\n Parameters\n ----------\n outname: str or None\n the name of the vector file to be written; if None, a Vector object is returned\n format: str\n the name of the file format to write\n overwrite: bool\n overwrite an already existing file?\n\n Returns\n -------\n Vector or None\n if outname is None, the bounding box Vector object\n \"\"\"\n if outname is None:\n return bbox(self.extent, self.srs)\n else:\n bbox(self.extent, self.srs, outname=outname, format=format, overwrite=overwrite)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nexports the geometry of each feature as a wkt string", "response": "def convert2wkt(self, set3D=True):\n \"\"\"\n export the geometry of each feature as a wkt string\n\n Parameters\n ----------\n set3D: bool\n keep the third (height) dimension?\n\n Returns\n -------\n\n \"\"\"\n features = self.getfeatures()\n for feature in features:\n try:\n 
feature.geometry().Set3D(set3D)\n except AttributeError:\n dim = 3 if set3D else 2\n feature.geometry().SetCoordinateDimension(dim)\n \n return [feature.geometry().ExportToWkt() for feature in features]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting features by field attribute", "response": "def getFeatureByAttribute(self, fieldname, attribute):\n \"\"\"\n get features by field attribute\n\n Parameters\n ----------\n fieldname: str\n the name of the queried field\n attribute: int or str\n the field value of interest\n\n Returns\n -------\n list of :osgeo:class:`ogr.Feature` or :osgeo:class:`ogr.Feature`\n the feature(s) matching the search query\n \"\"\"\n attr = attribute.strip() if isinstance(attribute, str) else attribute\n if fieldname not in self.fieldnames:\n raise KeyError('invalid field name')\n out = []\n self.layer.ResetReading()\n for feature in self.layer:\n field = feature.GetField(fieldname)\n field = field.strip() if isinstance(field, str) else field\n if field == attr:\n out.append(feature.Clone())\n self.layer.ResetReading()\n if len(out) == 0:\n return None\n elif len(out) == 1:\n return out[0]\n else:\n return out"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getFeatureByIndex(self, index):\n feature = self.layer[index]\n if feature is None:\n feature = self.getfeatures()[index]\n return feature", "response": "get a feature by its index"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef init_layer(self):\n self.layer = self.vector.GetLayer()\n self.__features = [None] * self.nfeatures", "response": "initialize a layer object"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload all features into memory", "response": "def load(self):\n \"\"\"\n load all features into memory\n\n Returns\n -------\n\n \"\"\"\n self.layer.ResetReading()\n for i in 
range(self.nfeatures):\n if self.__features[i] is None:\n self.__features[i] = self.layer[i]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef reproject(self, projection):\n srs_out = crsConvert(projection, 'osr')\n \n if self.srs.IsSame(srs_out) == 0:\n \n # create the CoordinateTransformation\n coordTrans = osr.CoordinateTransformation(self.srs, srs_out)\n \n layername = self.layername\n geomType = self.geomType\n features = self.getfeatures()\n feat_def = features[0].GetDefnRef()\n fields = [feat_def.GetFieldDefn(x) for x in range(0, feat_def.GetFieldCount())]\n \n self.__init__()\n self.addlayer(layername, srs_out, geomType)\n self.layer.CreateFields(fields)\n \n for feature in features:\n geom = feature.GetGeometryRef()\n geom.Transform(coordTrans)\n newfeature = feature.Clone()\n newfeature.SetGeometry(geom)\n self.layer.CreateFeature(newfeature)\n newfeature = None\n self.init_features()", "response": "Reproject the object to the target CRS."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setCRS(self, crs):\n # try to convert the input crs to osr.SpatialReference\n srs_out = crsConvert(crs, 'osr')\n \n # save all relevant info from the existing vector object\n layername = self.layername\n geomType = self.geomType\n layer_definition = ogr.Feature(self.layer.GetLayerDefn())\n fields = [layer_definition.GetFieldDefnRef(x) for x in range(layer_definition.GetFieldCount())]\n features = self.getfeatures()\n \n # initialize a new vector object and create a layer\n self.__init__()\n self.addlayer(layername, srs_out, geomType)\n \n # add the fields to new layer\n self.layer.CreateFields(fields)\n \n # add the features to the newly created layer\n for feat in features:\n self.layer.CreateFeature(feat)\n self.init_features()", "response": "This method is used to reset the spatial reference system of the vector object."} {"SOURCE": 
"codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef write(self, outfile, format='ESRI Shapefile', overwrite=True):\n (outfilepath, outfilename) = os.path.split(outfile)\n basename = os.path.splitext(outfilename)[0]\n \n driver = ogr.GetDriverByName(format)\n \n if os.path.exists(outfile):\n if overwrite:\n driver.DeleteDataSource(outfile)\n else:\n raise RuntimeError('target file already exists')\n \n outdataset = driver.CreateDataSource(outfile)\n outlayer = outdataset.CreateLayer(self.layername, geom_type=self.geomType)\n outlayerdef = outlayer.GetLayerDefn()\n \n for fieldDef in self.fieldDefs:\n outlayer.CreateField(fieldDef)\n \n self.layer.ResetReading()\n for feature in self.layer:\n outFeature = ogr.Feature(outlayerdef)\n outFeature.SetGeometry(feature.GetGeometryRef())\n for name in self.fieldnames:\n outFeature.SetField(name, feature.GetField(name))\n # add the feature to the shapefile\n outlayer.CreateFeature(outFeature)\n outFeature = None\n self.layer.ResetReading()\n \n if format == 'ESRI Shapefile':\n srs_out = self.srs.Clone()\n srs_out.MorphToESRI()\n with open(os.path.join(outfilepath, basename + '.prj'), 'w') as prj:\n prj.write(srs_out.ExportToWkt())\n \n outdataset = None", "response": "Writes the Vector object to a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_byte_data(self, address, register, value):\n return self.smbus.write_byte_data(address, register, value)", "response": "Write a single byte to a designated register in a specific location."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_word_data(self, address, register, value):\n return self.smbus.write_word_data(address, register, value)", "response": "Write 2 bytes to a specified register in a specific location."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, 
explain what it does\ndef write_block_data(self, address, register, value):\n return self.smbus.write_block_data(address, register, value)", "response": "Write to a specific register in a specific SMBus device."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef block_process_call(self, address, register, value):\n return self.smbus.block_process_call(address, register, value)", "response": "This function is used to send a block process call to the specified address and value."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_i2c_block_data(self, address, register, value):\n return self.smbus.write_i2c_block_data(address, register, value)", "response": "Write to a specific register in a specific I2C block."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _init_zmq(self, port_publish, port_subscribe):\n\n log.debug('kernel {} publishing on port {}'\n ''.format(self.analysis.id_, port_publish))\n self.zmq_publish = zmq.Context().socket(zmq.PUB)\n self.zmq_publish.connect('tcp://127.0.0.1:{}'.format(port_publish))\n\n log.debug('kernel {} subscribed on port {}'\n ''.format(self.analysis.id_, port_subscribe))\n self.zmq_sub_ctx = zmq.Context()\n self.zmq_sub = self.zmq_sub_ctx.socket(zmq.SUB)\n self.zmq_sub.setsockopt(zmq.SUBSCRIBE,\n self.analysis.id_.encode('utf-8'))\n self.zmq_sub.connect('tcp://127.0.0.1:{}'.format(port_subscribe))\n\n self.zmq_stream_sub = zmq.eventloop.zmqstream.ZMQStream(self.zmq_sub)\n self.zmq_stream_sub.on_recv(self.zmq_listener)\n\n # send zmq handshakes until a zmq ack is received\n self.zmq_ack = False\n self.send_handshake()", "response": "Initialize the zmq messaging."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef run_process(self, analysis, action_name, message='__nomessagetoken__'):\n\n # detect 
process_id\n process_id = None\n if isinstance(message, dict) and '__process_id' in message:\n process_id = message['__process_id']\n del message['__process_id']\n\n if process_id:\n analysis.emit('__process', {'id': process_id, 'status': 'start'})\n\n fns = [\n functools.partial(class_fn, analysis)\n for class_fn in (analysis._action_handlers.get(action_name, []) +\n analysis._action_handlers.get('*', []))\n ]\n if fns:\n args, kwargs = [], {}\n\n # Check whether this is a list (positional arguments)\n # or a dictionary (keyword arguments).\n if isinstance(message, list):\n args = message\n elif isinstance(message, dict):\n kwargs = message\n elif message == '__nomessagetoken__':\n pass\n else:\n args = [message]\n\n for fn in fns:\n log.debug('kernel calling {}'.format(fn))\n fn(*args, **kwargs)\n else:\n # default is to store action name and data as key and value\n # in analysis.data\n #\n # TODO(sven): deprecate this in favor of set_state() in Analysis\n # with new Datastore\n value = message if message != '__nomessagetoken__' else None\n if hasattr(analysis.data, 'set_state'):\n # TODO(sven): add deprecation warning here?\n analysis.data.set_state({action_name: value})\n else:\n # TODO(sven): add deprecation warning here?\n analysis.data[action_name] = value\n\n log.debug('kernel done {}'.format(action_name))\n\n if process_id:\n analysis.emit('__process', {'id': process_id, 'status': 'end'})\n\n if action_name == 'disconnected':\n log.debug('kernel {} shutting down'.format(analysis.id_))\n self.zmq_publish.close()\n\n self.zmq_stream_sub.close()\n self.zmq_sub.close()\n self.zmq_sub_ctx.destroy()", "response": "Executes an action in the analysis with the given message."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef emit(self, signal, message, analysis_id):\n\n log.debug('kernel {} zmq send ({}): {}'\n ''.format(analysis_id, signal, message))\n self.zmq_publish.send(json.dumps({\n 
'analysis_id': analysis_id,\n            'frame': {'signal': signal, 'load': message},\n        }, default=json_encoder_default).encode('utf-8'))", "response": "Emit a signal to main.\n        "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_ports(self, port0 = 0x00, port1 = 0x00):\n        'Writes specified value to the pins defined as output by config_ports() method. Writing to input pins has no effect.'\n        self.bus.write_byte_data(self.address, self.OUTPUT_PORT0, port0)\n        self.bus.write_byte_data(self.address, self.OUTPUT_PORT1, port1)\n        return True", "response": "Writes specified value to the pins defined as output by config_ports() method. Writing to input pins has no effect."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_ports(self):\n        'Reads logical values at pins.'\n        return (self.bus.read_byte_data(self.address, self.STATUS_PORT0), self.bus.read_byte_data(self.address, self.STATUS_PORT1));", "response": "Reads logical values at pins."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread the pin direction configuration registers.", "response": "def get_config(self):\n        'Reads pin direction configuration (1 = input, 0 = output).'\n        return (self.bus.read_byte_data(self.address, self.CONTROL_PORT0), self.bus.read_byte_data(self.address, self.CONTROL_PORT1));"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set_pullups(self, port0 = 0x00, port1 = 0x00):\n        'Enables (1) or disables (0) the internal pull-up resistors on the pins.'\n        self.bus.write_byte_data(self.address, self.PULLUP_PORT0, port0)\n        self.bus.write_byte_data(self.address, self.PULLUP_PORT1, port1)\n        return", "response": "Enables (1) or disables (0) the internal pull-up resistors on the pins."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_ports(self, port0 = 0x00, port1 = 0x00):\n        'Writes specified value to the pins defined as output by method. Writing to input pins has no effect.'\n        self.bus.write_byte_data(self.address, self.CONTROL_PORT0, port0)\n        self.bus.write_byte_data(self.address, self.CONTROL_PORT1, port1)\n        return", "response": "Writes specified value to the pins defined as output by method. Writing to input pins has no effect."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_gtf_db(gtf, in_memory=False):\n    db_file = gtf + '.db'\n    if gtf.endswith('.gz'):\n        db_file = gtf[:-3] + '.db'\n    if file_exists(db_file):\n        return gffutils.FeatureDB(db_file)\n    db_file = ':memory:' if in_memory else db_file\n    if in_memory or not file_exists(db_file):\n        debug('GTF database does not exist, creating...')\n        infer_extent = guess_infer_extent(gtf)\n        db = gffutils.create_db(gtf, dbfn=db_file,\n                                infer_gene_extent=infer_extent)\n        return db\n    else:\n        return gffutils.FeatureDB(db_file)", "response": "get a gffutils DB from a GTF file"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef gtf_to_bed(gtf, alt_out_dir=None):\n    out_file = os.path.splitext(gtf)[0] + '.bed'\n    if file_exists(out_file):\n        return out_file\n    if not os.access(os.path.dirname(out_file), os.W_OK | os.X_OK):\n        if not alt_out_dir:\n            raise IOError('Cannot write transcript BED output file %s' % out_file)\n        else:\n            out_file = os.path.join(alt_out_dir, os.path.basename(out_file))\n    with open(out_file, \"w\") as out_handle:\n        db = get_gtf_db(gtf)\n        for feature in db.features_of_type('transcript', order_by=(\"seqid\", \"start\", \"end\")):\n            chrom = feature.chrom\n            start = feature.start\n            end = feature.end\n            
attributes = feature.attributes.keys()\n strand = feature.strand\n name = (feature['gene_name'][0] if 'gene_name' in attributes else\n feature['gene_id'][0])\n line = \"\\t\".join([str(x) for x in [chrom, start, end, name, \".\",\n strand]])\n out_handle.write(line + \"\\n\")\n return out_file", "response": "Convert GTF to BED file."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwrites out a file of transcript - > gene mappings for a GTF file", "response": "def tx2genefile(gtf, out_file=None):\n \"\"\"\n write out a file of transcript->gene mappings.\n use the installed tx2gene.csv if it exists, else write a new one out\n \"\"\"\n installed_tx2gene = os.path.join(os.path.dirname(gtf), \"tx2gene.csv\")\n if file_exists(installed_tx2gene):\n return installed_tx2gene\n if file_exists(out_file):\n return out_file\n with file_transaction(out_file) as tx_out_file:\n with open(tx_out_file, \"w\") as out_handle:\n for k, v in transcript_to_gene(gtf).items():\n out_handle.write(\",\".join([k, v]) + \"\\n\")\n return out_file"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef transcript_to_gene(gtf):\n gene_lookup = {}\n for feature in complete_features(get_gtf_db(gtf)):\n gene_id = feature.attributes.get('gene_id', [None])[0]\n transcript_id = feature.attributes.get('transcript_id', [None])[0]\n gene_lookup[transcript_id] = gene_id\n return gene_lookup", "response": "return a dictionary keyed by transcript_id of the associated gene_id\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting new output frequency for the current entry.", "response": "def set_freq(self, fout, freq):\n \"\"\"\n Sets new output frequency, required parameters are real current frequency at output and new required frequency.\n \"\"\"\n hsdiv_tuple = (4, 5, 6, 7, 9, 11) # possible dividers\n n1div_tuple = (1,) + tuple(range(2,129,2)) #\n fdco_min = 5670.0 # set maximum as 
minimum\n hsdiv = self.get_hs_div() # read curent dividers\n n1div = self.get_n1_div() #\n\n if abs((freq-fout)*1e6/fout) > 3500: \n # Large change of frequency \n fdco = fout * hsdiv * n1div # calculate high frequency oscillator\n fxtal = fdco / self.get_rfreq() # should be fxtal = 114.285 \n \n for hsdiv_iter in hsdiv_tuple: # find dividers with minimal power consumption\n for n1div_iter in n1div_tuple:\n fdco_new = freq * hsdiv_iter * n1div_iter\n if (fdco_new >= 4850) and (fdco_new <= 5670):\n if (fdco_new <= fdco_min):\n fdco_min = fdco_new \n hsdiv = hsdiv_iter\n n1div = n1div_iter\n rfreq = fdco_min / fxtal \n \n self.freeze_dco() # write registers\n self.set_hs_div(hsdiv)\n self.set_n1_div(n1div)\n self.set_rfreq(rfreq)\n self.unfreeze_dco()\n self.new_freq()\n else:\n # Small change of frequency\n rfreq = self.get_rfreq() * (freq/fout)\n\n self.freeze_m() # write registers \n self.set_rfreq(rfreq)\n self.unfreeze_m()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_file_alignment_for_new_binary_file(self, file: File) -> int:\n if len(file.data) <= 0x20:\n return 0\n bom = file.data[0xc:0xc+2]\n if bom != b'\\xff\\xfe' and bom != b'\\xfe\\xff':\n return 0\n\n be = bom == b'\\xfe\\xff'\n file_size: int = struct.unpack_from(_get_unpack_endian_character(be) + 'I', file.data, 0x1c)[0]\n if len(file.data) != file_size:\n return 0\n return 1 << file.data[0xe]", "response": "Detects alignment requirements for binary files with new nn. util. 
BinaryFileHeader."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_analyses(name, kernel=None):\n\n    if not os.path.exists(os.path.join(os.getcwd(), 'analyses')):\n        os.system(\"mkdir analyses\")\n\n    # __init__.py\n    init_path = os.path.join(os.getcwd(), 'analyses', '__init__.py')\n    if not os.path.exists(init_path):\n        with open(init_path, 'w') as f:\n            pass\n\n    # index.yaml\n    index_path = os.path.join(os.getcwd(), 'analyses', 'index.yaml')\n    if not os.path.exists(index_path):\n        with open(index_path, 'w') as f:\n            f.write('title: Analyses\\n')\n            f.write('description: A short description.\\n')\n            f.write('version: 0.1.0\\n')\n            f.write('\\n')\n            f.write('analyses:\\n')\n\n    if kernel is None:\n        with open(index_path, 'a') as f:\n            f.write('  # automatically inserted by scaffold-databench\\n')\n            f.write('  - name: {}\\n'.format(name))\n            f.write('    title: {}\\n'.format(name.title()))\n            f.write('    description: A new analysis.\\n')\n            f.write('    watch:\\n')\n            f.write('      - {}/*.js\\n'.format(name))\n            f.write('      - {}/*.html\\n'.format(name))", "response": "Scaffold a new analysis with the given name, creating the analyses package and index.yaml if they do not exist and registering the analysis in index.yaml."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef crsConvert(crsIn, crsOut):\n    if isinstance(crsIn, osr.SpatialReference):\n        srs = crsIn.Clone()\n    else:\n        srs = osr.SpatialReference()\n        \n        if isinstance(crsIn, int):\n            crsIn = 'EPSG:{}'.format(crsIn)\n        \n        if isinstance(crsIn, str):\n            try:\n                srs.SetFromUserInput(crsIn)\n            except RuntimeError:\n                raise TypeError('crsIn not recognized; must be of type WKT, PROJ4 or EPSG')\n        else:\n            raise TypeError('crsIn must be of type int, str or osr.SpatialReference')\n    if crsOut == 'wkt':\n        return srs.ExportToWkt()\n    elif crsOut == 'prettyWkt':\n        return srs.ExportToPrettyWkt()\n    elif crsOut == 'proj4':\n        return srs.ExportToProj4()\n    elif crsOut == 'epsg':\n        srs.AutoIdentifyEPSG()\n        return int(srs.GetAuthorityCode(None))\n    elif crsOut == 
'opengis':\n        srs.AutoIdentifyEPSG()\n        return 'http://www.opengis.net/def/crs/EPSG/0/{}'.format(srs.GetAuthorityCode(None))\n    elif crsOut == 'osr':\n        return srs\n    else:\n        raise ValueError('crsOut not recognized; must be one of wkt, prettyWkt, proj4, epsg, opengis or osr')", "response": "Convert a spatial reference from one representation (EPSG code, WKT/PROJ4 string or osr.SpatialReference) into another (wkt, prettyWkt, proj4, epsg, opengis or osr)."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncomputing the distance in meters between two points in latlon", "response": "def haversine(lat1, lon1, lat2, lon2):\n    \"\"\"\n    compute the distance in meters between two points in latlon\n\n    Parameters\n    ----------\n    lat1: int or float\n        the latitude of point 1\n    lon1: int or float\n        the longitude of point 1\n    lat2: int or float\n        the latitude of point 2\n    lon2: int or float\n        the longitude of point 2\n\n    Returns\n    -------\n    float\n        the distance between point 1 and point 2 in meters\n\n    \"\"\"\n    radius = 6371000\n    lat1, lon1, lat2, lon2 = map(math.radians, [lat1, lon1, lat2, lon2])\n    a = math.sin((lat2 - lat1) / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2\n    c = 2 * math.asin(math.sqrt(a))\n    return radius * c"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrap for gdal. 
Warp", "response": "def gdalwarp(src, dst, options):\n \"\"\"\n a simple wrapper for :osgeo:func:`gdal.Warp`\n\n Parameters\n ----------\n src: str, :osgeo:class:`ogr.DataSource` or :osgeo:class:`gdal.Dataset`\n the input data set\n dst: str\n the output data set\n options: dict\n additional parameters passed to gdal.Warp; see :osgeo:func:`gdal.WarpOptions`\n\n Returns\n -------\n\n \"\"\"\n try:\n out = gdal.Warp(dst, src, options=gdal.WarpOptions(**options))\n except RuntimeError as e:\n raise RuntimeError('{}:\\n src: {}\\n dst: {}\\n options: {}'.format(str(e), src, dst, options))\n out = None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef gdalbuildvrt(src, dst, options=None, void=True):\n options = {} if options is None else options\n \n if 'outputBounds' in options.keys() and gdal.__version__ < '2.4.0':\n warnings.warn('\\ncreating VRT files with subsetted extent is very likely to cause problems. '\n 'Please use GDAL version >= 2.4.0, which fixed the problem.\\n'\n 'see here for a description of the problem:\\n'\n ' https://gis.stackexchange.com/questions/314333/'\n 'sampling-error-using-gdalwarp-on-a-subsetted-vrt\\n'\n 'and here for the release note of GDAL 2.4.0:\\n'\n ' https://trac.osgeo.org/gdal/wiki/Release/2.4.0-News')\n \n out = gdal.BuildVRT(dst, src, options=gdal.BuildVRTOptions(**options))\n if void:\n out = None\n else:\n return out", "response": "Wrapper for GDAL. BuildVRT."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwrapping for gdal. 
Translate", "response": "def gdal_translate(src, dst, options):\n \"\"\"\n a simple wrapper for `gdal.Translate `_\n\n Parameters\n ----------\n src: str, :osgeo:class:`ogr.DataSource` or :osgeo:class:`gdal.Dataset`\n the input data set\n dst: str\n the output data set\n options: dict\n additional parameters passed to gdal.Translate;\n see `gdal.TranslateOptions `_\n\n Returns\n -------\n\n \"\"\"\n out = gdal.Translate(dst, src, options=gdal.TranslateOptions(**options))\n out = None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef ogr2ogr(src, dst, options):\n out = gdal.VectorTranslate(dst, src, options=gdal.VectorTranslateOptions(**options))\n out = None", "response": "Wrapper for gdal. VectorTranslate aka ogr2ogr."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef gdal_rasterize(src, dst, options):\n out = gdal.Rasterize(dst, src, options=gdal.RasterizeOptions(**options))\n out = None", "response": "Wrapper for gdal. 
Rasterize"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef coordinate_reproject(x, y, s_crs, t_crs):\n source = crsConvert(s_crs, 'osr')\n target = crsConvert(t_crs, 'osr')\n transform = osr.CoordinateTransformation(source, target)\n point = transform.TransformPoint(x, y)[:2]\n return point", "response": "reproject a coordinate from one CRS to another"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate output_dir work_dir and log_fpath", "response": "def set_up_dirs(proc_name, output_dir=None, work_dir=None, log_dir=None):\n \"\"\" Creates output_dir, work_dir, and sets up log\n \"\"\"\n output_dir = safe_mkdir(adjust_path(output_dir or join(os.getcwd(), proc_name)), 'output_dir')\n debug('Saving results into ' + output_dir)\n\n work_dir = safe_mkdir(work_dir or join(output_dir, 'work'), 'working directory')\n info('Using work directory ' + work_dir)\n\n log_fpath = set_up_log(log_dir or safe_mkdir(join(work_dir, 'log')), proc_name + '.log')\n\n return output_dir, work_dir, log_fpath"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _iter_lines_generator(response, decode_unicode):\n pending = None\n\n for chunk in _iter_content_generator(response, decode_unicode=decode_unicode):\n\n if pending is not None:\n chunk = pending + chunk\n\n lines = chunk.splitlines()\n\n if lines and lines[-1] and chunk and lines[-1][-1] == chunk[-1]:\n pending = lines.pop()\n else:\n pending = None\n\n for line in lines:\n yield line\n\n if pending is not None:\n yield pending", "response": "Iterates over the response data one line at a time."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nraises an exception if one occurred.", "response": "def _raise_for_status(response):\n \"\"\"Raises stored :class:`HTTPError`, if one occurred.\n\n This is the :meth:`requests.models.Response.raise_for_status` 
method,\n modified to add the response from Space-Track, if given.\n \"\"\"\n\n http_error_msg = ''\n\n if 400 <= response.status_code < 500:\n http_error_msg = '%s Client Error: %s for url: %s' % (\n response.status_code, response.reason, response.url)\n\n elif 500 <= response.status_code < 600:\n http_error_msg = '%s Server Error: %s for url: %s' % (\n response.status_code, response.reason, response.url)\n\n if http_error_msg:\n spacetrack_error_msg = None\n\n try:\n json = response.json()\n if isinstance(json, Mapping):\n spacetrack_error_msg = json['error']\n except (ValueError, KeyError):\n pass\n\n if not spacetrack_error_msg:\n spacetrack_error_msg = response.text\n\n if spacetrack_error_msg:\n http_error_msg += '\\nSpace-Track response:\\n' + spacetrack_error_msg\n\n raise requests.HTTPError(http_error_msg, response=response)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef generic_request(self, class_, iter_lines=False, iter_content=False,\n controller=None, parse_types=False, **kwargs):\n r\"\"\"Generic Space-Track query.\n\n The request class methods use this method internally; the public\n API is as follows:\n\n .. code-block:: python\n\n st.tle_publish(*args, **kw)\n st.basicspacedata.tle_publish(*args, **kw)\n st.file(*args, **kw)\n st.fileshare.file(*args, **kw)\n st.spephemeris.file(*args, **kw)\n\n They resolve to the following calls respectively:\n\n .. 
code-block:: python\n\n st.generic_request('tle_publish', *args, **kw)\n st.generic_request('tle_publish', *args, controller='basicspacedata', **kw)\n st.generic_request('file', *args, **kw)\n st.generic_request('file', *args, controller='fileshare', **kw)\n st.generic_request('file', *args, controller='spephemeris', **kw)\n\n Parameters:\n class\\_: Space-Track request class name\n iter_lines: Yield result line by line\n iter_content: Yield result in 100 KiB chunks.\n controller: Optionally specify request controller to use.\n parse_types: Parse string values in response according to type given\n in predicate information, e.g. ``'2017-01-01'`` ->\n ``datetime.date(2017, 1, 1)``.\n **kwargs: These keywords must match the predicate fields on\n Space-Track. You may check valid keywords with the following\n snippet:\n\n .. code-block:: python\n\n spacetrack = SpaceTrackClient(...)\n spacetrack.tle.get_predicates()\n # or\n spacetrack.get_predicates('tle')\n\n See :func:`~spacetrack.operators._stringify_predicate_value` for\n which Python objects are converted appropriately.\n\n Yields:\n Lines\u2014stripped of newline characters\u2014if ``iter_lines=True``\n\n Yields:\n 100 KiB chunks if ``iter_content=True``\n\n Returns:\n Parsed JSON object, unless ``format`` keyword argument is passed.\n\n .. warning::\n\n Passing ``format='json'`` will return the JSON **unparsed**. 
Do\n not set ``format`` if you want the parsed JSON object returned!\n \"\"\"\n if iter_lines and iter_content:\n raise ValueError('iter_lines and iter_content cannot both be True')\n\n if 'format' in kwargs and parse_types:\n raise ValueError('parse_types can only be used if format is unset.')\n\n if controller is None:\n controller = self._find_controller(class_)\n else:\n classes = self.request_controllers.get(controller, None)\n if classes is None:\n raise ValueError(\n 'Unknown request controller {!r}'.format(controller))\n if class_ not in classes:\n raise ValueError(\n 'Unknown request class {!r} for controller {!r}'\n .format(class_, controller))\n\n # Decode unicode unless class == download, including conversion of\n # CRLF newlines to LF.\n decode = (class_ != 'download')\n if not decode and iter_lines:\n error = (\n 'iter_lines disabled for binary data, since CRLF newlines '\n 'split over chunk boundaries would yield extra blank lines. '\n 'Use iter_content=True instead.')\n raise ValueError(error)\n\n self.authenticate()\n\n url = ('{0}{1}/query/class/{2}'\n .format(self.base_url, controller, class_))\n\n offline_check = (class_, controller) in self.offline_predicates\n valid_fields = {p.name for p in self.rest_predicates}\n predicates = None\n\n if not offline_check:\n # Validate keyword argument names by querying valid predicates from\n # Space-Track\n predicates = self.get_predicates(class_, controller)\n predicate_fields = {p.name for p in predicates}\n valid_fields |= predicate_fields\n else:\n valid_fields |= self.offline_predicates[(class_, controller)]\n\n for key, value in kwargs.items():\n if key not in valid_fields:\n raise TypeError(\n \"'{class_}' got an unexpected argument '{key}'\"\n .format(class_=class_, key=key))\n\n if class_ == 'upload' and key == 'file':\n continue\n\n value = _stringify_predicate_value(value)\n\n url += '/{key}/{value}'.format(key=key, value=value)\n\n logger.debug(requests.utils.requote_uri(url))\n\n if class_ == 
'upload':\n if 'file' not in kwargs:\n raise TypeError(\"missing keyword argument: 'file'\")\n\n resp = self.session.post(url, files={'file': kwargs['file']})\n else:\n resp = self._ratelimited_get(url, stream=iter_lines or iter_content)\n\n _raise_for_status(resp)\n\n if resp.encoding is None:\n resp.encoding = 'UTF-8'\n\n if iter_lines:\n return _iter_lines_generator(resp, decode_unicode=decode)\n elif iter_content:\n return _iter_content_generator(resp, decode_unicode=decode)\n else:\n # If format is specified, return that format unparsed. Otherwise,\n # parse the default JSON response.\n if 'format' in kwargs:\n if decode:\n data = resp.text\n # Replace CRLF newlines with LF, Python will handle platform\n # specific newlines if written to file.\n data = data.replace('\\r\\n', '\\n')\n else:\n data = resp.content\n return data\n else:\n data = resp.json()\n\n if predicates is None or not parse_types:\n return data\n else:\n return self._parse_types(data, predicates)", "response": "Generic request for the given object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _ratelimited_get(self, *args, **kwargs):\n with self._ratelimiter:\n resp = self.session.get(*args, **kwargs)\n\n # It's possible that Space-Track will return HTTP status 500 with a\n # query rate limit violation. 
This can happen if a script is cancelled\n # before it has finished sleeping to satisfy the rate limit and it is\n # started again.\n #\n # Let's catch this specific instance and retry once if it happens.\n if resp.status_code == 500:\n # Let's only retry if the error page tells us it's a rate limit\n # violation.\n if 'violated your query rate limit' in resp.text:\n # Mimic the RateLimiter callback behaviour.\n until = time.time() + self._ratelimiter.period\n t = threading.Thread(target=self._ratelimit_callback, args=(until,))\n t.daemon = True\n t.start()\n time.sleep(self._ratelimiter.period)\n\n # Now retry\n with self._ratelimiter:\n resp = self.session.get(*args, **kwargs)\n\n return resp", "response": "Perform a get request handling rate limiting."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _find_controller(self, class_):\n for controller, classes in self.request_controllers.items():\n if class_ in classes:\n return controller\n else:\n raise ValueError('Unknown request class {!r}'.format(class_))", "response": "Find first controller that matches given request class."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _download_predicate_data(self, class_, controller):\n self.authenticate()\n\n url = ('{0}{1}/modeldef/class/{2}'\n .format(self.base_url, controller, class_))\n\n logger.debug(requests.utils.requote_uri(url))\n\n resp = self._ratelimited_get(url)\n\n _raise_for_status(resp)\n\n return resp.json()['data']", "response": "Get raw predicate information for given request class and cache for\n subsequent calls."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_predicates(self, class_, controller=None):\n if class_ not in self._predicates:\n if controller is None:\n controller = self._find_controller(class_)\n else:\n classes = 
self.request_controllers.get(controller, None)\n if classes is None:\n raise ValueError(\n 'Unknown request controller {!r}'.format(controller))\n if class_ not in classes:\n raise ValueError(\n 'Unknown request class {!r}'.format(class_))\n\n predicates_data = self._download_predicate_data(class_, controller)\n predicate_objects = self._parse_predicates_data(predicates_data)\n self._predicates[class_] = predicate_objects\n\n return self._predicates[class_]", "response": "Get full predicate information for given request class and cache\n for subsequent calls."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding the folding menu to the editor.", "response": "def on_install(self, editor):\n \"\"\"\n Add the folding menu to the editor, on install.\n\n :param editor: editor instance on which the mode has been installed to.\n \"\"\"\n super(FoldingPanel, self).on_install(editor)\n self.context_menu = QtWidgets.QMenu(_('Folding'), self.editor)\n action = self.action_collapse = QtWidgets.QAction(\n _('Collapse'), self.context_menu)\n action.setShortcut('Shift+-')\n action.triggered.connect(self._on_action_toggle)\n self.context_menu.addAction(action)\n action = self.action_expand = QtWidgets.QAction(_('Expand'),\n self.context_menu)\n action.setShortcut('Shift++')\n action.triggered.connect(self._on_action_toggle)\n self.context_menu.addAction(action)\n self.context_menu.addSeparator()\n action = self.action_collapse_all = QtWidgets.QAction(\n _('Collapse all'), self.context_menu)\n action.setShortcut('Ctrl+Shift+-')\n action.triggered.connect(self._on_action_collapse_all_triggered)\n self.context_menu.addAction(action)\n action = self.action_expand_all = QtWidgets.QAction(\n _('Expand all'), self.context_menu)\n action.setShortcut('Ctrl+Shift++')\n action.triggered.connect(self._on_action_expand_all_triggered)\n self.context_menu.addAction(action)\n self.editor.add_menu(self.context_menu)"} {"SOURCE": "codesearchnet", "instruction": 
"Given the following Python 3 function, write the documentation\ndef _draw_rect(self, rect, painter):\n c = self._custom_color\n if self._native:\n c = self.get_system_bck_color()\n grad = QtGui.QLinearGradient(rect.topLeft(),\n rect.topRight())\n if sys.platform == 'darwin':\n grad.setColorAt(0, c.lighter(100))\n grad.setColorAt(1, c.lighter(110))\n outline = c.darker(110)\n else:\n grad.setColorAt(0, c.lighter(110))\n grad.setColorAt(1, c.lighter(130))\n outline = c.darker(100)\n painter.fillRect(rect, grad)\n painter.setPen(QtGui.QPen(outline))\n painter.drawLine(rect.topLeft() +\n QtCore.QPointF(1, 0),\n rect.topRight() -\n QtCore.QPointF(1, 0))\n painter.drawLine(rect.bottomLeft() +\n QtCore.QPointF(1, 0),\n rect.bottomRight() -\n QtCore.QPointF(1, 0))\n painter.drawLine(rect.topRight() +\n QtCore.QPointF(0, 1),\n rect.bottomRight() -\n QtCore.QPointF(0, 1))\n painter.drawLine(rect.topLeft() +\n QtCore.QPointF(0, 1),\n rect.bottomLeft() -\n QtCore.QPointF(0, 1))", "response": "Draw the fold zone rectangle using the current style primitive color\n or foldIndicatorBackground."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a system color for drawing the fold scope background.", "response": "def get_system_bck_color():\n \"\"\"\n Gets a system color for drawing the fold scope background.\n \"\"\"\n def merged_colors(colorA, colorB, factor):\n maxFactor = 100\n colorA = QtGui.QColor(colorA)\n colorB = QtGui.QColor(colorB)\n tmp = colorA\n tmp.setRed((tmp.red() * factor) / maxFactor +\n (colorB.red() * (maxFactor - factor)) / maxFactor)\n tmp.setGreen((tmp.green() * factor) / maxFactor +\n (colorB.green() * (maxFactor - factor)) / maxFactor)\n tmp.setBlue((tmp.blue() * factor) / maxFactor +\n (colorB.blue() * (maxFactor - factor)) / maxFactor)\n return tmp\n\n pal = QtWidgets.QApplication.instance().palette()\n b = pal.window().color()\n h = pal.highlight().color()\n return merged_colors(b, h, 50)"} {"SOURCE": 
"codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _draw_fold_indicator(self, top, mouse_over, collapsed, painter):\n rect = QtCore.QRect(0, top, self.sizeHint().width(),\n self.sizeHint().height())\n if self._native:\n if os.environ['QT_API'].lower() not in PYQT5_API:\n opt = QtGui.QStyleOptionViewItemV2()\n else:\n opt = QtWidgets.QStyleOptionViewItem()\n opt.rect = rect\n opt.state = (QtWidgets.QStyle.State_Active |\n QtWidgets.QStyle.State_Item |\n QtWidgets.QStyle.State_Children)\n if not collapsed:\n opt.state |= QtWidgets.QStyle.State_Open\n if mouse_over:\n opt.state |= (QtWidgets.QStyle.State_MouseOver |\n QtWidgets.QStyle.State_Enabled |\n QtWidgets.QStyle.State_Selected)\n opt.palette.setBrush(QtGui.QPalette.Window,\n self.palette().highlight())\n opt.rect.translate(-2, 0)\n self.style().drawPrimitive(QtWidgets.QStyle.PE_IndicatorBranch,\n opt, painter, self)\n else:\n index = 0\n if not collapsed:\n index = 2\n if mouse_over:\n index += 1\n QtGui.QIcon(self._custom_indicators[index]).paint(painter, rect)", "response": "Draw the fold indicator."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the base scope highlight color", "response": "def _get_scope_highlight_color(self):\n \"\"\"\n Gets the base scope highlight color (derivated from the editor\n background)\n\n \"\"\"\n color = self.editor.background\n if color.lightness() < 128:\n color = drift_color(color, 130)\n else:\n color = drift_color(color, 105)\n return color"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _add_scope_deco(self, start, end, parent_start, parent_end, base_color,\n factor):\n \"\"\"\n Adds a scope decoration that enclose the current scope\n :param start: Start of the current scope\n :param end: End of the current scope\n :param parent_start: Start of the parent scope\n :param parent_end: End of the parent scope\n :param 
base_color: base color for scope decoration\n :param factor: color factor to apply on the base color (to make it\n darker).\n \"\"\"\n color = drift_color(base_color, factor=factor)\n # upper part\n if start > 0:\n d = TextDecoration(self.editor.document(),\n start_line=parent_start, end_line=start)\n d.set_full_width(True, clear=False)\n d.draw_order = 2\n d.set_background(color)\n self.editor.decorations.append(d)\n self._scope_decos.append(d)\n # lower part\n if end <= self.editor.document().blockCount():\n d = TextDecoration(self.editor.document(),\n start_line=end, end_line=parent_end + 1)\n d.set_full_width(True, clear=False)\n d.draw_order = 2\n d.set_background(color)\n self.editor.decorations.append(d)\n self._scope_decos.append(d)", "response": "Adds a scope decoration that enclose the current scope."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _add_scope_decorations(self, block, start, end):\n try:\n parent = FoldScope(block).parent()\n except ValueError:\n parent = None\n if TextBlockHelper.is_fold_trigger(block):\n base_color = self._get_scope_highlight_color()\n factor_step = 5\n if base_color.lightness() < 128:\n factor_step = 10\n factor = 70\n else:\n factor = 100\n while parent:\n # highlight parent scope\n parent_start, parent_end = parent.get_range()\n self._add_scope_deco(\n start, end + 1, parent_start, parent_end,\n base_color, factor)\n # next parent scope\n start = parent_start\n end = parent_end\n parent = parent.parent()\n factor += factor_step\n # global scope\n parent_start = 0\n parent_end = self.editor.document().blockCount()\n self._add_scope_deco(\n start, end + 1, parent_start, parent_end, base_color,\n factor + factor_step)\n else:\n self._clear_scope_decos()", "response": "Add scope decoration on the editor widget."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nhighlights the scopes surrounding the current 
fold scope.", "response": "def _highlight_surrounding_scopes(self, block):\n \"\"\"\n Highlights the scopes surrounding the current fold scope.\n\n :param block: Block that starts the current fold scope.\n \"\"\"\n scope = FoldScope(block)\n if (self._current_scope is None or\n self._current_scope.get_range() != scope.get_range()):\n self._current_scope = scope\n self._clear_scope_decos()\n # highlight surrounding parent scopes with a darker color\n start, end = scope.get_range()\n if not TextBlockHelper.is_collapsed(block):\n self._add_scope_decorations(block, start, end)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nremoves scope decorations and background from the editor and the panel.", "response": "def leaveEvent(self, event):\n \"\"\"\n Removes scope decorations and background from the editor and the panel\n if highlight_caret_scope, else simply update the scope decorations to\n match the caret scope.\n\n \"\"\"\n super(FoldingPanel, self).leaveEvent(event)\n QtWidgets.QApplication.restoreOverrideCursor()\n self._highlight_runner.cancel_requests()\n if not self.highlight_caret_scope:\n self._clear_scope_decos()\n self._mouse_over_line = None\n self._current_scope = None\n else:\n self._block_nbr = -1\n self._highlight_caret_scope()\n self.editor.repaint()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _add_fold_decoration(self, block, region):\n deco = TextDecoration(block)\n deco.signals.clicked.connect(self._on_fold_deco_clicked)\n deco.tooltip = region.text(max_lines=25)\n deco.draw_order = 1\n deco.block = block\n deco.select_line()\n deco.set_outline(drift_color(\n self._get_scope_highlight_color(), 110))\n deco.set_background(self._get_scope_highlight_color())\n deco.set_foreground(QtGui.QColor('#808080'))\n self._block_decos.append(deco)\n self.editor.decorations.append(deco)", "response": "Add fold decorations to the editor."} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef toggle_fold_trigger(self, block):\n if not TextBlockHelper.is_fold_trigger(block):\n return\n region = FoldScope(block)\n if region.collapsed:\n region.unfold()\n if self._mouse_over_line is not None:\n self._add_scope_decorations(\n region._trigger, *region.get_range())\n else:\n region.fold()\n self._clear_scope_decos()\n self._refresh_editor_and_scrollbars()\n self.trigger_state_changed.emit(region._trigger, region.collapsed)", "response": "Toggle a fold trigger block."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef on_state_changed(self, state):\n if state:\n self.editor.key_pressed.connect(self._on_key_pressed)\n if self._highlight_caret:\n self.editor.cursorPositionChanged.connect(\n self._highlight_caret_scope)\n self._block_nbr = -1\n self.editor.new_text_set.connect(self._clear_block_deco)\n else:\n self.editor.key_pressed.disconnect(self._on_key_pressed)\n if self._highlight_caret:\n self.editor.cursorPositionChanged.disconnect(\n self._highlight_caret_scope)\n self._block_nbr = -1\n self.editor.new_text_set.disconnect(self._clear_block_deco)", "response": "Connect to the cursorPositionChanged signal and disconnect from the new_text_set signal"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\noverride key press to select the current scope if the user wants to delete a folded scope.", "response": "def _on_key_pressed(self, event):\n \"\"\"\n Override key press to select the current scope if the user wants\n to deleted a folded scope (without selecting it).\n \"\"\"\n delete_request = event.key() in [QtCore.Qt.Key_Backspace,\n QtCore.Qt.Key_Delete]\n if event.text() or delete_request:\n cursor = self.editor.textCursor()\n if cursor.hasSelection():\n # change selection to encompass the whole scope.\n positions_to_check = cursor.selectionStart(), 
cursor.selectionEnd()\n else:\n positions_to_check = (cursor.position(), )\n for pos in positions_to_check:\n block = self.editor.document().findBlock(pos)\n th = TextBlockHelper()\n if th.is_fold_trigger(block) and th.is_collapsed(block):\n self.toggle_fold_trigger(block)\n if delete_request and cursor.hasSelection():\n scope = FoldScope(self.find_parent_scope(block))\n tc = TextHelper(self.editor).select_lines(*scope.get_range())\n if tc.selectionStart() > cursor.selectionStart():\n start = cursor.selectionStart()\n else:\n start = tc.selectionStart()\n if tc.selectionEnd() < cursor.selectionEnd():\n end = cursor.selectionEnd()\n else:\n end = tc.selectionEnd()\n tc.setPosition(start)\n tc.setPosition(end, tc.KeepAnchor)\n self.editor.setTextCursor(tc)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef verify_signature(self, payload, headers):\n github_signature = headers.get(\"x-hub-signature\")\n if not github_signature:\n json_abort(401, \"X-Hub-Signature header missing.\")\n\n gh_webhook_secret = current_app.config[\"GITHUB_WEBHOOK_SECRET\"]\n signature = \"sha1={}\".format(\n hmac.new(\n gh_webhook_secret.encode(\"utf-8\"), payload, hashlib.sha1\n ).hexdigest()\n )\n\n if not hmac.compare_digest(signature, github_signature):\n json_abort(401, \"Request signature does not match calculated signature.\")", "response": "Verify that the payload was sent from GitHub instance."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the options that have been set.", "response": "def get_options(self):\n \"\"\"Get the options that have been set.\n\n Called after the user has added all their own options\n and is ready to use the variables.\n\n \"\"\"\n (options, args) = self.parser.parse_args()\n\n # Set values from .visdkrc, but only if they haven't already been set\n visdkrc_opts = self.read_visdkrc()\n for opt in self.config_vars:\n if not getattr(options, 
opt):\n # Try and use value from visdkrc\n if visdkrc_opts:\n if opt in visdkrc_opts:\n setattr(options, opt, visdkrc_opts[opt])\n\n # Ensure all the required options are set\n for opt in self.required_opts:\n if opt not in dir(options) or getattr(options, opt) == None:\n self.parser.error('%s must be set!' % opt)\n\n return options"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_job_files(self):\n repo_jobs = []\n for rel_job_file_path, job_info in self.job_files.items():\n LOGGER.debug(\"Checking for job definitions in %s\", rel_job_file_path)\n jobs = self.parse_job_definitions(rel_job_file_path, job_info)\n LOGGER.debug(\"Found %d job definitions in %s\", len(jobs), rel_job_file_path)\n repo_jobs.extend(jobs)\n if not repo_jobs:\n LOGGER.info(\"No job definitions found in repo '%s'\", self.repo)\n else:\n LOGGER.info(\n \"Found %d job definitions in repo '%s'\", len(repo_jobs), self.repo\n )\n # LOGGER.debug(json.dumps(repo_jobs, indent=4))\n return repo_jobs", "response": "Check for job definitions in known zuul files."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef change_directory(self, directory):\n self._process.write(('cd %s\\n' % directory).encode())\n if sys.platform == 'win32':\n self._process.write((os.path.splitdrive(directory)[0] + '\\r\\n').encode())\n self.clear()\n else:\n self._process.write(b'\\x0C')", "response": "Changes the current directory."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef icon(self):\n if isinstance(self._icon, str):\n if QtGui.QIcon.hasThemeIcon(self._icon):\n return QtGui.QIcon.fromTheme(self._icon)\n else:\n return QtGui.QIcon(self._icon)\n elif isinstance(self._icon, tuple):\n return QtGui.QIcon.fromTheme(self._icon[0],\n QtGui.QIcon(self._icon[1]))\n elif isinstance(self._icon, QtGui.QIcon):\n return self._icon\n return QtGui.QIcon()", 
"response": "Gets the icon file name. Read - only."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_marker(self, marker):\n self._markers.append(marker)\n doc = self.editor.document()\n assert isinstance(doc, QtGui.QTextDocument)\n block = doc.findBlockByLineNumber(marker._position)\n marker.block = block\n d = TextDecoration(block)\n d.set_full_width()\n if self._background:\n d.set_background(QtGui.QBrush(self._background))\n marker.decoration = d\n self.editor.decorations.append(d)\n self.repaint()", "response": "Adds the marker to the panel."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef remove_marker(self, marker):\n self._markers.remove(marker)\n self._to_remove.append(marker)\n if hasattr(marker, 'decoration'):\n self.editor.decorations.remove(marker.decoration)\n self.repaint()", "response": "Removes a marker from the panel\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef marker_for_line(self, line):\n markers = []\n for marker in self._markers:\n if line == marker.position:\n markers.append(marker)\n return markers", "response": "Returns the marker that is displayed at the specified line number if any."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndisplay the given tooltip at the specified top position.", "response": "def _display_tooltip(self, tooltip, top):\n \"\"\"\n Display tooltip at the specified top position.\n \"\"\"\n QtWidgets.QToolTip.showText(self.mapToGlobal(QtCore.QPoint(\n self.sizeHint().width(), top)), tooltip, self)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef finditer_noregex(string, sub, whole_word):\n start = 0\n while True:\n start = string.find(sub, start)\n if start == -1:\n return\n if whole_word:\n if start:\n 
pchar = string[start - 1]\n            else:\n                pchar = ' '\n            try:\n                nchar = string[start + len(sub)]\n            except IndexError:\n                nchar = ' '\n            if nchar in DocumentWordsProvider.separators and \\\n                    pchar in DocumentWordsProvider.separators:\n                yield start\n            start += len(sub)\n        else:\n            yield start\n            start += 1", "response": "Search occurrences using str.find instead of regular expressions."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef findalliter(string, sub, regex=False, case_sensitive=False,\n                whole_word=False):\n    \"\"\"\n    Generator that finds all occurrences of ``sub`` in ``string``\n    :param string: string to parse\n    :param sub: string to search\n    :param regex: True to search using regex\n    :param case_sensitive: True to match case, False to ignore case\n    :param whole_word: True to return only whole words\n    :return:\n    \"\"\"\n    if not sub:\n        return\n    if regex:\n        flags = re.MULTILINE\n        if not case_sensitive:\n            flags |= re.IGNORECASE\n        for val in re.finditer(sub, string, flags):\n            yield val.span()\n    else:\n        if not case_sensitive:\n            string = string.lower()\n            sub = sub.lower()\n        for val in finditer_noregex(string, sub, whole_word):\n            yield val, val + len(sub)", "response": "Generator that finds all occurrences of sub in string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef findall(data):\n    return list(findalliter(\n        data['string'], data['sub'], regex=data['regex'],\n        whole_word=data['whole_word'], case_sensitive=data['case_sensitive']))", "response": "Worker that finds all occurrences of a given string or regular expression in a given text."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef split(txt, seps):\n        # replace all possible separators with a default sep\n        default_sep = seps[0]\n        for sep in seps[1:]:\n            if sep:\n                txt = txt.replace(sep, default_sep)\n        # now we can split using the
default_sep\n raw_words = txt.split(default_sep)\n words = set()\n for word in raw_words:\n # w = w.strip()\n if word.replace('_', '').isalpha():\n words.add(word)\n return sorted(words)", "response": "Splits a text into a list of words based on a list of word separators."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef complete(self, code, *args):\n completions = []\n for word in self.split(code, self.separators):\n completions.append({'name': word})\n return completions", "response": "Provides completions based on the document words."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef status_to_string(cls, status):\n strings = {CheckerMessages.INFO: \"Info\",\n CheckerMessages.WARNING: \"Warning\",\n CheckerMessages.ERROR: \"Error\"}\n return strings[status]", "response": "Converts a message status to a string."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a message or a list of messages to the queue.", "response": "def add_messages(self, messages):\n \"\"\"\n Adds a message or a list of message.\n\n :param messages: A list of messages or a single message\n \"\"\"\n # remove old messages\n if len(messages) > self.limit:\n messages = messages[:self.limit]\n _logger(self.__class__).log(5, 'adding %s messages' % len(messages))\n self._finished = False\n self._new_messages = messages\n self._to_check = list(self._messages)\n self._pending_msg = messages\n # start removing messages, new message won't be added until we\n # checked all message that need to be removed\n QtCore.QTimer.singleShot(1, self._remove_batch)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef remove_message(self, message):\n import time\n _logger(self.__class__).log(5, 'removing message %s' % message)\n t = time.time()\n usd = message.block.userData()\n if usd:\n try:\n 
usd.messages.remove(message)\n except (AttributeError, ValueError):\n pass\n if message.decoration:\n self.editor.decorations.remove(message.decoration)\n self._messages.remove(message)", "response": "Removes a message from the list of messages."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndisplays results. :param status: Response status :param results: Response data, messages.", "response": "def _on_work_finished(self, results):\n \"\"\"\n Display results.\n\n :param status: Response status\n :param results: Response data, messages.\n \"\"\"\n messages = []\n for msg in results:\n msg = CheckerMessage(*msg)\n if msg.line >= self.editor.blockCount():\n msg.line = self.editor.blockCount() - 1\n block = self.editor.document().findBlockByNumber(msg.line)\n msg.block = block\n messages.append(msg)\n self.add_messages(messages)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrequesting a checking of the editor content.", "response": "def _request(self):\n \"\"\" Requests a checking of the editor content. \"\"\"\n try:\n self.editor.toPlainText()\n except (TypeError, RuntimeError):\n return\n try:\n max_line_length = self.editor.modes.get(\n 'RightMarginMode').position\n except KeyError:\n max_line_length = 79\n request_data = {\n 'code': self.editor.toPlainText(),\n 'path': self.editor.file.path,\n 'encoding': self.editor.file.encoding,\n 'ignore_rules': self.ignore_rules,\n 'max_line_length': max_line_length,\n }\n try:\n self.editor.backend.send_request(\n self._worker, request_data, on_receive=self._on_work_finished)\n self._finished = False\n except NotRunning:\n # retry later\n QtCore.QTimer.singleShot(100, self._request)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nopens a pty master pair using os. 
openpty.", "response": "def openpty():\n \"\"\"openpty() -> (master_fd, slave_fd)\n Open a pty master/slave pair, using os.openpty() if possible.\"\"\"\n\n try:\n return os.openpty()\n except (AttributeError, OSError):\n pass\n master_fd, slave_name = _open_terminal()\n slave_fd = slave_open(slave_name)\n return master_fd, slave_fd"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef master_open():\n\n try:\n master_fd, slave_fd = os.openpty()\n except (AttributeError, OSError):\n pass\n else:\n slave_name = os.ttyname(slave_fd)\n os.close(slave_fd)\n return master_fd, slave_name\n\n return _open_terminal()", "response": "Open a pty master and return the fd and the filename of the slave end."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nopens pty master and return ( master_fd tty_name", "response": "def _open_terminal():\n \"\"\"Open pty master and return (master_fd, tty_name).\"\"\"\n for x in 'pqrstuvwxyzPQRST':\n for y in '0123456789abcdef':\n pty_name = '/dev/pty' + x + y\n try:\n fd = os.open(pty_name, os.O_RDWR)\n except OSError:\n continue\n return (fd, '/dev/tty' + x + y)\n raise OSError('out of pty devices')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef slave_open(tty_name):\n\n result = os.open(tty_name, os.O_RDWR)\n try:\n from fcntl import ioctl, I_PUSH\n except ImportError:\n return result\n try:\n ioctl(result, I_PUSH, \"ptem\")\n ioctl(result, I_PUSH, \"ldterm\")\n except OSError:\n pass\n return result", "response": "Open the pty slave and acquire the controlling terminal returning a filedescriptor."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef fork():\n\n try:\n pid, fd = os.forkpty()\n except (AttributeError, OSError):\n pass\n else:\n if pid == CHILD:\n try:\n os.setsid()\n except OSError:\n # os.forkpty() already set us session 
leader\n                pass\n            return pid, fd\n\n    master_fd, slave_fd = openpty()\n    pid = os.fork()\n    if pid == CHILD:\n        # Establish a new session.\n        os.setsid()\n        os.close(master_fd)\n\n        # Slave becomes stdin/stdout/stderr of child.\n        os.dup2(slave_fd, STDIN_FILENO)\n        os.dup2(slave_fd, STDOUT_FILENO)\n        os.dup2(slave_fd, STDERR_FILENO)\n        if (slave_fd > STDERR_FILENO):\n            os.close (slave_fd)\n\n        # Explicitly open the tty to make it become a controlling tty.\n        tmp_fd = os.open(os.ttyname(STDOUT_FILENO), os.O_RDWR)\n        os.close(tmp_fd)\n    else:\n        os.close(slave_fd)\n\n    # Parent and child process.\n    return pid, master_fd", "response": "fork() -> (pid, master_fd)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _writen(fd, data):\n    while data:\n        n = os.write(fd, data)\n        data = data[n:]", "response": "Write all the data to a descriptor."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncopy the master file into the standard output and stdin files.", "response": "def _copy(master_fd, master_read=_read, stdin_read=_read):\n    \"\"\"Parent copy loop.\n    Copies\n            pty master -> standard output (master_read)\n            standard input -> pty master (stdin_read)\"\"\"\n    fds = [master_fd, STDIN_FILENO]\n    while True:\n        rfds, wfds, xfds = select(fds, [], [])\n        if master_fd in rfds:\n            data = master_read(master_fd)\n            if not data: # Reached EOF.\n                return\n            else:\n                os.write(STDOUT_FILENO, data)\n        if STDIN_FILENO in rfds:\n            data = stdin_read(STDIN_FILENO)\n            if not data:\n                fds.remove(STDIN_FILENO)\n            else:\n                _writen(master_fd, data)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef spawn(argv, master_read=_read, stdin_read=_read):\n    if type(argv) == type(''):\n        argv = (argv,)\n    pid, master_fd = fork()\n    if pid == CHILD:\n        try:\n            os.execlp(argv[0], *argv)\n        except:\n            # If we wanted to be really clever, we would use\n            # the same method as subprocess() to
pass the error\n            # back to the parent. For now just dump stack trace.\n            traceback.print_exc()\n        finally:\n            os._exit(1)\n    try:\n        mode = tty.tcgetattr(STDIN_FILENO)\n        tty.setraw(STDIN_FILENO)\n        restore = 1\n    except tty.error: # This is the same as termios.error\n        restore = 0\n    try:\n        _copy(master_fd, master_read, stdin_read)\n    except OSError:\n        # Some OSes never return an EOF on pty, just raise\n        # an error instead.\n        pass\n    finally:\n        if restore:\n            tty.tcsetattr(STDIN_FILENO, tty.TCSAFLUSH, mode)\n\n    os.close(master_fd)\n    return os.waitpid(pid, 0)[1]", "response": "Create a new process."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_mor(self, name, multivalued):\n        logger.debug(\"Querying server for uncached MOR %s\", name)\n        # This will retrieve the value and inject it into the cache\n        logger.debug(\"Getting view for MOR\")\n        self.update(properties=[name])\n        return self._cache[name][0]", "response": "This function only gets called if the property\n        doesn't have a value in the cache."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nflushing the cache being held for this instance.", "response": "def flush_cache(self, properties=None):\n        \"\"\"Flushes the cache being held for this instance.\n\n        :param properties: The list of properties to flush from the cache.\n        :type properties: list or None (default). If None, flush entire cache.\n\n        \"\"\"\n        if hasattr(self, '_cache'):\n            if properties is None:\n                del(self._cache)\n            else:\n                for prop in properties:\n                    if prop in self._cache:\n                        del(self._cache[prop])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the properties being held for this instance.", "response": "def update(self, properties=None):\n        \"\"\"Updates the properties being held for this instance.\n\n        :param properties: The list of properties to update.\n        :type properties: list or None (default).
If None, update all\n currently cached properties.\n\n \"\"\"\n if properties is None:\n try:\n self.update_view_data(properties=list(self._cache.keys()))\n except AttributeError:\n # We end up here and ignore it self._cache doesn't exist\n pass\n else:\n self.update_view_data(properties=properties)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the local object from the server - side object.", "response": "def update_view_data(self, properties=None):\n \"\"\"Update the local object from the server-side object.\n \n >>> vm = VirtualMachine.find_one(client, filter={\"name\": \"genesis\"})\n >>> # Update all properties\n >>> vm.update_view_data()\n >>> # Update the config and summary properties\n >>> vm.update_view_data(properties=[\"config\", \"summary\"]\n\n :param properties: A list of properties to update.\n :type properties: list\n\n \"\"\"\n if properties is None:\n properties = []\n logger.info(\"Updating view data for object of type %s\",\n self._mo_ref._type)\n property_spec = self._client.create('PropertySpec')\n property_spec.type = str(self._mo_ref._type)\n # Determine which properties to retrieve from the server\n if properties is None:\n properties = []\n else:\n if properties == \"all\":\n logger.debug(\"Retrieving all properties\")\n property_spec.all = True\n else:\n logger.debug(\"Retrieving %s properties\", len(properties))\n property_spec.all = False\n property_spec.pathSet = properties\n\n object_spec = self._client.create('ObjectSpec')\n object_spec.obj = self._mo_ref\n\n pfs = self._client.create('PropertyFilterSpec')\n pfs.propSet = [property_spec]\n pfs.objectSet = [object_spec]\n\n # Create a copy of the property collector and call the method\n pc = self._client.sc.propertyCollector\n object_content = pc.RetrieveProperties(specSet=pfs)[0]\n if not object_content:\n # TODO: Improve error checking and reporting\n logger.error(\"Nothing returned from RetrieveProperties!\")\n\n 
self._set_view_data(object_content)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef preload(self, name, properties=None):\n if properties is None:\n raise ValueError(\"You must specify some properties to preload. To\"\n \" preload all properties use the string \\\"all\\\".\")\n # Don't do anything if the attribute contains an empty list\n if not getattr(self, name):\n return\n\n mo_refs = []\n # Iterate over each item and collect the mo_ref\n for item in getattr(self, name):\n # Make sure the items are ManagedObjectReference's\n if isinstance(item, ManagedObject) is False:\n raise ValueError(\"Only ManagedObject's can be pre-loaded.\")\n\n mo_refs.append(item._mo_ref)\n \n # Send a single query to the server which gets views\n views = self._client.get_views(mo_refs, properties)\n\n # Populate the inst.attr item with the retrieved object/properties\n self._cache[name] = (views, time.time())", "response": "Pre - loads the requested properties for each object in the name attribute."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _set_view_data(self, object_content):\n # A debugging convenience, allows inspection of the object_content\n # that was used to create the object\n logger.info(\"Setting view data for a %s\", self.__class__)\n self._object_content = object_content\n\n for dynprop in object_content.propSet:\n # If the class hasn't defined the property, don't use it\n if dynprop.name not in self._valid_attrs:\n logger.error(\"Server returned a property '%s' but the object\"\n \" hasn't defined it so it is being ignored.\" %\n dynprop.name)\n continue\n\n try:\n if not len(dynprop.val):\n logger.info(\"Server returned empty value for %s\",\n dynprop.name)\n except TypeError:\n # This except allows us to pass over:\n # TypeError: object of type 'datetime.datetime' has no len()\n # It will be processed in the next code block\n logger.info(\"%s of type %s 
has no len!\",\n dynprop.name, type(dynprop.val))\n pass\n\n try:\n # See if we have a cache attribute\n cache = self._cache\n except AttributeError:\n # If we don't create one and use it\n cache = self._cache = {}\n\n # Values which contain classes starting with Array need\n # to be converted into a nicer Python list\n if dynprop.val.__class__.__name__.startswith('Array'):\n # suds returns a list containing a single item, which\n # is another list. Use the first item which is the real list\n logger.info(\"Setting value of an Array* property\")\n logger.debug(\"%s being set to %s\",\n dynprop.name, dynprop.val[0])\n now = time.time()\n cache[dynprop.name] = (dynprop.val[0], now)\n else:\n logger.info(\"Setting value of a single-valued property\")\n logger.debug(\"DynamicProperty value is a %s: \",\n dynprop.val.__class__.__name__)\n logger.debug(\"%s being set to %s\", dynprop.name, dynprop.val)\n now = time.time()\n cache[dynprop.name] = (dynprop.val, now)", "response": "Update the local object from the passed in object_content."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _initialize_repo_cache():\n LOGGER.info(\"Initializing repository cache\")\n # Initialize Repo Cache\n repo_cache = {}\n\n # Get all repos from Elasticsearch\n for hit in GitRepo.search().query(\"match_all\").scan():\n # TODO (fschmidt): Maybe we can use this list as cache for the whole\n # scraper-webhook part.\n # This way, we could reduce the amount of operations needed for GitHub\n # and ElasticSearch\n repo_cache[hit.repo_name] = hit.to_dict(skip_empty=False)\n\n return repo_cache", "response": "Initialize the repository cache used for scraping."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef main(options):\n client = Client(server=options.server, username=options.username,\n password=options.password)\n print('Successfully connected to %s' % client.server)\n 
lm_info = client.sc.licenseManager.QuerySupportedFeatures()\n for feature in lm_info:\n print('%s: %s' % (feature.featureName, feature.state))\n\n client.logout()", "response": "Gets supported features from the license manager"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef append(self, mode):\n _logger().log(5, 'adding mode %r', mode.name)\n self._modes[mode.name] = mode\n mode.on_install(self.editor)\n return mode", "response": "Adds a mode instance to the editor."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remove(self, name_or_klass):\n _logger().log(5, 'removing mode %r', name_or_klass)\n mode = self.get(name_or_klass)\n mode.on_uninstall()\n self._modes.pop(mode.name)\n return mode", "response": "Removes a mode from the editor."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get(self, name_or_klass):\n if not isinstance(name_or_klass, str):\n name_or_klass = name_or_klass.__name__\n return self._modes[name_or_klass]", "response": "Gets a mode by name or class"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef on_install(self, editor):\n Mode.on_install(self, editor)\n self.setParent(editor)\n self.setPalette(QtWidgets.QApplication.instance().palette())\n self.setFont(QtWidgets.QApplication.instance().font())\n self.editor.panels.refresh()\n self._background_brush = QtGui.QBrush(QtGui.QColor(\n self.palette().window().color()))\n self._foreground_pen = QtGui.QPen(QtGui.QColor(\n self.palette().windowText().color()))", "response": "Overrides the base on_install method to set the parent widget as the parent widget."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\noverriding this method to update the visible state of the Panel.", "response": "def setVisible(self, visible):\n 
\"\"\"\n Shows/Hides the panel\n\n Automatically call CodeEdit.refresh_panels.\n\n :param visible: Visible state\n \"\"\"\n _logger().log(5, '%s visibility changed', self.name)\n super(Panel, self).setVisible(visible)\n if self.editor:\n self.editor.panels.refresh()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef append(self, decoration):\n if decoration not in self._decorations:\n self._decorations.append(decoration)\n self._decorations = sorted(\n self._decorations, key=lambda sel: sel.draw_order)\n self.editor.setExtraSelections(self._decorations)\n return True\n return False", "response": "Adds a text decoration on a CodeEdit instance"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef remove(self, decoration):\n try:\n self._decorations.remove(decoration)\n self.editor.setExtraSelections(self._decorations)\n return True\n except ValueError:\n return False", "response": "Removes a text decoration from the editor."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef clear(self):\n self._decorations[:] = []\n try:\n self.editor.setExtraSelections(self._decorations)\n except RuntimeError:\n pass", "response": "Clears all text decoration from the editor."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _on_key_pressed(self, event):\n if (int(event.modifiers()) & QtCore.Qt.ControlModifier > 0 and\n not int(event.modifiers()) & QtCore.Qt.ShiftModifier):\n if event.key() == QtCore.Qt.Key_0:\n self.editor.reset_zoom()\n event.accept()\n if event.key() == QtCore.Qt.Key_Plus:\n self.editor.zoom_in()\n event.accept()\n if event.key() == QtCore.Qt.Key_Minus:\n self.editor.zoom_out()\n event.accept()", "response": "Handles key presses on the key sequence."} {"SOURCE": "codesearchnet", "instruction": "How would you code a 
function in Python 3 to\nincrement editor fonts settings on mouse wheel event", "response": "def _on_wheel_event(self, event):\n        \"\"\"\n        Increments or decrements editor fonts settings on mouse wheel event\n        if ctrl modifier is on.\n\n        :param event: wheel event\n        :type event: QWheelEvent\n        \"\"\"\n        try:\n            delta = event.angleDelta().y()\n        except AttributeError:\n            # PyQt4/PySide\n            delta = event.delta()\n        if int(event.modifiers()) & QtCore.Qt.ControlModifier > 0:\n            if delta < self.prev_delta:\n                self.editor.zoom_out()\n                event.accept()\n            else:\n                self.editor.zoom_in()\n                event.accept()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_writer(self, writer):\n        if self._writer != writer and self._writer:\n            self._writer = None\n        if writer:\n            self._writer = writer", "response": "Sets the writer function to handle writing to the text edit."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef start_process(self, process, args=None, cwd=None, env=None):\n        self.setReadOnly(False)\n        if env is None:\n            env = {}\n        if args is None:\n            args = []\n        if not self._running:\n            self.process = QProcess()\n            self.process.finished.connect(self._on_process_finished)\n            self.process.started.connect(self.process_started.emit)\n            self.process.error.connect(self._write_error)\n            self.process.readyReadStandardError.connect(self._on_stderr)\n            self.process.readyReadStandardOutput.connect(self._on_stdout)\n            if cwd:\n                self.process.setWorkingDirectory(cwd)\n            e = self.process.systemEnvironment()\n            ev = QProcessEnvironment()\n            for v in e:\n                values = v.split('=')\n                ev.insert(values[0], '='.join(values[1:]))\n            for k, v in env.items():\n                ev.insert(k, v)\n            self.process.setProcessEnvironment(ev)\n            self._running = True\n            self._process_name = process\n            self._args = args\n            if self._clear_on_start:\n                self.clear()\n            self._user_stop = False\n            self._write_started()\n            self.process.start(process, args)\n
self.process.waitForStarted()\n        else:\n            _logger().warning('a process is already running')", "response": "Starts a process interactively."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef stop_process(self):\n        if self.process is not None:\n            self._user_stop = True\n            self.process.kill()\n            self.setReadOnly(True)\n            self._running = False", "response": "Stop the process (by killing it)."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndefault write function. Move the cursor to the end and insert text with the specified color.", "response": "def write(text_edit, text, color):\n        \"\"\"\n        Default write function. Move the cursor to the end and insert text with\n        the specified color.\n\n        :param text_edit: QInteractiveConsole instance\n        :type text_edit: pyqode.widgets.QInteractiveConsole\n\n        :param text: Text to write\n        :type text: str\n\n        :param color: Desired text color\n        :type color: QColor\n        \"\"\"\n        try:\n            text_edit.moveCursor(QTextCursor.End)\n            text_edit.setTextColor(color)\n            text_edit.insertPlainText(text)\n            text_edit.moveCursor(QTextCursor.End)\n        except RuntimeError:\n            pass"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\napply a pygments color scheme to the console.", "response": "def apply_color_scheme(self, color_scheme):\n        \"\"\"\n        Apply a pygments color scheme to the console.\n\n        As there is not a 1 to 1 mapping between color scheme formats and\n        console formats, we decided to make the following mapping (it usually\n        looks good for most of the available pygments styles):\n\n        - stdout_color = normal color\n        - stderr_color = red (lighter if background is dark)\n        - stdin_color = numbers color\n        - app_msg_color = string color\n        - background_color = background\n\n\n        :param color_scheme: pyqode.core.api.ColorScheme to apply\n        \"\"\"\n        self.stdout_color = color_scheme.formats['normal'].foreground().color()\n        self.stdin_color =
color_scheme.formats['number'].foreground().color()\n self.app_msg_color = color_scheme.formats[\n 'string'].foreground().color()\n self.background_color = color_scheme.background\n if self.background_color.lightness() < 128:\n self.stderr_color = QColor('#FF8080')\n else:\n self.stderr_color = QColor('red')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert a value to a code key that can be used in the code page.", "response": "def convert_to_codec_key(value):\n \"\"\"\n Normalize code key value (encoding codecs must be lower case and must\n not contain any dashes).\n\n :param value: value to convert.\n \"\"\"\n if not value:\n # fallback to utf-8\n value = 'UTF-8'\n # UTF-8 -> utf_8\n converted = value.replace('-', '_').lower()\n # fix some corner cases, see https://github.com/pyQode/pyQode/issues/11\n all_aliases = {\n 'ascii': [\n 'us_ascii',\n 'us',\n 'ansi_x3.4_1968',\n 'cp367',\n 'csascii',\n 'ibm367',\n 'iso_ir_6',\n 'iso646_us',\n 'iso_646.irv:1991'\n ],\n 'utf-7': [\n 'csunicode11utf7',\n 'unicode_1_1_utf_7',\n 'unicode_2_0_utf_7',\n 'x_unicode_1_1_utf_7',\n 'x_unicode_2_0_utf_7',\n ],\n 'utf_8': [\n 'unicode_1_1_utf_8',\n 'unicode_2_0_utf_8',\n 'x_unicode_1_1_utf_8',\n 'x_unicode_2_0_utf_8',\n ],\n 'utf_16': [\n 'utf_16le',\n 'ucs_2',\n 'unicode',\n 'iso_10646_ucs2'\n ],\n 'latin_1': ['iso_8859_1']\n }\n\n for key, aliases in all_aliases.items():\n if converted in aliases:\n return key\n return converted"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprocess a block and setup its folding info. This method call ``detect_fold_level`` and handles most of the tricky corner cases so that all you have to do is focus on getting the proper fold level foreach meaningful block, skipping the blank ones. 
:param current_block: current block to process :param previous_block: previous block :param text: current block text", "response": "def process_block(self, current_block, previous_block, text):\n \"\"\"\n Processes a block and setup its folding info.\n\n This method call ``detect_fold_level`` and handles most of the tricky\n corner cases so that all you have to do is focus on getting the proper\n fold level foreach meaningful block, skipping the blank ones.\n\n :param current_block: current block to process\n :param previous_block: previous block\n :param text: current block text\n \"\"\"\n prev_fold_level = TextBlockHelper.get_fold_lvl(previous_block)\n if text.strip() == '':\n # blank line always have the same level as the previous line\n fold_level = prev_fold_level\n else:\n fold_level = self.detect_fold_level(\n previous_block, current_block)\n if fold_level > self.limit:\n fold_level = self.limit\n\n prev_fold_level = TextBlockHelper.get_fold_lvl(previous_block)\n\n if fold_level > prev_fold_level:\n # apply on previous blank lines\n block = current_block.previous()\n while block.isValid() and block.text().strip() == '':\n TextBlockHelper.set_fold_lvl(block, fold_level)\n block = block.previous()\n TextBlockHelper.set_fold_trigger(\n block, True)\n\n # update block fold level\n if text.strip():\n TextBlockHelper.set_fold_trigger(\n previous_block, fold_level > prev_fold_level)\n TextBlockHelper.set_fold_lvl(current_block, fold_level)\n\n # user pressed enter at the beginning of a fold trigger line\n # the previous blank line will keep the trigger state and the new line\n # (which actually contains the trigger) must use the prev state (\n # and prev state must then be reset).\n prev = current_block.previous() # real prev block (may be blank)\n if (prev and prev.isValid() and prev.text().strip() == '' and\n TextBlockHelper.is_fold_trigger(prev)):\n # prev line has the correct trigger fold state\n TextBlockHelper.set_collapsed(\n current_block, 
TextBlockHelper.is_collapsed(\n prev))\n # make empty line not a trigger\n TextBlockHelper.set_fold_trigger(prev, False)\n TextBlockHelper.set_collapsed(prev, False)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef detect_fold_level(self, prev_block, block):\n text = block.text()\n # round down to previous indentation guide to ensure contiguous block\n # fold level evolution.\n return (len(text) - len(text.lstrip())) // self.editor.tab_length", "response": "Detects fold level by looking at the indentation guide."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef read_config(contents):\n file_obj = io.StringIO(contents)\n config = six.moves.configparser.ConfigParser()\n config.readfp(file_obj)\n return config", "response": "Reads pylintrc config into a ConfigParser object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nloads the pylint. config. 
py file.", "response": "def load_local_config(filename):\n \"\"\"Loads the pylint.config.py file.\n\n Args:\n filename (str): The python file containing the local configuration.\n\n Returns:\n module: The loaded Python module.\n \"\"\"\n if not filename:\n return imp.new_module('local_pylint_config')\n module = imp.load_source('local_pylint_config', filename)\n return module"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndetermines the final configuration.", "response": "def determine_final_config(config_module):\n \"\"\"Determines the final additions and replacements.\n\n Combines the config module with the defaults.\n\n Args:\n config_module: The loaded local configuration module.\n\n Returns:\n Config: the final configuration.\n \"\"\"\n config = Config(\n DEFAULT_LIBRARY_RC_ADDITIONS, DEFAULT_LIBRARY_RC_REPLACEMENTS,\n DEFAULT_TEST_RC_ADDITIONS, DEFAULT_TEST_RC_REPLACEMENTS)\n\n for field in config._fields:\n if hasattr(config_module, field):\n config = config._replace(**{field: getattr(config_module, field)})\n\n return config"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef lint_fileset(*dirnames, **kwargs):\n try:\n rc_filename = kwargs['rc_filename']\n description = kwargs['description']\n if len(kwargs) != 2:\n raise KeyError\n except KeyError:\n raise KeyError(_LINT_FILESET_MSG)\n\n pylint_shell_command = ['pylint', '--rcfile', rc_filename]\n pylint_shell_command.extend(dirnames)\n status_code = subprocess.call(pylint_shell_command)\n if status_code != 0:\n error_message = _ERROR_TEMPLATE.format(description, status_code)\n print(error_message, file=sys.stderr)\n sys.exit(status_code)", "response": "Lints a group of files using a given rcfile."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncombining a base rc and additions into a single file.", "response": "def make_rc(base_cfg, target_filename,\n additions=None, replacements=None):\n 
\"\"\"Combines a base rc and additions into a single file.\n\n Args:\n base_cfg (ConfigParser.ConfigParser): The configuration we are\n merging into.\n target_filename (str): The filename where the new configuration\n will be saved.\n additions (dict): (Optional) The values added to the configuration.\n replacements (dict): (Optional) The wholesale replacements for\n the new configuration.\n\n Raises:\n KeyError: if one of the additions or replacements does not\n already exist in the current config.\n \"\"\"\n # Set-up the mutable default values.\n if additions is None:\n additions = {}\n if replacements is None:\n replacements = {}\n\n # Create fresh config, which must extend the base one.\n new_cfg = six.moves.configparser.ConfigParser()\n # pylint: disable=protected-access\n new_cfg._sections = copy.deepcopy(base_cfg._sections)\n new_sections = new_cfg._sections\n # pylint: enable=protected-access\n\n for section, opts in additions.items():\n curr_section = new_sections.setdefault(\n section, collections.OrderedDict())\n for opt, opt_val in opts.items():\n curr_val = curr_section.get(opt)\n if curr_val is None:\n msg = _MISSING_OPTION_ADDITION.format(opt)\n raise KeyError(msg)\n curr_val = curr_val.rstrip(',')\n opt_val = _transform_opt(opt_val)\n curr_section[opt] = '%s, %s' % (curr_val, opt_val)\n\n for section, opts in replacements.items():\n curr_section = new_sections.setdefault(\n section, collections.OrderedDict())\n for opt, opt_val in opts.items():\n curr_val = curr_section.get(opt)\n if curr_val is None:\n # NOTE: This doesn't need to fail, because some options\n # are present in one version of pylint and not present\n # in another. For example ``[BASIC].method-rgx`` is\n # ``(([a-z][a-z0-9_]{2,30})|(_[a-z0-9_]*))$`` in\n # ``1.7.5`` but in ``1.8.0`` the default config has\n # ``#method-rgx=`` (i.e. 
it is commented out in the\n # ``[BASIC]`` section).\n msg = _MISSING_OPTION_REPLACE.format(opt)\n print(msg, file=sys.stderr)\n opt_val = _transform_opt(opt_val)\n curr_section[opt] = '%s' % (opt_val,)\n\n with open(target_filename, 'w') as file_obj:\n new_cfg.write(file_obj)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nscripts entry point. Lints both sets of files.", "response": "def run_command(args):\n \"\"\"Script entry point. Lints both sets of files.\"\"\"\n library_rc = 'pylintrc'\n test_rc = 'pylintrc.test'\n\n if os.path.exists(library_rc):\n os.remove(library_rc)\n\n if os.path.exists(test_rc):\n os.remove(test_rc)\n\n default_config = read_config(get_default_config())\n user_config = load_local_config(args.config)\n configuration = determine_final_config(user_config)\n\n make_rc(default_config, library_rc,\n additions=configuration.library_additions,\n replacements=configuration.library_replacements)\n make_rc(default_config, test_rc,\n additions=configuration.test_additions,\n replacements=configuration.test_replacements)\n\n lint_fileset(*args.library_filesets, rc_filename=library_rc,\n description='Library')\n lint_fileset(*args.test_filesets, rc_filename=test_rc,\n description='Test')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef u2open(self, u2request):\n tm = self.options.timeout\n url = build_opener(HTTPSClientAuthHandler(self.context))\n if self.u2ver() < 2.6:\n socket.setdefaulttimeout(tm)\n return url.open(u2request)\n else:\n return url.open(u2request, timeout=tm)", "response": "Open a connection.\n @param u2request: A urllib2 request.\n @type u2request: urllib2.Requet.\n @return: The opened file-like urllib2 object.\n @rtype: fp"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlogin to a vSphere server.", "response": "def login(self, username=None, password=None):\n \"\"\"Login to a 
vSphere server.\n\n >>> client.login(username='Administrator', password='strongpass')\n\n :param username: The username to authenticate as.\n :type username: str\n :param password: The password to authenticate with.\n :type password: str\n \"\"\"\n if username is None:\n username = self.username\n if password is None:\n password = self.password\n logger.debug(\"Logging into server\")\n self.sc.sessionManager.Login(userName=username, password=password)\n self._logged_in = True"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef invoke(self, method, _this, **kwargs):\n if (self._logged_in is False and\n method not in [\"Login\", \"RetrieveServiceContent\"]):\n logger.critical(\"Cannot exec %s unless logged in\", method)\n raise NotLoggedInError(\"Cannot exec %s unless logged in\" % method)\n\n for kwarg in kwargs:\n kwargs[kwarg] = self._marshal(kwargs[kwarg])\n\n result = getattr(self.service, method)(_this=_this, **kwargs)\n if hasattr(result, '__iter__') is False:\n logger.debug(\"Returning non-iterable result\")\n return result\n\n # We must traverse the result and convert any ManagedObjectReference\n # to a psphere class, this will then be lazy initialised on use\n logger.debug(result.__class__)\n logger.debug(\"Result: %s\", result)\n logger.debug(\"Length: %s\", len(result))\n if type(result) == list:\n new_result = []\n for item in result:\n new_result.append(self._unmarshal(item))\n else:\n new_result = self._unmarshal(result)\n \n logger.debug(\"Finished in invoke.\")\n #property = self.find_and_destroy(property)\n #print result\n # Return the modified result to the caller\n return new_result", "response": "Invoke a method on the server."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts a MOR to a psphere object.", "response": "def _mor_to_pobject(self, mo_ref):\n \"\"\"Converts a MOR to a psphere object.\"\"\"\n kls = 
classmapper(mo_ref._type)\n new_object = kls(mo_ref, self)\n return new_object"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _marshal(self, obj):\n logger.debug(\"Checking if %s needs to be marshalled\", obj)\n if isinstance(obj, ManagedObject):\n logger.debug(\"obj is a psphere object, converting to MOR\")\n return obj._mo_ref\n\n if isinstance(obj, list):\n logger.debug(\"obj is a list, recursing it\")\n new_list = []\n for item in obj:\n new_list.append(self._marshal(item))\n return new_list\n \n if not isinstance(obj, suds.sudsobject.Object):\n logger.debug(\"%s is not a sudsobject subclass, skipping\", obj)\n return obj\n\n if hasattr(obj, '__iter__'):\n logger.debug(\"obj is iterable, recursing it\")\n for (name, value) in obj:\n setattr(obj, name, self._marshal(value))\n return obj\n \n # The obj has nothing that we want to marshal or traverse, return it\n logger.debug(\"obj doesn't need to be marshalled\")\n return obj", "response": "Walks an object and marshals any psphere object into MORs."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwalk an object and unmarshals any MORs into psphere objects.", "response": "def _unmarshal(self, obj):\n \"\"\"Walks an object and unmarshals any MORs into psphere objects.\"\"\"\n if isinstance(obj, suds.sudsobject.Object) is False:\n logger.debug(\"%s is not a suds instance, skipping\", obj)\n return obj\n\n logger.debug(\"Processing:\")\n logger.debug(obj)\n logger.debug(\"...with keylist:\")\n logger.debug(obj.__keylist__)\n # If the obj that we're looking at has a _type key\n # then create a class of that type and return it immediately\n if \"_type\" in obj.__keylist__:\n logger.debug(\"obj is a MOR, converting to psphere class\")\n return self._mor_to_pobject(obj)\n\n new_object = obj.__class__()\n for sub_obj in obj:\n logger.debug(\"Looking at %s of type %s\", sub_obj, type(sub_obj))\n\n if isinstance(sub_obj[1], 
list):\n new_embedded_objs = []\n for emb_obj in sub_obj[1]:\n new_emb_obj = self._unmarshal(emb_obj)\n new_embedded_objs.append(new_emb_obj)\n setattr(new_object, sub_obj[0], new_embedded_objs)\n continue\n\n if not issubclass(sub_obj[1].__class__, suds.sudsobject.Object):\n logger.debug(\"%s is not a sudsobject subclass, skipping\",\n sub_obj[1].__class__)\n setattr(new_object, sub_obj[0], sub_obj[1])\n continue\n\n logger.debug(\"Obj keylist: %s\", sub_obj[1].__keylist__)\n if \"_type\" in sub_obj[1].__keylist__:\n logger.debug(\"Converting nested MOR to psphere class:\")\n logger.debug(sub_obj[1])\n kls = classmapper(sub_obj[1]._type)\n logger.debug(\"Setting %s.%s to %s\",\n new_object.__class__.__name__,\n sub_obj[0],\n sub_obj[1])\n setattr(new_object, sub_obj[0], kls(sub_obj[1], self))\n else:\n logger.debug(\"Didn't find _type in:\")\n logger.debug(sub_obj[1])\n setattr(new_object, sub_obj[0], self._unmarshal(sub_obj[1]))\n\n return new_object"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create(self, type_, **kwargs):\n obj = self.factory.create(\"ns0:%s\" % type_)\n for key, value in kwargs.items():\n setattr(obj, key, value)\n return obj", "response": "Create a SOAP object of the requested type."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a view of a managed object.", "response": "def get_view(self, mo_ref, properties=None):\n \"\"\"Get a view of a vSphere managed object.\n \n :param mo_ref: The MOR to get a view of\n :type mo_ref: ManagedObjectReference\n :param properties: A list of properties to retrieve from the \\\n server\n :type properties: list\n :returns: A view representing the ManagedObjectReference.\n :rtype: ManagedObject\n\n \"\"\"\n # This maps the mo_ref into a psphere class and then instantiates it\n kls = classmapper(mo_ref._type)\n view = kls(mo_ref, self)\n # Update the requested properties of the instance\n 
#view.update_view_data(properties=properties)\n\n return view"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a list of local views for multiple managed objects.", "response": "def get_views(self, mo_refs, properties=None):\n \"\"\"Get a list of local view's for multiple managed objects.\n\n :param mo_refs: The list of ManagedObjectReference's that views are \\\n to be created for.\n :type mo_refs: ManagedObjectReference\n :param properties: The properties to retrieve in the views.\n :type properties: list\n :returns: A list of local instances representing the server-side \\\n managed objects.\n :rtype: list of ManagedObject's\n\n \"\"\"\n property_specs = []\n for mo_ref in mo_refs:\n property_spec = self.create('PropertySpec')\n property_spec.type = str(mo_ref._type)\n if properties is None:\n properties = []\n else:\n # Only retrieve the requested properties\n if properties == \"all\":\n property_spec.all = True\n else:\n property_spec.all = False\n property_spec.pathSet = properties\n property_specs.append(property_spec)\n\n object_specs = []\n for mo_ref in mo_refs:\n object_spec = self.create('ObjectSpec')\n object_spec.obj = mo_ref\n object_specs.append(object_spec)\n\n pfs = self.create('PropertyFilterSpec')\n pfs.propSet = property_specs\n pfs.objectSet = object_specs\n\n object_contents = self.sc.propertyCollector.RetrieveProperties(\n specSet=pfs)\n views = []\n for object_content in object_contents:\n # Update the instance with the data in object_content\n object_content.obj._set_view_data(object_content=object_content)\n views.append(object_content.obj)\n\n return views"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nbuilds a PropertyFilterSpec capable of full inventory traversal.", "response": "def get_search_filter_spec(self, begin_entity, property_spec):\n \"\"\"Build a PropertyFilterSpec capable of full inventory traversal.\n \n By specifying all valid traversal 
specs we are creating a PFS that\n can recursively select any object under the given entity.\n\n :param begin_entity: The place in the MOB to start the search.\n :type begin_entity: ManagedEntity\n :param property_spec: TODO\n :type property_spec: TODO\n :returns: A PropertyFilterSpec, suitable for recursively searching \\\n under the given ManagedEntity.\n :rtype: PropertyFilterSpec\n\n \"\"\"\n # The selection spec for additional objects we want to filter\n ss_strings = ['resource_pool_traversal_spec',\n 'resource_pool_vm_traversal_spec',\n 'folder_traversal_spec',\n 'datacenter_host_traversal_spec',\n 'datacenter_vm_traversal_spec',\n 'compute_resource_rp_traversal_spec',\n 'compute_resource_host_traversal_spec',\n 'host_vm_traversal_spec',\n 'datacenter_datastore_traversal_spec']\n\n # Create a selection spec for each of the strings specified above\n selection_specs = [\n self.create('SelectionSpec', name=ss_string)\n for ss_string in ss_strings\n ]\n\n # A traversal spec for deriving ResourcePool's from found VMs\n rpts = self.create('TraversalSpec')\n rpts.name = 'resource_pool_traversal_spec'\n rpts.type = 'ResourcePool'\n rpts.path = 'resourcePool'\n rpts.selectSet = [selection_specs[0], selection_specs[1]]\n\n # A traversal spec for deriving ResourcePool's from found VMs\n rpvts = self.create('TraversalSpec')\n rpvts.name = 'resource_pool_vm_traversal_spec'\n rpvts.type = 'ResourcePool'\n rpvts.path = 'vm'\n\n crrts = self.create('TraversalSpec')\n crrts.name = 'compute_resource_rp_traversal_spec'\n crrts.type = 'ComputeResource'\n crrts.path = 'resourcePool'\n crrts.selectSet = [selection_specs[0], selection_specs[1]]\n\n crhts = self.create('TraversalSpec')\n crhts.name = 'compute_resource_host_traversal_spec'\n crhts.type = 'ComputeResource'\n crhts.path = 'host'\n \n dhts = self.create('TraversalSpec')\n dhts.name = 'datacenter_host_traversal_spec'\n dhts.type = 'Datacenter'\n dhts.path = 'hostFolder'\n dhts.selectSet = [selection_specs[2]]\n\n dsts = 
self.create('TraversalSpec')\n dsts.name = 'datacenter_datastore_traversal_spec'\n dsts.type = 'Datacenter'\n dsts.path = 'datastoreFolder'\n dsts.selectSet = [selection_specs[2]]\n\n dvts = self.create('TraversalSpec')\n dvts.name = 'datacenter_vm_traversal_spec'\n dvts.type = 'Datacenter'\n dvts.path = 'vmFolder'\n dvts.selectSet = [selection_specs[2]]\n\n hvts = self.create('TraversalSpec')\n hvts.name = 'host_vm_traversal_spec'\n hvts.type = 'HostSystem'\n hvts.path = 'vm'\n hvts.selectSet = [selection_specs[2]]\n \n fts = self.create('TraversalSpec')\n fts.name = 'folder_traversal_spec'\n fts.type = 'Folder'\n fts.path = 'childEntity'\n fts.selectSet = [selection_specs[2], selection_specs[3],\n selection_specs[4], selection_specs[5],\n selection_specs[6], selection_specs[7],\n selection_specs[1], selection_specs[8]]\n\n obj_spec = self.create('ObjectSpec')\n obj_spec.obj = begin_entity\n obj_spec.selectSet = [fts, dvts, dhts, crhts, crrts,\n rpts, hvts, rpvts, dsts]\n\n pfs = self.create('PropertyFilterSpec')\n pfs.propSet = [property_spec]\n pfs.objectSet = [obj_spec]\n return pfs"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef invoke_task(self, method, **kwargs):\n # Don't execute methods which don't return a Task object\n if not method.endswith('_Task'):\n logger.error('invoke_task can only be used for methods which '\n 'return a ManagedObjectReference to a Task.')\n return None\n\n task_mo_ref = self.invoke(method=method, **kwargs)\n task = Task(task_mo_ref, self)\n task.update_view_data(properties=['info'])\n # TODO: This returns true when there is an error\n while True:\n if task.info.state == 'success':\n return task\n elif task.info.state == 'error':\n # TODO: Handle error checking properly\n raise TaskFailedError(task.info.error.localizedMessage)\n\n # TODO: Implement progresscallbackfunc\n # Sleep two seconds and then refresh the data from the server\n time.sleep(2)\n 
task.update_view_data(properties=['info'])", "response": "Invoke a \\ *_Task method and wait for it to complete."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfind all ManagedEntity s of the requested type.", "response": "def find_entity_views(self, view_type, begin_entity=None, properties=None):\n \"\"\"Find all ManagedEntity's of the requested type.\n\n :param view_type: The type of ManagedEntity's to find.\n :type view_type: str\n :param begin_entity: The MOR to start searching for the entity. \\\n The default is to start the search at the root folder.\n :type begin_entity: ManagedObjectReference or None\n :returns: A list of ManagedEntity's\n :rtype: list\n\n \"\"\"\n if properties is None:\n properties = []\n\n # Start the search at the root folder if no begin_entity was given\n if not begin_entity:\n begin_entity = self.sc.rootFolder._mo_ref\n\n property_spec = self.create('PropertySpec')\n property_spec.type = view_type\n property_spec.all = False\n property_spec.pathSet = properties\n\n pfs = self.get_search_filter_spec(begin_entity, property_spec)\n\n # Retrieve properties from server and update entity\n obj_contents = self.sc.propertyCollector.RetrieveProperties(specSet=pfs)\n\n views = []\n for obj_content in obj_contents:\n logger.debug(\"In find_entity_view with object of type %s\",\n obj_content.obj.__class__.__name__)\n obj_content.obj.update_view_data(properties=properties)\n views.append(obj_content.obj)\n\n return views"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfinds a ManagedEntity of the requested type.", "response": "def find_entity_view(self, view_type, begin_entity=None, filter={},\n properties=None):\n \"\"\"Find a ManagedEntity of the requested type.\n\n Traverses the MOB looking for an entity matching the filter.\n\n :param view_type: The type of ManagedEntity to find.\n :type view_type: str\n :param begin_entity: The MOR to start searching for 
the entity. \\\n The default is to start the search at the root folder.\n :type begin_entity: ManagedObjectReference or None\n :param filter: Key/value pairs to filter the results. The key is \\\n a valid parameter of the ManagedEntity type. The value is what \\\n that parameter should match.\n :type filter: dict\n :returns: If an entity is found, a ManagedEntity matching the search.\n :rtype: ManagedEntity\n\n \"\"\"\n if properties is None:\n properties = []\n\n kls = classmapper(view_type)\n # Start the search at the root folder if no begin_entity was given\n if not begin_entity:\n begin_entity = self.sc.rootFolder._mo_ref\n logger.debug(\"Using %s\", self.sc.rootFolder._mo_ref)\n\n property_spec = self.create('PropertySpec')\n property_spec.type = view_type\n property_spec.all = False\n property_spec.pathSet = list(filter.keys())\n\n pfs = self.get_search_filter_spec(begin_entity, property_spec)\n\n # Retrieve properties from server and update entity\n #obj_contents = self.propertyCollector.RetrieveProperties(specSet=pfs)\n obj_contents = self.sc.propertyCollector.RetrieveProperties(specSet=pfs)\n\n # TODO: Implement filtering\n if not filter:\n logger.warning('No filter specified, returning first match.')\n # If no filter is specified we just return the first item\n # in the list of returned objects\n logger.debug(\"Creating class in find_entity_view (filter)\")\n view = kls(obj_contents[0].obj._mo_ref, self)\n logger.debug(\"Completed creating class in find_entity_view (filter)\")\n #view.update_view_data(properties)\n return view\n\n matched = False\n # Iterate through obj_contents retrieved\n for obj_content in obj_contents:\n # If there are is no propSet, skip this one\n if not obj_content.propSet:\n continue\n\n matches = 0\n # Iterate through each property in the set\n for prop in obj_content.propSet:\n for key in filter.keys():\n # If the property name is in the defined filter\n if prop.name == key:\n # ...and it matches the value specified\n # TODO: 
Regex this?\n if prop.val == filter[prop.name]:\n # We've found a match\n matches += 1\n else:\n break\n else:\n continue\n if matches == len(filter):\n filtered_obj_content = obj_content\n matched = True\n break\n else:\n continue\n\n if matched is not True:\n # There were no matches\n raise ObjectNotFoundError(\"No matching objects for filter\")\n\n logger.debug(\"Creating class in find_entity_view\")\n view = kls(filtered_obj_content.obj._mo_ref, self)\n logger.debug(\"Completed creating class in find_entity_view\")\n #view.update_view_data(properties=properties)\n return view"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading a template of the specified name.", "response": "def load_template(name=None):\n \"\"\"Loads a template of the specified name.\n\n Templates are placed in the directory in YAML format with\n a .yaml extension.\n\n If no name is specified then the function will return the default\n template (/default.yaml) if it exists.\n \n :param name: The name of the template to load.\n :type name: str or None (default)\n\n \"\"\"\n if name is None:\n name = \"default\"\n\n logger.info(\"Loading template with name %s\", name)\n try:\n template_file = open(\"%s/%s.yaml\" % (template_path, name))\n except IOError:\n raise TemplateNotFoundError\n\n template = yaml.safe_load(template_file)\n template_file.close()\n if \"extends\" in template:\n logger.debug(\"Merging %s with %s\", name, template[\"extends\"])\n template = _merge(load_template(template[\"extends\"]), template)\n\n return template"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a list of all templates.", "response": "def list_templates():\n \"\"\"Returns a list of all templates.\"\"\"\n templates = [f for f in glob.glob(os.path.join(template_path, '*.yaml'))]\n return templates"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_vm(client, name, 
compute_resource, datastore, disksize, nics,\n memory, num_cpus, guest_id, host=None):\n \"\"\"Create a virtual machine using the specified values.\n\n :param name: The name of the VM to create.\n :type name: str\n :param compute_resource: The name of a ComputeResource in which to \\\n create the VM.\n :type compute_resource: str\n :param datastore: The name of the datastore on which to create the VM.\n :type datastore: str\n :param disksize: The size of the disk, specified in KB, MB or GB. e.g. \\\n 20971520KB, 20480MB, 20GB.\n :type disksize: str\n :param nics: The NICs to create, specified in a list of dict's which \\\n contain a \"network_name\" and \"type\" key. e.g. \\\n {\"network_name\": \"VM Network\", \"type\": \"VirtualE1000\"}\n :type nics: list of dict's\n :param memory: The amount of memory for the VM. Specified in KB, MB or \\\n GB. e.g. 2097152KB, 2048MB, 2GB.\n :type memory: str\n :param num_cpus: The number of CPUs the VM will have.\n :type num_cpus: int\n :param guest_id: The vSphere string of the VM guest you are creating. \\\n The list of VMs can be found at \\\n http://pubs.vmware.com/vsphere-50/index.jsp?topic=/com.vmware.wssdk.apiref.doc_50/right-pane.html\n :type guest_id: str\n :param host: The name of the host (default: None), if you want to \\\n provision the VM on a \\ specific host.\n :type host: str\n\n \"\"\"\n print(\"Creating VM %s\" % name)\n # If the host is not set, use the ComputeResource as the target\n if host is None:\n target = client.find_entity_view(\"ComputeResource\",\n filter={\"name\": compute_resource})\n resource_pool = target.resourcePool\n else:\n target = client.find_entity_view(\"HostSystem\", filter={\"name\": host})\n resource_pool = target.parent.resourcePool\n\n disksize_pattern = re.compile(\"^\\d+[KMG]B\")\n if disksize_pattern.match(disksize) is None:\n print(\"Disk size %s is invalid. 
Try \\\"12G\\\" or similar\" % disksize)\n sys.exit(1)\n\n if disksize.endswith(\"GB\"):\n disksize_kb = int(disksize[:-2]) * 1024 * 1024\n elif disksize.endswith(\"MB\"):\n disksize_kb = int(disksize[:-2]) * 1024\n elif disksize.endswith(\"KB\"):\n disksize_kb = int(disksize[:-2])\n else:\n print(\"Disk size %s is invalid. Try \\\"12G\\\" or similar\" % disksize)\n\n memory_pattern = re.compile(\"^\\d+[KMG]B\")\n if memory_pattern.match(memory) is None:\n print(\"Memory size %s is invalid. Try \\\"12G\\\" or similar\" % memory)\n sys.exit(1)\n\n if memory.endswith(\"GB\"):\n memory_mb = int(memory[:-2]) * 1024\n elif memory.endswith(\"MB\"):\n memory_mb = int(memory[:-2])\n elif memory.endswith(\"KB\"):\n memory_mb = int(memory[:-2]) / 1024\n else:\n print(\"Memory size %s is invalid. Try \\\"12G\\\" or similar\" % memory)\n\n # A list of devices to be assigned to the VM\n vm_devices = []\n\n # Create a disk controller\n controller = create_controller(client, \"VirtualLsiLogicController\")\n vm_devices.append(controller)\n\n ds_to_use = None\n for ds in target.datastore:\n if ds.name == datastore:\n ds_to_use = ds\n break\n\n if ds_to_use is None:\n print(\"Could not find datastore on %s with name %s\" %\n (target.name, datastore))\n sys.exit(1)\n\n # Ensure the datastore is accessible and has enough space\n if ds_to_use.summary.accessible is not True:\n print(\"Datastore (%s) exists, but is not accessible\" %\n ds_to_use.summary.name)\n sys.exit(1)\n if ds_to_use.summary.freeSpace < disksize_kb * 1024:\n print(\"Datastore (%s) exists, but does not have sufficient\"\n \" free space.\" % ds_to_use.summary.name)\n sys.exit(1)\n\n disk = create_disk(client, datastore=ds_to_use, disksize_kb=disksize_kb)\n vm_devices.append(disk)\n \n for nic in nics:\n nic_spec = create_nic(client, target, nic)\n if nic_spec is None:\n print(\"Could not create spec for NIC\")\n sys.exit(1)\n\n # Append the nic spec to the vm_devices list\n vm_devices.append(nic_spec)\n\n vmfi = 
client.create(\"VirtualMachineFileInfo\")\n vmfi.vmPathName = \"[%s]\" % ds_to_use.summary.name\n vm_config_spec = client.create(\"VirtualMachineConfigSpec\")\n vm_config_spec.name = name\n vm_config_spec.memoryMB = memory_mb\n vm_config_spec.files = vmfi\n vm_config_spec.annotation = \"Auto-provisioned by psphere\"\n vm_config_spec.numCPUs = num_cpus\n vm_config_spec.guestId = guest_id\n vm_config_spec.deviceChange = vm_devices\n\n # Find the datacenter of the target\n if target.__class__.__name__ == \"HostSystem\":\n datacenter = target.parent.parent.parent\n else:\n datacenter = target.parent.parent\n\n try:\n task = datacenter.vmFolder.CreateVM_Task(config=vm_config_spec,\n pool=resource_pool)\n except VimFault as e:\n print(\"Failed to create %s: \" % e)\n sys.exit()\n\n while task.info.state in [\"queued\", \"running\"]:\n time.sleep(5)\n task.update()\n print(\"Waiting 5 more seconds for VM creation\")\n\n if task.info.state == \"success\":\n elapsed_time = task.info.completeTime - task.info.startTime\n print(\"Successfully created new VM %s. Server took %s seconds.\" %\n (name, elapsed_time.seconds))\n elif task.info.state == \"error\":\n print(\"ERROR: The task for creating the VM has finished with\"\n \" an error. If an error was reported it will follow.\")\n try:\n print(\"ERROR: %s\" % task.info.error.localizedMessage)\n except AttributeError:\n print(\"ERROR: There is no error message available.\")\n else:\n print(\"UNKNOWN: The task reports an unknown state %s\" %\n task.info.state)", "response": "Creates a virtual machine using the specified values."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_nic(client, target, nic):\n # Iterate through the networks and look for one matching\n # the requested name\n for network in target.network:\n if network.name == nic[\"network_name\"]:\n net = network\n break\n else:\n return None\n\n # Success! 
Create a nic attached to this network\n backing = client.create(\"VirtualEthernetCardNetworkBackingInfo\")\n backing.deviceName = nic[\"network_name\"]\n backing.network = net\n\n connect_info = client.create(\"VirtualDeviceConnectInfo\")\n connect_info.allowGuestControl = True\n connect_info.connected = False\n connect_info.startConnected = True\n\n new_nic = client.create(nic[\"type\"]) \n new_nic.backing = backing\n new_nic.key = 2\n # TODO: Work out a way to automatically increment this\n new_nic.unitNumber = 1\n new_nic.addressType = \"generated\"\n new_nic.connectable = connect_info\n\n nic_spec = client.create(\"VirtualDeviceConfigSpec\")\n nic_spec.device = new_nic\n nic_spec.fileOperation = None\n operation = client.create(\"VirtualDeviceConfigSpecOperation\")\n nic_spec.operation = (operation.add)\n\n return nic_spec", "response": "Create a NIC spec for a target."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_ignore_patterns(self, *patterns):\n for ptrn in patterns:\n if isinstance(ptrn, list):\n for p in ptrn:\n self._ignored_patterns.append(p)\n else:\n self._ignored_patterns.append(ptrn)", "response": "Adds an ignore pattern to the list for ignore patterns."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_context_menu(self, context_menu):\n self.context_menu = context_menu\n self.context_menu.tree_view = self\n self.context_menu.init_actions()\n for action in self.context_menu.actions():\n self.addAction(action)", "response": "Sets the context menu of the tree view."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the root path to watch.", "response": "def set_root_path(self, path, hide_extra_columns=True):\n \"\"\"\n Sets the root path to watch\n :param path: root path - str\n :param hide_extra_columns: Hide extra column (size, paths,...)\n \"\"\"\n if not self.isVisible():\n 
self._path_to_set = path\n self._hide_extra_colums = hide_extra_columns\n return\n if sys.platform == 'win32' and os.path.splitunc(path)[0]:\n mdl = QtGui.QStandardItemModel(1, 1)\n item = QtGui.QStandardItem(\n QtGui.QIcon.fromTheme(\n 'dialog-warning',\n QtGui.QIcon(':/pyqode-icons/rc/dialog-warning.png')),\n 'UNC pathnames not supported.')\n mdl.setItem(0, 0, item)\n self.setModel(mdl)\n self.root_path = None\n return\n self._hide_extra_colums = hide_extra_columns\n\n if os.path.isfile(path):\n path = os.path.abspath(os.path.join(path, os.pardir))\n self._fs_model_source = QtWidgets.QFileSystemModel()\n self._fs_model_source.setFilter(QtCore.QDir.Dirs | QtCore.QDir.Files |\n QtCore.QDir.NoDotAndDotDot |\n QtCore.QDir.Hidden)\n self._fs_model_source.setIconProvider(self._icon_provider)\n self._fs_model_proxy = self.FilterProxyModel()\n for item in self._ignored_patterns:\n self._fs_model_proxy.ignored_patterns.append(item)\n self._fs_model_proxy.setSourceModel(self._fs_model_source)\n self._fs_model_proxy.set_root_path(path)\n # takes parent of the root path, filter will keep only `path`, that\n # way `path` appear as the top level node of the tree\n self._root_path = os.path.dirname(path)\n self.root_path = path\n self._fs_model_source.directoryLoaded.connect(self._on_path_loaded)\n self._fs_model_source.setRootPath(self._root_path)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the file path of the item at the specified index.", "response": "def filePath(self, index):\n \"\"\"\n Gets the file path of the item at the specified ``index``.\n\n :param index: item index - QModelIndex\n :return: str\n \"\"\"\n return self._fs_model_source.filePath(\n self._fs_model_proxy.mapToSource(index))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving the file info of the item at the specified index.", "response": "def fileInfo(self, index):\n \"\"\"\n Gets the file info of the item at the 
specified ``index``.\n\n :param index: item index - QModelIndex\n :return: QFileInfo\n \"\"\"\n return self._fs_model_source.fileInfo(\n self._fs_model_proxy.mapToSource(index))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncopy the selected items to the clipboard.", "response": "def copy_to_clipboard(self, copy=True):\n \"\"\"\n Copies the selected items to the clipboard\n :param copy: True to copy, False to cut.\n \"\"\"\n urls = self.selected_urls()\n if not urls:\n return\n mime = self._UrlListMimeData(copy)\n mime.set_list(urls)\n clipboard = QtWidgets.QApplication.clipboard()\n clipboard.setMimeData(mime)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef selected_urls(self):\n urls = []\n debug('gettings urls')\n for proxy_index in self.tree_view.selectedIndexes():\n finfo = self.tree_view.fileInfo(proxy_index)\n urls.append(finfo.canonicalFilePath())\n debug('selected urls %r' % [str(url) for url in urls])\n return urls", "response": "Gets the list of selected items file paths"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\npasting files from clipboard.", "response": "def paste_from_clipboard(self):\n \"\"\"\n Pastes files from clipboard.\n \"\"\"\n to = self.get_current_path()\n if os.path.isfile(to):\n to = os.path.abspath(os.path.join(to, os.pardir))\n mime = QtWidgets.QApplication.clipboard().mimeData()\n\n paste_operation = None\n if mime.hasFormat(self._UrlListMimeData.format(copy=True)):\n paste_operation = True\n elif mime.hasFormat(self._UrlListMimeData.format(copy=False)):\n paste_operation = False\n if paste_operation is not None:\n self._paste(\n self._UrlListMimeData.list_from(mime, copy=paste_operation),\n to, copy=paste_operation)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npasting the files listed in sources to destination.", "response": "def _paste(self, sources, destination, copy):\n 
\"\"\"\n Copies the files listed in ``sources`` to destination. Source are\n removed if copy is set to False.\n \"\"\"\n for src in sources:\n debug('%s <%s> to <%s>' % (\n 'copying' if copy else 'cutting', src, destination))\n perform_copy = True\n ext = os.path.splitext(src)[1]\n original = os.path.splitext(os.path.split(src)[1])[0]\n filename, status = QtWidgets.QInputDialog.getText(\n self.tree_view, _('Copy'), _('New name:'),\n QtWidgets.QLineEdit.Normal, original)\n if filename == '' or not status:\n return\n filename = filename + ext\n final_dest = os.path.join(destination, filename)\n if os.path.exists(final_dest):\n rep = QtWidgets.QMessageBox.question(\n self.tree_view, _('File exists'),\n _('File <%s> already exists. Do you want to erase it?') %\n final_dest,\n QtWidgets.QMessageBox.Yes | QtWidgets.QMessageBox.No,\n QtWidgets.QMessageBox.No)\n if rep == QtWidgets.QMessageBox.No:\n perform_copy = False\n if not perform_copy:\n continue\n try:\n if os.path.isfile(src):\n shutil.copy(src, final_dest)\n else:\n shutil.copytree(src, final_dest)\n except (IOError, OSError) as e:\n QtWidgets.QMessageBox.warning(\n self.tree_view, _('Copy failed'), _('Failed to copy \"%s\" to \"%s\".\\n\\n%s' %\n (src, destination, str(e))))\n _logger().exception('failed to copy \"%s\" to \"%s', src,\n destination)\n else:\n debug('file copied %s', src)\n if not copy:\n debug('removing source (cut operation)')\n if os.path.isfile(src):\n os.remove(src)\n else:\n shutil.rmtree(src)\n self.tree_view.files_renamed.emit([(src, final_dest)])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the list of files contained in path.", "response": "def _get_files(path):\n \"\"\"\n Returns the list of files contained in path (recursively).\n \"\"\"\n ret_val = []\n for root, _, files in os.walk(path):\n for f in files:\n ret_val.append(os.path.join(root, f))\n return ret_val"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following 
Python 3 code does\ndef delete(self):\n urls = self.selected_urls()\n rep = QtWidgets.QMessageBox.question(\n self.tree_view, _('Confirm delete'),\n _('Are you sure about deleting the selected files/directories?'),\n QtWidgets.QMessageBox.Yes | QtWidgets.QMessageBox.No,\n QtWidgets.QMessageBox.Yes)\n if rep == QtWidgets.QMessageBox.Yes:\n deleted_files = []\n for fn in urls:\n try:\n if os.path.isfile(fn):\n os.remove(fn)\n deleted_files.append(fn)\n else:\n files = self._get_files(fn)\n shutil.rmtree(fn)\n deleted_files += files\n except OSError as e:\n QtWidgets.QMessageBox.warning(\n self.tree_view, _('Delete failed'),\n _('Failed to remove \"%s\".\\n\\n%s') % (fn, str(e)))\n _logger().exception('failed to remove %s', fn)\n self.tree_view.files_deleted.emit(deleted_files)\n for d in deleted_files:\n debug('%s removed', d)\n self.tree_view.file_deleted.emit(os.path.normpath(d))", "response": "Deletes the selected items."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_current_path(self):\n path = self.tree_view.fileInfo(\n self.tree_view.currentIndex()).filePath()\n # https://github.com/pyQode/pyQode/issues/6\n if not path:\n path = self.tree_view.root_path\n return path", "response": "Gets the path of the currently selected item."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef copy_path_to_clipboard(self):\n path = self.get_current_path()\n QtWidgets.QApplication.clipboard().setText(path)\n debug('path copied: %s' % path)", "response": "Copies the current path to the clipboard"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrename the selected item in the tree view.", "response": "def rename(self):\n \"\"\"\n Renames the selected item in the tree view\n \"\"\"\n src = self.get_current_path()\n pardir, name = os.path.split(src)\n new_name, status = QtWidgets.QInputDialog.getText(\n self.tree_view, _('Rename '), _('New 
name:'),\n QtWidgets.QLineEdit.Normal, name)\n if status:\n dest = os.path.join(pardir, new_name)\n old_files = []\n if os.path.isdir(src):\n old_files = self._get_files(src)\n else:\n old_files = [src]\n try:\n os.rename(src, dest)\n except OSError as e:\n QtWidgets.QMessageBox.warning(\n self.tree_view, _('Rename failed'),\n _('Failed to rename \"%s\" into \"%s\".\\n\\n%s') % (src, dest, str(e)))\n else:\n if os.path.isdir(dest):\n new_files = self._get_files(dest)\n else:\n new_files = [dest]\n self.tree_view.file_renamed.emit(os.path.normpath(src),\n os.path.normpath(dest))\n renamed_files = []\n for old_f, new_f in zip(old_files, new_files):\n self.tree_view.file_renamed.emit(old_f, new_f)\n renamed_files.append((old_f, new_f))\n # emit all changes in one go\n self.tree_view.files_renamed.emit(renamed_files)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a directory under the selected directory.", "response": "def create_directory(self):\n \"\"\"\n Creates a directory under the selected directory (if the selected item\n is a file, the parent directory is used).\n \"\"\"\n src = self.get_current_path()\n name, status = QtWidgets.QInputDialog.getText(\n self.tree_view, _('Create directory'), _('Name:'),\n QtWidgets.QLineEdit.Normal, '')\n if status:\n fatal_names = ['.', '..']\n for i in fatal_names:\n if i == name:\n QtWidgets.QMessageBox.critical(\n self.tree_view, _(\"Error\"), _(\"Wrong directory name\"))\n return\n\n if os.path.isfile(src):\n src = os.path.dirname(src)\n dir_name = os.path.join(src, name)\n try:\n os.makedirs(dir_name, exist_ok=True)\n except OSError as e:\n QtWidgets.QMessageBox.warning(\n self.tree_view, _('Failed to create directory'),\n _('Failed to create directory: \"%s\".\\n\\n%s') % (dir_name, str(e)))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a file under the current directory.", "response": "def create_file(self):\n \"\"\"\n Creates a file under 
the current directory.\n \"\"\"\n src = self.get_current_path()\n name, status = QtWidgets.QInputDialog.getText(\n self.tree_view, _('Create new file'), _('File name:'),\n QtWidgets.QLineEdit.Normal, '')\n if status:\n fatal_names = ['.', '..', os.sep]\n for i in fatal_names:\n if i == name:\n QtWidgets.QMessageBox.critical(\n self.tree_view, _(\"Error\"), _(\"Wrong directory name\"))\n return\n\n if os.path.isfile(src):\n src = os.path.dirname(src)\n path = os.path.join(src, name)\n try:\n with open(path, 'w'):\n pass\n except OSError as e:\n QtWidgets.QMessageBox.warning(\n self.tree_view, _('Failed to create new file'),\n _('Failed to create file: \"%s\".\\n\\n%s') % (path, str(e)))\n else:\n self.tree_view.file_created.emit(os.path.normpath(path))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_mimetype(path):\n filename = os.path.split(path)[1]\n mimetype = mimetypes.guess_type(filename)[0]\n if mimetype is None:\n mimetype = 'text/x-plain'\n _logger().debug('mimetype detected: %s', mimetype)\n return mimetype", "response": "Guesses the mime type of a file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nopen a file and set its content on the editor widget.", "response": "def open(self, path, encoding=None, use_cached_encoding=True):\n \"\"\"\n Open a file and set its content on the editor widget.\n\n pyqode does not try to guess encoding. It's up to the client code to\n handle encodings. You can either use a charset detector to detect\n encoding or rely on a settings in your application. It is also up to\n you to handle UnicodeDecodeError, unless you've added\n class:`pyqode.core.panels.EncodingPanel` on the editor.\n\n pyqode automatically caches file encoding that you can later reuse it\n automatically.\n\n :param path: Path of the file to open.\n :param encoding: Default file encoding. 
Default is to use the locale\n encoding.\n :param use_cached_encoding: True to use the cached encoding instead\n of ``encoding``. Set it to True if you want to force reload with a\n new encoding.\n\n :raises: UnicodeDecodeError in case of error if no EncodingPanel\n were set on the editor.\n \"\"\"\n ret_val = False\n if encoding is None:\n encoding = locale.getpreferredencoding()\n self.opening = True\n settings = Cache()\n self._path = path\n # get encoding from cache\n if use_cached_encoding:\n try:\n cached_encoding = settings.get_file_encoding(\n path, preferred_encoding=encoding)\n except KeyError:\n pass\n else:\n encoding = cached_encoding\n enable_modes = os.path.getsize(path) < self._limit\n for m in self.editor.modes:\n if m.enabled:\n m.enabled = enable_modes\n # open file and get its content\n try:\n with open(path, 'Ur', encoding=encoding) as file:\n content = file.read()\n if self.autodetect_eol:\n self._eol = file.newlines\n if isinstance(self._eol, tuple):\n self._eol = self._eol[0]\n if self._eol is None:\n # empty file has no newlines\n self._eol = self.EOL.string(self.preferred_eol)\n else:\n self._eol = self.EOL.string(self.preferred_eol)\n except (UnicodeDecodeError, UnicodeError) as e:\n try:\n from pyqode.core.panels import EncodingPanel\n panel = self.editor.panels.get(EncodingPanel)\n except KeyError:\n raise e # panel not found, not automatic error management\n else:\n panel.on_open_failed(path, encoding)\n else:\n # success! 
Cache the encoding\n settings.set_file_encoding(path, encoding)\n self._encoding = encoding\n # replace tabs by spaces\n if self.replace_tabs_by_spaces:\n content = content.replace(\"\\t\", \" \" * self.editor.tab_length)\n # set plain text\n self.editor.setPlainText(\n content, self.get_mimetype(path), self.encoding)\n self.editor.setDocumentTitle(self.editor.file.name)\n ret_val = True\n _logger().debug('file open: %s', path)\n self.opening = False\n if self.restore_cursor:\n self._restore_cached_pos()\n self._check_for_readonly()\n return ret_val"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef reload(self, encoding):\n assert os.path.exists(self.path)\n self.open(self.path, encoding=encoding,\n use_cached_encoding=False)", "response": "Reload the file with another encoding."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsaving the content of the current file to a file.", "response": "def save(self, path=None, encoding=None, fallback_encoding=None):\n \"\"\"\n Save the editor content to a file.\n\n :param path: optional file path. Set it to None to save using the\n current path (save), set a new path to save as.\n :param encoding: optional encoding, will use the current\n file encoding if None.\n :param fallback_encoding: Fallback encoding to use in case of encoding\n error. 
None to use the locale preferred encoding\n\n \"\"\"\n if not self.editor.dirty and \\\n (encoding is None and encoding == self.encoding) and \\\n (path is None and path == self.path):\n # avoid saving if editor not dirty or if encoding or path did not\n # change\n return\n if fallback_encoding is None:\n fallback_encoding = locale.getpreferredencoding()\n _logger().log(\n 5, \"saving %r with %r encoding\", path, encoding)\n if path is None:\n if self.path:\n path = self.path\n else:\n _logger().debug(\n 'failed to save file, path argument cannot be None if '\n 'FileManager.path is also None')\n return False\n # use cached encoding if None were specified\n if encoding is None:\n encoding = self._encoding\n self.saving = True\n self.editor.text_saving.emit(str(path))\n\n # get file persmission on linux\n try:\n st_mode = os.stat(path).st_mode\n except (ImportError, TypeError, AttributeError, OSError):\n st_mode = None\n\n # perform a safe save: we first save to a temporary file, if the save\n # succeeded we just rename the temporary file to the final file name\n # and remove it.\n if self.safe_save:\n tmp_path = path + '~'\n else:\n tmp_path = path\n try:\n with open(tmp_path, 'wb') as file:\n file.write(self._get_text(encoding))\n except UnicodeEncodeError:\n # fallback to utf-8 in case of error.\n with open(tmp_path, 'wb') as file:\n file.write(self._get_text(fallback_encoding))\n except (IOError, OSError) as e:\n self._rm(tmp_path)\n self.saving = False\n self.editor.text_saved.emit(str(path))\n raise e\n # cache update encoding\n Cache().set_file_encoding(path, encoding)\n self._encoding = encoding\n # remove path and rename temp file, if safe save is on\n if self.safe_save:\n self._rm(path)\n os.rename(tmp_path, path)\n self._rm(tmp_path)\n # reset dirty flags\n self.editor.document().setModified(False)\n # remember path for next save\n self._path = os.path.normpath(path)\n self.editor.text_saved.emit(str(path))\n self.saving = False\n _logger().debug('file 
saved: %s', path)\n self._check_for_readonly()\n\n # restore file permission\n if st_mode:\n try:\n os.chmod(path, st_mode)\n except (ImportError, TypeError, AttributeError):\n pass"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef close(self, clear=True):\n Cache().set_cursor_position(\n self.path, self.editor.textCursor().position())\n self.editor._original_text = ''\n if clear:\n self.editor.clear()\n self._path = ''\n self.mimetype = ''\n self._encoding = locale.getpreferredencoding()", "response": "Close the file open in the editor"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconnects and disconnects to the painted event of the editor", "response": "def on_state_changed(self, state):\n \"\"\"\n Connects/Disconnects to the painted event of the editor\n\n :param state: Enable state\n \"\"\"\n if state:\n self.editor.painted.connect(self._paint_margin)\n self.editor.repaint()\n else:\n self.editor.painted.disconnect(self._paint_margin)\n self.editor.repaint()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _paint_margin(self, event):\n font = QtGui.QFont(self.editor.font_name, self.editor.font_size +\n self.editor.zoom_level)\n metrics = QtGui.QFontMetricsF(font)\n pos = self._margin_pos\n offset = self.editor.contentOffset().x() + \\\n self.editor.document().documentMargin()\n x80 = round(metrics.width(' ') * pos) + offset\n painter = QtGui.QPainter(self.editor.viewport())\n painter.setPen(self._pen)\n painter.drawLine(x80, 0, x80, 2 ** 16)", "response": "Paints the right margin after editor paint event."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncheck if the file is in the file system and if it is writable checks if the file is in the file system and if it is writable checks if the file is in the file system and if it is writable checks if the 
file is in the file system and if it is not writeable", "response": "def _check_file(self):\n \"\"\"\n Checks watched file moficiation time and permission changes.\n \"\"\"\n try:\n self.editor.toPlainText()\n except RuntimeError:\n self._timer.stop()\n return\n if self.editor and self.editor.file.path:\n if not os.path.exists(self.editor.file.path) and self._mtime:\n self._notify_deleted_file()\n else:\n mtime = os.path.getmtime(self.editor.file.path)\n if mtime > self._mtime:\n self._mtime = mtime\n self._notify_change()\n # check for permission change\n writeable = os.access(self.editor.file.path, os.W_OK)\n self.editor.setReadOnly(not writeable)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nnotifies user from external event", "response": "def _notify(self, title, message, expected_action=None):\n \"\"\"\n Notify user from external event\n \"\"\"\n if self.editor is None:\n return\n inital_value = self.editor.save_on_focus_out\n self.editor.save_on_focus_out = False\n self._flg_notify = True\n dlg_type = (QtWidgets.QMessageBox.Yes | QtWidgets.QMessageBox.No)\n expected_action = (\n lambda *x: None) if not expected_action else expected_action\n if (self._auto_reload or QtWidgets.QMessageBox.question(\n self.editor, title, message, dlg_type,\n QtWidgets.QMessageBox.Yes) == QtWidgets.QMessageBox.Yes):\n expected_action(self.editor.file.path)\n self._update_mtime()\n self.editor.save_on_focus_out = inital_value"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nnotify user from external change", "response": "def _notify_change(self):\n \"\"\"\n Notify user from external change if autoReloadChangedFiles is False\n then reload the changed file in the editor\n \"\"\"\n def inner_action(*args):\n \"\"\" Inner action: open file \"\"\"\n # cache cursor position before reloading so that the cursor\n # position is restored automatically after reload has finished.\n # See OpenCobolIDE/OpenCobolIDE#97\n 
Cache().set_cursor_position(\n self.editor.file.path,\n self.editor.textCursor().position())\n if os.path.exists(self.editor.file.path):\n self.editor.file.open(self.editor.file.path)\n self.file_reloaded.emit()\n else:\n # file moved just after a change, see OpenCobolIDE/OpenCobolIDE#337\n self._notify_deleted_file()\n\n args = (_(\"File changed\"),\n _(\"The file %s has changed externally.\\nDo you want to \"\n \"reload it?\") % os.path.basename(self.editor.file.path))\n kwargs = {\"expected_action\": inner_action}\n if self.editor.hasFocus() or self.auto_reload:\n self._notify(*args, **kwargs)\n else:\n # show the reload prompt as soon as the editor has focus\n self._notification_pending = True\n self._data = (args, kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if a notification is pending.", "response": "def _check_for_pending(self, *args, **kwargs):\n \"\"\"\n Checks if a notification is pending.\n \"\"\"\n if self._notification_pending and not self._processing:\n self._processing = True\n args, kwargs = self._data\n self._notify(*args, **kwargs)\n self._notification_pending = False\n self._processing = False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nnotify user from external file deletion.", "response": "def _notify_deleted_file(self):\n \"\"\"\n Notify user from external file deletion.\n \"\"\"\n self.file_deleted.emit(self.editor)\n # file deleted, disable file watcher\n self.enabled = False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting a list of all of the available Python SDK versions sorted with the newest last.", "response": "def get_gae_versions():\n \"\"\"Gets a list of all of the available Python SDK versions, sorted with\n the newest last.\"\"\"\n r = requests.get(SDK_RELEASES_URL)\n r.raise_for_status()\n\n releases = r.json().get('items', {})\n\n # We only care about the Python releases, which all are in the format\n # 
\"featured/google_appengine_{version}.zip\". We'll extract the version\n # number so we can sort the list by version, and finally get the download\n # URL.\n versions_and_urls = []\n for release in releases:\n match = PYTHON_RELEASE_RE.match(release['name'])\n\n if not match:\n continue\n\n versions_and_urls.append(\n ([int(x) for x in match.groups()], release['mediaLink']))\n\n return sorted(versions_and_urls, key=lambda x: x[0])"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns True if the existing install is up to date. Otherwise returns False.", "response": "def is_existing_up_to_date(destination, latest_version):\n \"\"\"Returns False if there is no existing install or if the existing install\n is out of date. Otherwise, returns True.\"\"\"\n version_path = os.path.join(\n destination, 'google_appengine', 'VERSION')\n\n if not os.path.exists(version_path):\n return False\n\n with open(version_path, 'r') as f:\n version_line = f.readline()\n\n match = SDK_RELEASE_RE.match(version_line)\n\n if not match:\n print('Unable to parse version from:', version_line)\n return False\n\n version = [int(x) for x in match.groups()]\n\n return version >= latest_version"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef download_sdk(url):\n r = requests.get(url)\n r.raise_for_status()\n return StringIO(r.content)", "response": "Downloads the SDK and returns a file - like object for the zip content."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fixup_version(destination, version):\n version_path = os.path.join(\n destination, 'google_appengine', 'VERSION')\n\n with open(version_path, 'r') as f:\n version_data = f.read()\n\n version_data = version_data.replace(\n 'release: \"0.0.0\"',\n 'release: \"{}\"'.format('.'.join(str(x) for x in version)))\n\n with open(version_path, 'w') as f:\n f.write(version_data)", 
"response": "Fixup the version number in the VERSION file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef download_command(args):\n latest_two_versions = list(reversed(get_gae_versions()))[:2]\n\n zip = None\n version_number = None\n\n for version in latest_two_versions:\n if is_existing_up_to_date(args.destination, version[0]):\n print(\n 'App Engine SDK already exists and is up to date '\n 'at {}.'.format(args.destination))\n return\n\n try:\n print('Downloading App Engine SDK {}'.format(\n '.'.join([str(x) for x in version[0]])))\n zip = download_sdk(version[1])\n version_number = version[0]\n break\n except Exception as e:\n print('Failed to download: {}'.format(e))\n continue\n\n if not zip:\n return\n\n print('Extracting SDK to {}'.format(args.destination))\n\n extract_zip(zip, args.destination)\n fixup_version(args.destination, version_number)\n\n print('App Engine SDK installed.')", "response": "Downloads and extracts the latest App Engine SDK to the given\n destination."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef preferred_encodings(self):\n default_encodings = [\n locale.getpreferredencoding().lower().replace('-', '_')]\n if 'utf_8' not in default_encodings:\n default_encodings.append('utf_8')\n default_encodings = list(set(default_encodings))\n return json.loads(self._settings.value(\n 'userDefinedEncodings', json.dumps(default_encodings)))", "response": "Returns a list of user defined encodings for display in the encodings menu / combobox."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_file_encoding(self, file_path, preferred_encoding=None):\n _logger().debug('getting encoding for %s', file_path)\n try:\n map = json.loads(self._settings.value('cachedFileEncodings'))\n except TypeError:\n map = {}\n try:\n return map[file_path]\n except KeyError:\n encodings = 
self.preferred_encodings\n if preferred_encoding:\n encodings.insert(0, preferred_encoding)\n for encoding in encodings:\n _logger().debug('trying encoding: %s', encoding)\n try:\n with open(file_path, encoding=encoding) as f:\n f.read()\n except (UnicodeDecodeError, IOError, OSError):\n pass\n else:\n return encoding\n raise KeyError(file_path)", "response": "Gets an eventual cached encoding for a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_file_encoding(self, path, encoding):\n try:\n map = json.loads(self._settings.value('cachedFileEncodings'))\n except TypeError:\n map = {}\n map[path] = encoding\n self._settings.setValue('cachedFileEncodings', json.dumps(map))", "response": "Sets the encoding of the specified file path."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_cursor_position(self, file_path):\n try:\n map = json.loads(self._settings.value('cachedCursorPosition'))\n except TypeError:\n map = {}\n try:\n pos = map[file_path]\n except KeyError:\n pos = 0\n if isinstance(pos, list):\n # changed in pyqode 2.6.3, now we store the cursor position\n # instead of the line and column (faster)\n pos = 0\n return pos", "response": "Gets the cached cursor position for a file in the cache"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the cursor position for the specified file path.", "response": "def set_cursor_position(self, path, position):\n \"\"\"\n Cache encoding for the specified file path.\n\n :param path: path of the file to cache\n :param position: cursor position to cache\n \"\"\"\n try:\n map = json.loads(self._settings.value('cachedCursorPosition'))\n except TypeError:\n map = {}\n map[path] = position\n self._settings.setValue('cachedCursorPosition', json.dumps(map))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script 
to\nreturn a substring delimited by start and end position.", "response": "def _mid(string, start, end=None):\n \"\"\"\n Returns a substring delimited by start and end position.\n \"\"\"\n if end is None:\n end = len(string)\n return string[start:start + end]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting an ansi code to a QColor", "response": "def _ansi_color(code, theme):\n \"\"\"\n Converts an ansi code to a QColor, taking the color scheme (theme) into account.\n \"\"\"\n red = 170 if code & 1 else 0\n green = 170 if code & 2 else 0\n blue = 170 if code & 4 else 0\n color = QtGui.QColor(red, green, blue)\n if theme is not None:\n mappings = {\n '#aa0000': theme.red,\n '#00aa00': theme.green,\n '#aaaa00': theme.yellow,\n '#0000aa': theme.blue,\n '#aa00aa': theme.magenta,\n '#00aaaa': theme.cyan,\n '#000000': theme.background,\n \"#ffffff\": theme.foreground\n }\n try:\n return mappings[color.name()]\n except KeyError:\n pass\n return color"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting a Qt key event to an ASCII sequence.", "response": "def _qkey_to_ascii(event):\n \"\"\"\n (Try to) convert the Qt key event to the corresponding ASCII sequence for\n the terminal. 
This works fine for standard alphanumerical characters, but\n most other characters require terminal specific control_modifier sequences.\n The conversion below works for TERM=\"linux' terminals.\n \"\"\"\n if sys.platform == 'darwin':\n control_modifier = QtCore.Qt.MetaModifier\n else:\n control_modifier = QtCore.Qt.ControlModifier\n ctrl = int(event.modifiers() & control_modifier) != 0\n if ctrl:\n if event.key() == QtCore.Qt.Key_P:\n return b'\\x10'\n elif event.key() == QtCore.Qt.Key_N:\n return b'\\x0E'\n elif event.key() == QtCore.Qt.Key_C:\n return b'\\x03'\n elif event.key() == QtCore.Qt.Key_L:\n return b'\\x0C'\n elif event.key() == QtCore.Qt.Key_B:\n return b'\\x02'\n elif event.key() == QtCore.Qt.Key_F:\n return b'\\x06'\n elif event.key() == QtCore.Qt.Key_D:\n return b'\\x04'\n elif event.key() == QtCore.Qt.Key_O:\n return b'\\x0F'\n elif event.key() == QtCore.Qt.Key_V:\n return QtWidgets.qApp.clipboard().text().encode('utf-8')\n else:\n return None\n else:\n if event.key() == QtCore.Qt.Key_Return:\n return '\\n'.encode('utf-8')\n elif event.key() == QtCore.Qt.Key_Enter:\n return '\\n'.encode('utf-8')\n elif event.key() == QtCore.Qt.Key_Tab:\n return '\\t'.encode('utf-8')\n elif event.key() == QtCore.Qt.Key_Backspace:\n return b'\\x08'\n elif event.key() == QtCore.Qt.Key_Delete:\n return b'\\x06\\x08'\n elif event.key() == QtCore.Qt.Key_Enter:\n return '\\n'.encode('utf-8')\n elif event.key() == QtCore.Qt.Key_Home:\n return b'\\x1b[H'\n elif event.key() == QtCore.Qt.Key_End:\n return b'\\x1b[F'\n elif event.key() == QtCore.Qt.Key_Left:\n return b'\\x02'\n elif event.key() == QtCore.Qt.Key_Up:\n return b'\\x10'\n elif event.key() == QtCore.Qt.Key_Right:\n return b'\\x06'\n elif event.key() == QtCore.Qt.Key_Down:\n return b'\\x0E'\n elif event.key() == QtCore.Qt.Key_PageUp:\n return b'\\x49'\n elif event.key() == QtCore.Qt.Key_PageDown:\n return b'\\x51'\n elif event.key() == QtCore.Qt.Key_F1:\n return b'\\x1b\\x31'\n elif event.key() == 
QtCore.Qt.Key_F2:\n return b'\\x1b\\x32'\n elif event.key() == QtCore.Qt.Key_F3:\n return b'\\x00\\x3b'\n elif event.key() == QtCore.Qt.Key_F4:\n return b'\\x1b\\x34'\n elif event.key() == QtCore.Qt.Key_F5:\n return b'\\x1b\\x35'\n elif event.key() == QtCore.Qt.Key_F6:\n return b'\\x1b\\x36'\n elif event.key() == QtCore.Qt.Key_F7:\n return b'\\x1b\\x37'\n elif event.key() == QtCore.Qt.Key_F8:\n return b'\\x1b\\x38'\n elif event.key() == QtCore.Qt.Key_F9:\n return b'\\x1b\\x39'\n elif event.key() == QtCore.Qt.Key_F10:\n return b'\\x1b\\x30'\n elif event.key() == QtCore.Qt.Key_F11:\n return b'\\x45'\n elif event.key() == QtCore.Qt.Key_F12:\n return b'\\x46'\n elif event.text() in ('abcdefghijklmnopqrstuvwxyz'\n 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'\n '[],=-.;/`&^~*@|#(){}$><%+?\"_!'\n \"'\\\\ :\"):\n return event.text().encode('utf8')\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks if the child process is running.", "response": "def is_running(self):\n \"\"\"\n Checks if the child process is running (or is starting).\n \"\"\"\n return self._process.state() in [self._process.Running, self._process.Starting]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef start_process(self, program, arguments=None, working_dir=None, print_command=True,\n use_pseudo_terminal=True, env=None):\n \"\"\"\n Starts the child process.\n\n :param program: program to start\n :param arguments: list of program arguments\n :param working_dir: working directory of the child process\n :param print_command: True to print the full command (pgm + arguments) as the first line\n of the output window\n :param use_pseudo_terminal: True to use a pseudo terminal on Unix (pty), False to avoid\n using a pty wrapper. When using a pty wrapper, both stdout and stderr are merged together.\n :param environ: environment variables to set on the child process. 
If None, os.environ will be used.\n \"\"\"\n # clear previous output\n self.clear()\n self.setReadOnly(False)\n if arguments is None:\n arguments = []\n if sys.platform != 'win32' and use_pseudo_terminal:\n pgm = sys.executable\n args = [pty_wrapper.__file__, program] + arguments\n self.flg_use_pty = use_pseudo_terminal\n else:\n pgm = program\n args = arguments\n self.flg_use_pty = False # pty not available on windows\n\n self._process.setProcessEnvironment(self._setup_process_environment(env))\n\n if working_dir:\n self._process.setWorkingDirectory(working_dir)\n\n if print_command:\n self._formatter.append_message('\\x1b[0m%s %s\\n' % (program, ' '.join(arguments)),\n output_format=OutputFormat.CustomFormat)\n self._process.start(pgm, args)", "response": "Starts the child process."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef stop_process(self):\n self._process.terminate()\n if not self._process.waitForFinished(100):\n self._process.kill()", "response": "Stops the child process."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef closeEvent(self, event):\n self.stop_process()\n self.backend.stop()\n try:\n self.modes.remove('_LinkHighlighter')\n except KeyError:\n pass # already removed\n super(OutputWindow, self).closeEvent(event)", "response": "Terminates the child process on close."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef keyPressEvent(self, event):\n if self._process.state() != self._process.Running:\n return\n tc = self.textCursor()\n sel_start = tc.selectionStart()\n sel_end = tc.selectionEnd()\n tc.setPosition(self._formatter._last_cursor_pos)\n self.setTextCursor(tc)\n if self.input_handler.key_press_event(event):\n tc.setPosition(sel_start)\n tc.setPosition(sel_end, tc.KeepAnchor)\n self.setTextCursor(tc)\n super(OutputWindow, self).keyPressEvent(event)\n 
self._formatter._last_cursor_pos = self.textCursor().position()", "response": "Handles key press event using the input handler."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef mouseMoveEvent(self, event):\n c = self.cursorForPosition(event.pos())\n block = c.block()\n self._link_match = None\n self.viewport().setCursor(QtCore.Qt.IBeamCursor)\n for match in self.link_regex.finditer(block.text()):\n if not match:\n continue\n start, end = match.span()\n if start <= c.positionInBlock() <= end:\n self._link_match = match\n self.viewport().setCursor(QtCore.Qt.PointingHandCursor)\n break\n\n self._last_hovered_block = block\n super(OutputWindow, self).mouseMoveEvent(event)", "response": "Handles mouse over file link."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef mousePressEvent(self, event):\n super(OutputWindow, self).mousePressEvent(event)\n if self._link_match:\n path = self._link_match.group('url')\n line = self._link_match.group('line')\n if line is not None:\n line = int(line) - 1\n else:\n line = 0\n self.open_file_requested.emit(path, line)", "response": "Handle file link clicks."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninitialize the code editor.", "response": "def _init_code_edit(self, backend):\n \"\"\"\n Initializes the code editor (setup modes, panels and colors).\n \"\"\"\n from pyqode.core import panels, modes\n self.modes.append(_LinkHighlighter(self.document()))\n self.background = self._formatter.color_scheme.background\n self.foreground = self._formatter.color_scheme.foreground\n self._reset_stylesheet()\n self.setCenterOnScroll(False)\n self.setMouseTracking(True)\n self.setUndoRedoEnabled(False)\n search_panel = panels.SearchAndReplacePanel()\n self.panels.append(search_panel, search_panel.Position.TOP)\n self.action_copy.setShortcut('Ctrl+Shift+C')\n 
self.action_paste.setShortcut('Ctrl+Shift+V')\n self.remove_action(self.action_undo, sub_menu=None)\n self.remove_action(self.action_redo, sub_menu=None)\n self.remove_action(self.action_cut, sub_menu=None)\n self.remove_action(self.action_duplicate_line, sub_menu=None)\n self.remove_action(self.action_indent)\n self.remove_action(self.action_un_indent)\n self.remove_action(self.action_goto_line)\n self.remove_action(search_panel.menu.menuAction())\n self.remove_menu(self._sub_menus['Advanced'])\n self.add_action(search_panel.actionSearch, sub_menu=None)\n self.modes.append(modes.ZoomMode())\n self.backend.start(backend)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets up the process environment.", "response": "def _setup_process_environment(self, env):\n \"\"\"\n Sets up the process environment.\n \"\"\"\n environ = self._process.processEnvironment()\n if env is None:\n env = {}\n for k, v in os.environ.items():\n environ.insert(k, v)\n for k, v in env.items():\n environ.insert(k, v)\n if sys.platform != 'win32':\n environ.insert('TERM', 'xterm')\n environ.insert('LINES', '24')\n environ.insert('COLUMNS', '450')\n environ.insert('PYTHONUNBUFFERED', '1')\n environ.insert('QT_LOGGING_TO_CONSOLE', '1')\n return environ"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _on_process_error(self, error):\n if self is None:\n return\n err = PROCESS_ERROR_STRING[error]\n self._formatter.append_message(err + '\\r\\n', output_format=OutputFormat.ErrorMessageFormat)", "response": "Display child process error in the text edit."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _on_process_finished(self):\n exit_code = self._process.exitCode()\n if self._process.exitStatus() != self._process.NormalExit:\n exit_code = 139\n self._formatter.append_message('\\x1b[0m\\nProcess finished with exit code %d' % 
exit_code,\n output_format=OutputFormat.CustomFormat)\n self.setReadOnly(True)\n self.process_finished.emit()", "response": "Write the process finished message and emit the finished signal."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading the child process s stdout and processes it.", "response": "def _read_stdout(self):\n \"\"\"\n Reads the child process' stdout and process it.\n \"\"\"\n output = self._decode(self._process.readAllStandardOutput().data())\n if self._formatter:\n self._formatter.append_message(output, output_format=OutputFormat.NormalMessageFormat)\n else:\n self.insertPlainText(output)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _read_stderr(self):\n output = self._decode(self._process.readAllStandardError().data())\n if self._formatter:\n self._formatter.append_message(output, output_format=OutputFormat.ErrorMessageFormat)\n else:\n self.insertPlainText(output)", "response": "Reads the child process s stderr and processes it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nparse the text in the format specified by the user and returns a list of operations.", "response": "def parse_text(self, formatted_text):\n \"\"\"\n Retursn a list of operations (draw, cup, ed,...).\n\n Each operation consist of a command and its associated data.\n\n :param formatted_text: text to parse with the default char format to apply.\n :return: list of Operation\n \"\"\"\n assert isinstance(formatted_text, FormattedText)\n ret_val = []\n fmt = formatted_text.fmt if self._prev_fmt_closed else self._prev_fmt\n fmt = QtGui.QTextCharFormat(fmt)\n if not self._pending_text:\n stripped_text = formatted_text.txt\n else:\n stripped_text = self._pending_text + formatted_text.txt\n self._pending_text = ''\n while stripped_text:\n try:\n escape_pos = stripped_text.index(self._escape[0])\n except ValueError:\n ret_val.append(Operation('draw', 
FormattedText(stripped_text, fmt)))\n break\n else:\n if escape_pos != 0:\n ret_val.append(Operation('draw', FormattedText(stripped_text[:escape_pos], fmt)))\n stripped_text = stripped_text[escape_pos:]\n fmt = QtGui.QTextCharFormat(fmt)\n assert stripped_text[0] == self._escape[0]\n while stripped_text and stripped_text[0] == self._escape[0]:\n if self._escape.startswith(stripped_text):\n # control sequence not complete\n self._pending_text += stripped_text\n stripped_text = ''\n break\n if not stripped_text.startswith(self._escape):\n # check vt100 escape sequences\n ctrl_seq = False\n for alt_seq in self._escape_alts:\n if stripped_text.startswith(alt_seq):\n ctrl_seq = True\n break\n if not ctrl_seq:\n # not a control sequence\n self._pending_text = ''\n ret_val.append(Operation('draw', FormattedText(stripped_text[:1], fmt)))\n fmt = QtGui.QTextCharFormat(fmt)\n stripped_text = stripped_text[1:]\n continue\n self._pending_text += _mid(stripped_text, 0, self._escape_len)\n stripped_text = stripped_text[self._escape_len:]\n\n # Non draw related command (cursor/erase)\n if self._pending_text in [self._escape] + self._escape_alts:\n m = self._supported_commands.match(stripped_text)\n if m and self._pending_text == self._escape:\n _, e = m.span()\n n = m.group('n')\n cmd = m.group('cmd')\n if not n:\n n = 0\n ret_val.append(Operation(self._commands[cmd], n))\n self._pending_text = ''\n stripped_text = stripped_text[e:]\n continue\n else:\n m = self._unsupported_command.match(stripped_text)\n if m:\n self._pending_text = ''\n stripped_text = stripped_text[m.span()[1]:]\n continue\n elif self._pending_text in ['\\x1b=', '\\x1b>']:\n self._pending_text = ''\n continue\n\n # Handle Select Graphic Rendition commands\n # get the number\n str_nbr = ''\n numbers = []\n while stripped_text:\n if stripped_text[0].isdigit():\n str_nbr += stripped_text[0]\n else:\n if str_nbr:\n numbers.append(str_nbr)\n if not str_nbr or stripped_text[0] != self._semicolon:\n break\n str_nbr = 
''\n self._pending_text += _mid(stripped_text, 0, 1)\n stripped_text = stripped_text[1:]\n\n if not stripped_text:\n break\n\n # remove terminating char\n if not stripped_text.startswith(self._color_terminator):\n # _logger().warn('removing %s', repr(self._pending_text + stripped_text[0]))\n self._pending_text = ''\n stripped_text = stripped_text[1:]\n break\n\n # got consistent control sequence, ok to clear pending text\n self._pending_text = ''\n stripped_text = stripped_text[1:]\n\n if not numbers:\n fmt = QtGui.QTextCharFormat(formatted_text.fmt)\n self.end_format_scope()\n\n i_offset = 0\n n = len(numbers)\n for i in range(n):\n i += i_offset\n code = int(numbers[i])\n\n if self._TextColorStart <= code <= self._TextColorEnd:\n fmt.setForeground(_ansi_color(code - self._TextColorStart, self.color_scheme))\n self._set_format_scope(fmt)\n elif self._BackgroundColorStart <= code <= self._BackgroundColorEnd:\n fmt.setBackground(_ansi_color(code - self._BackgroundColorStart, self.color_scheme))\n self._set_format_scope(fmt)\n else:\n if code == self._ResetFormat:\n fmt = QtGui.QTextCharFormat(formatted_text.fmt)\n self.end_format_scope()\n elif code == self._BoldText:\n fmt.setFontWeight(QtGui.QFont.Bold)\n self._set_format_scope(fmt)\n elif code == self._NotBold:\n fmt.setFontWeight(QtGui.QFont.Normal)\n self._set_format_scope(fmt)\n elif code == self._ItalicText:\n fmt.setFontItalic(True)\n self._set_format_scope(fmt)\n elif code == self._NotItalicNotFraktur:\n fmt.setFontItalic(False)\n self._set_format_scope(fmt)\n elif code == self._UnderlinedText:\n fmt.setUnderlineStyle(fmt.SingleUnderline)\n fmt.setUnderlineColor(fmt.foreground().color())\n self._set_format_scope(fmt)\n elif code == self._NotUnderlined:\n fmt.setUnderlineStyle(fmt.NoUnderline)\n self._set_format_scope(fmt)\n elif code == self._DefaultTextColor:\n fmt.setForeground(formatted_text.fmt.foreground())\n self._set_format_scope(fmt)\n elif code == self._DefaultBackgroundColor:\n 
fmt.setBackground(formatted_text.fmt.background())\n self._set_format_scope(fmt)\n elif code == self._Dim:\n fmt = QtGui.QTextCharFormat(fmt)\n fmt.setForeground(fmt.foreground().color().darker(self.DIM_FACTOR))\n elif code == self._Negative:\n normal_fmt = fmt\n fmt = QtGui.QTextCharFormat(fmt)\n fmt.setForeground(normal_fmt.background())\n fmt.setBackground(normal_fmt.foreground())\n elif code == self._Positive:\n fmt = QtGui.QTextCharFormat(formatted_text.fmt)\n elif code in [self._RgbBackgroundColor, self._RgbTextColor]:\n # See http://en.wikipedia.org/wiki/ANSI_escape_code#Colors\n i += 1\n if i == n:\n break\n next_code = int(numbers[i])\n if next_code == 2:\n # RGB set with format: 38;2;;;\n if i + 3 < n:\n method = fmt.setForeground if code == self._RgbTextColor else fmt.setBackground\n method(QtGui.QColor(int(numbers[i + 1]), int(numbers[i + 2]), int(numbers[i + 3])))\n self._set_format_scope(fmt)\n i_offset = 3\n elif next_code == 5:\n # 256 color mode with format: 38;5;\n index = int(numbers[i + 1])\n if index < 8:\n # The first 8 colors are standard low-intensity ANSI colors.\n color = _ansi_color(index, self.color_scheme)\n elif index < 16:\n # The next 8 colors are standard high-intensity ANSI colors.\n color = _ansi_color(index - 8, self.color_scheme).lighter(150)\n elif index < 232:\n # The next 216 colors are a 6x6x6 RGB cube.\n o = index - 16\n color = QtGui.QColor((o / 36) * 51, ((o / 6) % 6) * 51, (o % 6) * 51)\n else:\n # The last 24 colors are a greyscale gradient.\n grey = (index - 232) * 11\n color = QtGui.QColor(grey, grey, grey)\n\n if code == self._RgbTextColor:\n fmt.setForeground(color)\n else:\n fmt.setBackground(color)\n\n self._set_format_scope(fmt)\n else:\n _logger().warn('unsupported SGR code: %r', code)\n return ret_val"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the format scope.", "response": "def _set_format_scope(self, fmt):\n \"\"\"\n Opens the format scope.\n \"\"\"\n 
self._prev_fmt = QtGui.QTextCharFormat(fmt)\n self._prev_fmt_closed = False"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_command(self, command):\n try:\n self._history.remove(command)\n except ValueError:\n pass\n self._history.insert(0, command)\n self._index = -1", "response": "Adds a command to the history and reset the index."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nscrolls up the command list.", "response": "def scroll_up(self):\n \"\"\"\n Returns the previous command, if any.\n \"\"\"\n self._index += 1\n nb_commands = len(self._history)\n if self._index >= nb_commands:\n self._index = nb_commands - 1\n try:\n return self._history[self._index]\n except IndexError:\n return ''"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef scroll_down(self):\n self._index -= 1\n if self._index < 0:\n self._index = -1\n return ''\n try:\n return self._history[self._index]\n except IndexError:\n return ''", "response": "Scrolls down the log."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _insert_command(self, command):\n self._clear_user_buffer()\n tc = self.edit.textCursor()\n tc.insertText(command)\n self.edit.setTextCursor(tc)", "response": "Insert command into the text edit."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef key_press_event(self, event):\n input_buffer = self._get_input_buffer()\n ctrl = int(event.modifiers() & QtCore.Qt.ControlModifier) != 0\n shift = int(event.modifiers() & QtCore.Qt.ShiftModifier) != 0\n delete = event.key() in [QtCore.Qt.Key_Backspace, QtCore.Qt.Key_Delete]\n ignore = False\n if delete and not input_buffer and not shift:\n return False\n if ctrl:\n if shift and event.key() == QtCore.Qt.Key_V:\n 
self.edit.insertPlainText(QtWidgets.qApp.clipboard().text())\n return False\n elif event.key() == QtCore.Qt.Key_L:\n self.edit.clear()\n if sys.platform == 'win32':\n self.process.write(b'\\r')\n self.process.write(b'\\n')\n return False\n if (shift or ctrl) and event.key() == QtCore.Qt.Key_Backspace:\n if input_buffer.strip() != '':\n return True\n self._clear_user_buffer()\n return False\n if event.key() == QtCore.Qt.Key_Up:\n if self.is_code_completion_popup_visible():\n return True\n self._insert_command(self._history.scroll_up())\n return False\n if event.key() == QtCore.Qt.Key_Left:\n return bool(input_buffer)\n if event.key() == QtCore.Qt.Key_Down:\n if self.is_code_completion_popup_visible():\n return True\n self._insert_command(self._history.scroll_down())\n return False\n if event.key() == QtCore.Qt.Key_Home:\n tc = self.edit.textCursor()\n tc.movePosition(tc.StartOfBlock)\n tc.movePosition(tc.Right, tc.MoveAnchor, self.edit._formatter._prefix_len)\n self.edit.setTextCursor(tc)\n return False\n if event.key() == QtCore.Qt.Key_End:\n tc = self.edit.textCursor()\n tc.movePosition(tc.EndOfBlock)\n self.edit.setTextCursor(tc)\n self._cursor_pos = len(self._get_input_buffer())\n return False\n if event.key() in [QtCore.Qt.Key_Return, QtCore.Qt.Key_Enter]:\n if self.is_code_completion_popup_visible():\n return True\n tc = self.edit.textCursor()\n tc.movePosition(tc.EndOfBlock)\n self.edit.setTextCursor(tc)\n # send the user input to the child process\n if self.edit.flg_use_pty or 'cmd.exe' in self.process.program():\n # remove user buffer from text edit, the content of the buffer will be\n # drawn as soon as we write it to the process stdin\n tc = self.edit.textCursor()\n for _ in input_buffer:\n tc.deletePreviousChar()\n self.edit.setTextCursor(tc)\n self._history.add_command(input_buffer)\n if sys.platform == 'win32':\n input_buffer += \"\\r\"\n input_buffer += \"\\n\"\n self.process.write(input_buffer.encode())\n if self.edit.flg_use_pty or 'cmd.exe' in 
self.process.program():\n ignore = True\n return not ignore", "response": "Handles key press events."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nappending a message to the edit.", "response": "def append_message(self, text, output_format=OutputFormat.NormalMessageFormat):\n \"\"\"\n Parses and append message to the text edit.\n \"\"\"\n self._append_message(text, self._formats[output_format])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nappend a message to the internal buffer.", "response": "def _append_message(self, text, char_format):\n \"\"\"\n Parses text and executes parsed operations.\n \"\"\"\n self._cursor = self._text_edit.textCursor()\n operations = self._parser.parse_text(FormattedText(text, char_format))\n for i, operation in enumerate(operations):\n try:\n func = getattr(self, '_%s' % operation.command)\n except AttributeError:\n print('command not implemented: %r - %r' % (\n operation.command, operation.data))\n else:\n try:\n func(operation.data)\n except Exception:\n _logger().exception('exception while running %r', operation)\n # uncomment next line for debugging commands\n self._text_edit.repaint()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _draw(self, data):\n self._cursor.clearSelection()\n self._cursor.setPosition(self._last_cursor_pos)\n\n if '\\x07' in data.txt:\n print('\\a')\n txt = data.txt.replace('\\x07', '')\n\n if '\\x08' in txt:\n parts = txt.split('\\x08')\n else:\n parts = [txt]\n\n for i, part in enumerate(parts):\n if part:\n part = part.replace('\\r\\r', '\\r')\n if len(part) >= 80 * 24 * 8:\n # big output, process it in one step (\\r and \\n will not be handled)\n self._draw_chars(data, part)\n continue\n to_draw = ''\n for n, char in enumerate(part):\n if char == '\\n':\n self._draw_chars(data, to_draw)\n to_draw = ''\n self._linefeed()\n elif char == '\\r':\n self._draw_chars(data, to_draw)\n 
to_draw = ''\n self._erase_in_line(0)\n try:\n nchar = part[n + 1]\n except IndexError:\n nchar = None\n if self._cursor.positionInBlock() > 80 and self.flg_bash and nchar != '\\n':\n self._linefeed()\n self._cursor.movePosition(self._cursor.StartOfBlock)\n self._text_edit.setTextCursor(self._cursor)\n else:\n to_draw += char\n if to_draw:\n self._draw_chars(data, to_draw)\n if i != len(parts) - 1:\n self._cursor_back(1)\n self._last_cursor_pos = self._cursor.position()\n self._prefix_len = self._cursor.positionInBlock()\n self._text_edit.setTextCursor(self._cursor)", "response": "Draw text from the cursor."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndrawing the specified charachters using the specified format.", "response": "def _draw_chars(self, data, to_draw):\n \"\"\"\n Draw the specified charachters using the specified format.\n \"\"\"\n i = 0\n while not self._cursor.atBlockEnd() and i < len(to_draw) and len(to_draw) > 1:\n self._cursor.deleteChar()\n i += 1\n self._cursor.insertText(to_draw, data.fmt)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nperform a line feed.", "response": "def _linefeed(self):\n \"\"\"\n Performs a line feed.\n \"\"\"\n last_line = self._cursor.blockNumber() == self._text_edit.blockCount() - 1\n if self._cursor.atEnd() or last_line:\n if last_line:\n self._cursor.movePosition(self._cursor.EndOfBlock)\n self._cursor.insertText('\\n')\n else:\n self._cursor.movePosition(self._cursor.Down)\n self._cursor.movePosition(self._cursor.StartOfBlock)\n self._text_edit.setTextCursor(self._cursor)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _cursor_down(self, value):\n self._cursor.clearSelection()\n if self._cursor.atEnd():\n self._cursor.insertText('\\n')\n else:\n self._cursor.movePosition(self._cursor.Down, self._cursor.MoveAnchor, value)\n self._last_cursor_pos = self._cursor.position()", 
"response": "Moves the cursor down by value."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmove the cursor up by value.", "response": "def _cursor_up(self, value):\n \"\"\"\n Moves the cursor up by ``value``.\n \"\"\"\n value = int(value)\n if value == 0:\n value = 1\n self._cursor.clearSelection()\n self._cursor.movePosition(self._cursor.Up, self._cursor.MoveAnchor, value)\n self._last_cursor_pos = self._cursor.position()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmoves the cursor position.", "response": "def _cursor_position(self, data):\n \"\"\"\n Moves the cursor position.\n \"\"\"\n column, line = self._get_line_and_col(data)\n self._move_cursor_to_line(line)\n self._move_cursor_to_column(column)\n self._last_cursor_pos = self._cursor.position()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmove the cursor to the specified column.", "response": "def _move_cursor_to_column(self, column):\n \"\"\"\n Moves the cursor to the specified column, if possible.\n \"\"\"\n last_col = len(self._cursor.block().text())\n self._cursor.movePosition(self._cursor.EndOfBlock)\n to_insert = ''\n for i in range(column - last_col):\n to_insert += ' '\n if to_insert:\n self._cursor.insertText(to_insert)\n self._cursor.movePosition(self._cursor.StartOfBlock)\n self._cursor.movePosition(self._cursor.Right, self._cursor.MoveAnchor, column)\n self._last_cursor_pos = self._cursor.position()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmoves the cursor to the specified line.", "response": "def _move_cursor_to_line(self, line):\n \"\"\"\n Moves the cursor to the specified line, if possible.\n \"\"\"\n last_line = self._text_edit.document().blockCount() - 1\n self._cursor.clearSelection()\n self._cursor.movePosition(self._cursor.End)\n to_insert = ''\n for i in range(line - last_line):\n to_insert += '\\n'\n if to_insert:\n 
self._cursor.insertText(to_insert)\n self._cursor.movePosition(self._cursor.Start)\n self._cursor.movePosition(self._cursor.Down, self._cursor.MoveAnchor, line)\n self._last_cursor_pos = self._cursor.position()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget line and column numbers from a string like 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5 or 1 ; 5.", "response": "def _get_line_and_col(data):\n \"\"\"\n Gets line and column from a string like the following: \"1;5\" or \"1;\" or \";5\"\n\n and convers the column/line numbers to 0 base.\n \"\"\"\n try:\n line, column = data.split(';')\n except AttributeError:\n line = int(data)\n column = 1\n # handle empty values and convert them to 0 based indices\n if not line:\n line = 0\n else:\n line = int(line) - 1\n if line < 0:\n line = 0\n if not column:\n column = 0\n else:\n column = int(column) - 1\n if column < 0:\n column = 0\n return column, line"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _erase_in_line(self, value):\n initial_pos = self._cursor.position()\n if value == 0:\n # delete end of line\n self._cursor.movePosition(self._cursor.EndOfBlock, self._cursor.KeepAnchor)\n elif value == 1:\n # delete start of line\n self._cursor.movePosition(self._cursor.StartOfBlock, self._cursor.KeepAnchor)\n else:\n # delete whole line\n self._cursor.movePosition(self._cursor.StartOfBlock)\n self._cursor.movePosition(self._cursor.EndOfBlock, self._cursor.KeepAnchor)\n self._cursor.insertText(' ' * len(self._cursor.selectedText()))\n self._cursor.setPosition(initial_pos)\n self._text_edit.setTextCursor(self._cursor)\n self._last_cursor_pos = self._cursor.position()", "response": "Erases charachters in line."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmoves the cursor back.", 
"response": "def _cursor_back(self, value):\n \"\"\"\n Moves the cursor back.\n \"\"\"\n if value <= 0:\n value = 1\n self._cursor.movePosition(self._cursor.Left, self._cursor.MoveAnchor, value)\n self._text_edit.setTextCursor(self._cursor)\n self._last_cursor_pos = self._cursor.position()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _cursor_forward(self, value):\n if value <= 0:\n value = 1\n self._cursor.movePosition(self._cursor.Right, self._cursor.MoveAnchor, value)\n self._text_edit.setTextCursor(self._cursor)\n self._last_cursor_pos = self._cursor.position()", "response": "Moves the cursor forward."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _delete_chars(self, value):\n value = int(value)\n if value <= 0:\n value = 1\n for i in range(value):\n self._cursor.deleteChar()\n self._text_edit.setTextCursor(self._cursor)\n self._last_cursor_pos = self._cursor.position()", "response": "Deletes the specified number of charachters."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the PyPI information for a given package.", "response": "def get_package_info(package):\n \"\"\"Gets the PyPI information for a given package.\"\"\"\n url = 'https://pypi.python.org/pypi/{}/json'.format(package)\n r = requests.get(url)\n r.raise_for_status()\n return r.json()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread a requirements file.", "response": "def read_requirements(req_file):\n \"\"\"Reads a requirements file.\n\n Args:\n req_file (str): Filename of requirements file\n \"\"\"\n items = list(parse_requirements(req_file, session={}))\n result = []\n\n for item in items:\n # Get line number from item\n line_number = item.comes_from.split(req_file + ' (line ')[1][:-1]\n if item.req:\n item.req.marker = item.markers\n result.append((item.req, line_number))\n 
else:\n result.append((item, line_number))\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns true if requirements specify a version range.", "response": "def _is_version_range(req):\n \"\"\"Returns true if requirements specify a version range.\"\"\"\n\n assert len(req.specifier) > 0\n specs = list(req.specifier)\n if len(specs) == 1:\n # \"foo > 2.0\" or \"foo == 2.4.3\"\n return specs[0].operator != '=='\n else:\n # \"foo > 2.0, < 3.0\"\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates a given req object with the latest version.", "response": "def update_req(req):\n \"\"\"Updates a given req object with the latest version.\"\"\"\n\n if not req.name:\n return req, None\n\n info = get_package_info(req.name)\n\n if info['info'].get('_pypi_hidden'):\n print('{} is hidden on PyPI and will not be updated.'.format(req))\n return req, None\n\n if _is_pinned(req) and _is_version_range(req):\n print('{} is pinned to a range and will not be updated.'.format(req))\n return req, None\n\n newest_version = _get_newest_version(info)\n current_spec = next(iter(req.specifier)) if req.specifier else None\n current_version = current_spec.version if current_spec else None\n new_spec = Specifier(u'=={}'.format(newest_version))\n if not current_spec or current_spec._spec != new_spec._spec:\n req.specifier = new_spec\n update_info = (\n req.name,\n current_version,\n newest_version)\n return req, update_info\n return req, None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef write_requirements(reqs_linenum, req_file):\n with open(req_file, 'r') as input:\n lines = input.readlines()\n\n for req in reqs_linenum:\n line_num = int(req[1])\n\n if hasattr(req[0], 'link'):\n lines[line_num - 1] = '{}\\n'.format(req[0].link)\n else:\n lines[line_num - 1] = '{}\\n'.format(req[0])\n\n with open(req_file, 'w') as 
output:\n output.writelines(lines)", "response": "Writes a list of req objects out to a given file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef check_req(req):\n if not isinstance(req, Requirement):\n return None\n\n info = get_package_info(req.name)\n newest_version = _get_newest_version(info)\n\n if _is_pinned(req) and _is_version_range(req):\n return None\n\n current_spec = next(iter(req.specifier)) if req.specifier else None\n current_version = current_spec.version if current_spec else None\n if current_version != newest_version:\n return req.name, current_version, newest_version", "response": "Checks if a given Requirement is the latest version available."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks the requirements file for outdated packages.", "response": "def check_requirements_file(req_file, skip_packages):\n \"\"\"Return list of outdated requirements.\n\n Args:\n req_file (str): Filename of requirements file\n skip_packages (list): List of package names to ignore.\n \"\"\"\n reqs = read_requirements(req_file)\n if skip_packages is not None:\n reqs = [req for req in reqs if req.name not in skip_packages]\n outdated_reqs = filter(None, [check_req(req) for req in reqs])\n return outdated_reqs"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update_command(args):\n updated = update_requirements_file(\n args.requirements_file, args.skip_packages)\n\n if updated:\n print('Updated requirements in {}:'.format(args.requirements_file))\n\n for item in updated:\n print(' * {} from {} to {}.'.format(*item))\n else:\n print('All dependencies in {} are up-to-date.'.format(\n args.requirements_file))", "response": "Updates all dependencies of the specified requirements file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
check_command(args):\n outdated = check_requirements_file(args.requirements_file,\n args.skip_packages)\n\n if outdated:\n print('Requirements in {} are out of date:'.format(\n args.requirements_file))\n\n for item in outdated:\n print(' * {} is {} latest is {}.'.format(*item))\n\n sys.exit(1)\n else:\n print('Requirements in {} are up to date.'.format(\n args.requirements_file))", "response": "Checks that all dependencies in the specified requirements file are up to date."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfills the panel background.", "response": "def paintEvent(self, event):\n \"\"\" Fills the panel background. \"\"\"\n super(EncodingPanel, self).paintEvent(event)\n if self.isVisible():\n # fill background\n painter = QtGui.QPainter(self)\n self._background_brush = QtGui.QBrush(self._color)\n painter.fillRect(event.rect(), self._background_brush)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a QTextCharFormat for the given token by reading a Pygments style.", "response": "def _get_format_from_style(self, token, style):\n \"\"\" Returns a QTextCharFormat for token by reading a Pygments style.\n \"\"\"\n result = QtGui.QTextCharFormat()\n items = list(style.style_for_token(token).items())\n for key, value in items:\n if value is None and key == 'color':\n # make sure to use a default visible color for the foreground\n # brush\n value = drift_color(self.background, 1000).name()\n if value:\n if key == 'color':\n result.setForeground(self._get_brush(value))\n elif key == 'bgcolor':\n result.setBackground(self._get_brush(value))\n elif key == 'bold':\n result.setFontWeight(QtGui.QFont.Bold)\n elif key == 'italic':\n result.setFontItalic(value)\n elif key == 'underline':\n result.setUnderlineStyle(\n QtGui.QTextCharFormat.SingleUnderline)\n elif key == 'sans':\n result.setFontStyleHint(QtGui.QFont.SansSerif)\n elif key == 'roman':\n 
result.setFontStyleHint(QtGui.QFont.Times)\n elif key == 'mono':\n result.setFontStyleHint(QtGui.QFont.TypeWriter)\n if token in [Token.Literal.String, Token.Literal.String.Doc,\n Token.Comment]:\n # mark strings, comments and docstrings regions for further queries\n result.setObjectType(result.UserObject)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_color(color):\n color = str(color).replace(\"#\", \"\")\n qcolor = QtGui.QColor()\n qcolor.setRgb(int(color[:2], base=16),\n int(color[2:4], base=16),\n int(color[4:6], base=16))\n return qcolor", "response": "Returns a QColor built from a Pygments color string."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrefresh editor settings when color scheme changed.", "response": "def refresh_editor(self, color_scheme):\n \"\"\"\n Refresh editor settings (background and highlight colors) when color\n scheme changed.\n\n :param color_scheme: new color scheme.\n \"\"\"\n self.editor.background = color_scheme.background\n self.editor.foreground = color_scheme.formats[\n 'normal'].foreground().color()\n self.editor.whitespaces_foreground = color_scheme.formats[\n 'whitespace'].foreground().color()\n try:\n mode = self.editor.modes.get('CaretLineHighlighterMode')\n except KeyError:\n pass\n else:\n mode.background = color_scheme.highlight\n mode.refresh()\n try:\n mode = self.editor.panels.get('FoldingPanel')\n except KeyError:\n pass\n else:\n mode.refresh_decorations(force=True)\n self.editor._reset_stylesheet()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef line_number_area_width(self):\n digits = 1\n count = max(1, self.editor.blockCount())\n while count >= 10:\n count /= 10\n digits += 1\n space = 5 + self.editor.fontMetrics().width(\"9\") * digits\n return space", "response": "Computes the line number area width depending on the number of lines in 
the document."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the current editor.", "response": "def set_editor(self, editor):\n \"\"\"\n Sets the current editor. The widget display the structure of that\n editor.\n\n :param editor: CodeEdit\n \"\"\"\n try:\n self._editor.cursorPositionChanged.disconnect(self.sync)\n except (AttributeError, TypeError, RuntimeError, ReferenceError):\n pass\n try:\n self._outline_mode.document_changed.disconnect(\n self._on_changed)\n except (AttributeError, TypeError, RuntimeError, ReferenceError):\n pass\n try:\n self._folding_panel.trigger_state_changed.disconnect(\n self._on_block_state_changed)\n except (AttributeError, TypeError, RuntimeError, ReferenceError):\n pass\n if editor:\n self._editor = weakref.proxy(editor)\n else:\n self._editor = None\n\n if editor is not None:\n editor.cursorPositionChanged.connect(self.sync)\n try:\n self._folding_panel = weakref.proxy(\n editor.panels.get(FoldingPanel))\n except KeyError:\n pass\n else:\n self._folding_panel.trigger_state_changed.connect(\n self._on_block_state_changed)\n try:\n analyser = editor.modes.get(OutlineMode)\n except KeyError:\n self._outline_mode = None\n else:\n self._outline_mode = weakref.proxy(analyser)\n analyser.document_changed.connect(self._on_changed)\n self._on_changed()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _on_changed(self):\n self._updating = True\n to_collapse = []\n self.clear()\n if self._editor and self._outline_mode and self._folding_panel:\n items, to_collapse = self.to_tree_widget_items(\n self._outline_mode.definitions, to_collapse=to_collapse)\n if len(items):\n self.addTopLevelItems(items)\n self.expandAll()\n for item in reversed(to_collapse):\n self.collapseItem(item)\n self._updating = False\n return\n\n # no data\n root = QtWidgets.QTreeWidgetItem()\n root.setText(0, _('No data'))\n root.setIcon(0, icons.icon(\n 'dialog-information', 
':/pyqode-icons/rc/dialog-info.png',\n 'fa.info-circle'))\n self.addTopLevelItem(root)\n self._updating = False\n self.sync()", "response": "Update the tree items with the new ones"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngo to the item position in the editor.", "response": "def _on_item_clicked(self, item):\n \"\"\"\n Go to the item position in the editor.\n \"\"\"\n if item:\n name = item.data(0, QtCore.Qt.UserRole)\n if name:\n go = name.block.blockNumber()\n helper = TextHelper(self._editor)\n if helper.current_line_nbr() != go:\n helper.goto_line(go, column=name.column)\n self._editor.setFocus()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts the list of top level definitions to a list of top level tree items.", "response": "def to_tree_widget_items(self, definitions, to_collapse=None):\n \"\"\"\n Converts the list of top level definitions to a list of top level\n tree items.\n \"\"\"\n def flatten(definitions):\n \"\"\"\n Flattens the document structure tree as a simple sequential list.\n \"\"\"\n ret_val = []\n for de in definitions:\n ret_val.append(de)\n for sub_d in de.children:\n ret_val.append(sub_d)\n ret_val += flatten(sub_d.children)\n return ret_val\n\n def convert(name, editor, to_collapse):\n ti = QtWidgets.QTreeWidgetItem()\n ti.setText(0, name.name)\n if isinstance(name.icon, list):\n icon = QtGui.QIcon.fromTheme(\n name.icon[0], QtGui.QIcon(name.icon[1]))\n else:\n icon = QtGui.QIcon(name.icon)\n ti.setIcon(0, icon)\n name.block = editor.document().findBlockByNumber(name.line)\n ti.setData(0, QtCore.Qt.UserRole, name)\n ti.setToolTip(0, name.description)\n name.tree_item = ti\n block_data = name.block.userData()\n if block_data is None:\n block_data = TextBlockUserData()\n name.block.setUserData(block_data)\n block_data.tree_item = ti\n\n if to_collapse is not None and \\\n TextBlockHelper.is_collapsed(name.block):\n to_collapse.append(ti)\n\n for ch in 
name.children:\n ti_ch, to_collapse = convert(ch, editor, to_collapse)\n if ti_ch:\n ti.addChild(ti_ch)\n return ti, to_collapse\n\n self._definitions = definitions\n self._flattened_defs = flatten(self._definitions)\n\n items = []\n for d in definitions:\n value, to_collapse = convert(d, self._editor, to_collapse)\n items.append(value)\n if to_collapse is not None:\n return items, to_collapse\n\n return items"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding the corresponding symbol position in the specified symbol type.", "response": "def symbol_pos(self, cursor, character_type=OPEN, symbol_type=PAREN):\n \"\"\"\n Find the corresponding symbol position (line, column) of the specified\n symbol. If symbol type is PAREN and character_type is OPEN, the\n function will look for '('.\n\n :param cursor: QTextCursor\n :param character_type: character type to look for (open or close char)\n :param symbol_type: symbol type (index in the SYMBOLS map).\n \"\"\"\n retval = None, None\n original_cursor = self.editor.textCursor()\n self.editor.setTextCursor(cursor)\n block = cursor.block()\n data = get_block_symbol_data(self.editor, block)\n self._match(symbol_type, data, block.position())\n for deco in self._decorations:\n if deco.character == self.SYMBOLS[symbol_type][character_type]:\n retval = deco.line, deco.column\n break\n self.editor.setTextCursor(original_cursor)\n self._clear_decorations()\n return retval"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the clear button as visible", "response": "def set_button_visible(self, visible):\n \"\"\"\n Sets the clear button as ``visible``\n\n :param visible: Visible state (True = visible, False = hidden).\n \"\"\"\n self.button.setVisible(visible)\n left, top, right, bottom = self.getTextMargins()\n if visible:\n right = self._margin + self._spacing\n else:\n right = 0\n self.setTextMargins(left, top, right, bottom)"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 function that can\nimport a class from a fully qualified name string.", "response": "def import_class(klass):\n \"\"\"\n Imports a class from a fully qualified name string.\n\n :param klass: class string, e.g.\n \"pyqode.core.backend.workers.CodeCompletionWorker\"\n :return: The corresponding class\n\n \"\"\"\n path = klass.rfind(\".\")\n class_name = klass[path + 1: len(klass)]\n try:\n module = __import__(klass[0:path], globals(), locals(), [class_name])\n klass = getattr(module, class_name)\n except ImportError as e:\n raise ImportError('%s: %s' % (klass, str(e)))\n except AttributeError:\n raise ImportError(klass)\n else:\n return klass"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates the server and serves forever for the base class.", "response": "def serve_forever(args=None):\n \"\"\"\n Creates the server and serves forever\n\n :param args: Optional args if you decided to use your own\n argument parser. Default is None to let the JsonServer setup its own\n parser and parse command line arguments.\n \"\"\"\n class Unbuffered(object):\n def __init__(self, stream):\n self.stream = stream\n\n def write(self, data):\n self.stream.write(data)\n self.stream.flush()\n\n def __getattr__(self, attr):\n return getattr(self.stream, attr)\n\n sys.stdout = Unbuffered(sys.stdout)\n sys.stderr = Unbuffered(sys.stderr)\n\n server = JsonServer(args=args)\n server.serve_forever()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nselects the word under the mouse cursor.", "response": "def _select_word_cursor(self):\n \"\"\" Selects the word under the mouse cursor. 
\"\"\"\n cursor = TextHelper(self.editor).word_under_mouse_cursor()\n if (self._previous_cursor_start != cursor.selectionStart() and\n self._previous_cursor_end != cursor.selectionEnd()):\n self._remove_decoration()\n self._add_decoration(cursor)\n self._previous_cursor_start = cursor.selectionStart()\n self._previous_cursor_end = cursor.selectionEnd()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _add_decoration(self, cursor):\n if self._deco is None:\n if cursor.selectedText():\n self._deco = TextDecoration(cursor)\n if self.editor.background.lightness() < 128:\n self._deco.set_foreground(QtGui.QColor('#0681e0'))\n else:\n self._deco.set_foreground(QtCore.Qt.blue)\n self._deco.set_as_underlined()\n self.editor.decorations.append(self._deco)\n self.editor.set_mouse_cursor(QtCore.Qt.PointingHandCursor)\n else:\n self.editor.set_mouse_cursor(QtCore.Qt.IBeamCursor)", "response": "Adds a decoration for the word under cursor."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nremove the word under cursor's decoration.", "response": "def _remove_decoration(self):\n \"\"\"\n Removes the word under cursor's decoration\n \"\"\"\n if self._deco is not None:\n self.editor.decorations.remove(self._deco)\n self._deco = None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef date(date):\n date = date.value\n return (datetime.strptime(date[:-5], '%Y-%m-%dT%H:%M:%S')\n if len(date) == 24\n else datetime.strptime(date, '%Y%m%dT%H:%M:%S'))", "response": "DokuWiki returns dates as xmlrpclib/xmlrpc.client DateTime objects; this converts them to datetime objects."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert a UTC date to the local time.", "response": "def utc2local(date):\n \"\"\"DokuWiki returns dates with a +0000 timezone. 
This function converts *date*\n to the local time.\n \"\"\"\n date_offset = (datetime.now() - datetime.utcnow())\n # Python < 2.7 doesn't have the 'total_seconds' method so calculate it by hand!\n date_offset = (date_offset.microseconds +\n (date_offset.seconds + date_offset.days * 24 * 3600) * 1e6) / 1e6\n date_offset = int(round(date_offset / 60 / 60))\n return date + timedelta(hours=date_offset)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing and store cookie", "response": "def parse_response(self, response):\n \"\"\"parse and store cookie\"\"\"\n try:\n for header in response.msg.get_all(\"Set-Cookie\"):\n cookie = header.split(\";\", 1)[0]\n cookieKey, cookieValue = cookie.split(\"=\", 1)\n self._cookies[cookieKey] = cookieValue\n finally:\n return Transport.parse_response(self, response)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse and store cookie", "response": "def parse_response(self, response):\n \"\"\"parse and store cookie\"\"\"\n try:\n for header in response.getheader(\"set-cookie\").split(\", \"):\n # filter 'expire' information\n if not header.startswith(\"D\"):\n continue\n cookie = header.split(\";\", 1)[0]\n cookieKey, cookieValue = cookie.split(\"=\", 1)\n self._cookies[cookieKey] = cookieValue\n finally:\n return Transport.parse_response(self, response)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_acl(self, scope, user, permission):\n return self.send('plugin.acl.addAcl', scope, user, permission)", "response": "Add an ACL rule that restricts the page namespace scope to user with permission level."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns informations of page.", "response": "def info(self, page, version=None):\n \"\"\"Returns informations of *page*. 
Informations of the last version\n is returned if *version* is not set.\n \"\"\"\n return (self._dokuwiki.send('wiki.getPageInfoVersion', page, version)\n if version is not None\n else self._dokuwiki.send('wiki.getPageInfo', page))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the content of page.", "response": "def get(self, page, version=None):\n \"\"\"Returns the content of *page*. The content of the last version is\n returned if *version* is not set.\n \"\"\"\n return (self._dokuwiki.send('wiki.getPageVersion', page, version)\n if version is not None\n else self._dokuwiki.send('wiki.getPage', page))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nappends content text to page.", "response": "def append(self, page, content, **options):\n \"\"\"Appends *content* text to *page*.\n\n Valid *options* are:\n\n * *sum*: (str) change summary\n * *minor*: (bool) whether this is a minor change\n \"\"\"\n return self._dokuwiki.send('dokuwiki.appendPage', page, content, options)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef html(self, page, version=None):\n return (self._dokuwiki.send('wiki.getPageHTMLVersion', page, version)\n if version is not None\n else self._dokuwiki.send('wiki.getPageHTML', page))", "response": "Returns HTML content of page."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset the content of page.", "response": "def set(self, page, content, **options):\n \"\"\"Set/replace the *content* of *page*.\n\n Valid *options* are:\n\n * *sum*: (str) change summary\n * *minor*: (bool) whether this is a minor change\n \"\"\"\n try:\n return self._dokuwiki.send('wiki.putPage', page, content, options)\n except ExpatError as err:\n # Sometime the first line of the XML response is blank which raise\n # the 'ExpatError' exception although the change has been done. 
This\n # allows us to ignore the error.\n if str(err) != ERR:\n raise DokuWikiError(err)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the binary data of the specified media or save it to a file.", "response": "def get(self, media, dirpath=None, filename=None, overwrite=False, b64decode=False):\n \"\"\"Returns the binary data of *media* or saves it to a file. If *dirpath*\n is not set the binary data is returned, otherwise the data is saved\n to a file. By default, the filename is the name of the media but it can\n be changed with the *filename* parameter. The *overwrite* parameter allows\n overwriting the file if it already exists locally.\n \"\"\"\n import os\n data = self._dokuwiki.send('wiki.getAttachment', media)\n data = base64.b64decode(data) if b64decode else data.data\n if dirpath is None:\n return data\n\n if filename is None:\n filename = media.replace('/', ':').split(':')[-1]\n if not os.path.exists(dirpath):\n os.makedirs(dirpath)\n filepath = os.path.join(dirpath, filename)\n if os.path.exists(filepath) and not overwrite:\n raise FileExistsError(\"[Errno 17] File exists: '%s'\" % filepath)\n\n with open(filepath, 'wb') as fhandler:\n fhandler.write(data)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add(self, media, filepath, overwrite=True):\n with open(filepath, 'rb') as fhandler:\n self._dokuwiki.send('wiki.putAttachment', media,\n Binary(fhandler.read()), ow=overwrite)", "response": "Adds *media* to the wiki from a local file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset a new attachment from a byte array.", "response": "def set(self, media, _bytes, overwrite=True, b64encode=False):\n \"\"\"Set *media* from *_bytes*. 
*overwrite* parameter specify if the media\n must be overwrite if it exists remotely.\n \"\"\"\n data = base64.b64encode(_bytes) if b64encode else Binary(_bytes)\n self._dokuwiki.send('wiki.putAttachment', media, data, ow=overwrite)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get(content, keep_order=False):\n if keep_order:\n from collections import OrderedDict\n dataentry = OrderedDict()\n else:\n dataentry = {}\n\n found = False\n for line in content.split('\\n'):\n if line.strip().startswith('---- dataentry'):\n found = True\n continue\n elif line == '----':\n break\n elif not found:\n continue\n\n line_split = line.split(':')\n key = line_split[0].strip()\n value = re.sub('#.*$', '', ':'.join(line_split[1:])).strip()\n dataentry.setdefault(key, value)\n\n if not found:\n raise DokuWikiError('no dataentry found')\n return dataentry", "response": "Get dataentry from content."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef gen(name, data):\n return '---- dataentry %s ----\\n%s\\n----' % (name, '\\n'.join(\n '%s:%s' % (attr, value) for attr, value in data.items()))", "response": "Generate dataentry name from data."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves dataentry from content.", "response": "def ignore(content):\n \"\"\"Remove dataentry from *content*.\"\"\"\n page_content = []\n start = False\n for line in content.split('\\n'):\n if line == '----' and not start:\n start = True\n continue\n if start:\n page_content.append(line)\n return '\\n'.join(page_content) if page_content else content"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndrawing all messages from all CheckerMode classes installed on the editor.", "response": "def _draw_messages(self, painter):\n \"\"\"\n Draw messages from all subclass of CheckerMode currently\n installed on 
the editor.\n\n        :type painter: QtGui.QPainter\n        \"\"\"\n        checker_modes = []\n for m in self.editor.modes:\n if isinstance(m, modes.CheckerMode):\n checker_modes.append(m)\n for checker_mode in checker_modes:\n for msg in checker_mode.messages:\n block = msg.block\n color = QtGui.QColor(msg.color)\n brush = QtGui.QBrush(color)\n rect = QtCore.QRect()\n rect.setX(self.sizeHint().width() / 4)\n rect.setY(block.blockNumber() * self.get_marker_height())\n rect.setSize(self.get_marker_size())\n painter.fillRect(rect, brush)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _draw_visible_area(self, painter):\n if self.editor.visible_blocks:\n start = self.editor.visible_blocks[0][-1]\n end = self.editor.visible_blocks[-1][-1]\n rect = QtCore.QRect()\n rect.setX(0)\n rect.setY(start.blockNumber() * self.get_marker_height())\n rect.setWidth(self.sizeHint().width())\n rect.setBottom(end.blockNumber() * self.get_marker_height())\n if self.editor.background.lightness() < 128:\n c = self.editor.background.darker(150)\n else:\n c = self.editor.background.darker(110)\n c.setAlpha(128)\n painter.fillRect(rect, c)", "response": "Draw the visible area."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef paintEvent(self, event):\n if self.isVisible():\n # fill background\n self._background_brush = QtGui.QBrush(self.editor.background)\n painter = QtGui.QPainter(self)\n painter.fillRect(event.rect(), self._background_brush)\n self._draw_messages(painter)\n self._draw_visible_area(painter)", "response": "Paints the messages and the visible area on the panel."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_marker_height(self):\n return self.editor.viewport().height() / TextHelper(\n self.editor).line_count()", "response": "Gets the height of the marker."} {"SOURCE": "codesearchnet", 
"instruction": "How would you explain what the following Python 3 function does\ndef get_marker_size(self):\n h = self.get_marker_height()\n if h < 1:\n h = 1\n return QtCore.QSize(self.sizeHint().width() / 2, h)", "response": "Gets the size of a message marker."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mimetype_icon(path, fallback=None):\n mime = mimetypes.guess_type(path)[0]\n if mime:\n icon = mime.replace('/', '-')\n # if system.WINDOWS:\n # return icons.file()\n if QtGui.QIcon.hasThemeIcon(icon):\n icon = QtGui.QIcon.fromTheme(icon)\n if not icon.isNull():\n return icon\n if fallback:\n return QtGui.QIcon(fallback)\n return QtGui.QIcon.fromTheme('text-x-generic')", "response": "Tries to create an icon from theme using the file mimetype."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef main(options):\n \n client = Client(server=options.server, username=options.username, password=options.password)\n print('Successfully connected to %s' % client.server)\n print(client.si.CurrentTime())\n \n client.logout()", "response": "A simple connection test to login and print the server time."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef marker_for_line(self, line):\n block = self.editor.document().findBlockByNumber(line)\n try:\n return block.userData().messages\n except AttributeError:\n return []", "response": "Returns the marker that is displayed at the specified line number if any."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main():\n global program, args, ret\n print(os.getcwd())\n ret = 0\n if '--help' in sys.argv or '-h' in sys.argv or len(sys.argv) == 1:\n print(__doc__)\n else:\n program = sys.argv[1]\n args = sys.argv[2:]\n if args:\n ret = subprocess.call([program] + args)\n else:\n ret = 
subprocess.call([program])\n print('\\nProcess terminated with exit code %d' % ret)\n prompt = 'Press ENTER to close this window...'\n if sys.version_info[0] == 3:\n input(prompt)\n else:\n raw_input(prompt)\n sys.exit(ret)", "response": "pyqode - console main function."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef linkcode_resolve(domain, info):\n module_name = info['module']\n fullname = info['fullname']\n attribute_name = fullname.split('.')[-1]\n base_url = 'https://github.com/fmenabe/python-dokuwiki/blob/'\n\n if release.endswith('-dev'):\n base_url += 'master/'\n else:\n base_url += version + '/'\n\n filename = module_name.replace('.', '/') + '.py'\n module = sys.modules.get(module_name)\n\n # Get the actual object\n try:\n actual_object = module\n for obj in fullname.split('.'):\n parent = actual_object\n actual_object = getattr(actual_object, obj)\n except AttributeError:\n return None\n\n # Fix property methods by using their getter method\n if isinstance(actual_object, property):\n actual_object = actual_object.fget\n\n # Try to get the linenumber of the object\n try:\n source, start_line = inspect.getsourcelines(actual_object)\n except TypeError:\n # If it can not be found, try to find it anyway in the parents its\n # source code\n parent_source, parent_start_line = inspect.getsourcelines(parent)\n for i, line in enumerate(parent_source):\n if line.strip().startswith(attribute_name):\n start_line = parent_start_line + i\n end_line = start_line\n break\n else:\n return None\n\n else:\n end_line = start_line + len(source) - 1\n\n line_anchor = '#L%d-L%d' % (start_line, end_line)\n\n return base_url + filename + line_anchor", "response": "A simple function to find matching source code."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef pick_free_port():\n test_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n 
test_socket.bind(('127.0.0.1', 0))\n free_port = int(test_socket.getsockname()[1])\n test_socket.close()\n return free_port", "response": "Picks a free port from the list of available sockets."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef start(self, script, interpreter=sys.executable, args=None,\n error_callback=None, reuse=False):\n \"\"\"\n Starts the backend process.\n\n The backend is a python script that starts a\n :class:`pyqode.core.backend.JsonServer`. You must write the backend\n script so that you can apply your own backend configuration.\n\n The script can be run with a custom interpreter. The default is to use\n sys.executable.\n\n .. note:: This restart the backend process if it was previously\n running.\n\n :param script: Path to the backend script.\n :param interpreter: The python interpreter to use to run the backend\n script. If None, sys.executable is used unless we are in a frozen\n application (frozen backends do not require an interpreter).\n :param args: list of additional command line args to use to start\n the backend process.\n :param reuse: True to reuse an existing backend process. WARNING: to\n use this, your application must have one single server script. 
(If\n            you're creating an app which supports multiple programming\n            languages you will need to merge all backend scripts into one\n            single script, otherwise the wrong script might be picked up).\n        \"\"\"\n        self._shared = reuse\n        if reuse and BackendManager.SHARE_COUNT:\n            self._port = BackendManager.LAST_PORT\n            self._process = BackendManager.LAST_PROCESS\n            BackendManager.SHARE_COUNT += 1\n        else:\n            if self.running:\n                self.stop()\n            self.server_script = script\n            self.interpreter = interpreter\n            self.args = args\n            backend_script = script.replace('.pyc', '.py')\n            self._port = self.pick_free_port()\n            if hasattr(sys, \"frozen\") and not backend_script.endswith('.py'):\n                # frozen backend script on windows/mac does not need an\n                # interpreter\n                program = backend_script\n                pgm_args = [str(self._port)]\n            else:\n                program = interpreter\n                pgm_args = [backend_script, str(self._port)]\n            if args:\n                pgm_args += args\n            self._process = BackendProcess(self.editor)\n            if error_callback:\n                self._process.error.connect(error_callback)\n            self._process.start(program, pgm_args)\n\n        if reuse:\n            BackendManager.LAST_PROCESS = self._process\n            BackendManager.LAST_PORT = self._port\n            BackendManager.SHARE_COUNT += 1\n        comm('starting backend process: %s %s', program,\n             ' '.join(pgm_args))\n        self._heartbeat_timer.start()", "response": "Starts the backend process."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef stop(self):\n        if self._process is None:\n            return\n        if self._shared:\n            BackendManager.SHARE_COUNT -= 1\n            if BackendManager.SHARE_COUNT:\n                return\n        comm('stopping backend process')\n        # close all sockets\n        for s in self._sockets:\n            s._callback = None\n            s.close()\n\n        self._sockets[:] = []\n        # prevent crash logs from being written if we are busy killing\n        # the process\n        self._process._prevent_logs = True\n        while self._process.state() != self._process.NotRunning:\n            self._process.waitForFinished(1)\n            if sys.platform == 'win32':\n                # Console applications on 
Windows that do not run an event\n                # loop, or whose event loop does not handle the WM_CLOSE\n                # message, can only be terminated by calling kill().\n                self._process.kill()\n            else:\n                self._process.terminate()\n        self._process._prevent_logs = False\n        self._heartbeat_timer.stop()\n        comm('backend process terminated')", "response": "Stops the backend process."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsends some work to the backend.", "response": "def send_request(self, worker_class_or_function, args, on_receive=None):\n        \"\"\"\n        Requests some work to be done by the backend. You can get notified of\n        the work results by passing a callback (on_receive).\n\n        :param worker_class_or_function: Worker class or function\n        :param args: worker args, any Json serializable objects\n        :param on_receive: an optional callback executed when we receive the\n            worker's results. The callback will be called with one argument:\n            the results of the worker (object)\n\n        :raise: backend.NotRunning if the backend process is not running.\n        \"\"\"\n        if not self.running:\n            try:\n                # try to restart the backend if it crashed.\n                self.start(self.server_script, interpreter=self.interpreter,\n                           args=self.args)\n            except AttributeError:\n                pass  # not started yet\n            finally:\n                # caller should try again, later\n                raise NotRunning()\n        else:\n            comm('sending request, worker=%r' % worker_class_or_function)\n            # create a socket, the request will be sent as soon as the socket\n            # has connected\n            socket = JsonTcpClient(\n                self.editor, self._port, worker_class_or_function, args,\n                on_receive=on_receive)\n            socket.finished.connect(self._rm_socket)\n            self._sockets.append(socket)\n            # restart heartbeat timer\n            self._heartbeat_timer.start()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntells whether the backend process is running.", "response": "def running(self):\n        \"\"\"\n        Tells whether the backend process is running.\n\n        :return: True if 
the process is running, otherwise False\n        \"\"\"\n        try:\n            return (self._process is not None and\n                    self._process.state() != self._process.NotRunning)\n        except RuntimeError:\n            return False"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nremoving a file from the list of recent files.", "response": "def remove(self, filename):\n        \"\"\"\n        Remove a file path from the list of recent files.\n        :param filename: Path of the file to remove\n        \"\"\"\n        files = self.get_value('list', [])\n        files.remove(filename)\n        self.set_value('list', files)\n        self.updated.emit()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_value(self, key, default=None):\n        def unique(seq, idfun=None):\n            if idfun is None:\n                def idfun(x):\n                    return x\n            # order preserving\n            seen = {}\n            result = []\n            for item in seq:\n                marker = idfun(item)\n                if marker in seen:\n                    continue\n                seen[marker] = 1\n                result.append(item)\n            return result\n        lst = self._settings.value('recent_files/%s' % key, default)\n        # empty list\n        if lst is None:\n            lst = []\n        # single file\n        if isinstance(lst, str):\n            lst = [lst]\n        return unique([os.path.normpath(pth) for pth in lst])", "response": "Reads a value from QSettings, normalizing paths and removing duplicates."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_value(self, key, value):\n        if value is None:\n            value = []\n        value = [os.path.normpath(pth) for pth in value]\n        self._settings.setValue('recent_files/%s' % key, value)", "response": "Sets the value of a recent_files key in QSettings."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the list of recent files.", "response": "def get_recent_files(self):\n        \"\"\"\n        Gets the list of recent files. 
(files that do not exist anymore\n        are automatically filtered)\n        \"\"\"\n        ret_val = []\n        files = self.get_value('list', [])\n        # filter files, remove files that do not exist anymore\n        for file in files:\n            if file is not None and os.path.exists(file):\n                if os.path.ismount(file) and \\\n                        sys.platform == 'win32' and not file.endswith('\\\\'):\n                    file += '\\\\'\n                if file not in ret_val:\n                    ret_val.append(file)\n        return ret_val"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nopening a file in the list of recent files.", "response": "def open_file(self, file):\n        \"\"\"\n        Adds a file to the list (and moves it to the top of the list if the\n        file already exists)\n\n        :param file: file path to add to the list of recent files.\n\n        \"\"\"\n        files = self.get_recent_files()\n        try:\n            files.remove(file)\n        except ValueError:\n            pass\n        files.insert(0, file)\n        # discard old files\n        del files[self.max_recent_files:]\n        self.set_value('list', files)\n        self.updated.emit()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates the list of actions.", "response": "def update_actions(self):\n        \"\"\"\n        Updates the list of actions.\n        \"\"\"\n        self.clear()\n        self.recent_files_actions[:] = []\n        for file in self.manager.get_recent_files():\n            action = QtWidgets.QAction(self)\n            action.setText(os.path.split(file)[1])\n            action.setToolTip(file)\n            action.setStatusTip(file)\n            action.setData(file)\n            action.setIcon(self.icon_provider.icon(QtCore.QFileInfo(file)))\n            action.triggered.connect(self._on_action_triggered)\n            self.addAction(action)\n            self.recent_files_actions.append(action)\n        self.addSeparator()\n        action_clear = QtWidgets.QAction(_('Clear list'), self)\n        action_clear.triggered.connect(self.clear_recent_files)\n        if isinstance(self.clear_icon, QtGui.QIcon):\n            action_clear.setIcon(self.clear_icon)\n        elif self.clear_icon:\n            theme = ''\n            if len(self.clear_icon) == 2:\n                theme, path = self.clear_icon\n            else:\n                path = 
self.clear_icon\n            action_clear.setIcon(\n                icons.icon(theme, path, 'fa.times-circle'))\n        self.addAction(action_clear)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nclearing recent files and menu.", "response": "def clear_recent_files(self):\n        \"\"\" Clear recent files and menu. \"\"\"\n        self.manager.clear()\n        self.update_actions()\n        self.clear_requested.emit()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nemits open_requested when a recent file action has been triggered.", "response": "def _on_action_triggered(self):\n        \"\"\"\n        Emits open_requested when a recent file action has been triggered.\n        \"\"\"\n        action = self.sender()\n        assert isinstance(action, QtWidgets.QAction)\n        path = action.data()\n        self.open_requested.emit(path)\n        self.update_actions()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef append(self, panel, position=Panel.Position.LEFT):\n        assert panel is not None\n        pos_to_string = {\n            Panel.Position.BOTTOM: 'bottom',\n            Panel.Position.LEFT: 'left',\n            Panel.Position.RIGHT: 'right',\n            Panel.Position.TOP: 'top'\n        }\n        _logger().log(5, 'adding panel %s at %r', panel.name,\n                      pos_to_string[position])\n        panel.order_in_zone = len(self._panels[position])\n        self._panels[position][panel.name] = panel\n        panel.position = position\n        panel.on_install(self.editor)\n        _logger().log(5, 'panel %s installed', panel.name)\n        return panel", "response": "Adds a new panel at the specified position."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nremove the specified panel from the cache.", "response": "def remove(self, name_or_klass):\n        \"\"\"\n        Removes the specified panel.\n\n        :param name_or_klass: Name or class of the panel to remove.\n        :return: The removed panel\n        \"\"\"\n        _logger().log(5, 'removing panel %r', name_or_klass)\n        panel = self.get(name_or_klass)\n        panel.on_uninstall()\n        panel.hide()\n        panel.setParent(None)\n        return 
self._panels[panel.position].pop(panel.name, None)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get(self, name_or_klass):\n if not isinstance(name_or_klass, str):\n name_or_klass = name_or_klass.__name__\n for zone in range(4):\n try:\n panel = self._panels[zone][name_or_klass]\n except KeyError:\n pass\n else:\n return panel\n raise KeyError(name_or_klass)", "response": "Gets a specific panel instance."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrefreshing the editor panels", "response": "def refresh(self):\n \"\"\" Refreshes the editor panels (resize and update margins) \"\"\"\n _logger().log(5, 'refresh_panels')\n self.resize()\n self._update(self.editor.contentsRect(), 0,\n force_update_margins=True)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _update(self, rect, delta_y, force_update_margins=False):\n helper = TextHelper(self.editor)\n if not self:\n return\n for zones_id, zone in self._panels.items():\n if zones_id == Panel.Position.TOP or \\\n zones_id == Panel.Position.BOTTOM:\n continue\n panels = list(zone.values())\n for panel in panels:\n if panel.scrollable and delta_y:\n panel.scroll(0, delta_y)\n line, col = helper.cursor_position()\n oline, ocol = self._cached_cursor_pos\n if line != oline or col != ocol or panel.scrollable:\n panel.update(0, rect.y(), panel.width(), rect.height())\n self._cached_cursor_pos = helper.cursor_position()\n if (rect.contains(self.editor.viewport().rect()) or\n force_update_margins):\n self._update_viewport_margins()", "response": "Updates the panels with the given rect."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef icon(theme_name='', path='', qta_name='', qta_options=None, use_qta=None):\n ret_val = None\n if use_qta is None:\n use_qta = USE_QTAWESOME\n if qta_options is None:\n 
qta_options = QTA_OPTIONS\n    if qta is not None and use_qta is True:\n        ret_val = qta.icon(qta_name, **qta_options)\n    else:\n        if theme_name and path:\n            ret_val = QtGui.QIcon.fromTheme(theme_name, QtGui.QIcon(path))\n        elif theme_name:\n            ret_val = QtGui.QIcon.fromTheme(theme_name)\n        elif path:\n            ret_val = QtGui.QIcon(path)\n    return ret_val", "response": "Creates an icon from qtawesome, from a theme, or from a path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the color of the outline rectangle.", "response": "def set_outline(self, color):\n        \"\"\"\n        Uses an outline rectangle.\n\n        :param color: Color of the outline rect\n        :type color: QtGui.QColor\n        \"\"\"\n        self.format.setProperty(QtGui.QTextFormat.OutlinePen,\n                                QtGui.QPen(color))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nenable full width selection.", "response": "def set_full_width(self, flag=True, clear=True):\n        \"\"\"\n        Enables FullWidthSelection (the selection does not stop after the\n        character; instead it goes up to the right side of the widget).\n\n        :param flag: True to use full width selection.\n        :type flag: bool\n\n        :param clear: True to clear any previous selection. Default is True.\n        :type clear: bool\n        \"\"\"\n        if clear:\n            self.cursor.clearSelection()\n        self.format.setProperty(QtGui.QTextFormat.FullWidthSelection, flag)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef compile(self, script, bare=False):\n        '''compile a CoffeeScript code to a JavaScript code.\n\n        if bare is True, then compile the JavaScript without the top-level\n        function safety wrapper (like the coffee command).\n        '''\n        if not hasattr(self, '_context'):\n            self._context = self._runtime.compile(self._compiler_script)\n        return self._context.call(\n            \"CoffeeScript.compile\", script, {'bare': bare})", "response": "compile a CoffeeScript code to a JavaScript code. 
If bare is True, then compile the JavaScript without the top-level\n        function safety wrapper."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef compile_file(self, filename, encoding=\"utf-8\", bare=False):\n        '''compile a CoffeeScript script file to a JavaScript code.\n\n        filename can be a list or tuple of filenames,\n        then contents of files are concatenated with line feeds.\n\n        if bare is True, then compile the JavaScript without the top-level\n        function safety wrapper (like the coffee command).\n        '''\n        if isinstance(filename, _BaseString):\n            filename = [filename]\n\n        scripts = []\n        for f in filename:\n            with io.open(f, encoding=encoding) as fp:\n                scripts.append(fp.read())\n\n        return self.compile('\\n\\n'.join(scripts), bare=bare)", "response": "compile a CoffeeScript script file to a JavaScript code."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconnecting the slots to signals", "response": "def setup_actions(self):\n        \"\"\" Connects slots to signals \"\"\"\n        self.actionOpen.triggered.connect(self.on_open)\n        self.actionNew.triggered.connect(self.on_new)\n        self.actionSave.triggered.connect(self.on_save)\n        self.actionSave_as.triggered.connect(self.on_save_as)\n        self.actionQuit.triggered.connect(QtWidgets.QApplication.instance().quit)\n        self.tabWidget.current_changed.connect(\n            self.on_current_tab_changed)\n        self.actionAbout.triggered.connect(self.on_about)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef setup_recent_files_menu(self):\n        self.recent_files_manager = widgets.RecentFilesManager(\n            'pyQode', 'notepad')\n        self.menu_recents = widgets.MenuRecentFiles(\n            self.menuFile, title='Recents',\n            recent_files_manager=self.recent_files_manager)\n        self.menu_recents.open_requested.connect(self.open_file)\n        self.menuFile.insertMenu(self.actionSave, self.menu_recents)\n        self.menuFile.insertSeparator(self.actionSave)", 
"response": "Setup the recent files menu and manager"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef setup_mimetypes(self):\n        # setup some specific mimetypes\n        mimetypes.add_type('text/xml', '.ui')  # qt designer forms\n        mimetypes.add_type('text/x-rst', '.rst')  # rst docs\n        mimetypes.add_type('text/x-cython', '.pyx')  # cython impl files\n        mimetypes.add_type('text/x-cython', '.pxd')  # cython def files\n        mimetypes.add_type('text/x-python', '.py')\n        mimetypes.add_type('text/x-python', '.pyw')\n        mimetypes.add_type('text/x-c', '.c')\n        mimetypes.add_type('text/x-c', '.h')\n        mimetypes.add_type('text/x-c++hdr', '.hpp')\n        mimetypes.add_type('text/x-c++src', '.cpp')\n        mimetypes.add_type('text/x-c++src', '.cxx')\n        # cobol files\n        for ext in ['.cbl', '.cob', '.cpy']:\n            mimetypes.add_type('text/x-cobol', ext)\n            mimetypes.add_type('text/x-cobol', ext.upper())", "response": "Registers additional mime types for the file extensions handled by the editor."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef open_file(self, path):\n        if path:\n            editor = self.tabWidget.open_document(path)\n            editor.cursorPositionChanged.connect(\n                self.on_cursor_pos_changed)\n            self.recent_files_manager.open_file(path)\n            self.menu_recents.update_actions()", "response": "Opens the requested file and adds it to the tab widget."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a new empty code editor to the tab widget", "response": "def on_new(self):\n        \"\"\"\n        Add a new empty code editor to the tab widget\n        \"\"\"\n        editor = self.tabWidget.create_new_document()\n        editor.cursorPositionChanged.connect(self.on_cursor_pos_changed)\n        self.refresh_color_scheme()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nopen file dialog and open the file if the dialog was accepted.", "response": "def on_open(self):\n        \"\"\"\n        Shows an open 
file dialog and open the file if the dialog was\n        accepted.\n\n        \"\"\"\n        filename, filter = QtWidgets.QFileDialog.getOpenFileName(\n            self, _('Open'))\n        if filename:\n            self.open_file(filename)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef on_save_as(self):\n        self.tabWidget.save_current_as()\n        self._update_status_bar(self.tabWidget.current_widget())", "response": "Save the current editor document as."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef tab_under_menu(self):\n        if self._menu_pos:\n            return self.tabBar().tabAt(self._menu_pos)\n        else:\n            return self.currentIndex()", "response": "Returns the tab that sits under the context menu."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef close_others(self):\n        current_widget = self.widget(self.tab_under_menu())\n        if self._try_close_dirty_tabs(exept=current_widget):\n            i = 0\n            while self.count() > 1:\n                widget = self.widget(i)\n                if widget != current_widget:\n                    self.remove_tab(i)\n                else:\n                    i = 1", "response": "Closes every editors tabs except the current one."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef close_left(self):\n        current_widget = self.widget(self.tab_under_menu())\n        index = self.indexOf(current_widget)\n        if self._try_close_dirty_tabs(tab_range=range(index)):\n            while True:\n                widget = self.widget(0)\n                if widget != current_widget:\n                    self.remove_tab(0)\n                else:\n                    break", "response": "Closes every editors tabs on the left of the current one."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nclosing every editors tabs on the right of the current one.", "response": "def close_right(self):\n        \"\"\"\n        Closes every editors tabs on the right of the current one.\n        \"\"\"\n        current_widget = self.widget(self.tab_under_menu())\n        index = 
self.indexOf(current_widget)\n        if self._try_close_dirty_tabs(tab_range=range(index + 1, self.count())):\n            while True:\n                widget = self.widget(self.count() - 1)\n                if widget != current_widget:\n                    self.remove_tab(self.count() - 1)\n                else:\n                    break"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef close_all(self):\n        if self._try_close_dirty_tabs():\n            while self.count():\n                widget = self.widget(0)\n                self.remove_tab(0)\n                self.tab_closed.emit(widget)\n            return True\n        return False", "response": "Closes all editors and tabs."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _collect_dirty_tabs(self, skip=None, tab_range=None):\n        widgets = []\n        filenames = []\n        if tab_range is None:\n            tab_range = range(self.count())\n        for i in tab_range:\n            widget = self.widget(i)\n            try:\n                if widget.dirty and widget != skip:\n                    widgets.append(widget)\n                    if widget.file.path:\n                        filenames.append(widget.file.path)\n                    else:\n                        filenames.append(widget.documentTitle())\n            except AttributeError:\n                pass\n        return widgets, filenames", "response": "Collects the list of dirty tabs and their filenames."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncloses the given widgets and handles cases where the widget has been cloned or is a clone of another widget.", "response": "def _close_widget(widget):\n        \"\"\"\n        Closes the given widget and handles cases where the widget has been\n        cloned or is a clone of another widget\n        \"\"\"\n        if widget is None:\n            return\n        try:\n            widget.document().setParent(None)\n            widget.syntax_highlighter.setParent(None)\n        except AttributeError:\n            pass  # not a QPlainTextEdit subclass\n        # handle cloned widgets\n        clones = []\n        if hasattr(widget, 'original') and widget.original:\n            # cloned widget needs to be removed from the original\n            widget.original.clones.remove(widget)\n            try:\n                widget.setDocument(None)\n            except AttributeError:\n                
# not a QTextEdit/QPlainTextEdit\n pass\n elif hasattr(widget, 'clones'):\n clones = widget.clones\n try:\n # only clear current editor if it does not have any other clones\n widget.close(clear=len(clones) == 0)\n except (AttributeError, TypeError):\n # not a CodeEdit\n widget.close()\n return clones"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove a tab from the tab list.", "response": "def remove_tab(self, index):\n \"\"\"\n Overrides removeTab to emit tab_closed and last_tab_closed signals.\n\n :param index: index of the tab to remove.\n \"\"\"\n widget = self.widget(index)\n try:\n document = widget.document()\n except AttributeError:\n document = None # not a QPlainTextEdit\n clones = self._close_widget(widget)\n self.tab_closed.emit(widget)\n self.removeTab(index)\n self._restore_original(clones)\n widget._original_tab_widget._tabs.remove(widget)\n if self.count() == 0:\n self.last_tab_closed.emit()\n if SplittableTabWidget.tab_under_menu == widget:\n SplittableTabWidget.tab_under_menu = None\n if not clones:\n widget.setParent(None)\n else:\n try:\n clones[0].syntax_highlighter.setDocument(document)\n except AttributeError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _on_split_requested(self):\n orientation = self.sender().text()\n widget = self.widget(self.tab_under_menu())\n if 'horizontally' in orientation:\n self.split_requested.emit(\n widget, QtCore.Qt.Horizontal)\n else:\n self.split_requested.emit(\n widget, QtCore.Qt.Vertical)", "response": "Emits the split requested signal with the desired orientation."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef addTab(self, tab, *args):\n tab.parent_tab_widget = self\n super(BaseTabWidget, self).addTab(tab, *args)", "response": "Add a tab to the tab widget."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a custom 
context menu action to the main tab widget.", "response": "def add_context_action(self, action):\n        \"\"\"\n        Adds a custom context menu action\n\n        :param action: action to add.\n        \"\"\"\n        self.main_tab_widget.context_actions.append(action)\n        for child_splitter in self.child_splitters:\n            child_splitter.add_context_action(action)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a tab to the main tab widget.", "response": "def add_tab(self, tab, title='', icon=None):\n        \"\"\"\n        Adds a tab to the main tab widget.\n\n        :param tab: Widget to add as a new tab of the main tab widget.\n        :param title: Tab title\n        :param icon: Tab icon\n        \"\"\"\n        if icon:\n            tab._icon = icon\n        if not hasattr(tab, 'clones'):\n            tab.clones = []\n        if not hasattr(tab, 'original'):\n            tab.original = None\n        if icon:\n            self.main_tab_widget.addTab(tab, icon, title)\n        else:\n            self.main_tab_widget.addTab(tab, title)\n        self.main_tab_widget.setCurrentIndex(\n            self.main_tab_widget.indexOf(tab))\n        self.main_tab_widget.show()\n        tab._uuid = self._uuid\n        try:\n            scroll_bar = tab.horizontalScrollBar()\n        except AttributeError:\n            # not a QPlainTextEdit class\n            pass\n        else:\n            scroll_bar.setValue(0)\n        tab.setFocus()\n        tab._original_tab_widget = self\n        self._tabs.append(tab)\n        self._on_focus_changed(None, tab)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsplit the current widget in new SplittableTabWidget.", "response": "def split(self, widget, orientation):\n        \"\"\"\n        Split the current widget in a new SplittableTabWidget.\n\n        :param widget: widget to split\n        :param orientation: orientation of the splitter\n        :return: the new splitter\n        \"\"\"\n        if widget.original:\n            base = widget.original\n        else:\n            base = widget\n        clone = base.split()\n        if not clone:\n            return\n        if orientation == int(QtCore.Qt.Horizontal):\n            orientation = QtCore.Qt.Horizontal\n        else:\n            orientation = QtCore.Qt.Vertical\n        self.setOrientation(orientation)\n        splitter = self._make_splitter()\n        
splitter.show()\n        self.addWidget(splitter)\n        self.child_splitters.append(splitter)\n        if clone not in base.clones:\n            # code editors maintain the list of clones internally but some\n            # other widgets (user widgets) might not.\n            base.clones.append(clone)\n        clone.original = base\n        splitter._parent_splitter = self\n        splitter.last_tab_closed.connect(self._on_last_child_tab_closed)\n        splitter.tab_detached.connect(self.tab_detached.emit)\n        if hasattr(base, '_icon'):\n            icon = base._icon\n        else:\n            icon = None\n        # same group of tab splitter (user might have a group for editors and\n        # another group for consoles or whatever).\n        splitter._uuid = self._uuid\n        splitter.add_tab(clone, title=self.main_tab_widget.tabText(\n            self.main_tab_widget.indexOf(widget)), icon=icon)\n        self.setSizes([1 for i in range(self.count())])\n        return splitter"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef has_children(self):\n        for splitter in self.child_splitters:\n            if splitter.has_children():\n                return splitter\n        return self.main_tab_widget.count() != 0", "response": "Checks if there are children tab widgets."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef widgets(self, include_clones=False):\n        widgets = []\n        for i in range(self.main_tab_widget.count()):\n            widget = self.main_tab_widget.widget(i)\n            try:\n                if widget.original is None or include_clones:\n                    widgets.append(widget)\n            except AttributeError:\n                pass\n        for child in self.child_splitters:\n            widgets += child.widgets(include_clones=include_clones)\n        return widgets", "response": "Recursively gets the list of widgets."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef count(self):\n        c = self.main_tab_widget.count()\n        for child in self.child_splitters:\n            c += child.count()\n        return c", "response": "Returns the number of widgets displayed in this tab widget, including those in child splitters."} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_filter(cls, mimetype):\n        filters = ' '.join(\n            ['*%s' % ext for ext in mimetypes.guess_all_extensions(mimetype)])\n        return '%s (%s)' % (mimetype, filters)", "response": "Returns a filter string for the file dialog. The filter is based\n        on the mime type."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef addTab(self, widget, *args):\n        widget.dirty_changed.connect(self._on_dirty_changed)\n        super(CodeEditTabWidget, self).addTab(widget, *args)", "response": "Re-implements addTab to connect to the dirty changed signal and set up\n        some helper attributes."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncall when the dirty flag is set to True.", "response": "def _on_dirty_changed(self, dirty):\n        \"\"\"\n        Adds a star in front of a dirty tab and emits dirty_changed.\n        \"\"\"\n        widget = self.sender()\n        if isinstance(widget, DraggableTabBar):\n            return\n        parent = widget.parent_tab_widget\n        index = parent.indexOf(widget)\n        title = parent.tabText(index)\n        title = title.replace('* ', '')\n        if dirty:\n            parent.setTabText(index, \"* \" + title)\n        else:\n            parent.setTabText(index, title)\n        parent.dirty_changed.emit(dirty)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _ask_path(cls, editor):\n        try:\n            filter = cls.get_filter(editor.mimetypes[0])\n        except IndexError:\n            filter = _('All files (*)')\n        return QtWidgets.QFileDialog.getSaveFileName(\n            editor, _('Save file as'), cls.default_directory, filter)", "response": "Shows a QFileDialog and asks for a save filename."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nimplements SplittableTabWidget. 
save_widget to actually save the code editor widget.", "response": "def save_widget(cls, editor):\n        \"\"\"\n        Implements SplittableTabWidget.save_widget to actually save the\n        code editor widget.\n\n        If the editor.file.path is None or empty or the file does not exist,\n        a save as dialog is shown (save as).\n\n        :param editor: editor widget to save.\n        :return: False if there was a problem saving the editor (e.g. the save\n            as dialog has been canceled by the user, or a permission error,...)\n        \"\"\"\n        if editor.original:\n            editor = editor.original\n        if editor.file.path is None or not os.path.exists(editor.file.path):\n            # save as\n            path, filter = cls._ask_path(editor)\n            if not path:\n                return False\n            if not os.path.splitext(path)[1]:\n                if len(editor.mimetypes):\n                    path += mimetypes.guess_extension(editor.mimetypes[0])\n            try:\n                _logger().debug('saving %r as %r', editor.file._old_path, path)\n            except AttributeError:\n                _logger().debug('saving %r as %r', editor.file.path, path)\n            editor.file._path = path\n        else:\n            path = editor.file.path\n        try:\n            editor.file.save(path)\n        except Exception as e:\n            QtWidgets.QMessageBox.warning(editor, \"Failed to save file\", 'Failed to save %r.\\n\\nError=\"%s\"' %\n                                          (path, e))\n        else:\n            tw = editor.parent_tab_widget\n            text = tw.tabText(tw.indexOf(editor)).replace('*', '')\n            tw.setTabText(tw.indexOf(editor), text)\n            for clone in [editor] + editor.clones:\n                if clone != editor:\n                    tw = clone.parent_tab_widget\n                    tw.setTabText(tw.indexOf(clone), text)\n        return True"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register_code_edit(cls, code_edit_class):\n        if not inspect.isclass(code_edit_class):\n            raise TypeError('must be a class, not an instance.')\n        for mimetype in code_edit_class.mimetypes:\n            if mimetype in cls.editors:\n                _logger().warn('editor for mimetype already registered, '\n                               
'skipping')\n cls.editors[mimetype] = code_edit_class\n _logger().log(5, 'registered editors: %r', cls.editors)", "response": "Register an additional code edit class."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsaving current widget as.", "response": "def save_current_as(self):\n \"\"\"\n Save current widget as.\n \"\"\"\n if not self.current_widget():\n return\n mem = self.current_widget().file.path\n self.current_widget().file._path = None\n self.current_widget().file._old_path = mem\n CodeEditTabWidget.default_directory = os.path.dirname(mem)\n widget = self.current_widget()\n try:\n success = self.main_tab_widget.save_widget(widget)\n except Exception as e:\n QtWidgets.QMessageBox.warning(\n self, _('Failed to save file as'),\n _('Failed to save file as %s\\nError=%s') % (\n widget.file.path, str(e)))\n widget.file._path = mem\n else:\n if not success:\n widget.file._path = mem\n else:\n CodeEditTabWidget.default_directory = os.path.expanduser('~')\n self.document_saved.emit(widget.file.path, '')\n\n # rename tab\n tw = widget.parent_tab_widget\n tw.setTabText(tw.indexOf(widget),\n os.path.split(widget.file.path)[1])\n\n return self.current_widget().file.path"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef save_current(self):\n if self.current_widget() is not None:\n editor = self.current_widget()\n self._save(editor)", "response": "Save current editor. 
If the editor.file.path is None, a save as dialog\n will be shown."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsave all widgets in the system.", "response": "def save_all(self):\n \"\"\"\n Save all editors.\n \"\"\"\n for w in self.widgets():\n try:\n self._save(w)\n except OSError:\n _logger().exception('failed to save %s', w.file.path)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _create_code_edit(self, mimetype, *args, **kwargs):\n if mimetype in self.editors.keys():\n return self.editors[mimetype](\n *args, parent=self.main_tab_widget, **kwargs)\n editor = self.fallback_editor(*args, parent=self.main_tab_widget,\n **kwargs)\n return editor", "response": "Create a code edit instance based on the mimetype."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a new document and returns a tab widget instance.", "response": "def create_new_document(self, base_name='New Document',\n extension='.txt', preferred_eol=0,\n autodetect_eol=True, **kwargs):\n \"\"\"\n Creates a new document.\n\n The document name will be ``base_name + count + extension``\n\n :param base_name: Base name of the document. An int will be appended.\n :param extension: Document extension (dotted)\n :param args: Positional arguments that must be forwarded to the editor\n widget constructor.\n :param preferred_eol: Preferred EOL convention. 
This setting will be\n used for saving the document unless autodetect_eol is True.\n :param autodetect_eol: If true, automatically detects file EOL and\n use it instead of the preferred EOL when saving files.\n :param kwargs: Keyworded arguments that must be forwarded to the editor\n widget constructor.\n :return: Code editor widget instance.\n \"\"\"\n SplittableCodeEditTabWidget._new_count += 1\n name = '%s%d%s' % (base_name, self._new_count, extension)\n tab = self._create_code_edit(\n self.guess_mimetype(name), **kwargs)\n self.editor_created.emit(tab)\n tab.file.autodetect_eol = autodetect_eol\n tab.file.preferred_eol = preferred_eol\n tab.setDocumentTitle(name)\n self.add_tab(tab, title=name, icon=self._icon(name))\n self.document_opened.emit(tab)\n return tab"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nopens a document and returns a CodeEditor object.", "response": "def open_document(self, path, encoding=None, replace_tabs_by_spaces=True,\n clean_trailing_whitespaces=True, safe_save=True,\n restore_cursor_position=True, preferred_eol=0,\n autodetect_eol=True, show_whitespaces=False, **kwargs):\n \"\"\"\n Opens a document.\n\n :param path: Path of the document to open\n :param encoding: The encoding to use to open the file. Default is\n locale.getpreferredencoding().\n :param replace_tabs_by_spaces: Enable/Disable replace tabs by spaces.\n Default is true.\n :param clean_trailing_whitespaces: Enable/Disable clean trailing\n whitespaces (on save). Default is True.\n :param safe_save: If True, the file is saved to a temporary file first.\n If the save went fine, the temporary file is renamed to the final\n filename.\n :param restore_cursor_position: If true, last cursor position will be\n restored. Default is True.\n :param preferred_eol: Preferred EOL convention. 
This setting will be\n used for saving the document unless autodetect_eol is True.\n :param autodetect_eol: If true, automatically detects file EOL and\n use it instead of the preferred EOL when saving files.\n :param show_whitespaces: True to show white spaces.\n :param kwargs: addtional keyword args to pass to the widget\n constructor.\n :return: The created code editor\n \"\"\"\n original_path = os.path.normpath(path)\n path = os.path.normcase(original_path)\n paths = []\n widgets = []\n for w in self.widgets(include_clones=False):\n if os.path.exists(w.file.path):\n # skip new docs\n widgets.append(w)\n paths.append(os.path.normcase(w.file.path))\n if path in paths:\n i = paths.index(path)\n w = widgets[i]\n tw = w.parent_tab_widget\n tw.setCurrentIndex(tw.indexOf(w))\n return w\n else:\n assert os.path.exists(original_path)\n name = os.path.split(original_path)[1]\n\n use_parent_dir = False\n for tab in self.widgets():\n title = QtCore.QFileInfo(tab.file.path).fileName()\n if title == name:\n tw = tab.parent_tab_widget\n new_name = os.path.join(os.path.split(os.path.dirname(\n tab.file.path))[1], title)\n tw.setTabText(tw.indexOf(tab), new_name)\n use_parent_dir = True\n\n if use_parent_dir:\n name = os.path.join(\n os.path.split(os.path.dirname(path))[1], name)\n use_parent_dir = False\n\n tab = self._create_code_edit(self.guess_mimetype(path), **kwargs)\n self.editor_created.emit(tab)\n\n tab.open_parameters = {\n 'encoding': encoding,\n 'replace_tabs_by_spaces': replace_tabs_by_spaces,\n 'clean_trailing_whitespaces': clean_trailing_whitespaces,\n 'safe_save': safe_save,\n 'restore_cursor_position': restore_cursor_position,\n 'preferred_eol': preferred_eol,\n 'autodetect_eol': autodetect_eol,\n 'show_whitespaces': show_whitespaces,\n 'kwargs': kwargs\n }\n tab.file.clean_trailing_whitespaces = clean_trailing_whitespaces\n tab.file.safe_save = safe_save\n tab.file.restore_cursor = restore_cursor_position\n tab.file.replace_tabs_by_spaces = 
replace_tabs_by_spaces\n tab.file.autodetect_eol = autodetect_eol\n tab.file.preferred_eol = preferred_eol\n tab.show_whitespaces = show_whitespaces\n try:\n tab.file.open(original_path, encoding=encoding)\n except Exception as e:\n _logger().exception('exception while opening file')\n tab.close()\n tab.setParent(None)\n tab.deleteLater()\n raise e\n else:\n tab.setDocumentTitle(name)\n tab.file._path = original_path\n icon = self._icon(path)\n self.add_tab(tab, title=name, icon=icon)\n self.document_opened.emit(tab)\n\n for action in self.closed_tabs_menu.actions():\n if action.toolTip() == original_path:\n self.closed_tabs_menu.removeAction(action)\n break\n self.closed_tabs_history_btn.setEnabled(\n len(self.closed_tabs_menu.actions()) > 0)\n\n return tab"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef close_document(self, path):\n to_close = []\n for widget in self.widgets(include_clones=True):\n p = os.path.normpath(os.path.normcase(widget.file.path))\n path = os.path.normpath(os.path.normcase(path))\n if p == path:\n to_close.append(widget)\n for widget in to_close:\n tw = widget.parent_tab_widget\n tw.remove_tab(tw.indexOf(widget))", "response": "Closes a text document."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrename an already opened document.", "response": "def rename_document(self, old_path, new_path):\n \"\"\"\n Renames an already opened document (this will not rename the file,\n just update the file path and tab title).\n\n Use that function to update a file that has been renamed externally.\n\n :param old_path: old path (path of the widget to rename with\n ``new_path``\n :param new_path: new path that will be used to rename the tab.\n \"\"\"\n to_rename = []\n title = os.path.split(new_path)[1]\n for widget in self.widgets(include_clones=True):\n p = os.path.normpath(os.path.normcase(widget.file.path))\n old_path = 
os.path.normpath(os.path.normcase(old_path))\n if p == old_path:\n to_rename.append(widget)\n for widget in to_rename:\n tw = widget.parent_tab_widget\n widget.file._path = new_path\n tw.setTabText(tw.indexOf(widget), title)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsaving dirty editors on close and cancel the event if the user choosed to continue to work.", "response": "def closeEvent(self, event):\n \"\"\"\n Saves dirty editors on close and cancel the event if the user choosed\n to continue to work.\n\n :param event: close event\n \"\"\"\n dirty_widgets = []\n for w in self.widgets(include_clones=False):\n if w.dirty:\n dirty_widgets.append(w)\n filenames = []\n for w in dirty_widgets:\n if os.path.exists(w.file.path):\n filenames.append(w.file.path)\n else:\n filenames.append(w.documentTitle())\n if len(filenames) == 0:\n self.close_all()\n return\n dlg = DlgUnsavedFiles(self, files=filenames)\n if dlg.exec_() == dlg.Accepted:\n if not dlg.discarded:\n for item in dlg.listWidget.selectedItems():\n filename = item.text()\n widget = None\n for widget in dirty_widgets:\n if widget.file.path == filename or \\\n widget.documentTitle() == filename:\n break\n tw = widget.parent_tab_widget\n tw.save_widget(widget)\n tw.remove_tab(tw.indexOf(widget))\n self.close_all()\n else:\n event.ignore()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _is_shortcut(self, event):\n modifier = (QtCore.Qt.MetaModifier if sys.platform == 'darwin' else\n QtCore.Qt.ControlModifier)\n valid_modifier = int(event.modifiers() & modifier) == modifier\n valid_key = event.key() == self._trigger_key\n return valid_key and valid_modifier", "response": "Checks if the event s key and modifiers make the completion shortcut"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _hide_popup(self):\n debug('hide popup')\n if (self._completer.popup() 
is not None and\n self._completer.popup().isVisible()):\n self._completer.popup().hide()\n self._last_cursor_column = -1\n self._last_cursor_line = -1\n QtWidgets.QToolTip.hideText()", "response": "Hides the completer popup"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nshow the popup at the specified index.", "response": "def _show_popup(self, index=0):\n \"\"\"\n Shows the popup at the specified index.\n :param index: index\n :return:\n \"\"\"\n full_prefix = self._helper.word_under_cursor(\n select_whole_word=False).selectedText()\n if self._case_sensitive:\n self._completer.setCaseSensitivity(QtCore.Qt.CaseSensitive)\n else:\n self._completer.setCaseSensitivity(QtCore.Qt.CaseInsensitive)\n # set prefix\n self._completer.setCompletionPrefix(self.completion_prefix)\n cnt = self._completer.completionCount()\n selected = self._completer.currentCompletion()\n if (full_prefix == selected) and cnt == 1:\n debug('user already typed the only completion that we '\n 'have')\n self._hide_popup()\n else:\n # show the completion list\n if self.editor.isVisible():\n if self._completer.widget() != self.editor:\n self._completer.setWidget(self.editor)\n self._completer.complete(self._get_popup_rect())\n self._completer.popup().setCurrentIndex(\n self._completer.completionModel().index(index, 0))\n debug(\n \"popup shown: %r\" % self._completer.popup().isVisible())\n else:\n debug('cannot show popup, editor is not visible')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate the completion model for the current locale.", "response": "def _update_model(self, completions):\n \"\"\"\n Creates a QStandardModel that holds the suggestion from the completion\n models for the QCompleter\n\n :param completionPrefix:\n \"\"\"\n # build the completion model\n cc_model = QtGui.QStandardItemModel()\n self._tooltips.clear()\n for completion in completions:\n name = completion['name']\n item = QtGui.QStandardItem()\n item.setData(name, 
QtCore.Qt.DisplayRole)\n if 'tooltip' in completion and completion['tooltip']:\n self._tooltips[name] = completion['tooltip']\n if 'icon' in completion:\n icon = completion['icon']\n if isinstance(icon, list):\n icon = QtGui.QIcon.fromTheme(icon[0], QtGui.QIcon(icon[1]))\n else:\n icon = QtGui.QIcon(icon)\n item.setData(QtGui.QIcon(icon),\n QtCore.Qt.DecorationRole)\n cc_model.appendRow(item)\n try:\n self._completer.setModel(cc_model)\n except RuntimeError:\n self._create_completer()\n self._completer.setModel(cc_model)\n return cc_model"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a suds object of the requested _type.", "response": "def create(client, _type, **kwargs):\n \"\"\"Create a suds object of the requested _type.\"\"\"\n obj = client.factory.create(\"ns0:%s\" % _type)\n for key, value in kwargs.items():\n setattr(obj, key, value)\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef invoke(client, method, **kwargs):\n try:\n # Proxy the method to the suds service\n result = getattr(client.service, method)(**kwargs)\n except AttributeError:\n logger.critical(\"Unknown method: %s\", method)\n raise\n except URLError as e:\n logger.debug(pprint(e))\n logger.debug(\"A URL related error occurred while invoking the '%s' \"\n \"method on the VIM server, this can be caused by \"\n \"name resolution or connection problems.\", method)\n logger.debug(\"The underlying error is: %s\", e.reason[1])\n raise\n except suds.client.TransportError as e:\n logger.debug(pprint(e))\n logger.debug(\"TransportError: %s\", e)\n except suds.WebFault as e:\n # Get the type of fault\n logger.critical(\"SUDS Fault: %s\" % e.fault.faultstring)\n if len(e.fault.faultstring) > 0:\n raise\n\n detail = e.document.childAtPath(\"/Envelope/Body/Fault/detail\")\n fault_type = detail.getChildren()[0].name\n fault = create(fault_type)\n if isinstance(e.fault.detail[0], list):\n for 
attr in e.fault.detail[0]:\n setattr(fault, attr[0], attr[1])\n else:\n fault[\"text\"] = e.fault.detail[0]\n\n raise VimFault(fault)\n\n return result", "response": "Invoke a method on the underlying soap service."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef prettydate(date):\n now = datetime.now(timezone.utc)\n \"\"\"\n Return the relative timeframe between the given date and now.\n\n e.g. 'Just now', 'x days ago', 'x hours ago', ...\n\n When the difference is greater than 7 days, the timestamp will be returned\n instead.\n \"\"\"\n diff = now - date\n # Show the timestamp rather than the relative timeframe when the difference\n # is greater than 7 days\n if diff.days > 7:\n return date.strftime(\"%d. %b %Y\")\n return arrow.get(date).humanize()", "response": "Return a pretty date string for the given date."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_app_auth_headers(self):\n now = datetime.now(timezone.utc)\n expiry = now + timedelta(minutes=5)\n\n data = {\"iat\": now, \"exp\": expiry, \"iss\": self.app_id}\n app_token = jwt.encode(data, self.app_key, algorithm=\"RS256\").decode(\"utf-8\")\n\n headers = {\n \"Accept\": PREVIEW_JSON_ACCEPT,\n \"Authorization\": \"Bearer {}\".format(app_token),\n }\n\n return headers", "response": "Set the correct auth headers to authenticate against GitHub."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_installation_key(\n self, project, user_id=None, install_id=None, reprime=False\n ):\n \"\"\"Get the auth token for a project or installation id.\"\"\"\n installation_id = install_id\n if project is not None:\n installation_id = self.installation_map.get(project, {}).get(\n \"installation_id\"\n )\n\n if not installation_id:\n if reprime:\n # prime installation map and try again without 
refreshing\n self._prime_install_map()\n return self._get_installation_key(\n project, user_id=user_id, install_id=install_id, reprime=False\n )\n LOGGER.debug(\"No installation ID available for project %s\", project)\n return \"\"\n\n # Look up the token from cache\n now = datetime.now(timezone.utc)\n token, expiry = self.installation_token_cache.get(installation_id, (None, None))\n\n # Request new token if the available one is expired or could not be found\n if (not expiry) or (not token) or (now >= expiry):\n LOGGER.debug(\"Requesting new token for installation %s\", installation_id)\n headers = self._get_app_auth_headers()\n url = \"{}/installations/{}/access_tokens\".format(\n self.api_url, installation_id\n )\n\n json_data = {\"user_id\": user_id} if user_id else None\n\n response = requests.post(url, headers=headers, json=json_data)\n response.raise_for_status()\n\n data = response.json()\n\n token = data[\"token\"]\n expiry = datetime.strptime(data[\"expires_at\"], \"%Y-%m-%dT%H:%M:%SZ\")\n # Update time zone of expiration date to make it comparable with now()\n expiry = expiry.replace(tzinfo=timezone.utc)\n\n # Assume, that the token expires two minutes earlier, to not\n # get lost during the checkout/scraping?\n expiry -= timedelta(minutes=2)\n\n self.installation_token_cache[installation_id] = (token, expiry)\n\n return token", "response": "Get the auth token for a project or installation id."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _prime_install_map(self):\n url = \"{}/app/installations\".format(self.api_url)\n headers = self._get_app_auth_headers()\n LOGGER.debug(\"Fetching installations for GitHub app\")\n\n response = requests.get(url, headers=headers)\n response.raise_for_status()\n\n data = response.json()\n\n for install in data:\n install_id = install.get(\"id\")\n token = self._get_installation_key(project=None, install_id=install_id)\n headers = {\n \"Accept\": 
PREVIEW_JSON_ACCEPT,\n \"Authorization\": \"token {}\".format(token),\n }\n\n url = \"{}/installation/repositories?per_page=100\".format(self.api_url)\n while url:\n LOGGER.debug(\"Fetching repos for installation %s\", install_id)\n response = requests.get(url, headers=headers)\n response.raise_for_status()\n repos = response.json()\n\n # Store all projects in the installation map\n for repo in repos.get(\"repositories\", []):\n # TODO (fschmidt): Store the installation's\n # permissions (could come in handy for later features)\n project_name = repo[\"full_name\"]\n self.installation_map[project_name] = {\n \"installation_id\": install_id,\n \"default_branch\": repo[\"default_branch\"],\n }\n\n # Check if we need to do further page calls\n url = response.links.get(\"next\", {}).get(\"url\")", "response": "Fetch all installations and look up the ID for each."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_github_client(self, project):\n token = self._get_installation_key(project=project)\n if not token:\n LOGGER.warning(\n \"Could not find an authentication token for '%s'. 
Do you \"\n \"have access to this repository?\",\n project,\n )\n return\n gh = github3.GitHubEnterprise(self.base_url)\n gh.login(token=token)\n return gh", "response": "Create a github3 client per project."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef refresh(self):\n if self.enabled and self.line:\n self._clear_deco()\n brush = QtGui.QBrush(self._color)\n self._decoration = TextDecoration(\n self.editor.textCursor(), start_line=self.line)\n self._decoration.set_background(brush)\n self._decoration.set_full_width()\n self._decoration.draw_order = 255\n self.editor.decorations.append(self._decoration)", "response": "Updates the current line decoration"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nclose every editors tabs except the current one.", "response": "def close_others(self):\n \"\"\"\n Closes every editors tabs except the current one.\n \"\"\"\n current_widget = self.currentWidget()\n self._try_close_dirty_tabs(exept=current_widget)\n i = 0\n while self.count() > 1:\n widget = self.widget(i)\n if widget != current_widget:\n self.removeTab(i)\n else:\n i = 1"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save_current(self, path=None):\n try:\n if not path and not self._current.file.path:\n path, filter = QtWidgets.QFileDialog.getSaveFileName(\n self, _('Choose destination path'))\n if not path:\n return False\n old_path = self._current.file.path\n code_edit = self._current\n self._save_editor(code_edit, path)\n path = code_edit.file.path\n # path (and icon) may have changed\n if path and old_path != path:\n self._ensure_unique_name(code_edit, code_edit.file.name)\n self.setTabText(self.currentIndex(), code_edit._tab_name)\n ext = os.path.splitext(path)[1]\n old_ext = os.path.splitext(old_path)[1]\n if ext != old_ext or not old_path:\n icon = QtWidgets.QFileIconProvider().icon(\n 
QtCore.QFileInfo(code_edit.file.path))\n self.setTabIcon(self.currentIndex(), icon)\n return True\n except AttributeError: # not an editor widget\n pass\n return False", "response": "Save the current editor content."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef index_from_filename(self, path):\n if path:\n for i in range(self.count()):\n widget = self.widget(i)\n try:\n if widget.file.path == path:\n return i\n except AttributeError:\n pass # not an editor widget\n return -1", "response": "Checks if the path is already open in an editor tab. If not returns - 1."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_code_edit(self, code_edit, name=None):\n # new empty editor widget (no path set)\n if code_edit.file.path == '':\n cnt = 0\n for i in range(self.count()):\n tab = self.widget(i)\n if tab.file.path.startswith(name[:name.find('%')]):\n cnt += 1\n name %= (cnt + 1)\n code_edit.file._path = name\n index = self.index_from_filename(code_edit.file.path)\n if index != -1:\n # already open, just show it\n self.setCurrentIndex(index)\n # no need to keep this instance\n self._del_code_edit(code_edit)\n return -1\n self._ensure_unique_name(code_edit, name)\n index = self.addTab(code_edit, code_edit.file.icon,\n code_edit._tab_name)\n self.setCurrentIndex(index)\n self.setTabText(index, code_edit._tab_name)\n try:\n code_edit.setFocus()\n except TypeError:\n # PySide\n code_edit.setFocus()\n try:\n file_watcher = code_edit.modes.get(FileWatcherMode)\n except (KeyError, AttributeError):\n # not installed\n pass\n else:\n file_watcher.file_deleted.connect(self._on_file_deleted)\n return index", "response": "Adds a code edit tab to the editor tab set its text as the editor. file. 
name and sets it as the active tab."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef addTab(self, elem, icon, name):\n self._widgets.append(elem)\n return super(TabWidget, self).addTab(elem, icon, name)", "response": "Extends QTabWidget. addTab to keep an internal list of added tabs."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _name_exists(self, name):\n for i in range(self.count()):\n if self.tabText(i) == name:\n return True\n return False", "response": "Checks if a tab with the given name already exists."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _rename_duplicate_tabs(self, current, name, path):\n for i in range(self.count()):\n if self.widget(i)._tab_name == name and self.widget(i) != current:\n file_path = self.widget(i).file.path\n if file_path:\n parent_dir = os.path.split(os.path.abspath(\n os.path.join(file_path, os.pardir)))[1]\n new_name = os.path.join(parent_dir, name)\n self.setTabText(i, new_name)\n self.widget(i)._tab_name = new_name\n break\n if path:\n parent_dir = os.path.split(os.path.abspath(\n os.path.join(path, os.pardir)))[1]\n return os.path.join(parent_dir, name)\n else:\n return name", "response": "Rename tabs whose title is the same as the name."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nremoving the tab at index index.", "response": "def removeTab(self, index):\n \"\"\"\n Removes tab at index ``index``.\n\n This method will emits tab_closed for the removed tab.\n\n :param index: index of the tab to remove.\n \"\"\"\n widget = self.widget(index)\n try:\n self._widgets.remove(widget)\n except ValueError:\n pass\n self.tab_closed.emit(widget)\n self._del_code_edit(widget)\n QTabWidget.removeTab(self, index)\n if widget == self._current:\n self._current = None"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in 
Python 3 to\ncollect the list of dirty tabs and their filenames.", "response": "def _collect_dirty_tabs(self, exept=None):\n \"\"\"\n Collects the list of dirty tabs\n \"\"\"\n widgets = []\n filenames = []\n for i in range(self.count()):\n widget = self.widget(i)\n try:\n if widget.dirty and widget != exept:\n widgets.append(widget)\n filenames.append(widget.file.path)\n except AttributeError:\n pass\n return widgets, filenames"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _on_dirty_changed(self, dirty):\n try:\n title = self._current._tab_name\n index = self.indexOf(self._current)\n if dirty:\n self.setTabText(index, \"* \" + title)\n else:\n self.setTabText(index, title)\n except AttributeError:\n pass\n self.dirty_changed.emit(dirty)", "response": "Called when the dirty attribute of the class attribute is changed."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_preferred_encodings(self):\n encodings = []\n for row in range(self.ui.tableWidgetPreferred.rowCount()):\n item = self.ui.tableWidgetPreferred.item(row, 0)\n encodings.append(item.data(QtCore.Qt.UserRole))\n return encodings", "response": "Gets the list of preferred encodings."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef edit_encoding(cls, parent):\n dlg = cls(parent)\n if dlg.exec_() == dlg.Accepted:\n settings = Cache()\n settings.preferred_encodings = dlg.get_preferred_encodings()\n return True\n return False", "response": "Static helper method that shows the encoding editor dialog and adds the new encodings to the settings."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nshowing the encodings dialog and returns the user choice.", "response": "def choose_encoding(cls, parent, path, encoding):\n \"\"\"\n Show the encodings dialog and returns the user choice.\n\n :param parent: parent 
widget.\n :param path: file path\n :param encoding: current file encoding\n :return: selected encoding\n \"\"\"\n dlg = cls(parent, path, encoding)\n dlg.exec_()\n return dlg.ui.comboBoxEncodings.current_encoding"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef background(self):\n if self._color or not self.editor:\n return self._color\n else:\n return drift_color(self.editor.background, 110)", "response": "Get the color of the caret line."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate the current line decoration with the current line color.", "response": "def refresh(self):\n \"\"\"\n Updates the current line decoration\n \"\"\"\n if self.enabled:\n self._clear_deco()\n if self._color:\n color = self._color\n else:\n color = drift_color(self.editor.background, 110)\n brush = QtGui.QBrush(color)\n self._decoration = TextDecoration(self.editor.textCursor())\n self._decoration.set_background(brush)\n self._decoration.set_full_width()\n self.editor.decorations.append(self._decoration)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate the extended selection menu.", "response": "def create_menu(self):\n \"\"\"\n Creates the extended selection menu.\n \"\"\"\n # setup menu\n menu = QtWidgets.QMenu(self.editor)\n menu.setTitle(_('Select'))\n menu.menuAction().setIcon(QtGui.QIcon.fromTheme('edit-select'))\n # setup actions\n menu.addAction(self.action_select_word)\n menu.addAction(self.action_select_extended_word)\n menu.addAction(self.action_select_matched)\n menu.addAction(self.action_select_line)\n menu.addSeparator()\n menu.addAction(self.editor.action_select_all)\n icon = QtGui.QIcon.fromTheme(\n 'edit-select-all', QtGui.QIcon(\n ':/pyqode-icons/rc/edit-select-all.png'))\n self.editor.action_select_all.setIcon(icon)\n return menu"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nperform word 
selection :param event: QMouseEvent", "response": "def perform_word_selection(self, event=None):\n \"\"\"\n Performs word selection\n :param event: QMouseEvent\n \"\"\"\n self.editor.setTextCursor(\n TextHelper(self.editor).word_under_cursor(True))\n if event:\n event.accept()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nperforming extended word selection.", "response": "def perform_extended_selection(self, event=None):\n \"\"\"\n Performs extended word selection.\n :param event: QMouseEvent\n \"\"\"\n TextHelper(self.editor).select_extended_word(\n continuation_chars=self.continuation_characters)\n if event:\n event.accept()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef perform_matched_selection(self, event):\n selected = TextHelper(self.editor).match_select()\n if selected and event:\n event.accept()", "response": "Performs matched selection.\n :param event: QMouseEvent"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nserializing a definition to a dictionary ready for json.", "response": "def to_dict(self):\n \"\"\"\n Serializes a definition to a dictionary, ready for json.\n\n Children are serialised recursively.\n \"\"\"\n ddict = {'name': self.name, 'icon': self.icon,\n 'line': self.line, 'column': self.column,\n 'children': [], 'description': self.description,\n 'user_data': self.user_data, 'path': self.file_path}\n for child in self.children:\n ddict['children'].append(child.to_dict())\n return ddict"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_dict(ddict):\n d = Definition(ddict['name'], ddict['line'], ddict['column'],\n ddict['icon'], ddict['description'],\n ddict['user_data'], ddict['path'])\n for child_dict in ddict['children']:\n d.children.append(Definition.from_dict(child_dict))\n return d", "response": "Deserializes a definition from a 
simple dict."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef indent_selection(self, cursor):\n doc = self.editor.document()\n tab_len = self.editor.tab_length\n cursor.beginEditBlock()\n nb_lines = len(cursor.selection().toPlainText().splitlines())\n c = self.editor.textCursor()\n if c.atBlockStart() and c.position() == c.selectionEnd():\n nb_lines += 1\n block = doc.findBlock(cursor.selectionStart())\n i = 0\n # indent every lines\n while i < nb_lines:\n nb_space_to_add = tab_len\n cursor = QtGui.QTextCursor(block)\n cursor.movePosition(cursor.StartOfLine, cursor.MoveAnchor)\n if self.editor.use_spaces_instead_of_tabs:\n for _ in range(nb_space_to_add):\n cursor.insertText(\" \")\n else:\n cursor.insertText('\\t')\n block = block.next()\n i += 1\n cursor.endEditBlock()", "response": "Indent selected text\n\n :param cursor: QTextCursor"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef indent(self):\n cursor = self.editor.textCursor()\n assert isinstance(cursor, QtGui.QTextCursor)\n if cursor.hasSelection():\n self.indent_selection(cursor)\n else:\n # simply insert indentation at the cursor position\n tab_len = self.editor.tab_length\n cursor.beginEditBlock()\n if self.editor.use_spaces_instead_of_tabs:\n nb_space_to_add = tab_len - cursor.positionInBlock() % tab_len\n cursor.insertText(nb_space_to_add * \" \")\n else:\n cursor.insertText('\\t')\n cursor.endEditBlock()", "response": "Indents text at cursor position."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsplitting the code editor widget into two text documents and link them with the new one.", "response": "def split(self):\n \"\"\"\n Split the code editor widget, return a clone of the widget ready to\n be used (and synchronised with its original).\n\n Splitting the widget is done in 2 steps:\n - first we clone the widget, you can override 
``clone`` if your\n widget needs additional arguments.\n\n - then we link the two text document and disable some modes on the\n cloned instance (such as the watcher mode).\n \"\"\"\n # cache cursor position so that the clone open at the current cursor\n # pos\n l, c = TextHelper(self).cursor_position()\n clone = self.clone()\n self.link(clone)\n TextHelper(clone).goto_line(l, c)\n self.clones.append(clone)\n return clone"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef link(self, clone):\n clone.file._path = self.file.path\n clone.file._encoding = self.file.encoding\n clone.file._mimetype = self.file.mimetype\n clone.setDocument(self.document())\n for original_mode, mode in zip(list(self.modes), list(clone.modes)):\n mode.enabled = original_mode.enabled\n mode.clone_settings(original_mode)\n for original_panel, panel in zip(\n list(self.panels), list(clone.panels)):\n panel.enabled = original_panel.isEnabled()\n panel.clone_settings(original_panel)\n if not original_panel.isVisible():\n panel.setVisible(False)\n clone.use_spaces_instead_of_tabs = self.use_spaces_instead_of_tabs\n clone.tab_length = self.tab_length\n clone._save_on_focus_out = self._save_on_focus_out\n clone.show_whitespaces = self.show_whitespaces\n clone.font_name = self.font_name\n clone.font_size = self.font_size\n clone.zoom_level = self.zoom_level\n clone.background = self.background\n clone.foreground = self.foreground\n clone.whitespaces_foreground = self.whitespaces_foreground\n clone.selection_background = self.selection_background\n clone.selection_foreground = self.selection_foreground\n clone.word_separators = self.word_separators\n clone.file.clone_settings(self.file)", "response": "Links the clone with its original."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef close(self, clear=True):\n if self._tooltips_runner:\n self._tooltips_runner.cancel_requests()\n 
self._tooltips_runner = None\n self.decorations.clear()\n self.modes.clear()\n self.panels.clear()\n self.backend.stop()\n Cache().set_cursor_position(\n self.file.path, self.textCursor().position())\n super(CodeEdit, self).close()", "response": "Closes the editor and clears any installed version of the editor."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nshows a tool tip at the specified position.", "response": "def show_tooltip(self, pos, tooltip, _sender_deco=None):\n \"\"\"\n Show a tool tip at the specified position\n\n :param pos: Tooltip position\n :param tooltip: Tooltip text\n\n :param _sender_deco: TextDecoration which is the sender of the show\n tooltip request. (for internal use only).\n \"\"\"\n if _sender_deco is not None and _sender_deco not in self.decorations:\n return\n QtWidgets.QToolTip.showText(pos, tooltip[0: 1024], self)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef setPlainText(self, txt, mime_type, encoding):\n self.file.mimetype = mime_type\n self.file._encoding = encoding\n self._original_text = txt\n self._modified_lines.clear()\n import time\n t = time.time()\n super(CodeEdit, self).setPlainText(txt)\n _logger().log(5, 'setPlainText duration: %fs' % (time.time() - t))\n self.new_text_set.emit()\n self.redoAvailable.emit(False)\n self.undoAvailable.emit(False)", "response": "Sets the plain text for the current locale."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds an action to the editor s context menu.", "response": "def add_action(self, action, sub_menu='Advanced'):\n \"\"\"\n Adds an action to the editor's context menu.\n\n :param action: QAction to add to the context menu.\n :param sub_menu: The name of a sub menu where to put the action.\n 'Advanced' by default. 
If None or empty, the action will be added\n at the root of the submenu.\n \"\"\"\n if sub_menu:\n try:\n mnu = self._sub_menus[sub_menu]\n except KeyError:\n mnu = QtWidgets.QMenu(sub_menu)\n self.add_menu(mnu)\n self._sub_menus[sub_menu] = mnu\n finally:\n mnu.addAction(action)\n else:\n self._actions.append(action)\n action.setShortcutContext(QtCore.Qt.WidgetShortcut)\n self.addAction(action)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef insert_action(self, action, prev_action):\n if isinstance(prev_action, QtWidgets.QAction):\n index = self._actions.index(prev_action)\n else:\n index = prev_action\n action.setShortcutContext(QtCore.Qt.WidgetShortcut)\n self._actions.insert(index, action)", "response": "Inserts an action to the editor s context menu."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_separator(self, sub_menu='Advanced'):\n action = QtWidgets.QAction(self)\n action.setSeparator(True)\n if sub_menu:\n try:\n mnu = self._sub_menus[sub_menu]\n except KeyError:\n pass\n else:\n mnu.addAction(action)\n else:\n self._actions.append(action)\n return action", "response": "Adds a sepqrator to the editor s context menu."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nremoves an action from the context menu.", "response": "def remove_action(self, action, sub_menu='Advanced'):\n \"\"\"\n Removes an action/separator from the editor's context menu.\n\n :param action: Action/seprator to remove.\n :param advanced: True to remove the action from the advanced submenu.\n \"\"\"\n if sub_menu:\n try:\n mnu = self._sub_menus[sub_menu]\n except KeyError:\n pass\n else:\n mnu.removeAction(action)\n else:\n try:\n self._actions.remove(action)\n except ValueError:\n pass\n self.removeAction(action)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 
3 function\ndef add_menu(self, menu):\n self._menus.append(menu)\n self._menus = sorted(list(set(self._menus)), key=lambda x: x.title())\n for action in menu.actions():\n action.setShortcutContext(QtCore.Qt.WidgetShortcut)\n self.addActions(menu.actions())", "response": "Adds a sub - menu to the editor context menu."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef remove_menu(self, menu):\n self._menus.remove(menu)\n for action in menu.actions():\n self.removeAction(action)", "response": "Removes a sub - menu from the context menu."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef goto_line(self):\n helper = TextHelper(self)\n line, result = DlgGotoLine.get_line(\n self, helper.current_line_nbr(), helper.line_count())\n if not result:\n return\n return helper.goto_line(line, move=True)", "response": "Shows the * go to line dialog* and go to the selected line."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nzoom in the editor", "response": "def zoom_in(self, increment=1):\n \"\"\"\n Zooms in the editor (makes the font bigger).\n\n :param increment: zoom level increment. Default is 1.\n \"\"\"\n self.zoom_level += increment\n TextHelper(self).mark_whole_doc_dirty()\n self._reset_stylesheet()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nzooms out the editor.", "response": "def zoom_out(self, decrement=1):\n \"\"\"\n Zooms out the editor (makes the font smaller).\n\n :param decrement: zoom level decrement. Default is 1. 
The value is\n given as an absolute value.\n \"\"\"\n self.zoom_level -= decrement\n # make sure font size remains > 0\n if self.font_size + self.zoom_level <= 0:\n self.zoom_level = -self._font_size + 1\n TextHelper(self).mark_whole_doc_dirty()\n self._reset_stylesheet()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef duplicate_line(self):\n cursor = self.textCursor()\n assert isinstance(cursor, QtGui.QTextCursor)\n has_selection = True\n if not cursor.hasSelection():\n cursor.select(cursor.LineUnderCursor)\n has_selection = False\n line = cursor.selectedText()\n line = '\\n'.join(line.split('\\u2029'))\n end = cursor.selectionEnd()\n cursor.setPosition(end)\n cursor.beginEditBlock()\n cursor.insertText('\\n')\n cursor.insertText(line)\n cursor.endEditBlock()\n if has_selection:\n pos = cursor.position()\n cursor.setPosition(end + 1)\n cursor.setPosition(pos, cursor.KeepAnchor)\n self.setTextCursor(cursor)", "response": "Duplicates the line under the cursor."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncutting the selected text or the whole line if no text was selected.", "response": "def cut(self):\n \"\"\"\n Cuts the selected text or the whole line if no text was selected.\n \"\"\"\n tc = self.textCursor()\n helper = TextHelper(self)\n tc.beginEditBlock()\n no_selection = False\n sText = tc.selection().toPlainText()\n if not helper.current_line_text() and sText.count(\"\\n\") > 1:\n tc.deleteChar()\n else:\n if not self.textCursor().hasSelection():\n no_selection = True\n TextHelper(self).select_whole_line()\n super(CodeEdit, self).cut()\n if no_selection:\n tc.deleteChar()\n tc.endEditBlock()\n self.setTextCursor(tc)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef copy(self):\n if self.select_line_on_copy_empty and not self.textCursor().hasSelection():\n 
TextHelper(self).select_whole_line()\n super(CodeEdit, self).copy()", "response": "Copy the selected text to the clipboard."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\noverriding resize event to resize the editor s panels.", "response": "def resizeEvent(self, e):\n \"\"\"\n Overrides resize event to resize the editor's panels.\n\n :param e: resize event\n \"\"\"\n super(CodeEdit, self).resizeEvent(e)\n self.panels.resize()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef paintEvent(self, e):\n self._update_visible_blocks(e)\n super(CodeEdit, self).paintEvent(e)\n self.painted.emit(e)", "response": "Overrides paintEvent to update the list of visible blocks and emit the painted event."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef keyPressEvent(self, event):\n if self.isReadOnly():\n return\n initial_state = event.isAccepted()\n event.ignore()\n self.key_pressed.emit(event)\n state = event.isAccepted()\n if not event.isAccepted():\n if event.key() == QtCore.Qt.Key_Tab and event.modifiers() == \\\n QtCore.Qt.NoModifier:\n self.indent()\n event.accept()\n elif event.key() == QtCore.Qt.Key_Backtab and \\\n event.modifiers() == QtCore.Qt.NoModifier:\n self.un_indent()\n event.accept()\n elif event.key() == QtCore.Qt.Key_Home and \\\n int(event.modifiers()) & QtCore.Qt.ControlModifier == 0:\n self._do_home_key(\n event, int(event.modifiers()) & QtCore.Qt.ShiftModifier)\n if not event.isAccepted():\n event.setAccepted(initial_state)\n super(CodeEdit, self).keyPressEvent(event)\n new_state = event.isAccepted()\n event.setAccepted(state)\n self.post_key_pressed.emit(event)\n event.setAccepted(new_state)", "response": "Overrides the keyPressEvent to emit the key_pressed signal."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef keyReleaseEvent(self, 
event):\n if self.isReadOnly():\n return\n initial_state = event.isAccepted()\n event.ignore()\n self.key_released.emit(event)\n if not event.isAccepted():\n event.setAccepted(initial_state)\n super(CodeEdit, self).keyReleaseEvent(event)", "response": "Overrides keyReleaseEvent to emit the key_released signal."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\noverrides focusInEvent to emits the focused_in signal.", "response": "def focusInEvent(self, event):\n \"\"\"\n Overrides focusInEvent to emits the focused_in signal\n\n :param event: QFocusEvent\n \"\"\"\n self.focused_in.emit(event)\n super(CodeEdit, self).focusInEvent(event)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\noverriding mousePressEvent to emits mouse_pressed signal which is a list of related objects.", "response": "def mousePressEvent(self, event):\n \"\"\"\n Overrides mousePressEvent to emits mouse_pressed signal\n\n :param event: QMouseEvent\n \"\"\"\n initial_state = event.isAccepted()\n event.ignore()\n self.mouse_pressed.emit(event)\n if event.button() == QtCore.Qt.LeftButton:\n cursor = self.cursorForPosition(event.pos())\n for sel in self.decorations:\n if sel.cursor.blockNumber() == cursor.blockNumber():\n if sel.contains_cursor(cursor):\n sel.signals.clicked.emit(sel)\n if not event.isAccepted():\n event.setAccepted(initial_state)\n super(CodeEdit, self).mousePressEvent(event)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nemit mouse_released signal. 
:param event: QMouseEvent", "response": "def mouseReleaseEvent(self, event):\n \"\"\"\n Emits mouse_released signal.\n\n :param event: QMouseEvent\n \"\"\"\n initial_state = event.isAccepted()\n event.ignore()\n self.mouse_released.emit(event)\n if not event.isAccepted():\n event.setAccepted(initial_state)\n super(CodeEdit, self).mouseReleaseEvent(event)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef wheelEvent(self, event):\n initial_state = event.isAccepted()\n event.ignore()\n self.mouse_wheel_activated.emit(event)\n if not event.isAccepted():\n event.setAccepted(initial_state)\n super(CodeEdit, self).wheelEvent(event)", "response": "Emits the mouse_wheel_activated signal."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\noverriding mouseMovedEvent to display any decoration tooltip and emits the mouse_moved event.", "response": "def mouseMoveEvent(self, event):\n \"\"\"\n Overrides mouseMovedEvent to display any decoration tooltip and emits\n the mouse_moved event.\n\n :param event: QMouseEvent\n \"\"\"\n cursor = self.cursorForPosition(event.pos())\n self._last_mouse_pos = event.pos()\n block_found = False\n for sel in self.decorations:\n if sel.contains_cursor(cursor) and sel.tooltip:\n if (self._prev_tooltip_block_nbr != cursor.blockNumber() or\n not QtWidgets.QToolTip.isVisible()):\n pos = event.pos()\n # add left margin\n pos.setX(pos.x() + self.panels.margin_size())\n # add top margin\n pos.setY(pos.y() + self.panels.margin_size(0))\n self._tooltips_runner.request_job(\n self.show_tooltip,\n self.mapToGlobal(pos), sel.tooltip[0: 1024], sel)\n self._prev_tooltip_block_nbr = cursor.blockNumber()\n block_found = True\n break\n if not block_found and self._prev_tooltip_block_nbr != -1:\n QtWidgets.QToolTip.hideText()\n self._prev_tooltip_block_nbr = -1\n self._tooltips_runner.cancel_requests()\n self.mouse_moved.emit(event)\n super(CodeEdit, 
self).mouseMoveEvent(event)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\noverriding showEvent to update the viewport margins", "response": "def showEvent(self, event):\n \"\"\" Overrides showEvent to update the viewport margins \"\"\"\n super(CodeEdit, self).showEvent(event)\n self.panels.refresh()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_context_menu(self):\n mnu = QtWidgets.QMenu()\n mnu.addActions(self._actions)\n mnu.addSeparator()\n for menu in self._menus:\n mnu.addMenu(menu)\n return mnu", "response": "Gets the editor context menu."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nshow the context menu", "response": "def _show_context_menu(self, point):\n \"\"\" Shows the context menu \"\"\"\n tc = self.textCursor()\n nc = self.cursorForPosition(point)\n if not nc.position() in range(tc.selectionStart(), tc.selectionEnd()):\n self.setTextCursor(nc)\n self._mnu = self.get_context_menu()\n if len(self._mnu.actions()) > 1 and self.show_context_menu:\n self._mnu.popup(self.mapToGlobal(point))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting show white spaces flag", "response": "def _set_whitespaces_flags(self, show):\n \"\"\" Sets show white spaces flag \"\"\"\n doc = self.document()\n options = doc.defaultTextOption()\n if show:\n options.setFlags(options.flags() |\n QtGui.QTextOption.ShowTabsAndSpaces)\n else:\n options.setFlags(\n options.flags() & ~QtGui.QTextOption.ShowTabsAndSpaces)\n doc.setDefaultTextOption(options)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes the context menu action", "response": "def _init_actions(self, create_standard_actions):\n \"\"\" Init context menu action \"\"\"\n menu_advanced = QtWidgets.QMenu(_('Advanced'))\n self.add_menu(menu_advanced)\n self._sub_menus = {\n 'Advanced': 
menu_advanced\n }\n if create_standard_actions:\n # Undo\n action = QtWidgets.QAction(_('Undo'), self)\n action.setShortcut('Ctrl+Z')\n action.setIcon(icons.icon(\n 'edit-undo', ':/pyqode-icons/rc/edit-undo.png', 'fa.undo'))\n action.triggered.connect(self.undo)\n self.undoAvailable.connect(action.setVisible)\n action.setVisible(False)\n self.add_action(action, sub_menu=None)\n self.action_undo = action\n # Redo\n action = QtWidgets.QAction(_('Redo'), self)\n action.setShortcut('Ctrl+Y')\n action.setIcon(icons.icon(\n 'edit-redo', ':/pyqode-icons/rc/edit-redo.png', 'fa.repeat'))\n action.triggered.connect(self.redo)\n self.redoAvailable.connect(action.setVisible)\n action.setVisible(False)\n self.add_action(action, sub_menu=None)\n self.action_redo = action\n # Copy\n action = QtWidgets.QAction(_('Copy'), self)\n action.setShortcut(QtGui.QKeySequence.Copy)\n action.setIcon(icons.icon(\n 'edit-copy', ':/pyqode-icons/rc/edit-copy.png', 'fa.copy'))\n action.triggered.connect(self.copy)\n self.add_action(action, sub_menu=None)\n self.action_copy = action\n # cut\n action = QtWidgets.QAction(_('Cut'), self)\n action.setShortcut(QtGui.QKeySequence.Cut)\n action.setIcon(icons.icon(\n 'edit-cut', ':/pyqode-icons/rc/edit-cut.png', 'fa.cut'))\n action.triggered.connect(self.cut)\n self.add_action(action, sub_menu=None)\n self.action_cut = action\n # paste\n action = QtWidgets.QAction(_('Paste'), self)\n action.setShortcut(QtGui.QKeySequence.Paste)\n action.setIcon(icons.icon(\n 'edit-paste', ':/pyqode-icons/rc/edit-paste.png',\n 'fa.paste'))\n action.triggered.connect(self.paste)\n self.add_action(action, sub_menu=None)\n self.action_paste = action\n # duplicate line\n action = QtWidgets.QAction(_('Duplicate line'), self)\n action.setShortcut('Ctrl+D')\n action.triggered.connect(self.duplicate_line)\n self.add_action(action, sub_menu=None)\n self.action_duplicate_line = action\n # swap line up\n action = QtWidgets.QAction(_('Swap line up'), self)\n 
action.setShortcut(\"Alt++\")\n action.triggered.connect(self.swapLineUp)\n self.add_action(action, sub_menu=None)\n self.action_swap_line_up = action\n # swap line down\n action = QtWidgets.QAction(_('Swap line down'), self)\n action.setShortcut(\"Alt+-\")\n action.triggered.connect(self.swapLineDown)\n self.add_action(action, sub_menu=None)\n self.action_swap_line_down = action\n # select all\n action = QtWidgets.QAction(_('Select all'), self)\n action.setShortcut(QtGui.QKeySequence.SelectAll)\n action.triggered.connect(self.selectAll)\n self.action_select_all = action\n self.add_action(self.action_select_all, sub_menu=None)\n self.add_separator(sub_menu=None)\n if create_standard_actions:\n # indent\n action = QtWidgets.QAction(_('Indent'), self)\n action.setShortcut('Tab')\n action.setIcon(icons.icon(\n 'format-indent-more',\n ':/pyqode-icons/rc/format-indent-more.png', 'fa.indent'))\n action.triggered.connect(self.indent)\n self.add_action(action)\n self.action_indent = action\n # unindent\n action = QtWidgets.QAction(_('Un-indent'), self)\n action.setShortcut('Shift+Tab')\n action.setIcon(icons.icon(\n 'format-indent-less',\n ':/pyqode-icons/rc/format-indent-less.png', 'fa.dedent'))\n action.triggered.connect(self.un_indent)\n self.add_action(action)\n self.action_un_indent = action\n self.add_separator()\n # goto\n action = QtWidgets.QAction(_('Go to line'), self)\n action.setShortcut('Ctrl+G')\n action.setIcon(icons.icon(\n 'go-jump', ':/pyqode-icons/rc/goto-line.png', 'fa.share'))\n action.triggered.connect(self.goto_line)\n self.add_action(action)\n self.action_goto_line = action"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _init_settings(self):\n self._show_whitespaces = False\n self._tab_length = 4\n self._use_spaces_instead_of_tabs = True\n self.setTabStopWidth(self._tab_length *\n self.fontMetrics().width(\" \"))\n self._set_whitespaces_flags(self._show_whitespaces)", 
"response": "Init settings for the current locale"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _update_visible_blocks(self, *args):\n self._visible_blocks[:] = []\n block = self.firstVisibleBlock()\n block_nbr = block.blockNumber()\n top = int(self.blockBoundingGeometry(block).translated(\n self.contentOffset()).top())\n bottom = top + int(self.blockBoundingRect(block).height())\n ebottom_top = 0\n ebottom_bottom = self.height()\n while block.isValid():\n visible = (top >= ebottom_top and bottom <= ebottom_bottom)\n if not visible:\n break\n if block.isVisible():\n self._visible_blocks.append((top, block_nbr, block))\n block = block.next()\n top = bottom\n bottom = top + int(self.blockBoundingRect(block).height())\n block_nbr = block.blockNumber()", "response": "Updates the list of visible blocks in the archive."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadjust dirty flag depending on editor s content", "response": "def _on_text_changed(self):\n \"\"\" Adjust dirty flag depending on editor's content \"\"\"\n if not self._cleaning:\n ln = TextHelper(self).cursor_position()[0]\n self._modified_lines.add(ln)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _reset_stylesheet(self):\n self.setFont(QtGui.QFont(self._font_family,\n self._font_size + self._zoom_level))\n flg_stylesheet = hasattr(self, '_flg_stylesheet')\n if QtWidgets.QApplication.instance().styleSheet() or flg_stylesheet:\n self._flg_stylesheet = True\n # On Window, if the application once had a stylesheet, we must\n # keep on using a stylesheet otherwise strange colors appear\n # see https://github.com/OpenCobolIDE/OpenCobolIDE/issues/65\n # Also happen on plasma 5\n try:\n plasma = os.environ['DESKTOP_SESSION'] == 'plasma'\n except KeyError:\n plasma = False\n if sys.platform == 'win32' or plasma:\n 
self.setStyleSheet('''QPlainTextEdit\n {\n background-color: %s;\n color: %s;\n }\n ''' % (self.background.name(), self.foreground.name()))\n else:\n # on linux/osx we just have to set an empty stylesheet to\n # cancel any previous stylesheet and still keep a correct\n # style for scrollbars\n self.setStyleSheet('')\n else:\n p = self.palette()\n p.setColor(QtGui.QPalette.Base, self.background)\n p.setColor(QtGui.QPalette.Text, self.foreground)\n p.setColor(QtGui.QPalette.Highlight,\n self.selection_background)\n p.setColor(QtGui.QPalette.HighlightedText,\n self.selection_foreground)\n self.setPalette(p)\n self.repaint()", "response": "Resets the current stylesheet to the default one."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _do_home_key(self, event=None, select=False):\n # get nb char to first significative char\n delta = (self.textCursor().positionInBlock() -\n TextHelper(self).line_indent())\n cursor = self.textCursor()\n move = QtGui.QTextCursor.MoveAnchor\n if select:\n move = QtGui.QTextCursor.KeepAnchor\n if delta > 0:\n cursor.movePosition(QtGui.QTextCursor.Left, move, delta)\n else:\n cursor.movePosition(QtGui.QTextCursor.StartOfBlock, move)\n self.setTextCursor(cursor)\n if event:\n event.accept()", "response": "Handles the home key action."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cached(key, timeout=3600):\n\n def decorator(f):\n @wraps(f)\n def wrapped(*args, **kwargs):\n cache = get_cache()\n # Check if key is a function\n if callable(key):\n cache_key = key(*args, **kwargs)\n else:\n cache_key = key\n # Try to get the value from cache\n cached_val = cache.get(cache_key)\n if cached_val is None:\n # Call the original function and cache the result\n cached_val = f(*args, **kwargs)\n cache.set(cache_key, cached_val, timeout)\n return cached_val\n\n return wrapped\n\n return decorator", "response": "A decorator 
that caches the return value of the decorated function with the given key."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _copy_cell_text(self):\n txt = self.currentItem().text()\n QtWidgets.QApplication.clipboard().setText(txt)", "response": "Copies the text of the selected message to the clipboard"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef clear(self):\n QtWidgets.QTableWidget.clear(self)\n self.setRowCount(0)\n self.setColumnCount(4)\n self.setHorizontalHeaderLabels(\n [\"Type\", \"File name\", \"Line\", \"Description\"])", "response": "Clears the tables and the message list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmake icon from icon filename or tuple", "response": "def _make_icon(cls, status):\n \"\"\"\n Make icon from icon filename/tuple (if you want to use a theme)\n \"\"\"\n icon = cls.ICONS[status]\n if isinstance(icon, tuple):\n return QtGui.QIcon.fromTheme(\n icon[0], QtGui.QIcon(icon[1]))\n elif isinstance(icon, str):\n return QtGui.QIcon(icon)\n elif isinstance(icon, QtGui.QIcon):\n return icon\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_message(self, msg):\n row = self.rowCount()\n self.insertRow(row)\n\n # type\n item = QtWidgets.QTableWidgetItem(\n self._make_icon(msg.status), msg.status_string)\n item.setFlags(QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable)\n item.setData(QtCore.Qt.UserRole, msg)\n self.setItem(row, COL_TYPE, item)\n\n # filename\n item = QtWidgets.QTableWidgetItem(\n QtCore.QFileInfo(msg.path).fileName())\n item.setFlags(QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable)\n item.setData(QtCore.Qt.UserRole, msg)\n self.setItem(row, COL_FILE_NAME, item)\n\n # line\n if msg.line < 0:\n item = QtWidgets.QTableWidgetItem(\"-\")\n else:\n item = 
QtWidgets.QTableWidgetItem(str(msg.line + 1))\n item.setFlags(QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable)\n item.setData(QtCore.Qt.UserRole, msg)\n self.setItem(row, COL_LINE_NBR, item)\n\n # desc\n item = QtWidgets.QTableWidgetItem(msg.description)\n item.setFlags(QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable)\n item.setData(QtCore.Qt.UserRole, msg)\n self.setItem(row, COL_MSG, item)", "response": "Adds a checker message to the table."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _on_item_activated(self, item):\n msg = item.data(QtCore.Qt.UserRole)\n self.msg_activated.emit(msg)", "response": "Emits the message activated signal"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef showDetails(self):\n msg = self.currentItem().data(QtCore.Qt.UserRole)\n desc = msg.description\n desc = desc.replace('\\r\\n', '\\n').replace('\\r', '\\n')\n desc = desc.replace('\\n', '
')\n QtWidgets.QMessageBox.information(\n self, _('Message details'),\n _(\"\"\"

Description:
%s

\n

File:
%s

\n

Line:
%d

\n \"\"\") % (desc, msg.path, msg.line + 1, ))", "response": "Shows the error details."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_line(cls, parent, current_line, line_count):\n dlg = DlgGotoLine(parent, current_line + 1, line_count)\n if dlg.exec_() == dlg.Accepted:\n return dlg.spinBox.value() - 1, True\n return current_line, False", "response": "Gets user selected line."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncloses the current panel.", "response": "def close_panel(self):\n \"\"\"\n Closes the panel\n \"\"\"\n self.hide()\n self.lineEditReplace.clear()\n self.lineEditSearch.clear()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrequesting a search operation.", "response": "def request_search(self, txt=None):\n \"\"\"\n Requests a search operation.\n\n :param txt: The text to replace. If None, the content of lineEditSearch\n is used instead.\n \"\"\"\n if self.checkBoxRegex.isChecked():\n try:\n re.compile(self.lineEditSearch.text(), re.DOTALL)\n except sre_constants.error as e:\n self._show_error(e)\n return\n else:\n self._show_error(None)\n\n if txt is None or isinstance(txt, int):\n txt = self.lineEditSearch.text()\n if txt:\n self.job_runner.request_job(\n self._exec_search, txt, self._search_flags())\n else:\n self.job_runner.cancel_requests()\n self._clear_occurrences()\n self._on_search_finished()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nchange the base color of a widget.", "response": "def _set_widget_background_color(widget, color):\n \"\"\"\n Changes the base color of a widget (background).\n :param widget: widget to modify\n :param color: the color to apply\n \"\"\"\n pal = widget.palette()\n pal.setColor(pal.Base, color)\n widget.setPalette(pal)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nselects the next occurrence in the 
list.", "response": "def select_next(self):\n \"\"\"\n Selects the next occurrence.\n\n :return: True in case of success, false if no occurrence could be\n selected.\n \"\"\"\n current_occurence = self._current_occurrence()\n occurrences = self.get_occurences()\n if not occurrences:\n return\n current = self._occurrences[current_occurence]\n cursor_pos = self.editor.textCursor().position()\n if cursor_pos not in range(current[0], current[1] + 1) or \\\n current_occurence == -1:\n # search first occurrence that occurs after the cursor position\n current_occurence = 0\n for i, (start, end) in enumerate(self._occurrences):\n if end > cursor_pos:\n current_occurence = i\n break\n else:\n if (current_occurence == -1 or\n current_occurence >= len(occurrences) - 1):\n current_occurence = 0\n else:\n current_occurence += 1\n self._set_current_occurrence(current_occurence)\n try:\n cursor = self.editor.textCursor()\n cursor.setPosition(occurrences[current_occurence][0])\n cursor.setPosition(occurrences[current_occurence][1],\n cursor.KeepAnchor)\n self.editor.setTextCursor(cursor)\n return True\n except IndexError:\n return False"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef replace(self, text=None):\n if text is None or isinstance(text, bool):\n text = self.lineEditReplace.text()\n current_occurences = self._current_occurrence()\n occurrences = self.get_occurences()\n if current_occurences == -1:\n self.select_next()\n current_occurences = self._current_occurrence()\n try:\n # prevent search request due to editor textChanged\n try:\n self.editor.textChanged.disconnect(self.request_search)\n except (RuntimeError, TypeError):\n # already disconnected\n pass\n occ = occurrences[current_occurences]\n cursor = self.editor.textCursor()\n cursor.setPosition(occ[0])\n cursor.setPosition(occ[1], cursor.KeepAnchor)\n len_to_replace = len(cursor.selectedText())\n len_replacement = len(text)\n offset = 
len_replacement - len_to_replace\n cursor.insertText(text)\n self.editor.setTextCursor(cursor)\n self._remove_occurrence(current_occurences, offset)\n current_occurences -= 1\n self._set_current_occurrence(current_occurences)\n self.select_next()\n self.cpt_occurences = len(self.get_occurences())\n self._update_label_matches()\n self._update_buttons()\n return True\n except IndexError:\n return False\n finally:\n self.editor.textChanged.connect(self.request_search)", "response": "Replaces the selected occurrence with the given text."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreplace all occurrences of the lineEdit with the given text.", "response": "def replace_all(self, text=None):\n \"\"\"\n Replaces all occurrences in the editor's document.\n\n :param text: The replacement text. If None, the content of the lineEdit\n replace will be used instead\n \"\"\"\n cursor = self.editor.textCursor()\n cursor.beginEditBlock()\n remains = self.replace(text=text)\n while remains:\n remains = self.replace(text=text)\n cursor.endEditBlock()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _search_flags(self):\n return (self.checkBoxRegex.isChecked(),\n self.checkBoxCase.isChecked(),\n self.checkBoxWholeWords.isChecked(),\n self.checkBoxInSelection.isChecked())", "response": "Returns the user search flags."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _create_decoration(self, selection_start, selection_end):\n deco = TextDecoration(self.editor.document(), selection_start,\n selection_end)\n deco.set_background(QtGui.QBrush(self.background))\n deco.set_outline(self._outline)\n deco.set_foreground(QtCore.Qt.black)\n deco.draw_order = 1\n return deco", "response": "Creates the text occurences decoration"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsends the request to the 
backend.", "response": "def _send_request(self):\n \"\"\"\n Sends the request to the backend.\n \"\"\"\n if isinstance(self._worker, str):\n classname = self._worker\n else:\n classname = '%s.%s' % (self._worker.__module__,\n self._worker.__name__)\n self.request_id = str(uuid.uuid4())\n self.send({'request_id': self.request_id, 'worker': classname,\n 'data': self._args})"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef send(self, obj, encoding='utf-8'):\n comm('sending request: %r', obj)\n msg = json.dumps(obj)\n msg = msg.encode(encoding)\n header = struct.pack('=I', len(msg))\n self.write(header)\n self.write(msg)", "response": "Send a python object to the backend."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _connect(self):\n if self is None:\n return\n comm('connecting to 127.0.0.1:%d', self._port)\n address = QtNetwork.QHostAddress('127.0.0.1')\n self.connectToHost(address, self._port)\n if sys.platform == 'darwin':\n self.waitForConnected()", "response": "Connects to the backend socket"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _read_payload(self):\n comm('reading payload data')\n comm('remaining bytes to read: %d', self._to_read)\n data_read = self.read(self._to_read)\n nb_bytes_read = len(data_read)\n comm('%d bytes read', nb_bytes_read)\n self._data_buf += data_read\n self._to_read -= nb_bytes_read\n if self._to_read <= 0:\n try:\n data = self._data_buf.decode('utf-8')\n except AttributeError:\n data = bytes(self._data_buf.data()).decode('utf-8')\n comm('payload read: %r', data)\n comm('payload length: %r', len(self._data_buf))\n comm('decoding payload as json object')\n obj = json.loads(data)\n comm('response received: %r', obj)\n try:\n results = obj['results']\n except (KeyError, TypeError):\n results = None\n # possible callback\n if self._callback and 
self._callback():\n self._callback()(results)\n self._header_complete = False\n self._data_buf = bytes()\n self.finished.emit(self)", "response": "Reads the payload from the file and calls the callback function if it has been set."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads bytes when ready read", "response": "def _on_ready_read(self):\n \"\"\" Read bytes when ready read \"\"\"\n while self.bytesAvailable():\n if not self._header_complete:\n self._read_header()\n else:\n self._read_payload()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncalling when the backend process is started.", "response": "def _on_process_started(self):\n \"\"\" Logs process started \"\"\"\n comm('backend process started')\n if self is None:\n return\n self.starting = False\n self.running = True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nlogging process output ready", "response": "def _on_process_stderr_ready(self):\n \"\"\" Logs process output (stderr) \"\"\"\n try:\n o = self.readAllStandardError()\n except (TypeError, RuntimeError):\n # widget already deleted\n return\n try:\n output = bytes(o).decode(self._encoding)\n except TypeError:\n output = bytes(o.data()).decode(self._encoding)\n for line in output.splitlines():\n self._srv_logger.error(line)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngives a dictionary of RegexLexer token dictionary tokens replace all patterns that match the token specified in new_pattern with new_pattern.", "response": "def replace_pattern(tokens, new_pattern):\n \"\"\" Given a RegexLexer token dictionary 'tokens', replace all patterns that\n match the token specified in 'new_pattern' with 'new_pattern'.\n \"\"\"\n for state in tokens.values():\n for index, pattern in enumerate(state):\n if isinstance(pattern, tuple) and pattern[1] == new_pattern[1]:\n state[index] = new_pattern"} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 function for\ncalling when the package is installed.", "response": "def on_install(self, editor):\n \"\"\"\n :type editor: pyqode.code.api.CodeEdit\n \"\"\"\n self._clear_caches()\n self._update_style()\n super(PygmentsSH, self).on_install(editor)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_mime_type(self, mime_type):\n try:\n self.set_lexer_from_mime_type(mime_type)\n except ClassNotFound:\n _logger().exception('failed to get lexer from mimetype')\n self._lexer = TextLexer()\n return False\n except ImportError:\n # import error while loading some pygments plugins, the editor\n # should not crash\n _logger().warning('failed to get lexer from mimetype (%s)' %\n mime_type)\n self._lexer = TextLexer()\n return False\n else:\n return True", "response": "Set the highlighter lexer based on a mime type."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nchange the lexer based on the filename or extension.", "response": "def set_lexer_from_filename(self, filename):\n \"\"\"\n Change the lexer based on the filename (actually only the extension is\n needed)\n\n :param filename: Filename or extension\n \"\"\"\n self._lexer = None\n if filename.endswith(\"~\"):\n filename = filename[0:len(filename) - 1]\n try:\n self._lexer = get_lexer_for_filename(filename)\n except (ClassNotFound, ImportError):\n print('class not found for url', filename)\n try:\n m = mimetypes.guess_type(filename)\n print(m)\n self._lexer = get_lexer_for_mimetype(m[0])\n except (ClassNotFound, IndexError, ImportError):\n self._lexer = get_lexer_for_mimetype('text/plain')\n if self._lexer is None:\n _logger().warning('failed to get lexer from filename: %s, using '\n 'plain text instead...', filename)\n self._lexer = TextLexer()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the pygments lexer from mime type.", 
"response": "def set_lexer_from_mime_type(self, mime, **options):\n \"\"\"\n Sets the pygments lexer from mime type.\n\n :param mime: mime type\n :param options: optional addtional options.\n \"\"\"\n self._lexer = get_lexer_for_mimetype(mime, **options)\n _logger().debug('lexer for mimetype (%s): %r', mime, self._lexer)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nhighlight the block using a pygments lexer.", "response": "def highlight_block(self, text, block):\n \"\"\"\n Highlights the block using a pygments lexer.\n\n :param text: text of the block to highlith\n :param block: block to highlight\n \"\"\"\n if self.color_scheme.name != self._pygments_style:\n self._pygments_style = self.color_scheme.name\n self._update_style()\n original_text = text\n if self.editor and self._lexer and self.enabled:\n if block.blockNumber():\n prev_data = self._prev_block.userData()\n if prev_data:\n if hasattr(prev_data, \"syntax_stack\"):\n self._lexer._saved_state_stack = prev_data.syntax_stack\n elif hasattr(self._lexer, '_saved_state_stack'):\n del self._lexer._saved_state_stack\n\n # Lex the text using Pygments\n index = 0\n usd = block.userData()\n if usd is None:\n usd = TextBlockUserData()\n block.setUserData(usd)\n tokens = list(self._lexer.get_tokens(text))\n for token, text in tokens:\n length = len(text)\n fmt = self._get_format(token)\n if token in [Token.Literal.String, Token.Literal.String.Doc,\n Token.Comment]:\n fmt.setObjectType(fmt.UserObject)\n self.setFormat(index, length, fmt)\n index += length\n\n if hasattr(self._lexer, '_saved_state_stack'):\n setattr(usd, \"syntax_stack\", self._lexer._saved_state_stack)\n # Clean up for the next go-round.\n del self._lexer._saved_state_stack\n\n # spaces\n text = original_text\n expression = QRegExp(r'\\s+')\n index = expression.indexIn(text, 0)\n while index >= 0:\n index = expression.pos(0)\n length = len(expression.cap(0))\n self.setFormat(index, length, 
self._get_format(Whitespace))\n index = expression.indexIn(text, index + length)\n\n self._prev_block = block"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating the style of the object based on the current style.", "response": "def _update_style(self):\n \"\"\" Sets the style to the specified Pygments style.\n \"\"\"\n try:\n self._style = get_style_by_name(self._pygments_style)\n except ClassNotFound:\n # unknown style, also happen with plugins style when used from a\n # frozen app.\n if self._pygments_style == 'qt':\n from pyqode.core.styles import QtStyle\n self._style = QtStyle\n elif self._pygments_style == 'darcula':\n from pyqode.core.styles import DarculaStyle\n self._style = DarculaStyle\n else:\n self._style = get_style_by_name('default')\n self._pygments_style = 'default'\n self._clear_caches()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_format(self, token):\n if token == Whitespace:\n return self.editor.whitespaces_foreground\n\n if token in self._formats:\n return self._formats[token]\n\n result = self._get_format_from_style(token, self._style)\n\n self._formats[token] = result\n return result", "response": "Returns a QTextCharFormat for the given token."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef goto_line(self, line, column=0, move=True):\n text_cursor = self.move_cursor_to(line)\n if column:\n text_cursor.movePosition(text_cursor.Right, text_cursor.MoveAnchor,\n column)\n if move:\n block = text_cursor.block()\n # unfold parent fold trigger if the block is collapsed\n try:\n folding_panel = self._editor.panels.get('FoldingPanel')\n except KeyError:\n pass\n else:\n from pyqode.core.api.folding import FoldScope\n if not block.isVisible():\n block = FoldScope.find_parent_scope(block)\n if TextBlockHelper.is_collapsed(block):\n folding_panel.toggle_fold_trigger(block)\n 
self._editor.setTextCursor(text_cursor)\n return text_cursor", "response": "Moves the text cursor to the specified line and column."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nselecting all lines between start and end line numbers.", "response": "def select_lines(self, start=0, end=-1, apply_selection=True):\n \"\"\"\n Selects entire lines between start and end line numbers.\n\n This functions apply the selection and returns the text cursor that\n contains the selection.\n\n Optionally it is possible to prevent the selection from being applied\n on the code editor widget by setting ``apply_selection`` to False.\n\n :param start: Start line number (0 based)\n :param end: End line number (0 based). Use -1 to select up to the\n end of the document\n :param apply_selection: True to apply the selection before returning\n the QTextCursor.\n :returns: A QTextCursor that holds the requested selection\n \"\"\"\n editor = self._editor\n if end == -1:\n end = self.line_count() - 1\n if start < 0:\n start = 0\n text_cursor = self.move_cursor_to(start)\n if end > start: # Going down\n text_cursor.movePosition(text_cursor.Down,\n text_cursor.KeepAnchor, end - start)\n text_cursor.movePosition(text_cursor.EndOfLine,\n text_cursor.KeepAnchor)\n elif end < start: # going up\n # don't miss end of line !\n text_cursor.movePosition(text_cursor.EndOfLine,\n text_cursor.MoveAnchor)\n text_cursor.movePosition(text_cursor.Up,\n text_cursor.KeepAnchor, start - end)\n text_cursor.movePosition(text_cursor.StartOfLine,\n text_cursor.KeepAnchor)\n else:\n text_cursor.movePosition(text_cursor.EndOfLine,\n text_cursor.KeepAnchor)\n if apply_selection:\n editor.setTextCursor(text_cursor)\n return text_cursor"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef line_indent(self, line_nbr=None):\n if line_nbr is None:\n line_nbr = self.current_line_nbr()\n elif isinstance(line_nbr, QtGui.QTextBlock):\n 
line_nbr = line_nbr.blockNumber()\n line = self.line_text(line_nbr)\n indentation = len(line) - len(line.lstrip())\n return indentation", "response": "Returns the indentation level of the specified line."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the word that is on the right of the text cursor.", "response": "def get_right_word(self, cursor=None):\n \"\"\"\n Gets the character on the right of the text cursor.\n\n :param cursor: QTextCursor where the search will start.\n\n :return: The word that is on the right of the text cursor.\n \"\"\"\n if cursor is None:\n cursor = self._editor.textCursor()\n cursor.movePosition(QtGui.QTextCursor.WordRight,\n QtGui.QTextCursor.KeepAnchor)\n return cursor.selectedText().strip()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsearches a text in a text document.", "response": "def search_text(self, text_cursor, search_txt, search_flags):\n \"\"\"\n Searches a text in a text document.\n\n :param text_cursor: Current text cursor\n :param search_txt: Text to search\n :param search_flags: QTextDocument.FindFlags\n :returns: the list of occurrences, the current occurrence index\n :rtype: tuple([], int)\n\n \"\"\"\n def compare_cursors(cursor_a, cursor_b):\n \"\"\"\n Compares two QTextCursor\n\n :param cursor_a: cursor a\n :param cursor_b: cursor b\n\n :returns; True if both cursor are identical (same position, same\n selection)\n \"\"\"\n return (cursor_b.selectionStart() >= cursor_a.selectionStart() and\n cursor_b.selectionEnd() <= cursor_a.selectionEnd())\n\n text_document = self._editor.document()\n occurrences = []\n index = -1\n cursor = text_document.find(search_txt, 0, search_flags)\n original_cursor = text_cursor\n while not cursor.isNull():\n if compare_cursors(cursor, original_cursor):\n index = len(occurrences)\n occurrences.append((cursor.selectionStart(),\n cursor.selectionEnd()))\n cursor.setPosition(cursor.position() + 1)\n 
cursor = text_document.find(search_txt, cursor, search_flags)\n return occurrences, index"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_comment_or_string(self, cursor_or_block, formats=None):\n if formats is None:\n formats = [\"comment\", \"string\", \"docstring\"]\n layout = None\n pos = 0\n if isinstance(cursor_or_block, QtGui.QTextBlock):\n pos = len(cursor_or_block.text()) - 1\n layout = cursor_or_block.layout()\n elif isinstance(cursor_or_block, QtGui.QTextCursor):\n b = cursor_or_block.block()\n pos = cursor_or_block.position() - b.position()\n layout = b.layout()\n if layout is not None:\n additional_formats = layout.additionalFormats()\n sh = self._editor.syntax_highlighter\n if sh:\n ref_formats = sh.color_scheme.formats\n for r in additional_formats:\n if r.start <= pos < (r.start + r.length):\n for fmt_type in formats:\n is_user_obj = (r.format.objectType() ==\n r.format.UserObject)\n if (ref_formats[fmt_type] == r.format and\n is_user_obj):\n return True\n return False", "response": "Checks if a block or cursor is a string or a comment."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef match_select(self, ignored_symbols=None):\n def filter_matching(ignored_symbols, matching):\n \"\"\"\n Removes any ignored symbol from the match dict.\n \"\"\"\n if ignored_symbols is not None:\n for symbol in matching.keys():\n if symbol in ignored_symbols:\n matching.pop(symbol)\n return matching\n\n def find_opening_symbol(cursor, matching):\n \"\"\"\n Find the position ot the opening symbol\n :param cursor: Current text cursor\n :param matching: symbol matches map\n \"\"\"\n start_pos = None\n opening_char = None\n closed = {k: 0 for k in matching.values()\n if k not in ['\"', \"'\"]}\n # go left\n stop = False\n while not stop and not cursor.atStart():\n cursor.clearSelection()\n cursor.movePosition(cursor.Left, cursor.KeepAnchor)\n char = 
cursor.selectedText()\n if char in closed.keys():\n closed[char] += 1\n elif char in matching.keys():\n opposite = matching[char]\n if opposite in closed.keys() and closed[opposite]:\n closed[opposite] -= 1\n continue\n else:\n # found opening quote or parenthesis\n start_pos = cursor.position() + 1\n stop = True\n opening_char = char\n return opening_char, start_pos\n\n def find_closing_symbol(cursor, matching, opening_char, original_pos):\n \"\"\"\n Finds the position of the closing symbol\n\n :param cursor: current text cursor\n :param matching: symbold matching dict\n :param opening_char: the opening character\n :param original_pos: position of the opening character.\n \"\"\"\n end_pos = None\n cursor.setPosition(original_pos)\n rev_matching = {v: k for k, v in matching.items()}\n opened = {k: 0 for k in rev_matching.values()\n if k not in ['\"', \"'\"]}\n stop = False\n while not stop and not cursor.atEnd():\n cursor.clearSelection()\n cursor.movePosition(cursor.Right, cursor.KeepAnchor)\n char = cursor.selectedText()\n if char in opened.keys():\n opened[char] += 1\n elif char in rev_matching.keys():\n opposite = rev_matching[char]\n if opposite in opened.keys() and opened[opposite]:\n opened[opposite] -= 1\n continue\n elif matching[opening_char] == char:\n # found opening quote or parenthesis\n end_pos = cursor.position() - 1\n stop = True\n return end_pos\n\n matching = {'(': ')', '{': '}', '[': ']', '\"': '\"', \"'\": \"'\"}\n filter_matching(ignored_symbols, matching)\n cursor = self._editor.textCursor()\n original_pos = cursor.position()\n end_pos = None\n opening_char, start_pos = find_opening_symbol(cursor, matching)\n if opening_char:\n end_pos = find_closing_symbol(\n cursor, matching, opening_char, original_pos)\n if start_pos and end_pos:\n cursor.setPosition(start_pos)\n cursor.movePosition(cursor.Right, cursor.KeepAnchor,\n end_pos - start_pos)\n self._editor.setTextCursor(cursor)\n return True\n else:\n return False", "response": "Performs 
matched selection on the current text."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef main():\n parser = argparse.ArgumentParser()\n subparsers = parser.add_subparsers()\n\n appengine.register_commands(subparsers)\n requirements.register_commands(subparsers)\n pylint.register_commands(subparsers)\n\n args = parser.parse_args()\n args.func(args)", "response": "Entrypoint for the console script gcp - devrel - py - tools."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrunning the main loop.", "response": "async def run(websession):\n \"\"\"Run.\"\"\"\n try:\n client = Client('17015', websession)\n print('Client instantiated for ZIP \"{0}\"'.format(client.zip_code))\n\n print()\n print('CURRENT ALLERGENS')\n print(await client.allergens.current())\n\n print()\n print('EXTENDED ALLERGENS')\n print(await client.allergens.extended())\n\n print()\n print('HISTORIC ALLERGENS')\n print(await client.allergens.historic())\n\n print()\n print('ALLERGY OUTLOOK')\n print(await client.allergens.outlook())\n\n print()\n print('EXTENDED DISEASE INFO')\n print(await client.disease.extended())\n\n print()\n print('CURRENT ASTHMA INFO')\n print(await client.asthma.current())\n\n print()\n print('EXTENDED ASTHMA INFO')\n print(await client.asthma.extended())\n\n print()\n print('HISTORIC ASTHMA INFO')\n print(await client.asthma.historic())\n except PollenComError as err:\n print(err)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef install_postgres(user=None, dbname=None, password=None):\r\n execute(pydiploy.django.install_postgres_server,\r\n user=user, dbname=dbname, password=password)", "response": "Install Postgres on remote"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef field_to_dict(fields):\n field_dict = {}\n for field in fields:\n d_tmp = field_dict\n for part in 
field.split(LOOKUP_SEP)[:-1]:\n d_tmp = d_tmp.setdefault(part, {})\n d_tmp = d_tmp.setdefault(\n field.split(LOOKUP_SEP)[-1],\n deepcopy(EMPTY_DICT)\n ).update(deepcopy(EMPTY_DICT))\n return field_dict", "response": "Convert a list of fields into a dictionary that can be used to build a tree structure."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the prefix name of a field", "response": "def alias_field(model, field):\n \"\"\"\n Return the prefix name of a field\n \"\"\"\n for part in field.split(LOOKUP_SEP)[:-1]:\n model = associate_model(model,part)\n return model.__name__ + \"-\" + field.split(LOOKUP_SEP)[-1]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the model associate to the ForeignKey or ManyToMany relation", "response": "def associate_model(model, field):\n \"\"\"\n Return the model associate to the ForeignKey or ManyToMany\n relation\n \"\"\"\n class_field = model._meta.get_field(field)\n if hasattr(class_field, \"field\"):\n return class_field.field.related.related_model\n else:\n return class_field.related_model"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_formfield(model, field):\n class_field = model._meta.get_field(field)\n\n if hasattr(class_field, \"field\"):\n formfield = class_field.field.formfield()\n else:\n formfield = class_field.formfield()\n \n # Otherwise the formfield contain the reverse relation\n if isinstance(formfield, ChoiceField):\n formfield.choices = class_field.get_choices()\n \n return formfield", "response": "Returns the formfied associate to the field of the model\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_q_object(self):\n q_object = Q()\n for field in self.searchable_fields:\n value = self.request.GET.getlist(alias_field(self.model, field), None)\n mini_q = Q()\n for val in 
value:\n attr = \"{0}{1}\".format(field, self.specifications.get(field, ''))\n if val:\n dic_tmp = {\n attr: val\n }\n mini_q |= Q(**dic_tmp)\n q_object &= mini_q\n return q_object", "response": "Build Q object to filter the queryset by the relevant fields"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning list of form based on model", "response": "def get_search_form(self):\n \"\"\"\n Return list of form based on model\n \"\"\"\n magic_dico_form = self.get_dict_for_forms()\n forms = []\n initial = list(self.request.GET.lists())\n\n for key, value in magic_dico_form.items():\n form = Form()\n model = value[\"model\"]\n if not value[\"fields\"]:\n continue\n for field in value[\"fields\"]:\n formfield = get_formfield(model, field)\n formfield.widget.attrs.update({'class': self.css_class})\n form.fields.update({\n field : formfield\n })\n\n initial_tmp = {}\n for k, vals in initial:\n tmp_list = k.split(model.__name__ + \"-\")\n if len(tmp_list) == 2:\n list_val_tmp = vals[0] if len(vals) == 1 else [val for val in vals if val != '']\n initial_tmp[tmp_list[-1]] = list_val_tmp\n\n form.initial = initial_tmp\n form.prefix = model.__name__\n forms.append(form)\n return sorted(forms, key=lambda form: form.prefix)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_dict_for_forms(self):\n magic_dico = field_to_dict(self.searchable_fields)\n dico = {}\n\n def dict_from_fields_r(mini_dict, dico, model):\n \"\"\"\n Create the dico recursively from the magic_dico\n \"\"\"\n\n dico[str(model)] = {}\n dico[str(model)][\"model\"] = model\n dico[str(model)][\"fields\"] = []\n\n for key, value in mini_dict.items():\n if isinstance(value, bool):\n continue\n if value == EMPTY_DICT:\n dico[str(model)][\"fields\"].append(key)\n elif EMPTY_DICT.items() <= value.items():\n dico[str(model)][\"fields\"].append(key)\n model_tmp = associate_model(model, key)\n 
dict_from_fields_r(value, dico, model_tmp)\n else:\n model_tmp = associate_model(model, key)\n dict_from_fields_r(value, dico, model_tmp)\n\n if magic_dico:\n dict_from_fields_r(magic_dico, dico, self.model)\n return dico", "response": "Build a dictionnary where searchable_fields are the fields of the related objects."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef raise_on_invalid_zip(func: Callable) -> Callable:\n async def decorator(*args: list, **kwargs: dict) -> dict:\n \"\"\"Decorate.\"\"\"\n data = await func(*args, **kwargs)\n if not data['Location']['periods']:\n raise InvalidZipError('No data returned for ZIP code')\n return data\n\n return decorator", "response": "Decorator that raises an exception when there s no data for a bad ZIP code."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse(self):\n d = {\n 'pathname': None,\n 'protocols': self._get_protocols(),\n 'protocol': 'ssh',\n 'href': self._url,\n 'resource': None,\n 'user': None,\n 'port': None,\n 'name': None,\n 'owner': None,\n }\n for regex in POSSIBLE_REGEXES:\n match = regex.search(self._url)\n if match:\n d.update(match.groupdict())\n break\n else:\n msg = \"Invalid URL '{}'\".format(self._url)\n raise ParserError(msg)\n\n return Parsed(**d)", "response": "Parses a GIT URL and returns a Parsed object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\nasync def _request(\n self,\n method: str,\n url: str,\n *,\n headers: dict = None,\n params: dict = None,\n json: dict = None) -> dict:\n \"\"\"Make a request against AirVisual.\"\"\"\n full_url = '{0}/{1}'.format(url, self.zip_code)\n pieces = urlparse(url)\n\n if not headers:\n headers = {}\n headers.update({\n 'Content-Type': 'application/json',\n 'Referer': '{0}://{1}'.format(pieces.scheme, pieces.netloc),\n 'User-Agent': API_USER_AGENT\n })\n\n async with 
self._websession.request(method, full_url, headers=headers,\n params=params, json=json) as resp:\n try:\n resp.raise_for_status()\n data = await resp.json(content_type=None)\n return data\n except client_exceptions.ClientError as err:\n raise RequestError(\n 'Error requesting data from {0}: {1}'.format(url, err))", "response": "Make a request against AirVisual."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef setup_errors(app, error_template=\"error.html\"):\n def error_handler(error):\n if isinstance(error, HTTPException):\n description = error.get_description(request.environ)\n code = error.code\n name = error.name\n else:\n description = error\n code = 500\n name = \"Internal Server Error\"\n return render_template(error_template,\n error=error,\n code=code,\n name=Markup(name),\n description=Markup(description)), code\n\n for exception in default_exceptions:\n app.register_error_handler(exception, error_handler)", "response": "Add a handler for each of the available HTTP error responses."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef domain_search(self, domain=None, company=None, limit=None, offset=None,\n emails_type=None, raw=False):\n \"\"\"\n Return all the email addresses found for a given domain.\n\n :param domain: The domain on which to search for emails. Must be\n defined if company is not.\n\n :param company: The name of the company on which to search for emails.\n Must be defined if domain is not.\n\n :param limit: The maximum number of emails to give back. Default is 10.\n\n :param offset: The number of emails to skip. Default is 0.\n\n :param emails_type: The type of emails to give back. 
Can be one of\n 'personal' or 'generic'.\n\n :param raw: Gives back the entire response instead of just the 'data'.\n\n :return: Full payload of the query as a dict, with email addresses\n found.\n \"\"\"\n if not domain and not company:\n raise MissingCompanyError(\n 'You must supply at least a domain name or a company name'\n )\n\n if domain:\n params = {'domain': domain, 'api_key': self.api_key}\n elif company:\n params = {'company': company, 'api_key': self.api_key}\n\n if limit:\n params['limit'] = limit\n\n if offset:\n params['offset'] = offset\n\n if emails_type:\n params['type'] = emails_type\n\n endpoint = self.base_endpoint.format('domain-search')\n\n return self._query_hunter(endpoint, params, raw=raw)", "response": "Search for email addresses for a given domain."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding the email address of a person given its name and company's domain. :param domain: The domain of the company where the person works. Must be defined if company is not. :param company: The name of the company where the person works. Must be defined if domain is not. :param first_name: The first name of the person. Must be defined if full_name is not. :param last_name: The last name of the person. Must be defined if full_name is not. :param full_name: The full name of the person. Must be defined if first_name AND last_name are not. :param raw: Gives back the entire response instead of just email and score. :return: email and score as a tuple.", "response": "def email_finder(self, domain=None, company=None, first_name=None,\n last_name=None, full_name=None, raw=False):\n \"\"\"\n Find the email address of a person given its name and company's domain.\n\n :param domain: The domain of the company where the person works. Must\n be defined if company is not.\n\n :param company: The name of the company where the person works. Must\n be defined if domain is not.\n\n :param first_name: The first name of the person. 
Must be defined if\n full_name is not.\n\n :param last_name: The last name of the person. Must be defined if\n full_name is not.\n\n :param full_name: The full name of the person. Must be defined if\n first_name AND last_name are not.\n\n :param raw: Gives back the entire response instead of just email and score.\n\n :return: email and score as a tuple.\n \"\"\"\n params = self.base_params\n\n if not domain and not company:\n raise MissingCompanyError(\n 'You must supply at least a domain name or a company name'\n )\n\n if domain:\n params['domain'] = domain\n elif company:\n params['company'] = company\n\n if not(first_name and last_name) and not full_name:\n raise MissingNameError(\n 'You must supply a first name AND a last name OR a full name'\n )\n\n if first_name and last_name:\n params['first_name'] = first_name\n params['last_name'] = last_name\n elif full_name:\n params['full_name'] = full_name\n\n endpoint = self.base_endpoint.format('email-finder')\n\n res = self._query_hunter(endpoint, params, raw=raw)\n if raw:\n return res\n\n email = res['email']\n score = res['score']\n\n return email, score"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nverifies the deliverability of a given email adress.", "response": "def email_verifier(self, email, raw=False):\n \"\"\"\n Verify the deliverability of a given email adress.abs\n\n :param email: The email adress to check.\n\n :param raw: Gives back the entire response instead of just the 'data'.\n\n :return: Full payload of the query as a dict.\n \"\"\"\n params = {'email': email, 'api_key': self.api_key}\n\n endpoint = self.base_endpoint.format('email-verifier')\n\n return self._query_hunter(endpoint, params, raw=raw)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngive back the number of email adresses Hunter has for this domain/company. :param domain: The domain of the company where the person works. Must be defined if company is not. 
If both 'domain' and 'company' are given, the 'domain' will be used. :param company: The name of the company where the person works. Must be defined if domain is not. :param raw: Gives back the entire response instead of just the 'data'. :return: Full payload of the query as a dict.", "response": "def email_count(self, domain=None, company=None, raw=False):\n \"\"\"\n Give back the number of email addresses Hunter has for this domain/company.\n\n :param domain: The domain of the company where the person works. Must\n be defined if company is not. If both 'domain' and 'company' are given,\n the 'domain' will be used.\n\n :param company: The name of the company where the person works. Must\n be defined if domain is not.\n\n :param raw: Gives back the entire response instead of just the 'data'.\n\n :return: Full payload of the query as a dict.\n \"\"\"\n params = self.base_params\n\n if not domain and not company:\n raise MissingCompanyError(\n 'You must supply at least a domain name or a company name'\n )\n\n if domain:\n params['domain'] = domain\n elif company:\n params['company'] = company\n\n endpoint = self.base_endpoint.format('email-count')\n\n return self._query_hunter(endpoint, params, raw=raw)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the account information for the current api_key.", "response": "def account_information(self, raw=False):\n \"\"\"\n Gives the information about the account associated with the api_key.\n\n :param raw: Gives back the entire response instead of just the 'data'.\n\n :return: Full payload of the query as a dict.\n \"\"\"\n params = self.base_params\n\n endpoint = self.base_endpoint.format('account')\n\n res = self._query_hunter(endpoint, params, raw=raw)\n if raw:\n return res\n\n res['calls']['left'] = res['calls']['available'] - res['calls']['used']\n\n return res"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a list of all leads in a lead 
list.", "response": "def get_leads(self, offset=None, limit=None, lead_list_id=None,\n first_name=None, last_name=None, email=None, company=None,\n phone_number=None, twitter=None):\n \"\"\"\n Gives back all the leads saved in your account.\n\n :param offset: Number of leads to skip.\n\n :param limit: Maximum number of leads to return.\n\n :param lead_list_id: Id of a lead list to query leads on.\n\n :param first_name: First name to filter on.\n\n :param last_name: Last name to filter on.\n\n :param email: Email to filter on.\n\n :param company: Company to filter on.\n\n :param phone_number: Phone number to filter on.\n\n :param twitter: Twitter account to filter on.\n\n :return: All leads found as a dict.\n \"\"\"\n args = locals()\n args_params = dict((key, value) for key, value in args.items() if value\n is not None)\n args_params.pop('self')\n\n params = self.base_params\n params.update(args_params)\n\n endpoint = self.base_endpoint.format('leads')\n\n return self._query_hunter(endpoint, params)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_lead(self, lead_id):\n params = self.base_params\n\n endpoint = self.base_endpoint.format('leads/' + str(lead_id))\n\n return self._query_hunter(endpoint, params)", "response": "Get a specific lead from the account."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_lead(self, first_name, last_name, email=None, position=None,\n company=None, company_industry=None, company_size=None,\n confidence_score=None, website=None, country_code=None,\n postal_code=None, source=None, linkedin_url=None,\n phone_number=None, twitter=None, leads_list_id=None):\n \"\"\"\n Create a lead on your account.\n\n :param first_name: The first name of the lead to create. Must be\n defined.\n\n :param last_name: The last name of the lead to create. 
Must be defined.\n\n :param email: The email of the lead to create.\n\n :param position: The professional position of the lead to create.\n\n :param company: The company of the lead to create.\n\n :param company_industry: The type of industry of the company where the\n lead works.\n\n :param company_size: The size of the company where the lead works.\n\n :param confidence_score: The confidence score of the lead's email.\n\n :param website: The website of the lead's company.\n\n :param country_code: The country code of the lead's company.\n\n :param postal_code: The postal code of the lead's company.\n\n :param source: The source of the lead's email.\n\n :param linkedin_url: The URL of the lead's LinkedIn profile.\n\n :param phone_number: The phone number of the lead to create.\n\n :param twitter: The lead's Twitter account.\n\n :param leads_list_id: The id of the leads list where to save the new\n lead.\n\n :return: The newly created lead as a dict.\n \"\"\"\n args = locals()\n payload = dict((key, value) for key, value in args.items() if value\n is not None)\n payload.pop('self')\n\n params = self.base_params\n\n endpoint = self.base_endpoint.format('leads')\n\n return self._query_hunter(endpoint, params, 'post', payload)", "response": "Create a new lead on your account."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a dict of all the leads lists saved on your account.", "response": "def get_leads_lists(self, offset=None, limit=None):\n \"\"\"\n Gives back all the leads lists saved on your account.\n\n :param offset: Number of lists to skip.\n\n :param limit: Maximum number of lists to return.\n\n :return: Leads lists found as a dict.\n \"\"\"\n params = self.base_params\n\n if offset:\n params['offset'] = offset\n if limit:\n params['limit'] = limit\n\n endpoint = self.base_endpoint.format('leads_lists')\n\n return self._query_hunter(endpoint, params)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 
function for\ncreating a leads list.", "response": "def create_leads_list(self, name, team_id=None):\n \"\"\"\n Create a leads list.\n\n :param name: Name of the list to create. Must be defined.\n\n :param team_id: The id of the team to share this list with.\n\n :return: The created leads list as a dict.\n \"\"\"\n params = self.base_params\n\n payload = {'name': name}\n if team_id:\n payload['team_id'] = team_id\n\n endpoint = self.base_endpoint.format('leads_lists')\n\n return self._query_hunter(endpoint, params, 'post', payload)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates a leads list.", "response": "def update_leads_list(self, leads_list_id, name, team_id=None):\n \"\"\"\n Update a leads list.\n\n :param leads_list_id: The id of the list to update. Must be defined.\n\n :param name: Name of the list to update. Must be defined.\n\n :param team_id: The id of the team to share this list with.\n\n :return: 204 Response.\n \"\"\"\n params = self.base_params\n\n payload = {'name': name}\n if team_id:\n payload['team_id'] = team_id\n\n endpoint = self.base_endpoint.format('leads_lists/' + str(leads_list_id))\n\n return self._query_hunter(endpoint, params, 'put', payload)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_leads_list(self, leads_list_id):\n params = self.base_params\n\n endpoint = self.base_endpoint.format(\n 'leads_lists/' +\n str(leads_list_id)\n )\n\n return self._query_hunter(endpoint, params, 'delete')", "response": "Delete a leads list."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_int(s):\n try:\n return int(s.replace('_', ''))\n except ValueError:\n return int(ast.literal_eval(s))", "response": "Converts a string to an integer, stripping underscores and falling back to evaluating the string as a numeric literal."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a generator producing dicts from json lines", "response": "def dicts_from_lines(lines):\n \"\"\" returns a generator producing 
dicts from json lines\n\n 1 JSON object per line is supported:\n\n {\"name\": \"n1\"}\n {\"name\": \"n2\"}\n\n Or 1 JSON object:\n\n {\n \"name\": \"n1\"\n }\n\n Or a list of JSON objects:\n\n [\n {\"name\": \"n1\"},\n {\"name\": \"n2\"},\n ]\n \"\"\"\n lines = iter(lines)\n for line in lines:\n line = line.strip()\n if not line:\n continue # skip empty lines\n try:\n yield json.loads(line, object_pairs_hook=OrderedDict)\n except json.decoder.JSONDecodeError:\n content = line + ''.join(lines)\n dicts = json.loads(content, object_pairs_hook=OrderedDict)\n if isinstance(dicts, list):\n yield from dicts\n else:\n yield dicts"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompose any number of unary functions into a single unary function.", "response": "def compose(*funcs):\n\t\"\"\"\n\tCompose any number of unary functions into a single unary function.\n\n\t>>> import textwrap\n\t>>> from six import text_type\n\t>>> stripped = text_type.strip(textwrap.dedent(compose.__doc__))\n\t>>> compose(text_type.strip, textwrap.dedent)(compose.__doc__) == stripped\n\tTrue\n\n\tCompose also allows the innermost function to take arbitrary arguments.\n\n\t>>> round_three = lambda x: round(x, ndigits=3)\n\t>>> f = compose(round_three, int.__truediv__)\n\t>>> [f(3*x, x+1) for x in range(1,10)]\n\t[1.5, 2.0, 2.25, 2.4, 2.5, 2.571, 2.625, 2.667, 2.7]\n\t\"\"\"\n\n\tdef compose_two(f1, f2):\n\t\treturn lambda *args, **kwargs: f1(f2(*args, **kwargs))\n\treturn functools.reduce(compose_two, funcs)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a function that will call a named method on the object with optional positional and keyword arguments.", "response": "def method_caller(method_name, *args, **kwargs):\n\t\"\"\"\n\tReturn a function that will call a named method on the\n\ttarget object with optional positional and keyword\n\targuments.\n\n\t>>> lower = method_caller('lower')\n\t>>> 
lower('MyString')\n\t'mystring'\n\t\"\"\"\n\tdef call_method(target):\n\t\tfunc = getattr(target, method_name)\n\t\treturn func(*args, **kwargs)\n\treturn call_method"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndecorating func so it's only ever called the first time. This decorator can ensure that an expensive or non-idempotent function will not be expensive on subsequent calls and is idempotent. >>> add_three = once(lambda a: a+3) >>> add_three(3) 6 >>> add_three(9) 6 >>> add_three('12') 6 To reset the stored value, simply clear the property ``saved_result``. >>> del add_three.saved_result >>> add_three(9) 12 >>> add_three(8) 12 Or invoke 'reset()' on it. >>> add_three.reset() >>> add_three(-3) 0 >>> add_three(0) 0", "response": "def once(func):\n\t\"\"\"\n\tDecorate func so it's only ever called the first time.\n\n\tThis decorator can ensure that an expensive or non-idempotent function\n\twill not be expensive on subsequent calls and is idempotent.\n\n\t>>> add_three = once(lambda a: a+3)\n\t>>> add_three(3)\n\t6\n\t>>> add_three(9)\n\t6\n\t>>> add_three('12')\n\t6\n\n\tTo reset the stored value, simply clear the property ``saved_result``.\n\n\t>>> del add_three.saved_result\n\t>>> add_three(9)\n\t12\n\t>>> add_three(8)\n\t12\n\n\tOr invoke 'reset()' on it.\n\n\t>>> add_three.reset()\n\t>>> add_three(-3)\n\t0\n\t>>> add_three(0)\n\t0\n\t\"\"\"\n\t@functools.wraps(func)\n\tdef wrapper(*args, **kwargs):\n\t\tif not hasattr(wrapper, 'saved_result'):\n\t\t\twrapper.saved_result = func(*args, **kwargs)\n\t\treturn wrapper.saved_result\n\twrapper.reset = lambda: vars(wrapper).__delitem__('saved_result')\n\treturn wrapper"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwraps lru_cache to support storing the cache data in the object instances. 
Abstracts the common paradigm where the method explicitly saves an underscore-prefixed protected property on first call and returns that subsequently. >>> class MyClass: ... calls = 0 ... ... @method_cache ... def method(self, value): ... self.calls += 1 ... return value >>> a = MyClass() >>> a.method(3) 3 >>> for x in range(75): ... res = a.method(x) >>> a.calls 75 Note that the apparent behavior will be exactly like that of lru_cache except that the cache is stored on each instance, so values in one instance will not flush values from another, and when an instance is deleted, so are the cached values for that instance. >>> b = MyClass() >>> for x in range(35): ... res = b.method(x) >>> b.calls 35 >>> a.method(0) 0 >>> a.calls 75 Note that if method had been decorated with ``functools.lru_cache()``, a.calls would have been 76 (due to the cached value of 0 having been flushed by the 'b' instance). Clear the cache with ``.cache_clear()`` >>> a.method.cache_clear() Another cache wrapper may be supplied: >>> cache = lru_cache(maxsize=2) >>> MyClass.method2 = method_cache(lambda self: 3, cache_wrapper=cache) >>> a = MyClass() >>> a.method2() 3 Caution - do not subsequently wrap the method with another decorator, such as ``@property``, which changes the semantics of the function. See also http://code.activestate.com/recipes/577452-a-memoize-decorator-for-instance-methods/ for another implementation and additional justification.", "response": "def method_cache(method, cache_wrapper=None):\n\t\"\"\"\n\tWrap lru_cache to support storing the cache data in the object instances.\n\n\tAbstracts the common paradigm where the method explicitly saves an\n\tunderscore-prefixed protected property on first call and returns that\n\tsubsequently.\n\n\t>>> class MyClass:\n\t... calls = 0\n\t...\n\t... @method_cache\n\t... def method(self, value):\n\t... self.calls += 1\n\t... return value\n\n\t>>> a = MyClass()\n\t>>> a.method(3)\n\t3\n\t>>> for x in range(75):\n\t... 
res = a.method(x)\n\t>>> a.calls\n\t75\n\n\tNote that the apparent behavior will be exactly like that of lru_cache\n\texcept that the cache is stored on each instance, so values in one\n\tinstance will not flush values from another, and when an instance is\n\tdeleted, so are the cached values for that instance.\n\n\t>>> b = MyClass()\n\t>>> for x in range(35):\n\t... res = b.method(x)\n\t>>> b.calls\n\t35\n\t>>> a.method(0)\n\t0\n\t>>> a.calls\n\t75\n\n\tNote that if method had been decorated with ``functools.lru_cache()``,\n\ta.calls would have been 76 (due to the cached value of 0 having been\n\tflushed by the 'b' instance).\n\n\tClear the cache with ``.cache_clear()``\n\n\t>>> a.method.cache_clear()\n\n\tAnother cache wrapper may be supplied:\n\n\t>>> cache = lru_cache(maxsize=2)\n\t>>> MyClass.method2 = method_cache(lambda self: 3, cache_wrapper=cache)\n\t>>> a = MyClass()\n\t>>> a.method2()\n\t3\n\n\tCaution - do not subsequently wrap the method with another decorator, such\n\tas ``@property``, which changes the semantics of the function.\n\n\tSee also\n\thttp://code.activestate.com/recipes/577452-a-memoize-decorator-for-instance-methods/\n\tfor another implementation and additional justification.\n\t\"\"\"\n\tcache_wrapper = cache_wrapper or lru_cache()\n\n\tdef wrapper(self, *args, **kwargs):\n\t\t# it's the first call, replace the method with a cached, bound method\n\t\tbound_method = types.MethodType(method, self)\n\t\tcached_method = cache_wrapper(bound_method)\n\t\tsetattr(self, method.__name__, cached_method)\n\t\treturn cached_method(*args, **kwargs)\n\n\treturn _special_method_cache(method, cache_wrapper) or wrapper"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _special_method_cache(method, cache_wrapper):\n\tname = method.__name__\n\tspecial_names = '__getattr__', '__getitem__'\n\tif name not in special_names:\n\t\treturn\n\n\twrapper_name = '__cached' + name\n\n\tdef proxy(self, *args, 
**kwargs):\n\t\tif wrapper_name not in vars(self):\n\t\t\tbound = types.MethodType(method, self)\n\t\t\tcache = cache_wrapper(bound)\n\t\t\tsetattr(self, wrapper_name, cache)\n\t\telse:\n\t\t\tcache = getattr(self, wrapper_name)\n\t\treturn cache(*args, **kwargs)\n\n\treturn proxy", "response": "Because special methods are looked up on the type rather than the instance, they cannot be cached by replacing the attribute on the instance; this proxy caches them on the instance under a private attribute name instead."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef result_invoke(action):\n\tdef wrap(func):\n\t\t@functools.wraps(func)\n\t\tdef wrapper(*args, **kwargs):\n\t\t\tresult = func(*args, **kwargs)\n\t\t\taction(result)\n\t\t\treturn result\n\t\treturn wrapper\n\treturn wrap", "response": "Decorator factory: 'action' is invoked on the result returned by the decorated function, which then returns the original result."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef first_invoke(func1, func2):\n\tdef wrapper(*args, **kwargs):\n\t\tfunc1()\n\t\treturn func2(*args, **kwargs)\n\treturn wrapper", "response": "Returns a function that, when invoked, calls func1 with no arguments and then calls func2 with whatever parameters were passed."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef retry_call(func, cleanup=lambda: None, retries=0, trap=()):\n\tattempts = count() if retries == float('inf') else range(retries)\n\tfor attempt in attempts:\n\t\ttry:\n\t\t\treturn func()\n\t\texcept trap:\n\t\t\tcleanup()\n\n\treturn func()", "response": "Calls func up to 'retries' times, invoking 'cleanup' after each exception listed in 'trap'; the final attempt is made without trapping exceptions."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef retry(*r_args, **r_kwargs):\n\tdef decorate(func):\n\t\t@functools.wraps(func)\n\t\tdef wrapper(*f_args, **f_kwargs):\n\t\t\tbound = functools.partial(func, *f_args, **f_kwargs)\n\t\t\treturn 
retry_call(bound, *r_args, **r_kwargs)\n\t\treturn wrapper\n\treturn decorate", "response": "Decorator for retry_call. Accepts the arguments to retry_call except func, and returns a decorator for the decorated function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a generator into a function that prints all yielded elements ...", "response": "def print_yielded(func):\n\t\"\"\"\n\tConvert a generator into a function that prints all yielded elements\n\n\t>>> @print_yielded\n\t... def x():\n\t... yield 3; yield None\n\t>>> x()\n\t3\n\tNone\n\t\"\"\"\n\tprint_all = functools.partial(map, print)\n\tprint_results = compose(more_itertools.recipes.consume, print_all, func)\n\treturn functools.wraps(func)(print_results)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef pass_none(func):\n\t@functools.wraps(func)\n\tdef wrapper(param, *args, **kwargs):\n\t\tif param is not None:\n\t\t\treturn func(param, *args, **kwargs)\n\treturn wrapper", "response": "Wraps func so it is only called when the first parameter is not None; otherwise the wrapper returns None."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nassigns parameters from namespace where func solicits.", "response": "def assign_params(func, namespace):\n\t\"\"\"\n\tAssign parameters from namespace where func solicits.\n\n\t>>> def func(x, y=3):\n\t... print(x, y)\n\t>>> assigned = assign_params(func, dict(x=2, z=4))\n\t>>> assigned()\n\t2 3\n\n\tThe usual errors are raised if a function doesn't receive\n\tits required parameters:\n\n\t>>> assigned = assign_params(func, dict(y=3, z=4))\n\t>>> assigned()\n\tTraceback (most recent call last):\n\tTypeError: func() ...argument...\n\n\tIt even works on methods:\n\n\t>>> class Handler:\n\t... def meth(self, arg):\n\t... 
print(arg)\n\t>>> assign_params(Handler().meth, dict(arg='crystal', foo='clear'))()\n\tcrystal\n\t\"\"\"\n\ttry:\n\t\tsig = inspect.signature(func)\n\t\tparams = sig.parameters.keys()\n\texcept AttributeError:\n\t\tspec = inspect.getargspec(func)\n\t\tparams = spec.args\n\tcall_ns = {\n\t\tk: namespace[k]\n\t\tfor k in params\n\t\tif k in namespace\n\t}\n\treturn functools.partial(func, **call_ns)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwrapping a method such that when it is called, the args and kwargs are saved on the method. >>> class MyClass: ... @save_method_args ... def method(self, a, b): ... print(a, b) >>> my_ob = MyClass() >>> my_ob.method(1, 2) 1 2 >>> my_ob._saved_method.args (1, 2) >>> my_ob._saved_method.kwargs {} >>> my_ob.method(a=3, b='foo') 3 foo >>> my_ob._saved_method.args () >>> my_ob._saved_method.kwargs == dict(a=3, b='foo') True The arguments are stored on the instance, allowing for different instance to save different args. >>> your_ob = MyClass() >>> your_ob.method({str('x'): 3}, b=[4]) {'x': 3} [4] >>> your_ob._saved_method.args ({'x': 3},) >>> my_ob._saved_method.args ()", "response": "def save_method_args(method):\n\t\"\"\"\n\tWrap a method such that when it is called, the args and kwargs are\n\tsaved on the method.\n\n\t>>> class MyClass:\n\t... @save_method_args\n\t... def method(self, a, b):\n\t... 
print(a, b)\n\t>>> my_ob = MyClass()\n\t>>> my_ob.method(1, 2)\n\t1 2\n\t>>> my_ob._saved_method.args\n\t(1, 2)\n\t>>> my_ob._saved_method.kwargs\n\t{}\n\t>>> my_ob.method(a=3, b='foo')\n\t3 foo\n\t>>> my_ob._saved_method.args\n\t()\n\t>>> my_ob._saved_method.kwargs == dict(a=3, b='foo')\n\tTrue\n\n\tThe arguments are stored on the instance, allowing for\n\tdifferent instance to save different args.\n\n\t>>> your_ob = MyClass()\n\t>>> your_ob.method({str('x'): 3}, b=[4])\n\t{'x': 3} [4]\n\t>>> your_ob._saved_method.args\n\t({'x': 3},)\n\t>>> my_ob._saved_method.args\n\t()\n\t\"\"\"\n\targs_and_kwargs = collections.namedtuple('args_and_kwargs', 'args kwargs')\n\n\t@functools.wraps(method)\n\tdef wrapper(self, *args, **kwargs):\n\t\tattr_name = '_saved_' + method.__name__\n\t\tattr = args_and_kwargs(args, kwargs)\n\t\tsetattr(self, attr_name, attr)\n\t\treturn method(self, *args, **kwargs)\n\treturn wrapper"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nensure at least 1 / max_rate seconds from last call", "response": "def _wait(self):\n\t\t\"ensure at least 1/max_rate seconds from last call\"\n\t\telapsed = time.time() - self.last_called\n\t\tmust_wait = 1 / self.max_rate - elapsed\n\t\ttime.sleep(max(0, must_wait))\n\t\tself.last_called = time.time()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef row_col_maker(app, fromdocname, all_needs, need_info, need_key, make_ref=False, ref_lookup=False, prefix=''):\n row_col = nodes.entry()\n para_col = nodes.paragraph()\n\n if need_key in need_info and need_info[need_key] is not None:\n if not isinstance(need_info[need_key], (list, set)):\n data = [need_info[need_key]]\n else:\n data = need_info[need_key]\n\n for index, datum in enumerate(data):\n link_id = datum\n link_part = None\n\n if need_key in ['links', 'back_links']:\n if '.' 
in datum:\n link_id = datum.split('.')[0]\n link_part = datum.split('.')[1]\n\n datum_text = prefix + datum\n text_col = nodes.Text(datum_text, datum_text)\n if make_ref or ref_lookup:\n try:\n ref_col = nodes.reference(\"\", \"\")\n if not ref_lookup:\n ref_col['refuri'] = app.builder.get_relative_uri(fromdocname, need_info['docname'])\n ref_col['refuri'] += \"#\" + datum\n else:\n temp_need = all_needs[link_id]\n ref_col['refuri'] = app.builder.get_relative_uri(fromdocname, temp_need['docname'])\n ref_col['refuri'] += \"#\" + temp_need[\"id\"]\n if link_part is not None:\n ref_col['refuri'] += '.' + link_part\n\n except KeyError:\n para_col += text_col\n else:\n ref_col.append(text_col)\n para_col += ref_col\n else:\n para_col += text_col\n\n if index + 1 < len(data):\n para_col += nodes.emphasis(\"; \", \"; \")\n\n row_col += para_col\n\n return row_col", "response": "Creates and returns a row and column object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupload a file into a blob table", "response": "def insert_blob(filename, hosts=None, table=None):\n \"\"\"Upload a file into a blob table \"\"\"\n conn = connect(hosts)\n container = conn.get_blob_container(table)\n with open(filename, 'rb') as f:\n digest = container.put(f)\n return '{server}/_blobs/{table}/{digest}'.format(\n server=conn.client.active_servers[0],\n table=table,\n digest=digest\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the list of DOIs in a single csv file.", "response": "def update_dois(csv_source, write_file=True):\n \"\"\"\n Get DOI publication info for a batch of DOIs. This is LiPD-independent and only requires a CSV file with all DOIs\n listed in a single column. 
The output is LiPD-formatted publication data for each entry.\n\n :param str csv_source: Local path to CSV file\n :param bool write_file: Write output data to JSON file (True), OR pretty print output to console (False)\n :return none:\n \"\"\"\n\n _dois_arr = []\n _dois_raw = []\n\n # open the CSV file\n with open(csv_source, \"r\") as f:\n reader = csv.reader(f)\n for row in reader:\n # sort the DOIs as an array of DOI strings\n _dois_arr.append(row[0])\n\n # run the DOI resolver once for each DOI string.\n for _doi in _dois_arr:\n _dois_raw.append(_update_doi(_doi))\n\n if write_file:\n # Write the file\n new_filename = os.path.splitext(csv_source)[0]\n write_json_to_file(_dois_raw, new_filename)\n\n else:\n print(json.dumps(_dois_raw, indent=2))\n\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwrites all JSON in python dictionary to a new json file.", "response": "def write_json_to_file(json_data, filename=\"metadata\"):\n \"\"\"\n Write all JSON in python dictionary to a new json file.\n :param any json_data: JSON data\n :param str filename: Target filename (defaults to 'metadata.jsonld')\n :return None:\n \"\"\"\n # Use demjson to maintain unicode characters in output\n json_bin = demjson.encode(json_data, encoding='utf-8', compactly=False)\n # Write json to file\n try:\n open(\"{}.json\".format(filename), \"wb\").write(json_bin)\n except FileNotFoundError as e:\n print(\"Error: Writing json to file: {}\".format(filename))\n return"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef compile_authors(authors):\n author_list = []\n for person in authors:\n author_list.append({'name': person['family'] + \", \" + person['given']})\n return author_list", "response": "Returns a list of Author objects representing the authors in the last and first order."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nlooping over Raw and add 
selected items to Fetch with proper formatting", "response": "def compile_fetch(raw, doi_id):\n \"\"\"\n Loop over Raw and add selected items to Fetch with proper formatting\n :param dict raw: JSON data from doi.org\n :param str doi_id:\n :return dict:\n \"\"\"\n fetch_dict = OrderedDict()\n order = {'author': 'author', 'type': 'type', 'identifier': '', 'title': 'title', 'journal': 'container-title',\n 'pubYear': '', 'volume': 'volume', 'publisher': 'publisher', 'page':'page', 'issue': 'issue'}\n\n for k, v in order.items():\n try:\n if k == 'identifier':\n fetch_dict[k] = [{\"type\": \"doi\", \"id\": doi_id, \"url\": \"http://dx.doi.org/\" + doi_id}]\n elif k == 'author':\n fetch_dict[k] = compile_authors(raw[v])\n elif k == 'pubYear':\n fetch_dict[k] = compile_date(raw['issued']['date-parts'])\n else:\n fetch_dict[k] = raw[v]\n except KeyError as e:\n # If we try to add a key that doesn't exist in the raw dict, then just keep going.\n pass\n return fetch_dict"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_data(doi_id):\n\n fetch_dict = {}\n\n try:\n # Send request to grab metadata at URL\n print(\"Requesting : {}\".format(doi_id))\n url = \"http://dx.doi.org/\" + doi_id\n headers = {\"accept\": \"application/rdf+xml;q=0.5, application/citeproc+json;q=1.0\"}\n r = requests.get(url, headers=headers)\n\n\n # DOI 404. Data not retrieved. Log and return original pub\n if r.status_code == 400 or r.status_code == 404:\n print(\"HTTP 404: DOI not found, {}\".format(doi_id))\n\n # Ignore other status codes. 
Run when status is 200 (good response)\n elif r.status_code == 200:\n # Load data from http response\n raw = json.loads(r.text)\n # Create a new pub dictionary with metadata received\n fetch_dict = compile_fetch(raw, doi_id)\n fetch_dict['pubDataUrl'] = 'doi.org'\n\n except urllib.error.URLError as e:\n fetch_dict[\"ERROR\"] = e\n fetch_dict[\"doi\"] = doi_id\n print(\"get_data: URLError: malformed doi: {}, {}\".format(doi_id, e))\n except Exception as e:\n fetch_dict[\"ERROR\"] = e\n fetch_dict[\"doi\"] = doi_id\n print(\"get_data: ValueError: cannot resolve dois from this publisher: {}, {}\".format(doi_id, e))\n return fetch_dict", "response": "Resolve DOI and compile all attributes into one dictionary"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nuses regex to extract all DOI ids from string (i.e. 10.1029/2005pa001215) :param str doi_string: Raw DOI string value from input file. Often not properly formatted. :return list: DOI ids. May contain 0, 1, or multiple ids.", "response": "def clean_doi(doi_string):\n \"\"\"\n Use regex to extract all DOI ids from string (i.e. 10.1029/2005pa001215)\n\n :param str doi_string: Raw DOI string value from input file. Often not properly formatted.\n :return list: DOI ids. 
May contain 0, 1, or multiple ids.\n \"\"\"\n regex = re.compile(r'\\b(10[.][0-9]{3,}(?:[.][0-9]+)*/(?:(?![\"&\\'<>,])\\S)+)\\b')\n try:\n # Returns a list of matching strings\n m = re.findall(regex, doi_string)\n except TypeError as e:\n # If doi_string is None type, return empty list\n print(\"TypeError cleaning DOI: {}, {}\".format(doi_string, e))\n m = []\n return m"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlog a given function and how long it takes in seconds", "response": "def log_benchmark(fn, start, end):\n \"\"\"\n Log a given function and how long the function takes in seconds\n :param str fn: Function name\n :param float start: Function start time\n :param float end: Function end time\n :return none:\n \"\"\"\n elapsed = round(end - start, 2)\n line = (\"Benchmark - Function: {} , Time: {} seconds\".format(fn, elapsed))\n return line"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate or update the changelog txt file. Prompt for update description.", "response": "def update_changelog():\n \"\"\"\n Create or update the changelog txt file. Prompt for update description.\n :return None:\n \"\"\"\n # description = input(\"Please enter a short description for this update:\\n \")\n description = 'Placeholder for description here.'\n\n # open changelog file for appending. 
if doesn't exist, creates file.\n with open('changelog.txt', 'a+') as f:\n # write update line\n f.write(str(datetime.datetime.now().strftime(\"%d %B %Y %I:%M%p\")) + '\\nDescription: ' + description)\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_benchmark(name, log_file, level=logging.INFO):\n handler = logging.FileHandler(log_file)\n rtf_handler = RotatingFileHandler(log_file, maxBytes=30000, backupCount=0)\n formatter = logging.Formatter('%(asctime)s [%(levelname)s] %(name)s: %(message)s')\n handler.setFormatter(formatter)\n logger = logging.getLogger(name)\n logger.setLevel(level)\n logger.addHandler(handler)\n logger.addHandler(rtf_handler)\n return logger", "response": "Create a logger for function benchmark"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a logger with the below attributes.", "response": "def create_logger(name):\n \"\"\"\n Creates a logger with the below attributes.\n :param str name: Name of the logger\n :return obj: Logger\n \"\"\"\n logging.config.dictConfig({\n 'version': 1,\n 'disable_existing_loggers': False,\n 'formatters': {\n 'simple': {\n 'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'\n },\n 'detailed': {\n 'format': '%(asctime)s %(module)-17s line:%(lineno)-4d '\n '%(levelname)-8s %(message)s'\n },\n 'email': {\n 'format': 'Timestamp: %(asctime)s\\nModule: %(module)s\\n'\n 'Line: %(lineno)d\\nMessage: %(message)s'\n },\n },\n 'handlers': {\n # 'stream': {\n # 'level': 'DEBUG',\n # 'class': 'logging.StreamHandler',\n # \"formatter\": \"simple\"\n # },\n \"file\": {\n \"level\": \"DEBUG\",\n \"formatter\": \"simple\",\n \"class\": \"logging.handlers.RotatingFileHandler\",\n \"filename\": \"debug.log\",\n 'mode': 'a',\n 'maxBytes': 30000,\n 'backupCount': 0\n }\n },\n 'loggers': {\n '': {\n # \"handlers\": [\"stream\", \"file\"],\n \"handlers\": [\"file\"],\n \"level\": \"DEBUG\",\n 'propagate': True\n 
}\n }\n })\n return logging.getLogger(name)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef timeit(hosts=None,\n stmt=None,\n warmup=30,\n repeat=None,\n duration=None,\n concurrency=1,\n output_fmt=None,\n fail_if=None,\n sample_mode='reservoir'):\n \"\"\"Run the given statement a number of times and return the runtime stats\n\n Args:\n fail-if: An expression that causes cr8 to exit with a failure if it\n evaluates to true.\n The expression can contain formatting expressions for:\n - runtime_stats\n - statement\n - meta\n - concurrency\n - bulk_size\n For example:\n --fail-if \"{runtime_stats.mean} > 1.34\"\n \"\"\"\n num_lines = 0\n log = Logger(output_fmt)\n with Runner(hosts, concurrency, sample_mode) as runner:\n version_info = aio.run(runner.client.get_server_version)\n for line in as_statements(lines_from_stdin(stmt)):\n runner.warmup(line, warmup)\n timed_stats = runner.run(line, iterations=repeat, duration=duration)\n r = Result(\n version_info=version_info,\n statement=line,\n timed_stats=timed_stats,\n concurrency=concurrency\n )\n log.result(r)\n if fail_if:\n eval_fail_if(fail_if, r)\n num_lines += 1\n if num_lines == 0:\n raise SystemExit(\n 'No SQL statements provided. 
Use --stmt or provide statements via stdin')", "response": "Run a given statement a number of times and return the runtime stats."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndetermine the OS and the associated download folder.", "response": "def get_download_path():\n \"\"\"\n Determine the OS and the associated download folder.\n :return str Download path:\n \"\"\"\n if os.name == 'nt':\n import winreg\n sub_key = r'SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders'\n downloads_guid = '{374DE290-123F-4565-9164-39C4925E467B}'\n with winreg.OpenKey(winreg.HKEY_CURRENT_USER, sub_key) as key:\n location = winreg.QueryValueEx(key, downloads_guid)[0]\n return location\n else:\n return os.path.join(os.path.expanduser('~'), 'Downloads')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndownload a file from a given URL and save it to a local path.", "response": "def download_file(src_url, dst_path):\n \"\"\"\n Use the given URL and destination to download and save a file\n :param str src_url: Direct URL to lipd file download\n :param str dst_path: Local path to download file to, including filename and ext. ex. 
/path/to/filename.lpd\n :return none:\n \"\"\"\n if \"MD982181\" not in src_url:\n try:\n print(\"downloading file from url...\")\n urllib.request.urlretrieve(src_url, dst_path)\n except Exception as e:\n print(\"Error: unable to download from url: {}\".format(e))\n return"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef wait_until(predicate, timeout=30):\n not_expired = Timeout(timeout)\n while not_expired():\n r = predicate()\n if r:\n break", "response": "Wait until predicate returns a truthy value or the timeout is reached."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfinds the first matching version in a list of versions.", "response": "def _find_matching_version(versions, version_pattern):\n \"\"\"\n Return the first matching version\n\n >>> _find_matching_version(['1.1.4', '1.0.12', '1.0.5'], '1.0.x')\n '1.0.12'\n\n >>> _find_matching_version(['1.1.4', '1.0.6', '1.0.5'], '2.x.x')\n \"\"\"\n pattern = fnmatch.translate(version_pattern.replace('x', '*'))\n return next((v for v in versions if re.match(pattern, v)), None)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nbuilds a tarball from src and return the path to it", "response": "def _build_tarball(src_repo) -> str:\n \"\"\" Build a tarball from src and return the path to it \"\"\"\n run = partial(subprocess.run, cwd=src_repo, check=True)\n run(['git', 'clean', '-xdff'])\n src_repo = Path(src_repo)\n if os.path.exists(src_repo / 'es' / 'upstream'):\n run(['git', 'submodule', 'update', '--init', '--', 'es/upstream'])\n run(['./gradlew', '--no-daemon', 'clean', 'distTar'])\n distributions = Path(src_repo) / 'app' / 'build' / 'distributions'\n return next(distributions.glob('crate-*.tar.gz'))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _crates_cache() -> str:\n return os.environ.get(\n 'XDG_CACHE_HOME',\n 
os.path.join(os.path.expanduser('~'), '.cache', 'cr8', 'crates'))", "response": "Return the path to the crates cache folder"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nretrieves a Crate tarball extract it and return the path.", "response": "def get_crate(version, crate_root=None):\n \"\"\"Retrieve a Crate tarball, extract it and return the path.\n\n Args:\n version: The Crate version to get.\n Can be specified in different ways:\n\n - A concrete version like '0.55.0'\n - A version including a `x` as wildcards. Like: '1.1.x' or '1.x.x'.\n This will use the latest version that matches.\n - Release branch, like `3.1`\n - An alias: 'latest-stable' or 'latest-testing'\n - A URI pointing to a crate tarball\n crate_root: Where to extract the tarball to.\n If this isn't specified ``$XDG_CACHE_HOME/.cache/cr8/crates``\n will be used.\n \"\"\"\n if not crate_root:\n crate_root = _crates_cache()\n _remove_old_crates(crate_root)\n if _is_project_repo(version):\n return _extract_tarball(_build_tarball(version))\n m = BRANCH_VERSION_RE.match(version)\n if m:\n return _build_from_release_branch(m.group(0), crate_root)\n uri = _lookup_uri(version)\n crate_dir = _download_and_extract(uri, crate_root)\n return crate_dir"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _parse_options(options: List[str]) -> Dict[str, str]:\n try:\n return dict(i.split('=', maxsplit=1) for i in options)\n except ValueError:\n raise ArgumentError(\n f'Option must be in format <key>=<value>, got: {options}')", "response": "Parse repeatable CLI options into a single dict."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run_crate(\n version,\n env=None,\n setting=None,\n crate_root=None,\n keep_data=False,\n disable_java_magic=False,\n):\n \"\"\"Launch a crate instance.\n\n Supported version specifications:\n - Concrete version 
like \"0.55.0\" or with wildcard: \"1.1.x\"\n - An alias (one of [latest-nightly, latest-stable, latest-testing])\n - A URI pointing to a CrateDB tarball (in .tar.gz format)\n - A URI pointing to a checked out CrateDB repo directory\n\n run-crate supports command chaining. To launch a CrateDB node and another\n sub-command use:\n\n cr8 run-crate -- timeit -s \"select 1\" --hosts '{node.http_url}'\n\n To launch any (blocking) subprocess, prefix the name with '@':\n\n cr8 run-crate -- @http '{node.http_url}'\n\n If run-crate is invoked using command chaining it will exit once all\n chained commands finished.\n\n The postgres host and port are available as {node.addresses.psql.host} and\n {node.addresses.psql.port}\n \"\"\"\n with create_node(\n version,\n env,\n setting,\n crate_root,\n keep_data,\n java_magic=not disable_java_magic,\n ) as n:\n try:\n n.start()\n n.process.wait()\n except KeyboardInterrupt:\n print('Stopping Crate...')", "response": "Launch a crate instance."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nstarting the process. 
This will block until the Crate cluster is ready to process requests.", "response": "def start(self):\n \"\"\"Start the process.\n\n This will block until the Crate cluster is ready to process requests.\n \"\"\"\n log.info('Starting Crate process')\n self.process = proc = self.enter_context(subprocess.Popen(\n self.cmd,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n env=self.env,\n universal_newlines=True\n ))\n\n msg = ('CrateDB launching:\\n'\n ' PID: %s\\n'\n ' Logs: %s\\n'\n ' Data: %s')\n if not self.keep_data:\n msg += ' (removed on stop)\\n'\n\n logfile = os.path.join(self.logs_path, self.cluster_name + '.log')\n log.info(\n msg,\n proc.pid,\n logfile,\n self.data_path\n )\n self.addresses = DotDict({})\n self.monitor.consumers.append(AddrConsumer(self._set_addr))\n self.monitor.start(proc)\n\n log_lines = []\n self.monitor.consumers.append(log_lines.append)\n spinner = cycle(['/', '-', '\\\\', '|'])\n\n def show_spinner():\n if sys.stdout.isatty():\n print(next(spinner), end='\\r')\n return True\n try:\n wait_until(\n lambda: show_spinner() and _ensure_running(proc) and self.http_host,\n timeout=60\n )\n host = self.addresses.http.host\n port = self.addresses.http.port\n wait_until(\n lambda: _ensure_running(proc) and _is_up(host, port),\n timeout=30\n )\n if _has_ssl(host, port):\n self.http_url = self.http_url.replace('http://', 'https://')\n wait_until(\n lambda: show_spinner() and cluster_state_200(self.http_url),\n timeout=30\n )\n except (SystemError, TimeoutError):\n if not log_lines:\n _try_print_log(logfile)\n else:\n for line in log_lines:\n log.error(line)\n raise SystemExit(\"Exiting because CrateDB didn't start correctly\")\n else:\n self.monitor.consumers.remove(log_lines.append)\n log.info('Cluster ready to process requests')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _parse(line):\n m = AddrConsumer.ADDRESS_RE.match(line)\n if not m:\n 
return None, None\n protocol = m.group('protocol')\n protocol = AddrConsumer.PROTOCOL_MAP.get(protocol, protocol)\n return protocol, m.group('addr')", "response": "Parse the log line and return the protocol and bound address."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _calc_block_mean_variance(image, mask, blocksize):\n I = image.copy()\n I_f = I.astype(np.float32) / 255. # Used for mean and std.\n\n result = np.zeros(\n (image.shape[0] // blocksize, image.shape[1] // blocksize),\n dtype=np.float32)\n\n for i in range(0, image.shape[0] - blocksize, blocksize):\n for j in range(0, image.shape[1] - blocksize, blocksize):\n\n patch = I_f[i:i+blocksize+1, j:j+blocksize+1]\n mask_patch = mask[i:i+blocksize+1, j:j+blocksize+1]\n\n tmp1 = np.zeros((blocksize, blocksize))\n tmp2 = np.zeros((blocksize, blocksize))\n mean, std_dev = cv2.meanStdDev(patch, tmp1, tmp2, mask_patch)\n\n value = 0\n if std_dev[0][0] > MEAN_VARIANCE_THRESHOLD:\n value = mean[0][0]\n\n result[i // blocksize, j // blocksize] = value\n\n small_image = cv2.resize(I, (image.shape[1] // blocksize,\n image.shape[0] // blocksize))\n\n res, inpaintmask = cv2.threshold(result, 0.02, 1, cv2.THRESH_BINARY)\n\n inpainted = cv2.inpaint(small_image, inpaintmask.astype(np.uint8), 5,\n cv2.INPAINT_TELEA)\n\n res = cv2.resize(inpainted, (image.shape[1], image.shape[0]))\n\n return res", "response": "Adaptively determines image background and mean variance."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\napplying adaptive thresholding to the given image.", "response": "def threshold(image, block_size=DEFAULT_BLOCKSIZE, mask=None):\n \"\"\"Applies adaptive thresholding to the given image.\n\n Args:\n image: BGRA image.\n block_size: optional int block_size to use for adaptive thresholding.\n mask: optional mask.\n Returns:\n Thresholded image.\n \"\"\"\n if mask is None:\n mask = np.zeros(image.shape[:2], dtype=np.uint8)\n 
mask[:] = 255\n\n if len(image.shape) > 2 and image.shape[2] == 4:\n image = cv2.cvtColor(image, cv2.COLOR_BGRA2GRAY)\n res = _calc_block_mean_variance(image, mask, block_size)\n res = image.astype(np.float32) - res.astype(np.float32) + 255\n _, res = cv2.threshold(res, 215, 255, cv2.THRESH_BINARY)\n return res"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find_version(include_dev_version=True, root='%(pwd)s',\n version_file='%(root)s/version.txt', version_module_paths=(),\n git_args=None, vcs_args=None, decrement_dev_version=None,\n strip_prefix='v',\n Popen=subprocess.Popen, open=open):\n \"\"\"Find an appropriate version number from version control.\n\n It's much more convenient to be able to use your version control system's\n tagging mechanism to derive a version number than to have to duplicate that\n information all over the place.\n\n The default behavior is to write out a ``version.txt`` file which contains\n the VCS output, for systems where the appropriate VCS is not installed or\n there is no VCS metadata directory present. ``version.txt`` can (and\n probably should!) be packaged in release tarballs by way of the\n ``MANIFEST.in`` file.\n\n :param include_dev_version: By default, if there are any commits after the\n most recent tag (as reported by the VCS), that number will be included\n in the version number as a ``.post`` suffix. For example, if the most\n recent tag is ``1.0`` and there have been three commits after that tag,\n the version number will be ``1.0.post3``. This behavior can be disabled\n by setting this parameter to ``False``.\n\n :param root: The directory of the repository root. 
The default value is the\n current working directory, since when running ``setup.py``, this is\n often (but not always) the same as the current working directory.\n Standard substitutions are performed on this value.\n\n :param version_file: The name of the file where version information will be\n saved. Reading and writing version files can be disabled altogether by\n setting this parameter to ``None``. Standard substitutions are\n performed on this value.\n\n :param version_module_paths: A list of python modules which will be\n automatically generated containing ``__version__`` and ``__sha__``\n attributes. For example, with ``package/_version.py`` as a version\n module path, ``package/__init__.py`` could do ``from package._version\n import __version__, __sha__``.\n\n :param git_args: **Deprecated.** Please use *vcs_args* instead.\n\n :param vcs_args: The command to run to get a version. By default, this is\n automatically guessed from directories present in the repository root.\n Specify this as a list of string arguments including the program to\n run, e.g. ``['git', 'describe']``. Standard substitutions are performed\n on each value in the provided list.\n\n :param decrement_dev_version: If ``True``, subtract one from the number of\n commits after the most recent tag. This is primarily for hg, as hg\n requires a commit to make a tag. If the VCS used is hg (i.e. the\n revision starts with ``'hg'``) and *decrement_dev_version* is not\n specified as ``False``, *decrement_dev_version* will be set to\n ``True``.\n\n :param strip_prefix: A string which will be stripped from the start of\n version number tags. By default this is ``'v'``, but could be\n ``'debian/'`` for compatibility with ``git-dch``.\n\n :param Popen: Defaults to ``subprocess.Popen``. This is for testing.\n\n :param open: Defaults to ``open``. This is for testing.\n\n *root*, *version_file*, and *git_args* each support some substitutions:\n\n ``%(root)s``\n The value provided for *root*. 
This is not available for the *root*\n parameter itself.\n\n ``%(pwd)s``\n The current working directory.\n\n ``/`` will automatically be translated into the correct path separator for\n the current platform, such as ``:`` or ``\\``.\n\n ``vcversioner`` will perform automatic VCS detection with the following\n directories, in order, and run the specified commands.\n\n ``%(root)s/.git``\n\n ``git --git-dir %(root)s/.git describe --tags --long``. ``--git-dir``\n is used to prevent contamination from git repositories which aren't\n the git repository of your project.\n\n ``%(root)s/.hg``\n\n ``hg log -R %(root)s -r . --template\n '{latesttag}-{latesttagdistance}-hg{node|short}'``. ``-R`` is\n similarly used to prevent contamination.\n\n \"\"\"\n\n substitutions = {'pwd': os.getcwd()}\n substitutions['root'] = root % substitutions\n def substitute(val):\n return _fix_path(val % substitutions)\n if version_file is not None:\n version_file = substitute(version_file)\n\n if git_args is not None:\n warnings.warn(\n 'passing `git_args is deprecated; please use vcs_args',\n DeprecationWarning)\n vcs_args = git_args\n\n if vcs_args is None:\n for path, args in _vcs_args_by_path:\n if os.path.exists(substitute(path)):\n vcs_args = args\n break\n\n raw_version = None\n vcs_output = []\n\n if vcs_args is not None:\n vcs_args = [substitute(arg) for arg in vcs_args]\n\n # try to pull the version from some VCS, or (perhaps) fall back on a\n # previously-saved version.\n try:\n proc = Popen(vcs_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n except OSError:\n pass\n else:\n stdout, stderr = proc.communicate()\n raw_version = stdout.strip().decode()\n vcs_output = stderr.decode().splitlines()\n version_source = 'VCS'\n failure = '%r failed' % (vcs_args,)\n else:\n failure = 'no VCS could be detected in %(root)r' % substitutions\n\n def show_vcs_output():\n if not vcs_output:\n return\n print('-- VCS output follows --')\n for line in vcs_output:\n print(line)\n\n # VCS failed 
if the string is empty\n if not raw_version:\n if version_file is None:\n print('%s.' % (failure,))\n show_vcs_output()\n raise SystemExit(2)\n elif not os.path.exists(version_file):\n print(\"%s and %r isn't present.\" % (failure, version_file))\n print(\"are you installing from a github tarball?\")\n show_vcs_output()\n raise SystemExit(2)\n with open(version_file, 'rb') as infile:\n raw_version = infile.read().decode()\n version_source = repr(version_file)\n\n\n # try to parse the version into something usable.\n try:\n tag_version, commits, sha = raw_version.rsplit('-', 2)\n except ValueError:\n print(\"%r (from %s) couldn't be parsed into a version.\" % (\n raw_version, version_source))\n show_vcs_output()\n raise SystemExit(2)\n\n # remove leading prefix\n if tag_version.startswith(strip_prefix):\n tag_version = tag_version[len(strip_prefix):]\n\n if version_file is not None:\n with open(version_file, 'w') as outfile:\n outfile.write(raw_version)\n\n if sha.startswith('hg') and decrement_dev_version is None:\n decrement_dev_version = True\n\n if decrement_dev_version:\n commits = str(int(commits) - 1)\n\n if commits == '0' or not include_dev_version:\n version = tag_version\n else:\n version = '%s.post%s' % (tag_version, commits)\n\n for path in version_module_paths:\n with open(path, 'w') as outfile:\n outfile.write(\"\"\"\n# This file is automatically generated by setup.py.\n__version__ = {0}\n__sha__ = {1}\n__revision__ = {1}\n\"\"\".format(repr(version).lstrip('u'), repr(sha).lstrip('u')))\n\n return Version(version, commits, sha)", "response": "Find an appropriate version number from version control."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef setup(dist, attr, value):\n\n dist.metadata.version = find_version(**value).version", "response": "A hook for simplifying vcversioner use from distutils.\n "} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates an 
insert statement using the given table and dictionary.", "response": "def to_insert(table, d):\n \"\"\"Generate an insert statement using the given table and dictionary.\n\n Args:\n table (str): table name\n d (dict): dictionary with column names as keys and values as values.\n Returns:\n tuple of statement and arguments\n\n >>> to_insert('doc.foobar', {'name': 'Marvin'})\n ('insert into doc.foobar (\"name\") values (?)', ['Marvin'])\n \"\"\"\n\n columns = []\n args = []\n for key, val in d.items():\n columns.append('\"{}\"'.format(key))\n args.append(val)\n stmt = 'insert into {table} ({columns}) values ({params})'.format(\n table=table,\n columns=', '.join(columns),\n params=', '.join(['?'] * len(columns)))\n return (stmt, args)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninserts JSON lines fed into stdin into a Crate cluster.", "response": "def insert_json(table=None,\n bulk_size=1000,\n concurrency=25,\n hosts=None,\n output_fmt=None):\n \"\"\"Insert JSON lines fed into stdin into a Crate cluster.\n\n If no hosts are specified the statements will be printed.\n\n Args:\n table: Target table name.\n bulk_size: Bulk size of the insert statements.\n concurrency: Number of operations to run concurrently.\n hosts: hostname:port pairs of the Crate nodes\n \"\"\"\n if not hosts:\n return print_only(table)\n\n queries = (to_insert(table, d) for d in dicts_from_stdin())\n bulk_queries = as_bulk_queries(queries, bulk_size)\n print('Executing inserts: bulk_size={} concurrency={}'.format(\n bulk_size, concurrency), file=sys.stderr)\n\n stats = Stats()\n with clients.client(hosts, concurrency=concurrency) as client:\n f = partial(aio.measure, stats, client.execute_many)\n try:\n aio.run_many(f, bulk_queries, concurrency)\n except clients.SqlException as e:\n raise SystemExit(str(e))\n try:\n print(format_stats(stats.get(), output_fmt))\n except KeyError:\n if not stats.sampler.values:\n raise SystemExit('No data received via 
stdin')\n raise"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\npicks dominant angle of a set of lines.", "response": "def _get_dominant_angle(lines, domination_type=MEDIAN):\n \"\"\"Picks dominant angle of a set of lines.\n\n Args:\n lines: iterable of (x1, y1, x2, y2) tuples that define lines.\n domination_type: either MEDIAN or MEAN.\n\n Returns:\n Dominant angle value in radians.\n\n Raises:\n ValueError: on unknown domination_type.\n \"\"\"\n if domination_type == MEDIAN:\n return _get_median_angle(lines)\n elif domination_type == MEAN:\n return _get_mean_angle(lines)\n else:\n raise ValueError('Unknown domination type provided: %s' % (\n domination_type))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _normalize_angle(angle, range, step):\n while angle <= range[0]:\n angle += step\n while angle >= range[1]:\n angle -= step\n return angle", "response": "Normalizes an angle in a given range."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a dict of collectors. 
is a list of collectors.", "response": "def get_collectors(self, limit=1000, offset=0):\n \"\"\"Returns a dict of collectors.\n\n Args:\n limit (int): number of collectors to return\n offset (int): the offset of where the list of collectors should begin from\n \"\"\"\n options = {\n 'limit': limit,\n 'offset': offset,\n }\n request = requests.get(self.url, params=options, auth=self.auth)\n\n try:\n results = request.json()['collectors']\n except KeyError:\n results = request.json()\n except json.decoder.JSONDecodeError:\n results = []\n\n return results"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find(self, name):\n collectors = self.get_collectors()\n\n for collector in collectors:\n if name.lower() == collector['name'].lower():\n self.collector_id = collector['id']\n return collector\n\n return {'status': 'No results found.'}", "response": "Returns a dict of a collector's details if found."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete(self, collector_id=None):\n cid = self.collector_id\n\n if collector_id:\n cid = collector_id\n\n # param to delete id\n url = '{0}/{1}'.format(self.url, cid)\n request = requests.delete(url, auth=self.auth)\n try:\n # unable to delete collector\n response = request.json()\n except ValueError:\n # returns when collector is deleted\n # apparently, the request does not return\n # a json response\n response = {\n u'message': u'The request completed successfully.',\n u'status': 200,\n }\n return response", "response": "Delete a collector from inventory."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef info(self, collector_id):\n cid = self.collector_id\n if collector_id:\n cid = collector_id\n\n url = '{0}/{1}'.format(self.url, cid)\n request = requests.get(url, auth=self.auth)\n return request.json()", "response": "Returns a dict of collector details.\n "} 
{"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nflattens nested dictionary into a single dict.", "response": "def _dotnotation_for_nested_dictionary(d, key, dots):\n \"\"\"\n Flattens nested data structures using dot notation.\n :param dict d: Original or nested dictionary\n :param str key:\n :param dict dots: Dotted dictionary so far\n :return dict: Dotted dictionary so far\n \"\"\"\n if key == 'chronData':\n # Not interested in expanding chronData in dot notation. Keep it as a chunk.\n dots[key] = d\n elif isinstance(d, dict):\n for k in d:\n _dotnotation_for_nested_dictionary(d[k], key + '.' + k if key else k, dots)\n elif isinstance(d, list) and \\\n not all(isinstance(item, (int, float, complex, list)) for item in d):\n for n, d in enumerate(d):\n _dotnotation_for_nested_dictionary(d, key + '.' + str(n) if key != \"\" else key, dots)\n else:\n dots[key] = d\n return dots"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a data frame from given nested lists of ensemble data", "response": "def create_dataframe(ensemble):\n \"\"\"\n Create a data frame from given nested lists of ensemble data\n :param list ensemble: Ensemble data\n :return obj: Dataframe\n \"\"\"\n logger_dataframes.info(\"enter ens_to_df\")\n # \"Flatten\" the nested lists. Bring all nested lists up to top-level. Output looks like [ [1,2], [1,2], ... ]\n ll = unwrap_arrays(ensemble)\n # Check that list lengths are all equal\n valid = match_arr_lengths(ll)\n if valid:\n # Lists are equal lengths, create the dataframe\n df = pd.DataFrame(ll)\n else:\n # Lists are unequal. Print error and return nothing.\n df = \"empty\"\n print(\"Error: Numpy Array lengths do not match. 
Cannot create data frame\")\n logger_dataframes.info(\"exit ens_to_df\")\n return df"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef lipd_to_df(metadata, csvs):\n dfs = {}\n logger_dataframes.info(\"enter lipd_to_df\")\n\n # Flatten the dictionary, but ignore the chron data items\n dict_in_dotted = {}\n logger_dataframes.info(\"enter dot_notation\")\n _dotnotation_for_nested_dictionary(metadata, '', dict_in_dotted)\n dict_in_dotted = collections.OrderedDict(sorted(dict_in_dotted.items()))\n\n # Create one data frame for metadata items\n dfs[\"metadata\"] = pd.DataFrame(list(dict_in_dotted.items()), columns=[\"Key\", \"Value\"])\n\n # Create data frames for paleo data and chron data items. This does not use LiPD data, it uses the csv data\n dfs.update(_get_dfs(csvs))\n\n return dfs", "response": "Create an organized collection of data frames from LiPD data and csv data."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a data frame from one TimeSeries object", "response": "def ts_to_df(metadata):\n \"\"\"\n Create a data frame from one TimeSeries object\n :param dict metadata: Time Series dictionary\n :return dict: One data frame per table, organized in a dictionary by name\n \"\"\"\n logger_dataframes.info(\"enter ts_to_df\")\n dfs = {}\n\n # Plot the variable + values vs year, age, depth (whichever are available)\n dfs[\"paleoData\"] = pd.DataFrame(_plot_ts_cols(metadata))\n\n # Plot the chronology variables + values in a data frame\n dfs[\"chronData\"] = _get_key_data(metadata, \"chronData_df\")\n\n # Take out the chronData pandas data frame object if it exists in the metadata\n # Otherwise, the data frame renderer gets crazy and errors out.\n if \"chronData_df\" in metadata:\n del metadata[\"chronData_df\"]\n s = collections.OrderedDict(sorted(metadata.items()))\n\n # Put key-vars in a data frame to make it easier to visualize\n dfs[\"metadata\"] = 
pd.DataFrame(list(s.items()), columns=['Key', 'Value'])\n\n logger_dataframes.info(\"exit ts_to_df\")\n return dfs"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _plot_ts_cols(ts):\n logger_dataframes.info(\"enter get_ts_cols()\")\n d = {}\n\n # Not entirely necessary, but this will make the column headers look nicer for the data frame\n # The column header will be in format \"variableName (units)\"\n try:\n units = \" (\" + ts[\"paleoData_units\"] + \")\"\n except KeyError as e:\n units = \"\"\n logger_dataframes.warn(\"get_ts_cols: KeyError: paleoData_units not found, {}\".format(e))\n try:\n d[ts[\"paleoData_variableName\"] + units] = ts[\"paleoData_values\"]\n except KeyError as e:\n logger_dataframes.warn(\"get_ts_cols: KeyError: variableName or values not found, {}\".format(e))\n\n # Start looking for age, year, depth columns\n for k, v in ts.items():\n if re_pandas_x_num.match(k):\n try:\n units = \" (\" + ts[k + \"Units\"] + \")\"\n d[k + units] = v\n except KeyError as e:\n logger_dataframes.warn(\"get_ts_cols: KeyError: Special column units, {}, {}\".format(k, e))\n logger_dataframes.info(\"exit get_ts_cols: found {}\".format(len(d)))\n return d", "response": "Get variable + values vs year age depth columns from a TimeSeries dictionary"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a data frame for each column in the LiPD metadata dictionary.", "response": "def _get_dfs(csvs):\n \"\"\"\n LiPD Version 1.2\n Create a data frame for each table for the given key\n :param dict csvs: LiPD metadata dictionary\n :return dict: paleo data data frames\n \"\"\"\n logger_dataframes.info(\"enter get_lipd_cols\")\n # placeholders for the incoming data frames\n dfs = {\"chronData\": {}, \"paleoData\": {}}\n try:\n for filename, cols in csvs.items():\n tmp = {}\n for var, data in cols.items():\n tmp[var] = pd.Series(data[\"values\"])\n if \"chron\" in filename.lower():\n 
dfs[\"chronData\"][filename] = pd.DataFrame(tmp)\n elif \"paleo\" in filename.lower():\n dfs[\"paleoData\"][filename] = pd.DataFrame(tmp)\n except KeyError:\n logger_dataframes.warn(\"get_lipd_cols: AttributeError: expected type dict, given type {}\".format(type(csvs)))\n\n logger_dataframes.info(\"exit get_lipd_cols\")\n return dfs"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_filtered_dfs(lib, expr):\n logger_dataframes.info(\"enter get_filtered_dfs\")\n\n dfs = {}\n tt = None\n\n # Process all lipds files or one lipds file?\n specific_files = _check_expr_filename(expr)\n\n # Determine the table type wanted\n if \"chron\" in expr:\n tt = \"chron\"\n elif \"paleo\" in expr:\n tt = \"paleo\"\n\n # Get all filenames of target type.\n if tt:\n\n if specific_files:\n # The user has specified a single LiPD file to get data frames from.\n for file in specific_files:\n if file in lib:\n lo_meta = lib[file].get_metadata()\n lo_dfs = lib[file].get_dfs()\n\n # Only start a search if this lipds file has data frames available. Otherwise, pointless.\n if lo_dfs:\n # Get list of all matching filenames\n filenames = _match_dfs_expr(lo_meta, expr, tt)\n # Update our output data frames dictionary\n dfs.update(_match_filenames_w_dfs(filenames, lo_dfs))\n else:\n print(\"Unable to find LiPD file in Library: {}\".format(file))\n\n # Process all LiPD files in the library. A file has not been specified in the expression.\n else:\n # Loop once on each lipds object in the library\n for ln, lo in lib.items():\n # Get the\n lo_meta = lo.get_metadata()\n lo_dfs = lo.get_dfs()\n\n # Only start a search if this lipds file has data frames available. 
Otherwise, pointless.\n if lo_dfs:\n # Get list of all matching filenames\n filenames = _match_dfs_expr(lo_meta, expr, tt)\n # Update our output data frames dictionary\n dfs.update(_match_filenames_w_dfs(filenames, lo_dfs))\n\n logger_dataframes.info(\"exit get_filtered_dfs\")\n return dfs", "response": "Main function to get all data frames that match the given expression"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget all data frames that match the given expression", "response": "def _match_dfs_expr(lo_meta, expr, tt):\n \"\"\"\n Use the given expression to get all data frames that match the criteria (i.e. \"paleo measurement tables\")\n :param dict lo_meta: Lipd object metadata\n :param str expr: Search expression\n :param str tt: Table type (chron or paleo)\n :return list: All filenames that match the expression\n \"\"\"\n logger_dataframes.info(\"enter match_dfs_expr\")\n filenames = []\n s = \"{}Data\".format(tt)\n\n # Top table level. Going through all tables of certain type (i.e. chron or paleo)\n for k, v in lo_meta[\"{}Data\".format(tt)].items():\n\n # Inner table level. 
Get data from one specific table\n if \"measurement\" in expr:\n for k1, v1 in v[\"{}MeasurementTable\".format(tt)].items():\n try:\n f = v1[\"filename\"]\n if f.endswith(\".csv\"):\n filenames.append(f)\n except KeyError:\n # Not concerned if the key wasn't found.\n logger_dataframes.info(\"match_dfs_expr: KeyError: filename not found in: {} {}\".format(tt, \"measurement\"))\n\n elif \"ensemble\" in expr:\n for k1, v1 in v[\"{}Model\".format(tt)].items():\n try:\n f = v1[\"ensembleTable\"][\"filename\"]\n if f.endswith(\".csv\"):\n filenames.append(f)\n except KeyError:\n # Not concerned if the key wasn't found.\n logger_dataframes.info(\"match_dfs_expr: KeyError: filename not found in: {} {}\".format(tt, \"ensemble\"))\n\n elif \"model\" in expr:\n for k1, v1 in v[\"{}Model\".format(tt)].items():\n try:\n f = v1[\"{}ModelTable\".format(tt)][\"filename\"]\n if f.endswith(\".csv\"):\n filenames.append(f)\n except KeyError:\n # Not concerned if the key wasn't found.\n logger_dataframes.info(\"match_dfs_expr: KeyError: filename not found in: {} {}\".format(tt, \"model\"))\n\n elif \"dist\" in expr:\n for k1, v1 in v[\"{}Model\".format(tt)].items():\n for k2, v2 in v1[\"distribution\"].items():\n try:\n f = v2[\"filename\"]\n if f.endswith(\".csv\"):\n filenames.append(f)\n except KeyError:\n # Not concerned if the key wasn't found.\n logger_dataframes.info(\n \"match_dfs_expr: KeyError: filename not found in: {} {}\".format(tt, \"dist\"))\n\n logger_dataframes.info(\"exit match_dfs_expr\")\n return filenames"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmatches a list of filenames to their data frame counterparts. Return data frames with matching filenames.", "response": "def _match_filenames_w_dfs(filenames, lo_dfs):\n \"\"\"\n Match a list of filenames to their data frame counterparts. 
Return data frames\n :param list filenames: Filenames of data frames to retrieve\n :param dict lo_dfs: All data frames\n :return dict: Filenames and data frames (filtered)\n \"\"\"\n logger_dataframes.info(\"enter match_filenames_w_dfs\")\n dfs = {}\n\n for filename in filenames:\n try:\n if filename in lo_dfs[\"chronData\"]:\n dfs[filename] = lo_dfs[\"chronData\"][filename]\n elif filename in lo_dfs[\"paleoData\"]:\n dfs[filename] = lo_dfs[\"paleoData\"][filename]\n except KeyError:\n logger_dataframes.info(\"filter_dfs: KeyError: missing data frames keys\")\n\n logger_dataframes.info(\"exit match_filenames_w_dfs\")\n return dfs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsplit the expression and check if there s a specific filename that the user wants to process.", "response": "def _check_expr_filename(expr):\n \"\"\"\n Split the expression and look to see if there's a specific filename that the user wants to process.\n :param str expr: Search expression\n :return str: Filename or None\n \"\"\"\n expr_lst = expr.split()\n f = [x for x in expr_lst if x not in DATA_FRAMES and x.endswith(\".lpd\")]\n return f"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef lipd_read(path):\n _j = {}\n dir_original = os.getcwd()\n\n # Import metadata into object\n try:\n print(\"reading: {}\".format(print_filename(path)))\n # bigger than 2mb file? This could take a while\n if os.stat(path).st_size > 1000000:\n _size = os.stat(path).st_size\n print(\"{} :That's a big file! 
This may take a while to load...\".format(\"{} MB\".format(round(_size/1000000,2))))\n dir_tmp = create_tmp_dir()\n unzipper(path, dir_tmp)\n os.chdir(dir_tmp)\n _dir_data = find_files()\n os.chdir(_dir_data)\n _j = read_jsonld()\n _j = rm_empty_fields(_j)\n _j = check_dsn(path, _j)\n _j = update_lipd_version(_j)\n _j = idx_num_to_name(_j)\n _j = rm_empty_doi(_j)\n _j = rm_empty_fields(_j)\n _j = put_tsids(_j)\n _csvs = read_csvs()\n _j = merge_csv_metadata(_j, _csvs)\n # Why ? Because we need to align the csv filenames with the table filenames. We don't need the csv output here.\n _j, _csv = get_csv_from_metadata(_j[\"dataSetName\"], _j)\n os.chdir(dir_original)\n shutil.rmtree(dir_tmp)\n except FileNotFoundError:\n print(\"Error: lipd_read: LiPD file not found. Please make sure the filename includes the .lpd extension\")\n except Exception as e:\n logger_lipd.error(\"lipd_read: {}\".format(e))\n print(\"Error: lipd_read: unable to read LiPD: {}\".format(e))\n os.chdir(dir_original)\n logger_lipd.info(\"lipd_read: record loaded: {}\".format(path))\n return _j", "response": "Loads a LiPD file from local path. Unzips read and processes data and converts it into a dictionary."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsave current state of LiPD object data to a LiPD file.", "response": "def lipd_write(_json, path):\n \"\"\"\n Saves current state of LiPD object data. Outputs to a LiPD file.\n Steps: create tmp, create bag dir, get dsn, splice csv from json, write csv, clean json, write json, create bagit,\n zip up bag folder, place lipd in target dst, move to original dir, delete tmp\n\n :param dict _json: Metadata\n :param str path: Destination path\n :return none:\n \"\"\"\n # Json is pass by reference. 
Make a copy so we don't mess up the original data.\n _json_tmp = copy.deepcopy(_json)\n dir_original = os.getcwd()\n try:\n dir_tmp = create_tmp_dir()\n dir_bag = os.path.join(dir_tmp, \"bag\")\n os.mkdir(dir_bag)\n os.chdir(dir_bag)\n _dsn = get_dsn(_json_tmp)\n _dsn_lpd = _dsn + \".lpd\"\n _json_tmp, _csv = get_csv_from_metadata(_dsn, _json_tmp)\n write_csv_to_file(_csv)\n _json_tmp = rm_values_fields(_json_tmp)\n _json_tmp = put_tsids(_json_tmp)\n _json_tmp = idx_name_to_num(_json_tmp)\n write_json_to_file(_json_tmp)\n create_bag(dir_bag)\n rm_file_if_exists(path, _dsn_lpd)\n zipper(root_dir=dir_tmp, name=\"bag\", path_name_ext=os.path.join(path, _dsn_lpd))\n os.chdir(dir_original)\n shutil.rmtree(dir_tmp)\n except Exception as e:\n logger_lipd.error(\"lipd_write: {}\".format(e))\n print(\"Error: lipd_write: {}\".format(e))\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates bulk_size until num_records is reached or active becomes false", "response": "def _bulk_size_generator(num_records, bulk_size, active):\n \"\"\" Generate bulk_size until num_records is reached or active becomes false\n\n >>> gen = _bulk_size_generator(155, 50, [True])\n >>> list(gen)\n [50, 50, 50, 5]\n \"\"\"\n while active and num_records > 0:\n req_size = min(num_records, bulk_size)\n num_records -= req_size\n yield req_size"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef insert_fake_data(hosts=None,\n table=None,\n num_records=1e5,\n bulk_size=1000,\n concurrency=25,\n mapping_file=None):\n \"\"\"Generate random data and insert it into a table.\n\n This will read the table schema and then find suitable random data providers.\n Which provider is choosen depends on the column name and data type.\n\n Example:\n\n A column named `name` will map to the `name` provider.\n A column named `x` of type int will map to `random_int` because there\n is no `x` provider.\n\n 
Available providers are listed here:\n https://faker.readthedocs.io/en/latest/providers.html\n\n Additional providers:\n - auto_inc:\n Returns unique incrementing numbers.\n Automatically used for columns named \"id\" of type int or long\n - geo_point\n Returns [<lon>, <lat>]\n Automatically used for columns of type geo_point\n\n Args:\n hosts: <host>:[<port>] of the Crate node\n table: The table name into which the data should be inserted.\n Either fully qualified: `<schema>.<table>` or just `<table>
`\n num_records: Number of records to insert.\n Usually a number but expressions like `1e4` work as well.\n bulk_size: The bulk size of the insert statements.\n concurrency: How many operations to run concurrently.\n mapping_file: A JSON file that defines a mapping from column name to\n fake-factory provider.\n The format is as follows:\n {\n \"column_name\": [\"provider_with_args\", [\"arg1\", \"arg\"]],\n \"x\": [\"provider_with_args\", [\"arg1\"]],\n \"y\": \"provider_without_args\"\n }\n \"\"\"\n with clients.client(hosts, concurrency=1) as client:\n schema, table_name = parse_table(table)\n columns = retrieve_columns(client, schema, table_name)\n if not columns:\n sys.exit('Could not find columns for table \"{}\"'.format(table))\n print('Found schema: ')\n print(json.dumps(columns, sort_keys=True, indent=4))\n mapping = None\n if mapping_file:\n mapping = json.load(mapping_file)\n\n bulk_size = min(num_records, bulk_size)\n num_inserts = int(math.ceil(num_records / bulk_size))\n\n gen_row = create_row_generator(columns, mapping)\n\n stmt = to_insert('\"{schema}\".\"{table_name}\"'.format(**locals()), columns)[0]\n print('Using insert statement: ')\n print(stmt)\n\n print('Will make {} requests with a bulk size of {}'.format(\n num_inserts, bulk_size))\n\n print('Generating fake data and executing inserts')\n q = asyncio.Queue(maxsize=concurrency)\n with clients.client(hosts, concurrency=concurrency) as client:\n active = [True]\n\n def stop():\n asyncio.ensure_future(q.put(None))\n active.clear()\n loop.remove_signal_handler(signal.SIGINT)\n if sys.platform != 'win32':\n loop.add_signal_handler(signal.SIGINT, stop)\n bulk_seq = _bulk_size_generator(num_records, bulk_size, active)\n with ThreadPoolExecutor() as e:\n tasks = asyncio.gather(\n _gen_data_and_insert(q, e, client, stmt, gen_row, bulk_seq),\n consume(q, total=num_inserts)\n )\n loop.run_until_complete(tasks)", "response": "Generate random data and insert it into a table."} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate the documentation for the following Python 3 function\ndef main(self):\n logger_doi_resolver.info(\"enter doi_resolver\")\n for idx, pub in enumerate(self.root_dict['pub']):\n # Retrieve DOI id key-value from the root_dict\n doi_string, doi_found = self.find_doi(pub)\n\n if doi_found:\n logger_doi_resolver.info(\"doi found: {}\".format(doi_string))\n # Empty list for no match, or list of 1+ matching DOI id strings\n doi_list = clean_doi(doi_string)\n if not doi_list:\n self.illegal_doi(doi_string)\n else:\n for doi_id in doi_list:\n self.get_data(doi_id, idx)\n else:\n logger_doi_resolver.warn(\"doi not found: publication index: {}\".format(self.name, idx))\n self.root_dict['pub'][idx]['pubDataUrl'] = 'Manually Entered'\n logger_doi_resolver.info(\"exit doi_resolver\")\n return rm_empty_fields(self.root_dict)", "response": "This function gets file and creates outputs and runs all operations."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef compare_replace(pub_dict, fetch_dict):\n blank = [\" \", \"\", None]\n for k, v in fetch_dict.items():\n try:\n if fetch_dict[k] != blank:\n pub_dict[k] = fetch_dict[k]\n except KeyError:\n pass\n return pub_dict", "response": "Compare the original pub dictionary with the fetched pub dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef noaa_citation(self, doi_string):\n # Append location 1\n if 'link' in self.root_dict['pub'][0]:\n self.root_dict['pub'][0]['link'].append({\"url\": doi_string})\n else:\n self.root_dict['pub'][0]['link'] = [{\"url\": doi_string}]\n\n # Append location 2\n self.root_dict['dataURL'] = doi_string\n\n return", "response": "Special instructions for moving NOAA data to the correct fields\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef illegal_doi(self, 
doi_string):\n logger_doi_resolver.info(\"enter illegal_doi\")\n # Ignores empty or irrelevant strings (blank, spaces, na, nan, ', others)\n if len(doi_string) > 5:\n\n # NOAA string\n if 'noaa' in doi_string.lower():\n self.noaa_citation(doi_string)\n\n # Paragraph citation / Manual citation\n elif doi_string.count(' ') > 3:\n self.root_dict['pub'][0]['citation'] = doi_string\n\n # Strange Links or Other, send to quarantine\n else:\n logger_doi_resolver.warn(\"illegal_doi: bad doi string: {}\".format(doi_string))\n logger_doi_resolver.info(\"exit illegal_doi\")\n return", "response": "This method is called when a DOI string is not valid."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef compile_fetch(self, raw, doi_id):\n fetch_dict = OrderedDict()\n order = {'author': 'author', 'type': 'type', 'identifier': '', 'title': 'title', 'journal': 'container-title',\n 'pubYear': '', 'volume': 'volume', 'publisher': 'publisher', 'page':'page', 'issue': 'issue'}\n\n for k, v in order.items():\n try:\n if k == 'identifier':\n fetch_dict[k] = [{\"type\": \"doi\", \"id\": doi_id, \"url\": \"http://dx.doi.org/\" + doi_id}]\n elif k == 'author':\n fetch_dict[k] = self.compile_authors(raw[v])\n elif k == 'pubYear':\n fetch_dict[k] = self.compile_date(raw['issued']['date-parts'])\n else:\n fetch_dict[k] = raw[v]\n except KeyError as e:\n # If we try to add a key that doesn't exist in the raw dict, then just keep going.\n logger_doi_resolver.warn(\"compile_fetch: KeyError: key not in raw: {}, {}\".format(v, e))\n return fetch_dict", "response": "Loop over Raw and add selected items to Fetch with proper formatting"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nresolve DOI and compile all attributes into one dictionary", "response": "def get_data(self, doi_id, idx):\n \"\"\"\n Resolve DOI and compile all attributes into one dictionary\n :param str doi_id:\n :param int idx: Publication index\n 
:return dict: Updated publication dictionary\n \"\"\"\n\n tmp_dict = self.root_dict['pub'][0].copy()\n try:\n # Send request to grab metadata at URL\n url = \"http://dx.doi.org/\" + doi_id\n headers = {\"accept\": \"application/rdf+xml;q=0.5, application/citeproc+json;q=1.0\"}\n r = requests.get(url, headers=headers)\n\n # DOI 404. Data not retrieved. Log and return original pub\n if r.status_code == 404:\n logger_doi_resolver.warn(\"doi.org STATUS: 404, {}\".format(doi_id))\n\n # Ignore other status codes. Run when status is 200 (good response)\n elif r.status_code == 200:\n logger_doi_resolver.info(\"doi.org STATUS: 200\")\n # Load data from http response\n raw = json.loads(r.text)\n\n # Create a new pub dictionary with metadata received\n fetch_dict = self.compile_fetch(raw, doi_id)\n\n # Compare the two pubs. Overwrite old data with new data where applicable\n tmp_dict = self.compare_replace(tmp_dict, fetch_dict)\n tmp_dict['pubDataUrl'] = 'doi.org'\n self.root_dict['pub'][idx] = tmp_dict\n\n except urllib.error.URLError as e:\n logger_doi_resolver.warn(\"get_data: URLError: malformed doi: {}, {}\".format(doi_id, e))\n except ValueError as e:\n logger_doi_resolver.warn(\"get_data: ValueError: cannot resolve dois from this publisher: {}, {}\".format(doi_id, e))\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nattempts to cast string to float.", "response": "def cast_values_csvs(d, idx, x):\n \"\"\"\n Attempt to cast string to float. 
If error, keep as a string.\n\n :param dict d: Data\n :param int idx: Index number\n :param str x: Data\n :return any:\n \"\"\"\n try:\n d[idx].append(float(x))\n except ValueError:\n d[idx].append(x)\n # logger_misc.warn(\"cast_values_csv: ValueError\")\n # logger_misc.warn(\"ValueError: col: {}, {}\".format(x, e))\n except KeyError as e:\n logger_misc.warn(\"cast_values_csv: KeyError: col: {}, {}\".format(x, e))\n\n return d"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef cast_float(x):\n try:\n x = float(x)\n except ValueError:\n try:\n x = x.strip()\n except AttributeError as e:\n logger_misc.warn(\"parse_str: AttributeError: String not number or word, {}, {}\".format(x, e))\n return x", "response": "Attempt to cleanup string or convert to number value."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncast unknown type into integer AttributeNames", "response": "def cast_int(x):\n \"\"\"\n Cast unknown type into integer\n\n :param any x:\n :return int:\n \"\"\"\n try:\n x = int(x)\n except ValueError:\n try:\n x = x.strip()\n except AttributeError as e:\n logger_misc.warn(\"parse_str: AttributeError: String not number or word, {}, {}\".format(x, e))\n return x"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef clean_doi(doi_string):\n regex = re.compile(r'\\b(10[.][0-9]{3,}(?:[.][0-9]+)*/(?:(?![\"&\\'<>,])\\S)+)\\b')\n try:\n # Returns a list of matching strings\n m = re.findall(regex, doi_string)\n except TypeError as e:\n # If doi_string is None type, return empty list\n logger_misc.warn(\"TypeError cleaning DOI: {}, {}\".format(doi_string, e))\n m = []\n return m", "response": "This function takes a string containing a DOI string and returns a list of all the DOI ids."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef decimal_precision(row):\n # _row = []\n 
try:\n # Convert tuple to list for processing\n row = list(row)\n for idx, x in enumerate(row):\n x = str(x)\n # Is this a scientific notated float? Tear it apart with regex, round, and piece together again\n m = re.match(re_sci_notation, x)\n if m:\n _x2 = round(float(m.group(2)), 3)\n x = m.group(1) + str(_x2)[1:] + m.group(3)\n # A normal float? round to 3 decimals as usual\n else:\n try:\n x = round(float(x), 3)\n except (ValueError, TypeError):\n x = x\n\n row[idx] = x\n # Convert list back to tuple for csv writer\n row = tuple(row)\n except Exception as e:\n print(\"Error: Unable to fix the precision of values. File size may be larger than normal, {}\".format(e))\n return row", "response": "Change the precision of values before writing to CSV."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfix the coordinate decimal degrees calculated by an excel formula.", "response": "def fix_coordinate_decimal(d):\n \"\"\"\n Coordinate decimal degrees calculated by an excel formula are often too long as a repeating decimal.\n Round them down to 5 decimals\n\n :param dict d: Metadata\n :return dict d: Metadata\n \"\"\"\n try:\n for idx, n in enumerate(d[\"geo\"][\"geometry\"][\"coordinates\"]):\n d[\"geo\"][\"geometry\"][\"coordinates\"][idx] = round(n, 5)\n except Exception as e:\n logger_misc.error(\"fix_coordinate_decimal: {}\".format(e))\n return d"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef generate_timestamp(fmt=None):\n if fmt:\n time = dt.datetime.now().strftime(fmt)\n else:\n time = dt.date.today()\n return str(time)", "response": "Generate a timestamp to mark when this file was last modified."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef generate_tsid(size=8):\n chars = string.ascii_uppercase + string.digits\n _gen = \"\".join(random.choice(chars) for _ in range(size))\n return \"PYT\" + str(_gen)", "response": "Generate a TSid 
string. Use the PYT prefix for traceability and 8 trailing generated characters."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_appended_name(name, columns):\n loop = 0\n while name in columns:\n loop += 1\n if loop > 10:\n logger_misc.warn(\"get_appended_name: Too many loops: Tried to get appended name but something looks wrong\")\n break\n tmp = name + \"-\" + str(loop)\n if tmp not in columns:\n return tmp\n return name + \"-99\"", "response": "Returns the name of the variable with the number appended to the given column."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_authors_as_str(x):\n _authors = \"\"\n # if it's a string already, we're done\n if isinstance(x, str):\n return x\n\n # elif it's a list, keep going\n elif isinstance(x, list):\n # item in list is a str\n if isinstance(x[0], str):\n # loop and concat until the last item\n for name in x[:-1]:\n # all inner items get a semi-colon at the end\n _authors += str(name) + \"; \"\n # last item does not get a semi-colon at the end\n _authors += str(x[-1])\n\n # item in list is a dictionary\n elif isinstance(x[0], dict):\n # dictionary structure SHOULD have authors listed until the \"name\" key.\n try:\n # loop and concat until the last item\n for entry in x[:-1]:\n # all inner items get a semi-colon at the end\n _authors += str(entry[\"name\"]) + \"; \"\n # last item does not get a semi-colon at the end\n _authors += str(x[-1][\"name\"])\n except KeyError:\n logger_misc.warn(\"get_authors_as_str: KeyError: Authors incorrect data structure\")\n\n else:\n logger_misc.debug(\"get_authors_as_str: TypeError: author/investigators isn't str or list: {}\".format(type(x)))\n\n return _authors", "response": "Take author or investigator data and convert it to a concatenated string of names."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
get_dsn(d):\n\n try:\n return d[\"dataSetName\"]\n except Exception as e:\n logger_misc.warn(\"get_dsn: Exception: No datasetname found, unable to continue: {}\".format(e))\n exit(1)", "response": "Get the dataset name from a record\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndetermine if this is a 1 or 2 column ensemble. Then determine how many columns and rows it has.", "response": "def get_ensemble_counts(d):\n \"\"\"\n Determine if this is a 1 or 2 column ensemble. Then determine how many columns and rows it has.\n\n :param dict d: Metadata (table)\n :return dict _rows_cols: Row and column counts\n \"\"\"\n _rows_cols = {\"rows\": 0, \"cols\": 0}\n try:\n\n if len(d) == 1:\n for var, data in d.items():\n # increment columns by one\n _rows_cols[\"cols\"] += len(data[\"values\"])\n # get row count by getting len of column (since it's only one list\n _rows_cols[\"rows\"] = len(data[\"values\"][0])\n break\n\n elif len(d) == 2:\n for var, data in d.items():\n # multiple columns in one. list of lists\n if isinstance(data[\"number\"], list):\n # add total amount of columns to the running total\n _rows_cols[\"cols\"] += len(data[\"values\"])\n # single column. 
one list\n else:\n # increment columns by one\n _rows_cols[\"cols\"] += 1\n # get row count by getting len of column (since it's only one list\n _rows_cols[\"rows\"] = len(data[\"values\"])\n\n except Exception as e:\n logger_misc.warn(\"get_ensemble_counts: {}\".format(e))\n\n return _rows_cols"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_missing_value_key(d):\n _mv = \"nan\"\n\n # Attempt to find a table-level missing value key\n try:\n # check for missing value key at the table root\n _mv = d[\"missingValue\"]\n except KeyError as e:\n logger_misc.info(\"get_missing_value: No missing value key found: {}\".format(e))\n except AttributeError as e:\n logger_misc.warn(\"get_missing_value: Column is wrong data type: {}\".format(e))\n\n # No table-level missing value found. Attempt to find a column-level missing value key\n if not _mv:\n try:\n # loop for each column of data, searching for a missing value key\n for k, v in d[\"columns\"].items():\n # found a column with a missing value key. Store it and exit the loop.\n _mv = v[\"missingValue\"]\n break\n except KeyError:\n # There are no columns in this table. We've got bigger problems!\n pass\n\n # No table-level or column-level missing value. Out of places to look. Ask the user to enter the missing value\n # used in this data\n # if not _mv:\n # print(\"No 'missingValue' key provided. Please type the missingValue used in this file: {}\\n\".format(filename))\n # _mv = input(\"missingValue: \")\n return _mv", "response": "Get the Missing Value entry from a table of data."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_variable_name_col(d):\n var = \"\"\n try:\n var = d[\"variableName\"]\n except KeyError:\n try:\n var = d[\"name\"]\n except KeyError:\n num = \"unknown\"\n if \"number\" in d:\n num = d[\"number\"]\n print(\"Error: column number <{}> is missing a variableName. 
Please fix.\".format(num))\n logger_misc.info(\"get_variable_name_col: KeyError: missing key\")\n return var", "response": "Get the variable name from a table or column."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ntrying to get a table name from a data table", "response": "def get_table_key(key, d, fallback=\"\"):\n \"\"\"\n Try to get a table name from a data table\n\n :param str key: Key to try first\n :param dict d: Data table\n :param str fallback: (optional) If we don't find a table name, use this as a generic name fallback.\n :return str var: Data table name\n \"\"\"\n try:\n var = d[key]\n return var\n except KeyError:\n logger_misc.info(\"get_variable_name_table: KeyError: missing {}, use name: {}\".format(key, fallback))\n return fallback"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if a table of data is an ensemble table.", "response": "def is_ensemble(d):\n \"\"\"\n Check if a table of data is an ensemble table. Is the first values index a list? ensemble. Int/float? 
not ensemble.\n\n :param dict d: Table data\n :return bool: Ensemble or not ensemble\n \"\"\"\n for var, data in d.items():\n try:\n if isinstance(data[\"number\"], list):\n return True\n except Exception as e:\n logger_misc.debug(\"misc: is_ensemble: {}\".format(e))\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncheck that the file extension matches the target extension given.", "response": "def load_fn_matches_ext(file_path, file_type):\n \"\"\"\n Check that the file extension matches the target extension given.\n\n :param str file_path: Path to be checked\n :param str file_type: Target extension\n :return bool correct_ext: Extension match or does not match\n \"\"\"\n correct_ext = False\n curr_ext = os.path.splitext(file_path)[1]\n exts = [curr_ext, file_type]\n try:\n # special case: if file type is excel, both extensions are valid.\n if \".xlsx\" in exts and \".xls\" in exts:\n correct_ext = True\n elif curr_ext == file_type:\n correct_ext = True\n else:\n print(\"Use '{}' to load this file: {}\".format(FILE_TYPE_MAP[curr_ext][\"load_fn\"],\n os.path.basename(file_path)))\n except Exception as e:\n logger_misc.debug(\"load_fn_matches_ext: {}\".format(e))\n\n return correct_ext"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef match_operators(inp, relate, cut):\n logger_misc.info(\"enter match_operators\")\n ops = {'>': operator.gt,\n '<': operator.lt,\n '>=': operator.ge,\n '<=': operator.le,\n '=': operator.eq\n }\n try:\n truth = ops[relate](inp, cut)\n except KeyError as e:\n truth = False\n logger_misc.warn(\"get_truth: KeyError: Invalid operator input: {}, {}\".format(relate, e))\n logger_misc.info(\"exit match_operators\")\n return truth", "response": "Compare two items. 
Compare a string operator to an operator function\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks that all the array lengths match.", "response": "def match_arr_lengths(l):\n \"\"\"\n Check that all the array lengths match so that a DataFrame can be created successfully.\n\n :param list l: Nested arrays\n :return bool: Valid or invalid\n \"\"\"\n try:\n # length of first list. use as basis to check other list lengths against.\n inner_len = len(l[0])\n # check each nested list\n for i in l:\n # if the length doesn't match the first list, then don't proceed.\n if len(i) != inner_len:\n return False\n except IndexError:\n # couldn't get index 0. Wrong data type given or not nested lists\n print(\"Error: Array data is not formatted correctly.\")\n return False\n except TypeError:\n # Non-iterable data type given.\n print(\"Error: Array data missing\")\n return False\n # all array lengths are equal. made it through the whole list successfully\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmoves all files from one directory to another directory", "response": "def mv_files(src, dst):\n \"\"\"\n Move all files from one directory to another\n\n :param str src: Source directory\n :param str dst: Destination directory\n :return none:\n \"\"\"\n # list the files in the src directory\n files = os.listdir(src)\n # loop for each file found\n for file in files:\n # move the file from the src to the dst\n shutil.move(os.path.join(src, file), os.path.join(dst, file))\n return"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nnormalize a string into a byte string form.", "response": "def normalize_name(s):\n \"\"\"\n Remove foreign accents and characters to normalize the string. 
Prevents encoding errors.\n\n :param str s: String\n :return str s: String\n \"\"\"\n # Normalize the string into a byte string form\n s = unicodedata.normalize('NFKD', s).encode('ascii', 'ignore')\n # Remove the byte string and quotes from the string\n s = str(s)[2:-1]\n return s"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef path_type(path, target):\n if os.path.isfile(path) and target == \"file\":\n return True\n elif os.path.isdir(path) and target == \"directory\":\n return True\n else:\n print(\"Error: Path given is not a {}: {}\".format(target, path))\n return False", "response": "Determines if given path is file directory or other. Compare with target to see if it s the type we wanted."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef print_filename(path):\n\n if os.path.basename(path):\n return os.path.basename(path)\n return path", "response": "Print out lipd filename that is being read or written."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef prompt_protocol():\n stop = 3\n ans = \"\"\n while True and stop > 0:\n ans = input(\"Save as (d)ictionary or (o)bject?\\n\"\n \"* Note:\\n\"\n \"Dictionaries are more basic, and are compatible with Python v2.7+.\\n\"\n \"Objects are more complex, and are only compatible with v3.4+ \")\n if ans not in (\"d\", \"o\"):\n print(\"Invalid response: Please choose 'd' or 'o'\")\n else:\n break\n # if a valid answer isn't captured, default to dictionary (safer, broader)\n if ans == \"\":\n ans = \"d\"\n return ans", "response": "Prompt user if they would like to save pickle file as a dictionary or an object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef rm_empty_fields(x):\n # No logger here because the function is recursive.\n # Int types don't matter. 
Return as-is.\n if not isinstance(x, int) and not isinstance(x, float):\n if isinstance(x, str) or x is None:\n try:\n # Remove new line characters and carriage returns\n x = x.rstrip()\n except AttributeError:\n # None types don't matter. Keep going.\n pass\n if x in EMPTY:\n # Substitute empty entries with \"\"\n x = ''\n elif isinstance(x, list):\n # Recurse once for each item in the list\n for i, v in enumerate(x):\n x[i] = rm_empty_fields(x[i])\n # After substitutions, remove empty entries.\n for i in x:\n # Many 0 values are important (coordinates, m/m/m/m). Don't remove them.\n if not i and i not in [0, 0.0]:\n x.remove(i)\n elif isinstance(x, dict):\n # First, go through and substitute \"\" (empty string) entry for any values in EMPTY\n for k, v in x.items():\n x[k] = rm_empty_fields(v)\n # After substitutions, go through and delete the key-value pair.\n # This has to be done after we come back up from recursion because we cannot pass keys down.\n for key in list(x.keys()):\n if not x[key] and x[key] not in [0, 0.0]:\n del x[key]\n return x", "response": "Recursive function to remove all empty fields in the Unknown\n ."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef rm_empty_doi(d):\n logger_misc.info(\"enter remove_empty_doi\")\n try:\n # Check each publication dictionary\n for pub in d['pub']:\n # If no identifier, then we can quit here. 
If identifier, then keep going.\n if 'identifier' in pub:\n if 'id' in pub['identifier'][0]:\n # If there's a DOI id, but it's EMPTY\n if pub['identifier'][0]['id'] in EMPTY:\n del pub['identifier']\n else:\n # If there's an identifier section, with no DOI id\n del pub['identifier']\n except KeyError as e:\n # What else could go wrong?\n logger_misc.warn(\"remove_empty_doi: KeyError: publication key not found, {}\".format(e))\n logger_misc.info(\"exit remove_empty_doi\")\n return d", "response": "Remove empty DOI from metadata."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving all files in the given directory with the given extension", "response": "def rm_files(path, extension):\n \"\"\"\n Remove all files in the given directory with the given extension\n\n :param str path: Directory\n :param str extension: File type to remove\n :return none:\n \"\"\"\n files = list_files(extension, path)\n for file in files:\n if file.endswith(extension):\n os.remove(os.path.join(path, file))\n return"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove the missingValue key from each column of the metadata", "response": "def rm_missing_values_table(d):\n \"\"\"\n Loop for each table column and remove the missingValue key & data\n\n :param dict d: Metadata (table)\n :return dict d: Metadata (table)\n \"\"\"\n try:\n for k, v in d[\"columns\"].items():\n d[\"columns\"][k] = rm_keys_from_dict(v, [\"missingValue\"])\n except Exception:\n # If we get a KeyError or some other error, it's not a big deal. 
Keep going.\n pass\n return d"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngiving a dictionary and a list of keys remove any data in the dictionary with the given keys.", "response": "def rm_keys_from_dict(d, keys):\n \"\"\"\n Given a dictionary and a key list, remove any data in the dictionary with the given keys.\n\n :param dict d: Metadata\n :param list keys: Keys to be removed\n :return dict d: Metadata\n \"\"\"\n # Loop for each key given\n for key in keys:\n # Is the key in the dictionary?\n if key in d:\n try:\n d.pop(key, None)\n except KeyError:\n # Not concerned with an error. Keep going.\n pass\n return d"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _replace_missing_values_table(values, mv):\n\n for idx, column in enumerate(values):\n values[idx] = _replace_missing_values_column(column, mv)\n\n return values", "response": "Replace missing values in the given table column values with values that are not in use by the current user."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _replace_missing_values_column(values, mv):\n for idx, v in enumerate(values):\n try:\n if v in EMPTY or v == mv:\n values[idx] = \"nan\"\n elif math.isnan(float(v)):\n values[idx] = \"nan\"\n else:\n values[idx] = v\n except (TypeError, ValueError):\n values[idx] = v\n\n return values", "response": "Replace missing values in the values list where applicable\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef split_path_and_file(s):\n _path = s\n _filename = \"\"\n try:\n x = os.path.split(s)\n _path = x[0]\n _filename = x[1]\n except Exception:\n print(\"Error: unable to split path\")\n\n return _path, _filename", "response": "Given a full path to a file split and return a path and filename"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 
that\nunwraps nested lists to be one flat list of lists.", "response": "def unwrap_arrays(l):\n \"\"\"\n Unwrap nested lists to be one \"flat\" list of lists. Mainly for prepping ensemble data for DataFrame() creation\n\n :param list l: Nested lists\n :return list l2: Flattened lists\n \"\"\"\n # keep processing until all nesting is removed\n process = True\n # fail safe: cap the loops at 20, so we don't run into an error and loop infinitely.\n # if it takes more than 20 loops then there is a problem with the data given.\n loops = 25\n while process and loops > 0:\n try:\n # new \"flat\" list\n l2 = []\n for k in l:\n # all items in this list are numeric, so this list is done. append to main list\n if all(isinstance(i, float) or isinstance(i, int) for i in k):\n l2.append(k)\n # this list has more nested lists inside. append each individual nested list to the main one.\n elif all(isinstance(i, list) or isinstance(i, np.ndarray) for i in k):\n for i in k:\n l2.append(i)\n except Exception:\n print(\"something went wrong during process\")\n # verify the main list\n try:\n # if every list has a numeric at index 0, then there is no more nesting and we can stop processing\n if all(isinstance(i[0], (int, str, float)) for i in l2):\n process = False\n else:\n l = l2\n except IndexError:\n # there's no index 0, so there must be mixed data types or empty data somewhere.\n print(\"something went wrong during verify\")\n loops -= 1\n return l2"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef extract(d, whichtables, mode, time):\n logger_ts.info(\"enter extract_main\")\n _root = {}\n _ts = {}\n # _switch = {\"paleoData\": \"chronData\", \"chronData\": \"paleoData\"}\n _pc = \"paleoData\"\n if mode == \"chron\":\n _pc = \"chronData\"\n _root[\"mode\"] = _pc\n _root[\"time_id\"] = time\n try:\n # Build the root level data.\n # This will serve as the template for which column data will be added onto later.\n 
for k, v in d.items():\n if k == \"funding\":\n _root = _extract_fund(v, _root)\n elif k == \"geo\":\n _root = _extract_geo(v, _root)\n elif k == 'pub':\n _root = _extract_pub(v, _root)\n # elif k in [\"chronData\", \"paleoData\"]:\n # # Store chronData and paleoData as-is. Need it to collapse without data loss.\n # _root[k] = copy.deepcopy(v)\n else:\n if k not in [\"chronData\", \"paleoData\"]:\n _root[k] = v\n # Create tso dictionaries for each individual column (build on root data)\n _ts = _extract_pc(d, _root, _pc, whichtables)\n except Exception as e:\n logger_ts.error(\"extract: Exception: {}\".format(e))\n print(\"extract: Exception: {}\".format(e))\n\n logger_ts.info(\"exit extract_main\")\n return _ts", "response": "This function extracts the LiPD data from one LiPD file and returns a list of TSOs objects."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _extract_fund(l, _root):\n logger_ts.info(\"enter _extract_funding\")\n for idx, i in enumerate(l):\n for k, v in i.items():\n _root['funding' + str(idx + 1) + '_' + k] = v\n return _root", "response": "Extracts the Funding entries from a list of Funding entries into a flat dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nextracts geo data from input geo dictionary d", "response": "def _extract_geo(d, _root):\n \"\"\"\n Extract geo data from input\n :param dict d: Geo\n :return dict _root: Root data\n \"\"\"\n logger_ts.info(\"enter ts_extract_geo\")\n # May not need these if the key names are corrected in the future.\n # COORDINATE ORDER: [LON, LAT, ELEV]\n x = ['geo_meanLon', 'geo_meanLat', 'geo_meanElev']\n # Iterate through geo dictionary\n for k, v in d.items():\n # Case 1: Coordinates special naming\n if k == 'coordinates':\n for idx, p in enumerate(v):\n try:\n # Check that our value is not in EMPTY.\n if isinstance(p, str):\n if p.lower() in EMPTY:\n # If elevation is a 
string or 0, don't record it\n if idx != 2:\n # If long or lat is empty, set it as 0 instead\n _root[x[idx]] = 0\n else:\n # Set the value as a float into its entry.\n _root[x[idx]] = float(p)\n # Value is a normal number, or string representation of a number\n else:\n # Set the value as a float into its entry.\n _root[x[idx]] = float(p)\n except IndexError as e:\n logger_ts.warn(\"_extract_geo: IndexError: idx: {}, val: {}, {}\".format(idx, p, e))\n # Case 2: Any value that is a string can be added as-is\n elif isinstance(v, str):\n if k == 'meanElev':\n try:\n # Some data sets have meanElev listed under properties for some reason.\n _root['geo_' + k] = float(v)\n except ValueError as e:\n # If the value is a string, then we don't want it\n logger_ts.warn(\"_extract_geo: ValueError: meanElev is a string: {}, {}\".format(v, e))\n else:\n _root['geo_' + k] = v\n # Case 3: Nested dictionary. Recursion\n elif isinstance(v, dict):\n _root = _extract_geo(v, _root)\n return _root"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nextracting publication data from one or more publication entries.", "response": "def _extract_pub(l, _root):\n \"\"\"\n Extract publication data from one or more publication entries.\n :param list l: Publication\n :return dict _root: Root data\n \"\"\"\n logger_ts.info(\"enter _extract_pub\")\n # For each publication entry\n for idx, pub in enumerate(l):\n logger_ts.info(\"processing publication #: {}\".format(idx))\n # Get author data first, since that's the most ambiguously structured data.\n _root = _extract_authors(pub, idx, _root)\n # Go through data of this publication\n for k, v in pub.items():\n # Case 1: DOI ID. 
Don't need the rest of 'identifier' dict\n if k == 'identifier':\n try:\n _root['pub' + str(idx + 1) + '_DOI'] = v[0]['id']\n except KeyError as e:\n logger_ts.warn(\"_extract_pub: KeyError: no doi id: {}, {}\".format(v, e))\n # Case 2: All other string entries\n else:\n if k != 'authors' and k != 'author':\n _root['pub' + str(idx + 1) + '_' + k] = v\n return _root"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nextract the authors from a publication author structure.", "response": "def _extract_authors(pub, idx, _root):\n \"\"\"\n Create a concatenated string of author names. Separate names with semi-colons.\n :param any pub: Publication author structure is ambiguous\n :param int idx: Index number of Pub\n \"\"\"\n logger_ts.info(\"enter extract_authors\")\n try:\n # DOI Author data. We'd prefer to have this first.\n names = pub['author']\n except KeyError as e:\n try:\n # Manually entered author data. This is second best.\n names = pub['authors']\n except KeyError as e:\n # Couldn't find any author data. Skip it altogether.\n names = False\n logger_ts.info(\"extract_authors: KeyError: author data not provided, {}\".format(e))\n\n # If there is author data, find out what type it is\n if names:\n # Build author names onto empty string\n auth = ''\n # Is it a list of dicts or a list of strings? 
Could be either\n # Authors: Stored as a list of dictionaries or list of strings\n if isinstance(names, list):\n for name in names:\n if isinstance(name, str):\n auth += name + ';'\n elif isinstance(name, dict):\n for k, v in name.items():\n auth += v + ';'\n elif isinstance(names, str):\n auth = names\n # Enter finished author string into target\n _root['pub' + str(idx + 1) + '_author'] = auth[:-1]\n return _root"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _extract_pc(d, root, pc, whichtables):\n logger_ts.info(\"enter extract_pc\")\n _ts = []\n try:\n # For each table in pc\n for k, v in d[pc].items():\n if whichtables == \"all\" or whichtables == \"meas\":\n for _table_name1, _table_data1 in v[\"measurementTable\"].items():\n _ts = _extract_table(_table_data1, copy.deepcopy(root), pc, _ts, \"meas\")\n if whichtables != \"meas\":\n if \"model\" in v:\n for _table_name1, _table_data1 in v[\"model\"].items():\n # get the method info for this model. 
This will be paired to all summ and ens table data\n _method = _extract_method(_table_data1[\"method\"])\n if whichtables == \"all\" or whichtables == \"summ\":\n if \"summaryTable\" in _table_data1:\n for _table_name2, _table_data2 in _table_data1[\"summaryTable\"].items():\n # take a copy of this tso root\n _tso = copy.deepcopy(root)\n # add in the method details\n _tso.update(_method)\n # add in the table details\n _ts = _extract_table(_table_data2, _tso, pc, _ts, \"summ\")\n if whichtables == \"all\" or whichtables == \"ens\":\n if \"ensembleTable\" in _table_data1:\n for _table_name2, _table_data2 in _table_data1[\"ensembleTable\"].items():\n _tso = copy.deepcopy(root)\n _tso.update(_method)\n _ts = _extract_table(_table_data2, _tso, pc, _ts, \"ens\")\n\n except Exception as e:\n logger_ts.warn(\"extract_pc: Exception: {}\".format(e))\n return _ts", "response": "Extract all data from a PaleoData dictionary."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nextract a timeseries - formatted version of model method data", "response": "def _extract_method(method):\n \"\"\"\n Make a timeseries-formatted version of model method data\n\n :param dict method: Method data\n :return dict _method: Method data, formatted\n \"\"\"\n _method = {}\n for k,v in method.items():\n _method[\"method_\" + k] = v\n return _method"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nextracts year age and depth column from table data.", "response": "def _extract_special(current, table_data):\n \"\"\"\n Extract year, age, and depth column from table data\n :param dict table_data: Data at the table level\n :param dict current: Current data\n :return dict current:\n \"\"\"\n logger_ts.info(\"enter extract_special\")\n try:\n # Add age, year, and depth columns to ts_root where possible\n for k, v in table_data['columns'].items():\n s = \"\"\n\n # special case for year bp, or any variation of it. 
Translate key to \"age\"\"\n if \"bp\" in k.lower():\n s = \"age\"\n\n # all other normal cases. clean key and set key.\n elif any(x in k.lower() for x in ('age', 'depth', 'year', \"yr\", \"distance_from_top\", \"distance\")):\n # Some keys have units hanging on them (i.e. 'year_ad', 'depth_cm'). We don't want units on the keys\n if re_pandas_x_und.match(k):\n s = k.split('_')[0]\n elif \"distance\" in k:\n s = \"depth\"\n else:\n s = k\n\n # create the entry in ts_root.\n if s:\n try:\n current[s] = v['values']\n except KeyError as e:\n # Values key was not found.\n logger_ts.warn(\"extract_special: KeyError: 'values' not found, {}\".format(e))\n try:\n current[s + 'Units'] = v['units']\n except KeyError as e:\n # Values key was not found.\n logger_ts.warn(\"extract_special: KeyError: 'units' not found, {}\".format(e))\n\n except Exception as e:\n logger_ts.error(\"extract_special: {}\".format(e))\n\n return current"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nextracting data from the root level of a paleoData table.", "response": "def _extract_table_root(d, current, pc):\n \"\"\"\n Extract data from the root level of a paleoData table.\n :param dict d: paleoData table\n :param dict current: Current root data\n :param str pc: paleoData or chronData\n :return dict current: Current root data\n \"\"\"\n logger_ts.info(\"enter extract_table_root\")\n try:\n for k, v in d.items():\n if isinstance(v, str):\n current[pc + '_' + k] = v\n except Exception as e:\n logger_ts.error(\"extract_table_root: {}\".format(e))\n return current"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nextracts the modelNumber and summaryNumber fields from the table data.", "response": "def _extract_table_model(table_data, current, tt):\n \"\"\"\n Add in modelNumber and summaryNumber fields if this is a summary table\n\n :param dict table_data: Table data\n :param dict current: LiPD root data\n :param str tt: Table type 
\"summ\", \"ens\", \"meas\"\n :return dict current: Current root data\n \"\"\"\n try:\n if tt in [\"summ\", \"ens\"]:\n m = re.match(re_sheet, table_data[\"tableName\"])\n if m:\n _pc_num= m.group(1) + \"Number\"\n current[_pc_num] = m.group(2)\n current[\"modelNumber\"] = m.group(4)\n current[\"tableNumber\"] = m.group(6)\n else:\n logger_ts.error(\"extract_table_summary: Unable to parse paleo/model/table numbers\")\n except Exception as e:\n logger_ts.error(\"extract_table_summary: {}\".format(e))\n return current"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a time series entry for each column in the table.", "response": "def _extract_table(table_data, current, pc, ts, tt):\n \"\"\"\n Use the given table data to create a time series entry for each column in the table.\n\n :param dict table_data: Table data\n :param dict current: LiPD root data\n :param str pc: paleoData or chronData\n :param list ts: Time series (so far)\n :param bool summary: Summary Table or not\n :return list ts: Time series (so far)\n \"\"\"\n current[\"tableType\"] = tt\n # Get root items for this table\n current = _extract_table_root(table_data, current, pc)\n # Add in modelNumber and tableNumber if this is \"ens\" or \"summ\" table\n current = _extract_table_model(table_data, current, tt)\n # Add age, depth, and year columns to root if available\n _table_tmp = _extract_special(current, table_data)\n try:\n # Start creating entries using dictionary copies.\n for _col_name, _col_data in table_data[\"columns\"].items():\n # Add column data onto root items. 
Copy so we don't ruin original data\n _col_tmp = _extract_columns(_col_data, copy.deepcopy(_table_tmp), pc)\n try:\n ts.append(_col_tmp)\n except Exception as e:\n logger_ts.warn(\"extract_table: Unable to create ts entry, {}\".format(e))\n except Exception as e:\n logger_ts.error(\"extract_table: {}\".format(e))\n return ts"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _extract_columns(d, tmp_tso, pc):\n logger_ts.info(\"enter extract_columns\")\n for k, v in d.items():\n if isinstance(v, dict):\n flat_data = _extract_nested(pc + \"_\" + k, v, {})\n for n,m in flat_data.items():\n tmp_tso[n] = m\n else:\n # Assume if it's not a special nested case, then it's a string value\n tmp_tso[pc + '_' + k] = v\n return tmp_tso", "response": "Extract data from one paleoData column\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_current_names(current, dsn, pc):\n _table_name = \"\"\n _variable_name = \"\"\n # Get key info\n try:\n _table_name = current['{}_tableName'.format(pc)]\n _variable_name = current['{}_variableName'.format(pc)]\n except Exception as e:\n print(\"Error: Unable to collapse time series: {}, {}\".format(dsn, e))\n logger_ts.error(\"get_current: {}, {}\".format(dsn, e))\n return _table_name, _variable_name", "response": "Get the table name and variable name from the given time series entry"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _collapse_root(master, current, dsn, pc):\n logger_ts.info(\"enter collapse_root\")\n _tmp_fund = {}\n _tmp_pub = {}\n # The tmp lipd data that we'll place in master later\n _tmp_master = {'pub': [], 'geo': {'geometry': {'coordinates': []}, 'properties': {}}, 'funding': [],\n 'paleoData': {}, \"chronData\": {}}\n # _raw = _switch[pc]\n _c_keys = ['meanLat', 'meanLon', 'meanElev']\n _c_vals = [0, 0, 0]\n _p_keys = ['siteName', 
'pages2kRegion', \"location\", \"gcmdLocation\", \"\"]\n try:\n\n # does not have\n # paleoData, chronData, mode, tableType, time_id, depth, depthUnits, age, ageUnits\n\n # does have\n # pub, geo, funding, proxy, archiveType, description, investigator,\n\n # For all keys in the current time series entry\n for k, v in current.items():\n\n # Underscore present. Only underscore keys that belong here are funding, geo, and pub\n if \"_\" in k:\n # FUNDING\n if 'funding' in k:\n # Group funding items in tmp_funding by number\n m = re_fund_valid.match(k)\n try:\n _tmp_fund[m.group(1)][m.group(2)] = v\n except Exception:\n try:\n # If the first layer is missing, create it and try again\n _tmp_fund[m.group(1)] = {}\n _tmp_fund[m.group(1)][m.group(2)] = v\n except Exception:\n # Still not working. Give up.\n pass\n\n # GEO\n elif 'geo' in k:\n key = k.split('_')\n # Coordinates - [LON, LAT, ELEV]\n if key[1] in _c_keys:\n if key[1] == 'meanLon' or key[1] == \"longitude\":\n _c_vals[0] = v\n elif key[1] == 'meanLat' or key[1] == \"latitude\":\n _c_vals[1] = v\n elif key[1] == 'meanElev' or key[1] == \"elevation\":\n _c_vals[2] = v\n # Properties\n else:\n _tmp_master['geo']['properties'][key[1]] = v\n # All others\n # else:\n # _tmp_master['geo'][key[1]] = v\n\n # PUBLICATION\n elif 'pub' in k:\n # Group pub items in tmp_pub by number\n m = re_pub_valid.match(k.lower())\n if m:\n number = int(m.group(1)) - 1 # 0 indexed behind the scenes, 1 indexed to user.\n key = m.group(2)\n # Authors (\"Pu, Y.; Nace, T.; etc..\")\n if key == 'author' or key == 'authors':\n try:\n _tmp_pub[number]['author'] = _collapse_author(v)\n except KeyError as e:\n # Dictionary not created yet. Assign one first.\n _tmp_pub[number] = {}\n _tmp_pub[number]['author'] = _collapse_author(v)\n # DOI ID\n elif key == 'DOI':\n try:\n _tmp_pub[number]['identifier'] = [{\"id\": v, \"type\": \"doi\", \"url\": \"http://dx.doi.org/\" + str(v)}]\n except KeyError:\n # Dictionary not created yet. 
Assign one first.\n _tmp_pub[number] = {}\n _tmp_pub[number]['identifier'] = [{\"id\": v, \"type\": \"doi\", \"url\": \"http://dx.doi.org/\" + str(v)}]\n # All others\n else:\n try:\n _tmp_pub[number][key] = v\n except KeyError:\n # Dictionary not created yet. Assign one first.\n _tmp_pub[number] = {}\n _tmp_pub[number][key] = v\n\n # No underscore in name, we can rule out the other obvious keys we don't want\n else:\n # Rule out any timeseries keys that we added, and paleoData/chronData prefixed keys.\n if not any(i in k.lower() or i is k.lower() for i in [\"paleodata\", \"chrondata\", \"mode\", \"tabletype\", \"time_id\", \"depth\", \"depthunits\", \"age\", \"ageunits\"]):\n # Root item:\n _tmp_master[k] = v\n continue\n\n\n\n # Append the compiled data into the master dataset data\n for k, v in _tmp_pub.items():\n _tmp_master['pub'].append(v)\n for k, v in _tmp_fund.items():\n _tmp_master['funding'].append(v)\n\n # Get rid of elevation coordinate if one was never added.\n if _c_vals[2] == 0:\n del _c_vals[2]\n _tmp_master['geo']['geometry']['coordinates'] = _c_vals\n\n # Create entry in object master, and set our new data to it.\n master[dsn] = _tmp_master\n except Exception as e:\n logger_ts.error(\"collapse_root: Exception: {}, {}\".format(dsn, e))\n logger_ts.info(\"exit collapse_root\")\n return master, current", "response": "Collapses the root items of the current time series entry into a single item."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsplit author string back into organized dictionary", "response": "def _collapse_author(s):\n \"\"\"\n Split author string back into organized dictionary\n :param str s: Formatted names string \"Last, F.; Last, F.; etc..\"\n :return list of dict: One dictionary per author name\n \"\"\"\n logger_ts.info(\"enter collapse_author\")\n l = []\n authors = s.split(';')\n for author in authors:\n l.append({'name': author})\n return l"} {"SOURCE": "codesearchnet", "instruction": "How would you 
explain what the following Python 3 function does\ndef _collapse_pc(master, current, dsn, pc):\n logger_ts.info(\"enter collapse_paleo\")\n _table_name, _variable_name = _get_current_names(current, dsn, pc)\n\n try:\n # Get the names we need to build the hierarchy\n _m = re.match(re_sheet_w_number, _table_name)\n\n # Is this a summary table or a measurement table?\n _switch = {\"meas\": \"measurementTable\", \"summ\": \"summaryTable\", \"ens\": \"ensembleTable\"}\n _ms = _switch[current[\"tableType\"]]\n\n # This is a measurement table. Put it in the correct part of the structure\n # master[datasetname][chronData][chron0][measurementTable][chron0measurement0]\n if _ms == \"measurementTable\":\n\n # master[dsn] = _collapse_build_skeleton(master(dsn), _ms, _m)\n\n # Collapse the keys in the table root if a table does not yet exist\n if _table_name not in master[dsn][pc][_m.group(1)][_ms]:\n _tmp_table = _collapse_table_root(current, dsn, pc)\n master[dsn][pc][_m.group(1)][_ms][_table_name] = _tmp_table\n\n # Collapse the keys at the column level, and return the column data\n _tmp_column = _collapse_column(current, pc)\n # Create the column entry in the table\n master[dsn][pc][_m.group(1)][_ms][_table_name]['columns'][_variable_name] = _tmp_column\n\n # This is a summary table. 
Put it in the correct part of the structure\n # master[datasetname][chronData][chron0][model][chron0model0][summaryTable][chron0model0summary0]\n elif _ms in [\"ensembleTable\", \"summaryTable\"]:\n # Collapse the keys in the table root if a table does not yet exist\n if _table_name not in master[dsn][pc][_m.group(1)][\"model\"][_m.group(1) + _m.group(2)][_ms]:\n _tmp_table = _collapse_table_root(current, dsn, pc)\n master[dsn][pc][_m.group(1)][\"model\"][_m.group(1) + _m.group(2)][_ms][_table_name] = _tmp_table\n\n # Collapse the keys at the column level, and return the column data\n _tmp_column = _collapse_column(current, pc)\n # Create the column entry in the table\n master[dsn][pc][_m.group(1)][\"model\"][_m.group(1) + _m.group(2)][_ms][_table_name][\"columns\"][_variable_name] = _tmp_column\n\n except Exception as e:\n print(\"Error: Unable to collapse column data: {}, {}\".format(dsn, e))\n logger_ts.error(\"collapse_paleo: {}, {}, {}\".format(dsn, _variable_name, e))\n\n # If these sections had any items added to them, then add them to the column master.\n\n return master", "response": "Collapse the paleo or chron for the current time series entry and return the LiPD data."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _collapse_table_root(current, dsn, pc):\n logger_ts.info(\"enter collapse_table_root\")\n _table_name, _variable_name = _get_current_names(current, dsn, pc)\n _tmp_table = {'columns': {}}\n\n try:\n for k, v in current.items():\n # These are the main table keys that we should be looking for\n for i in ['filename', 'googleWorkSheetKey', 'tableName', \"missingValue\", \"tableMD5\", \"dataMD5\"]:\n if i in k:\n try:\n _tmp_table[i] = v\n except Exception:\n # Not all keys are available. 
It's okay if we hit a KeyError.\n pass\n except Exception as e:\n print(\"Error: Unable to collapse: {}, {}\".format(dsn, e))\n logger_ts.error(\"collapse_table_root: Unable to collapse: {}, {}, {}\".format(_table_name, dsn, e))\n\n return _tmp_table", "response": "Create a new table with items in root given the current time series entry and the dataset name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncollapse the column data and return the new data structure", "response": "def _collapse_column(current, pc):\n \"\"\"\n Collapse the column data and\n :param current:\n :param pc:\n :return:\n \"\"\"\n _tmp_column = {}\n try:\n for k, v in current.items():\n try:\n # We do not want to store these table keys at the column level.\n if not any(i in k for i in [\"tableName\", \"google\", \"filename\", \"md5\", \"MD5\"]):\n # ['paleoData', 'key']\n m = k.split('_')\n # Is this a chronData or paleoData key?\n if pc in m[0] and len(m) >= 2:\n # Create a link to the growing column data\n tmp = _tmp_column\n # Loop for each key, not including the PC. 
Start at index 1\n for idx, b in enumerate(m[1:]):\n # Are we at the last item in the list?\n if idx == len(m) - 2:\n # Set the value into the column data\n tmp[b] = v\n # All loops before the last item\n else:\n # Key already exists in the column\n if b in _tmp_column:\n # Move into the data structure and keep going\n tmp = _tmp_column[b]\n # Key does not exist yet\n else:\n # Create the data structure\n tmp[b] = {}\n # Move into the new data structure and keep going\n tmp = tmp[b]\n except Exception as e:\n logger_ts.error(\"collapse_column: loop: {}\".format(e))\n except Exception as e:\n logger_ts.error(\"collapse_column: {}\".format(e))\n\n return _tmp_column"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef mode_ts(ec, mode=\"\", ts=None):\n phrase = \"\"\n if ec == \"extract\":\n if mode==\"chron\":\n phrase = \"extracting chronData...\"\n else:\n phrase = \"extracting paleoData...\"\n elif ec == \"collapse\":\n if ts[0][\"mode\"] == \"chronData\":\n phrase = \"collapsing chronData\"\n else:\n phrase = \"collapsing paleoData...\"\n return phrase", "response": "Returns the string for the mode of the current node."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef translate_expression(expression):\n logger_ts.info(\"enter translate_expression\")\n m = re_filter_expr.findall(expression)\n matches = []\n if m:\n for i in m:\n logger_ts.info(\"parse match: {}\".format(i))\n tmp = list(i[1:])\n if tmp[1] in COMPARISONS:\n tmp[1] = COMPARISONS[tmp[1]]\n tmp[0] = cast_float(tmp[0])\n tmp[2] = cast_float(tmp[2])\n matches.append(tmp)\n else:\n logger_ts.warn(\"translate_expression: invalid expression: {}\".format(expression))\n print(\"Invalid input expression\")\n logger_ts.info(\"exit translate_expression\")\n return matches", "response": "Translate an expression into a list of lists."} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 function that can\nget a list of time series objects that match the given expression.", "response": "def get_matches(expr_lst, ts):\n \"\"\"\n Get a list of TimeSeries objects that match the given expression.\n :param list expr_lst: Expression\n :param list ts: TimeSeries\n :return list new_ts: Matched time series objects\n :return list idxs: Indices of matched objects\n \"\"\"\n logger_ts.info(\"enter get_matches\")\n new_ts = []\n idxs = []\n match = False\n try:\n for idx, ts_data in enumerate(ts):\n for expr in expr_lst:\n try:\n val = ts_data[expr[0]]\n # Check what comparison operator is being used\n if expr[1] == 'in':\n # \"IN\" operator can't be used in get_truth. Handle first.\n if expr[2] in val:\n match = True\n elif match_operators(val, expr[1], expr[2]):\n # If it's a typical operator, check with the truth test.\n match = True\n else:\n # If one comparison is false, then it can't possibly be a match\n match = False\n break\n except KeyError as e:\n logger_ts.warn(\"get_matches: KeyError: getting value from TimeSeries object, {}, {}\".format(expr, e))\n match = False\n except IndexError as e:\n logger_ts.warn(\"get_matches: IndexError: getting value from TimeSeries object, {}, {}\".format(expr, e))\n match = False\n if match:\n idxs.append(idx)\n new_ts.append(ts_data)\n except AttributeError as e:\n logger_ts.debug(\"get_matches: AttributeError: unable to get expression matches, {}, {}\".format(type(ts), e))\n print(\"Error: Timeseries is an invalid data type\")\n if not new_ts:\n print(\"No matches found for that expression\")\n else:\n print(\"Found {} matches from {} columns\".format(len(new_ts), len(ts)))\n logger_ts.info(\"exit get_matches\")\n return new_ts, idxs"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts a string of whitespace or comma separated hosts into a list of hosts.", "response": "def _to_http_hosts(hosts: Union[Iterable[str], str]) -> List[str]:\n \"\"\"Convert a string 
of whitespace or comma separated hosts into a list of hosts.\n\n Hosts may also already be a list or other iterable.\n Each host will be prefixed with 'http://' if it is not already there.\n\n >>> _to_http_hosts('n1:4200,n2:4200')\n ['http://n1:4200', 'http://n2:4200']\n\n >>> _to_http_hosts('n1:4200 n2:4200')\n ['http://n1:4200', 'http://n2:4200']\n\n >>> _to_http_hosts('https://n1:4200')\n ['https://n1:4200']\n\n >>> _to_http_hosts(['http://n1:4200', 'n2:4200'])\n ['http://n1:4200', 'http://n2:4200']\n \"\"\"\n if isinstance(hosts, str):\n hosts = hosts.replace(',', ' ').split()\n return [_to_http_uri(i) for i in hosts]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the value of the called object of obj is a callable otherwise the plain object.", "response": "def _plain_or_callable(obj):\n \"\"\"Returns the value of the called object of obj is a callable,\n otherwise the plain object.\n Returns None if obj is None.\n\n >>> obj = None\n >>> _plain_or_callable(obj)\n\n >>> stmt = 'select * from sys.nodes'\n >>> _plain_or_callable(stmt)\n 'select * from sys.nodes'\n\n >>> def _args():\n ... return [1, 'name']\n >>> _plain_or_callable(_args)\n [1, 'name']\n\n >>> _plain_or_callable((x for x in range(10)))\n 0\n\n >>> class BulkArgsGenerator:\n ... def __call__(self):\n ... 
return [[1, 'foo'], [2, 'bar'], [3, 'foobar']]\n >>> _plain_or_callable(BulkArgsGenerator())\n [[1, 'foo'], [2, 'bar'], [3, 'foobar']]\n \"\"\"\n if callable(obj):\n return obj()\n elif isinstance(obj, types.GeneratorType):\n return next(obj)\n else:\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a host URI into a DSN for aiopg.", "response": "def _to_dsn(hosts):\n \"\"\"Convert a host URI into a dsn for aiopg.\n\n >>> _to_dsn('aiopg://myhostname:4242/mydb')\n 'postgres://crate@myhostname:4242/mydb'\n\n >>> _to_dsn('aiopg://myhostname:4242')\n 'postgres://crate@myhostname:4242/doc'\n\n >>> _to_dsn('aiopg://hoschi:pw@myhostname:4242/doc?sslmode=require')\n 'postgres://hoschi:pw@myhostname:4242/doc?sslmode=require'\n\n >>> _to_dsn('aiopg://myhostname')\n 'postgres://crate@myhostname:5432/doc'\n \"\"\"\n p = urlparse(hosts)\n try:\n user_and_pw, netloc = p.netloc.split('@', maxsplit=1)\n except ValueError:\n netloc = p.netloc\n user_and_pw = 'crate'\n try:\n host, port = netloc.split(':', maxsplit=1)\n except ValueError:\n host = netloc\n port = 5432\n dbname = p.path[1:] if p.path else 'doc'\n dsn = f'postgres://{user_and_pw}@{host}:{port}/{dbname}'\n if p.query:\n dsn += '?' + '&'.join(k + '=' + v[0] for k, v in parse_qs(p.query).items())\n return dsn"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking if SSL validation parameter is passed in URI", "response": "def _verify_ssl_from_first(hosts):\n \"\"\"Check if SSL validation parameter is passed in URI\n\n >>> _verify_ssl_from_first(['https://myhost:4200/?verify_ssl=false'])\n False\n\n >>> _verify_ssl_from_first(['https://myhost:4200/'])\n True\n\n >>> _verify_ssl_from_first([\n ... 'https://h1:4200/?verify_ssl=False',\n ... 'https://h2:4200/?verify_ssl=True'\n ... 
])\n False\n \"\"\"\n for host in hosts:\n query = parse_qs(urlparse(host).query)\n if 'verify_ssl' in query:\n return _to_boolean(query['verify_ssl'][0])\n return True"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef addTable(D):\n\n _swap = {\n \"1\": \"measurement\",\n \"2\": \"summary\",\n \"3\": \"ensemble\",\n \"4\": \"distribution\"\n }\n\n\n print(\"What type of table would you like to add?\\n\"\n \"1: measurement\\n\"\n \"2: summary\\n\"\n \"3: ensemble (under development)\\n\"\n \"4: distribution (under development)\\n\"\n \"\\n Note: if you want to add a whole model, use the addModel() function\")\n _ans = input(\">\")\n\n if _ans in [\"3\", \"4\"]:\n print(\"I don't know how to do that yet.\")\n # if this is a summary or measurement, split the csv into each column\n elif _ans in [\"1\", \"2\"]:\n # read in a csv file. have the user point to it\n print(\"Locate the CSV file with the values for this table: \")\n _path, _files = browse_dialog_file()\n\n _path = _confirm_file_path(_files)\n _values = read_csv_from_file(_path)\n _table = _build_table(_values)\n _placement = _prompt_placement(D, _swap[_ans])\n D = _put_table(D, _placement, _table)\n\n else:\n print(\"That's not a valid option\")\n\n return D", "response": "Add any table type to the given dataset."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_available_placements(D, tt):\n _options = []\n\n try:\n for _pc in [\"paleoData\", \"chronData\"]:\n if _pc in D:\n # for each entry in pc\n for section_name, section_data in D[_pc].items():\n # looking for open spots for measurement tables\n if tt == \"measurement\":\n if \"measurementTable\" in section_data:\n _options.append(_get_available_placements_1(section_data[\"measurementTable\"], section_name, \"measurement\"))\n\n\n # looking for open spots for model tables\n else:\n # Is there a model? 
Need model data to keep going\n if \"model\" in section_data:\n # this is for adding a whole model (all 4 tables, ens/dist/sum/method)\n if tt == \"model\":\n _options.append(_get_available_placements_1(section_data[\"model\"], section_name, \"model\"))\n else:\n\n # for adding individual model tables\n for _k, _v in section_data[\"model\"]:\n # keys here are stored as \"Table\", so add \"Table\" to each table type\n _tt_table = \"{}Table\".format(tt)\n # does this table exist?\n if _tt_table in _v:\n # Get the first available position for this section\n _options.append(\n _get_available_placements_1(_v[_tt_table], _k, tt))\n else:\n # Doesn't currently exist. Make the first option index 0.\n _options.append(\"{}{}0\".format(_k, tt))\n\n # no models present, so we automatically default placement options to the 0 index.\n else:\n if tt == \"model\":\n # adding a whole model, so no need to be specific\n _options.append(\"{}model0\".format(section_name))\n else:\n # adding a specific table, so the position is more specific also\n _options.append(\"{}model0{}0\".format(section_name, tt))\n\n\n except Exception as e:\n sys.exit(\"Looking for open table positions: Unable to find placement options, {}\".format(e))\n\n # remove empty names\n _options = [i for i in _options if i]\n # Is the whole list empty? that's not good.\n if not _options:\n sys.exit(\"Error: No available positions found to place new data. 
Something went wrong.\")\n return _options", "response": "Get a list of possible placements that can be put into the new model."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprompt user to choose model name based on the available placement options.", "response": "def _prompt_placement(D, tt):\n \"\"\"\n Since automatic placement didn't work, find somewhere to place the model data manually with the help of the user.\n\n :param dict D: Metadata\n :param str tt: Table type\n :return str _model_name: Chosen model name for placement\n \"\"\"\n _model_name = \"\"\n # There wasn't a table name match, so we need prompts to fix it\n _placement_options = _get_available_placements(D, tt)\n print(\"Please choose where you'd like to place this model:\")\n for _idx, _opt in enumerate(_placement_options):\n print(\"({}) {}\".format(_idx, _opt))\n _choice = input(\"> \")\n try:\n if int(_choice) <= len(_placement_options) and _choice:\n # Get the option the user chose\n _model_name = _placement_options[int(_choice)]\n else:\n # They user chose an option out of the placement list range\n print(\"Invalid choice input\")\n return\n except Exception as e:\n # Choice was not a number or empty\n print(\"Invalid choice\")\n\n return _model_name"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a new table into the dataset.", "response": "def _put_table(D, name, table):\n \"\"\"\n Use the dataset and name to place the new table data into the dataset.\n\n :param dict D: Dataset\n :param str name: Table name / path to store new table\n :param dict table: Newly created table data\n :return dict D: Dataset\n \"\"\"\n\n try:\n # print(\"Placing table: {}\".format(name))\n table[\"tableName\"] = name\n m = re.match(re_table_name, name)\n if m:\n _pc = m.group(1) + \"Data\"\n _section = m.group(1) + m.group(2)\n # place a measurement table\n if m.group(3) == \"measurement\":\n # This shouldn't happen. 
User chose one of our options. That should be an empty location.\n if name in D[_pc][_section][\"measurementTable\"]:\n print(\"Oops. This shouldn't happen. That table path is occupied in the dataset\")\n # Place the data\n else:\n D[_pc][_section][\"measurementTable\"][name] = table\n # place a model table type\n else:\n _model = _section + m.group(3) + m.group(4)\n _tt = m.group(5) + \"Table\"\n if name in D[_pc][_model][_tt]:\n print(\"Oops. This shouldn't happen. That table path is occupied in the dataset\")\n else:\n D[_pc][_model][_tt][name] = table\n\n else:\n print(\"Oops. This shouldn't happen. That table name doesn't look right. Please report this error\")\n return\n\n except Exception as e:\n print(\"addTable: Unable to put the table data into the dataset, {}\".format(e))\n\n return D"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _update_table_names(name, dat):\n for _tabletype in [\"summary\", \"distribution\", \"ensemble\"]:\n _ttname = \"{}Table\".format(_tabletype)\n if _ttname in dat:\n _new_tables = OrderedDict()\n _idx = 0\n # change all the top level table names\n for k,v in dat[_ttname].items():\n _new_ttname= \"{}{}{}\".format(name, _tabletype, _idx)\n _idx +=1\n #change all the table names in the table metadata\n v[\"tableName\"] = _new_ttname\n # remove the filename. 
It shouldn't be stored anyway\n if \"filename\" in v:\n v[\"filename\"] = \"\"\n # place dat into the new ordered dictionary\n _new_tables[_new_ttname] = v\n\n # place new tables into the original dat\n dat[_ttname] = _new_tables\n\n return dat", "response": "Update the names of the tables in the metadata dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef addModel(D, models):\n try:\n # Loop for each model that needs to be added\n for _model_name, _model_data in models.items():\n # split the table name into a path that we can use\n _m = re.match(re_model_name, _model_name)\n if _m:\n D = _put_model(D, _model_name, _model_data, _m)\n else:\n print(\"The table name found in the given model data isn't valid for automatic placement\")\n _placement_name = _prompt_placement(D, \"model\")\n _m = re.match(re_model_name, _placement_name)\n if _m:\n D = _put_model(D, _placement_name, _model_data, _m)\n else:\n print(\"Oops. This shouldn't happen. That table name doesn't look right. Please report this error\")\n return\n\n except Exception as e:\n print(\"addModel: Model data NOT added, {}\".format(e))\n\n return D", "response": "Insert model data into a LiPD dataset"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nputting the model data given into the location given.", "response": "def _put_model(D, name, dat, m):\n \"\"\"\n Place the model data given, into the location (m) given.\n\n :param dict D: Metadata (dataset)\n :param str name: Model name (ex: chron0model0)\n :param dict dat: Model data\n :param regex m: Model name regex groups\n :return dict D: Metadata (dataset)\n \"\"\"\n try:\n # print(\"Placing model: {}\".format(name))\n _pc = m.group(1) + \"Data\"\n _section = m.group(1) + m.group(2)\n if _pc not in D:\n # Section missing entirely? Can't continue\n print(\"{} not found in the provided dataset. 
Please try again\".format(_pc))\n return\n else:\n if _section not in D[_pc]:\n # Creates section: Example: D[chronData][chron0]\n D[_pc][_section] = OrderedDict()\n if \"model\" not in D[_pc][_section]:\n # Creates model top level: Example: D[chronData][chron0][\"model\"]\n D[_pc][_section][\"model\"] = OrderedDict()\n if name not in D[_pc][_section][\"model\"]:\n dat = _update_table_names(name, dat)\n D[_pc][_section][\"model\"][name] = dat\n else:\n # Model already exists, should we overwrite it?\n _prompt_overwrite = input(\n \"This model already exists in the dataset. Do you want to overwrite it? (y/n)\")\n # Yes, overwrite with the model data provided\n if _prompt_overwrite == \"y\":\n dat = _update_table_names(name, dat)\n D[_pc][_section][\"model\"][name] = dat\n # No, do not overwrite.\n elif _prompt_overwrite == \"n\":\n _name2 = _prompt_placement(D, \"model\")\n _m = re.match(re_model_name, _name2)\n if _m:\n D = _put_model(D, _name2, dat, _m)\n else:\n print(\"Invalid choice\")\n except Exception as e:\n print(\"addModel: Unable to put the model data into the dataset, {}\".format(e))\n\n return D"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef prepare_env(app, env, docname):\n if not hasattr(env, 'needs_all_needs'):\n # Used to store all needed information about all needs in document\n env.needs_all_needs = {}\n\n if not hasattr(env, 'needs_functions'):\n # Used to store all registered functions for supporting dynamic need values.\n env.needs_functions = {}\n\n # needs_functions = getattr(app.config, 'needs_functions', [])\n needs_functions = app.needs_functions\n if needs_functions is None:\n needs_functions = []\n if not isinstance(needs_functions, list):\n raise SphinxError('Config parameter needs_functions must be a list!')\n\n # Register built-in functions\n for need_common_func in needs_common_functions:\n register_func(env, need_common_func)\n\n # Register functions configured by 
user\n for needs_func in needs_functions:\n register_func(env, needs_func)\n\n app.config.needs_hide_options += ['hidden']\n app.config.needs_extra_options['hidden'] = directives.unchanged\n\n if not hasattr(env, 'needs_workflow'):\n # Used to store workflow status information for already executed tasks.\n # Some tasks like backlink_creation need be be performed only once.\n # But most sphinx-events get called several times (for each single document file), which would also\n # execute our code several times...\n env.needs_workflow = {\n 'backlink_creation': False,\n 'dynamic_values_resolved': False\n }", "response": "Prepares the Sphinx environment to store the internal data for the given docname."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_entity_name(name):\n invalid_chars = \"-=!#$%^&*[](){}/~'`<>:;\"\n for char in invalid_chars:\n name = name.replace(char, \"_\")\n return name", "response": "Creates a valid PlantUML entity name from the given value."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main(self):\n logger_lpd_noaa.info(\"enter main\")\n # Starting Directory: dir_tmp/dir_bag/data/\n\n # convert all lipd keys to noaa keys\n # timestamp the conversion of the file\n\n # MISC SETUP FUNCTIONS\n\n self.noaa_data_sorted[\"File_Last_Modified_Date\"][\"Modified_Date\"] = generate_timestamp()\n\n self.__get_table_count()\n\n # Get measurement tables from metadata, and sort into object self\n self.__put_tables_in_self([\"paleo\", \"paleoData\", \"measurementTable\"])\n self.__put_tables_in_self([\"chron\", \"chronData\", \"measurementTable\"])\n\n # how many measurement tables exist? 
this will tell us how many noaa files to create\n self.__get_table_pairs()\n\n # reorganize data into noaa sections\n self.__reorganize()\n\n # special case: earliest_year, most_recent_year, and time unit\n # self.__check_time_values()\n # self.__check_time_unit()\n\n self.__get_overall_data(self.lipd_data)\n self.__reorganize_sensor()\n self.__lists_to_str()\n self.__generate_study_name()\n\n # END MISC SETUP FUNCTIONS\n\n # Use data in steps_dict to write to\n # self.noaa_data_sorted = self.__key_conversion(self.noaa_data_sorted)\n self.__create_file()\n logger_lpd_noaa.info(\"exit main\")\n return", "response": "This function is called by the parser class to load in the template file and run through the parser_noaa module."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking the time values for the related noaa data.", "response": "def __check_time_values(self):\n \"\"\"\n Rules\n 1. AD or CE units: bigger number is recent, smaller number is older\n 2. BP: bigger number is older, smaller number is recent.\n 3. No units: If max year is 1900-2017(current), then assume AD. 
Else, assume BP\n\n :return none:\n \"\"\"\n _earliest = float(self.noaa_data_sorted[\"Data_Collection\"][\"Earliest_Year\"])\n _recent = float(self.noaa_data_sorted[\"Data_Collection\"][\"Most_Recent_Year\"])\n try:\n _unit = self.noaa_data_sorted[\"Data_Collection\"][\"Time_Unit\"]\n except Exception:\n _unit = \"\"\n\n if not _unit:\n # If the max value is between 1900 - 2017 (current), then assume \"AD\"\n _max = max([_earliest, _recent])\n _min = min([_earliest, _recent])\n if _max >= 1900 and _max <= 2018:\n self.noaa_data_sorted[\"Data_Collection\"][\"Time_Unit\"] = \"AD\"\n self.noaa_data_sorted[\"Data_Collection\"][\"Most_Recent_Year\"] = str(_max)\n self.noaa_data_sorted[\"Data_Collection\"][\"Earliest_Year\"] = str(_min)\n # Else, assume it's BP\n else:\n # Units don't exist, assume BP\n self.noaa_data_sorted[\"Data_Collection\"][\"Time_Unit\"] = \"BP\"\n self.noaa_data_sorted[\"Data_Collection\"][\"Most_Recent_Year\"] = str(_min)\n self.noaa_data_sorted[\"Data_Collection\"][\"Earliest_Year\"] = str(_max)\n else:\n # Units exist\n if _unit.lower() in [\"ad\", \"ce\"]:\n if _earliest > _recent:\n self.noaa_data_sorted[\"Data_Collection\"][\"Most_Recent_Year\"] = str(_earliest)\n self.noaa_data_sorted[\"Data_Collection\"][\"Earliest_Year\"] = str(_recent)\n else:\n if _recent > _earliest:\n self.noaa_data_sorted[\"Data_Collection\"][\"Most_Recent_Year\"] = str(_earliest)\n self.noaa_data_sorted[\"Data_Collection\"][\"Earliest_Year\"] = str(_recent)\n\n return"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef __convert_keys_2(header, d):\n d_out = {}\n try:\n for k, v in d.items():\n try:\n noaa_key = LIPD_NOAA_MAP_BY_SECTION[header][k]\n d_out[noaa_key] = v\n except Exception:\n logger_lpd_noaa.warn(\"lpd_noaa: convert_keys_section: ran into an error converting {}\".format(k))\n except KeyError:\n logger_lpd_noaa.warn(\"lpd_noaa: convert_keys_section: KeyError: header key {} is not in 
NOAA_ALL_DICT\".format(header))\n except AttributeError:\n logger_lpd_noaa.warn(\"lpd_noaa: convert_keys_section: AttributeError: metdata is wrong data type\".format(header))\n return d\n return d_out", "response": "Convert lpd to noaa keys for this one section"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef __convert_keys_1(self, header, d):\n d2 = {}\n try:\n for k, v in d.items():\n try:\n d2[self.__get_noaa_key_w_context(header, k)] = v\n except KeyError:\n pass\n except Exception:\n return d\n return d2", "response": "Loop over keys in a dictionary and replace the lipd keys with noaa keys"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate blanks for all keys in section_name.", "response": "def __create_blanks(section_name, d):\n \"\"\"\n All keys need to be written to the output, with or without a value. Furthermore, only keys that have values\n exist at this point. We need to manually insert the other keys with a blank value. Loop through the global list\n to see what's missing in our dict.\n :param str section_name: Retrieve data from global dict for this section\n :return none:\n \"\"\"\n try:\n for key in NOAA_KEYS_BY_SECTION[section_name]:\n if key not in d:\n # Key not in our dict. 
Create the blank entry.\n d[key] = \"\"\n except Exception:\n logger_lpd_noaa.error(\"lpd_noaa: create_blanks: missing section: {}, key: {}\".format(section_name, key))\n return d"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef __flatten_col(d):\n\n try:\n for entry in [\"climateInterpretation\", \"calibration\"]:\n if entry in d:\n for k, v in d[entry].items():\n d[k] = v\n del d[entry]\n except AttributeError:\n pass\n\n return d", "response": "Flatten the column so climateInterpretation and calibration are not nested."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef __generate_study_name(self):\n study_name = \"\"\n _exist = False\n try:\n if self.noaa_data_sorted[\"Top\"][\"Study_Name\"]:\n _exist = True\n except KeyError:\n pass\n\n if not _exist:\n try:\n _site = self.noaa_data_sorted[\"Site_Information\"][\"properties\"][\"siteName\"]\n _year = self.noaa_data_sorted[\"Publication\"][0][\"pubYear\"]\n _author = self.noaa_data_sorted[\"Publication\"][0][\"author\"]\n _author = self.__get_author_last_name(_author)\n study_name = \"{}.{}.{}\".format(_author, _site, _year)\n study_name = study_name.replace(\" \", \"_\").replace(\",\", \"_\")\n except (KeyError, Exception):\n pass\n self.noaa_data_sorted[\"Top\"][\"Study_Name\"] = study_name\n self.noaa_data_sorted[\"Title\"][\"Study_Name\"] = study_name\n return", "response": "Generate a study name for the current noaa class."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing the Dataset_DOI field. Could be one DOI string or a list of DOIs.", "response": "def __parse_dois(self, x):\n \"\"\"\n Parse the Dataset_DOI field. Could be one DOI string, or a list of DOIs\n :param any x: Str or List of DOI ids\n :return none: list is set to self\n \"\"\"\n # datasetDOI is a string. 
parse, validate and return a list of DOIs\n if isinstance(x, str):\n # regex cleans string, and returns a list with 1 entry for each regex doi match\n m = clean_doi(x)\n # make sure m is not an empty list\n if m:\n # set list directly into self\n self.doi = m\n\n # datasetDOI is a list. use regex to validate each doi entry.\n elif isinstance(x, list):\n for entry in x:\n # regex cleans string, and returns a list with 1 entry for each regex doi match\n m = clean_doi(entry)\n # make sure m is not an empty list\n if m:\n # combine lists with existing self list\n self.doi += m\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __split_path(string):\n out = []\n position = string.find(':')\n if position != -1:\n # A position of 0+ means that \":\" was found in the string\n key = string[:position]\n val = string[position+1:]\n out.append(key)\n out.append(val)\n if ('-' in key) and ('Funding' not in key) and ('Grant' not in key):\n out = key.split('-')\n out.append(val)\n return out", "response": "Used in the path_context function. 
Splits the full path into a list of steps."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _values_exist(table):\n try:\n for var, data in table[\"columns\"].items():\n if \"values\" in data:\n return True\n except KeyError as e:\n logger_lpd_noaa.warn(\"values_exist: KeyError: {}\".format(e))\n except Exception as e:\n logger_lpd_noaa.warn(\"values_exist: Exception: {}\".format(e))\n return False", "response": "Check that values exist in this table and we can write out data columns\n "} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreorganizes the keys into their proper section order for the NOAA output file.", "response": "def __reorganize(self):\n \"\"\"\n Reorganize the keys into their proper section order for the NOAA output file\n DO NOT parse data tables (paleoData or chronData). We will do those separately.\n :param str key:\n :param any value:\n :return none:\n \"\"\"\n logger_lpd_noaa.info(\"enter reorganize\")\n # NOAA files are organized in sections differently than LiPD. try to translate these sections.\n for key, value in self.lipd_data.items():\n # if this key has a noaa match, it'll be returned. otherwise, empty string for no match\n noaa_key = self.__get_noaa_key(key)\n # check if this lipd key is in the NOAA_KEYS conversion dictionary.\n # if it's not, then stash it in our ignore list.\n if key not in LIPD_NOAA_MAP_FLAT:\n self.noaa_data_sorted[\"Ignore\"][noaa_key] = value\n # studyName is placed two times in file. Line #1, and under the 'title' section\n elif noaa_key == \"Study_Name\":\n # study name gets put in two locations\n self.noaa_data_sorted[\"Top\"][noaa_key] = value\n self.noaa_data_sorted[\"Title\"][noaa_key] = value\n # put archiveType in self, because we'll reuse it later for the 9-part-variables as well\n elif noaa_key == \"Archive\":\n self.lsts_tmp[\"archive\"].append(value)\n # Dataset_DOI is a repeatable element. 
the key could be a single DOI, or a list of DOIs.\n elif noaa_key == \"Dataset_DOI\":\n self.__parse_dois(value)\n\n # all other keys. determine which noaa section they belong in.\n else:\n # noaa keys are sorted by section.\n for header, content in NOAA_KEYS_BY_SECTION.items():\n try:\n # if our key is a noaa header key, then that means it's the ONLY key in the section.\n # set value directly\n if noaa_key == header:\n self.noaa_data_sorted[header] = value\n # all other cases, the key is part of the section\n elif noaa_key in content:\n self.noaa_data_sorted[header][noaa_key] = value\n except KeyError:\n # this shouldn't ever really happen, but just in case\n logger_lpd_noaa.warn(\"lpd_noaa: reorganize: KeyError: {}\".format(noaa_key))\n return"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef __reorganize_author(self):\n try:\n for idx, pub in enumerate(self.noaa_data_sorted[\"Publication\"]):\n if \"author\" in pub:\n _str = pub[\"author\"]\n if \" and \" in _str:\n self.noaa_data_sorted[\"Publication\"][idx][\"author\"] = _str.replace(\" and \", \"; \")\n if \";\" in _str:\n self.noaa_data_sorted[\"Publication\"][idx][\"author\"] = _str.replace(\";\", \"; \")\n\n except Exception:\n pass\n return", "response": "Reorganizes author names by and and."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef __reorganize_coordinates(self):\n try:\n l = self.noaa_data_sorted[\"Site_Information\"]['geometry']['coordinates']\n locations = [\"Northernmost_Latitude\", \"Southernmost_Latitude\", \"Easternmost_Longitude\",\n \"Westernmost_Longitude\", \"Elevation\"]\n logger_lpd_noaa.info(\"coordinates: {} coordinates found\".format(len(l)))\n\n # Amount of coordinates in the list\n _len_coords = len(l)\n\n # Odd number of coordinates. 
Elevation value exists\n if _len_coords % 2 == 1:\n # Store the elevation, which is always the last value in the list\n self.noaa_geo[\"Elevation\"] = l[-1]\n # If elevation, then subtract one from the length\n _len_coords -= 1\n\n # Start compiling the lat lon coordinates\n\n # 0 coordinate values. fill in locations with empty values\n if _len_coords == 0:\n for location in locations:\n self.noaa_geo[location] = ' '\n # 2 coordinates values. duplicate to fill 4 location slots.\n elif _len_coords == 2:\n self.noaa_geo[locations[0]] = l[1]\n self.noaa_geo[locations[1]] = l[1]\n self.noaa_geo[locations[2]] = l[0]\n self.noaa_geo[locations[3]] = l[0]\n\n # 4 coordinate values. put each in its correct location slot.\n elif _len_coords == 4:\n for index, location in enumerate(locations):\n self.noaa_geo[locations[index]] = l[index]\n else:\n logger_lpd_noaa.info(\"coordinates: too many coordinates given\")\n except KeyError:\n logger_lpd_noaa.info(\"lpd_noaa: coordinates: no coordinate information\")\n except Exception:\n logger_lpd_noaa.error(\"lpd_noaa: coordinates: unknown exception\")\n\n return", "response": "Reorganize the coordinates of the current object based on how many values are available."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreorganizes the Funding_Agency_Name and Grant keys.", "response": "def __reorganize_funding(self):\n \"\"\"\n Funding gets added to noaa_data_sorted with LiPD keys. 
Change those keys to NOAA\n :return none:\n \"\"\"\n _map = {\"agency\": \"Funding_Agency_Name\", \"grant\": \"Grant\"}\n try:\n _l = []\n for item in self.noaa_data_sorted[\"Funding_Agency\"]:\n _tmp = {}\n for lpd_name, noaa_name in _map.items():\n val = \"\"\n if lpd_name in item:\n val = item[lpd_name]\n _tmp[noaa_name] = val\n _l.append(_tmp)\n self.noaa_data_sorted[\"Funding_Agency\"] = _l\n except Exception:\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreorganizing the geo data from self. noaa_data_sorted", "response": "def __reorganize_geo(self):\n \"\"\"\n Concat geo value and units, and reorganize the rest\n References geo data from self.noaa_data_sorted\n Places new data into self.noaa_geo temporarily, and then back into self.noaa_data_sorted.\n :return:\n \"\"\"\n logger_lpd_noaa.info(\"enter reorganize_geo\")\n\n try:\n # Geo -> Properties\n for k, v in self.noaa_data_sorted[\"Site_Information\"]['properties'].items():\n noaa_key = self.__get_noaa_key(k)\n self.noaa_geo[noaa_key] = v\n except KeyError:\n logger_lpd_noaa.info(\"reorganize_geo: KeyError: geo properties\")\n try:\n # Geo -> Geometry\n self.__reorganize_coordinates()\n except Exception:\n logger_lpd_noaa.warning(\"reorganize_geo: Exception: missing required data: coordinates\")\n\n # put the temporarily organized data into the self.noaa_data_sorted\n self.noaa_data_sorted[\"Site_Information\"] = self.noaa_geo\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef __reorganize_sensor(self):\n _code = []\n _name = []\n\n # Check if any of the sensor data is misplaced, and create corrected lists.\n if self.lsts_tmp[\"genus\"]:\n for name in self.lsts_tmp[\"genus\"]:\n if len(name) == 4 and name.isupper():\n _code.append(name)\n else:\n _name.append(name)\n\n if self.lsts_tmp[\"species\"]:\n for name in self.lsts_tmp[\"species\"]:\n if len(name) != 4 and not name.isupper():\n 
_name.append(name)\n else:\n _code.append(name)\n\n # Set the strings into the noaa data sorted\n self.lsts_tmp[\"species\"] = _name\n self.lsts_tmp[\"genus\"] = _code\n\n return", "response": "Reorganize the sensor data into the noaa data"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nput the variableNames with the corresponding column data.", "response": "def __put_names_on_csv_cols(names, cols):\n \"\"\"\n Put the variableNames with the corresponding column data.\n :param list names: variableNames\n :param list cols: List of Lists of column data\n :return dict:\n \"\"\"\n _combined = {}\n for idx, name in enumerate(names):\n # Use the variableName, and the column data from the same index\n _combined[name] = cols[idx]\n\n\n\n return _combined"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef __put_year_col_first(d):\n if \"year\" in d:\n D = OrderedDict()\n # store the year column first\n D[\"year\"] = d[\"year\"]\n for k,v in d.items():\n if k != \"year\":\n # store the other columns\n D[k] = v\n return D\n else:\n # year is not found, return data as-is\n return d", "response": "Always write year column first."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove exact duplicate entries for abstract or citation fields. Check all publication entries.", "response": "def __rm_duplicate_citation_2(self, key):\n \"\"\"\n Remove exact duplicate entries for abstract or citation fields. Check all publication entries.\n :return:\n \"\"\"\n citations = []\n # Before writing, remove any duplicate \"Full_Citations\" that are found\n for idx, pub in enumerate(self.noaa_data_sorted[\"Publication\"]):\n try:\n citations.append(pub[key])\n except KeyError:\n # Key was not found. 
Enter a blank entry in citations list\n citations.append(\"\")\n\n # Create a backwards dictionary, that lists which indexes are duplicates\n d = defaultdict(list)\n for i, item in enumerate(citations):\n d[item].append(i)\n d = {k: v for k, v in d.items() if len(v) > 1}\n # Duplicates indexes are listed like so:\n # {\"full citation info here\": [0, 2, 5]}\n\n # Loop over duplicate indexes\n for citation, idxs in d.items():\n # do not process any duplicate entries that were initially blanks.\n if citation:\n # Loop for [1:], since we want to keep the first citation and remove the rest.\n for idx in idxs[1:]:\n # Set citation to blank\n self.noaa_data_sorted[\"Publication\"][idx][key] = \"\""} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef __rm_names_on_csv_cols(d):\n _names = []\n _data = []\n for name, data in d.items():\n _names.append(name)\n _data.append(data)\n return _names, _data", "response": "Remove the variableNames from the columns."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntakes a string of author names get the first author name then get the first authors last name only.", "response": "def __get_author_last_name(author):\n \"\"\"\n Take a string of author(s), get the first author name, then get the first authors last name only.\n\n :param str author: Author(s)\n :return str _author: First author's last name\n \"\"\"\n _author = \"\"\n if isinstance(author, str):\n try:\n # example: 'Ahmed, Moinuddin and Anchukaitis, Kevin J and ...'\n # If this is a list of authors, try to split it and just get the first author.\n if \" and \" in author:\n _author = author.split(\" and \")[0]\n # 'Ahmed, Moinuddin; Anchukaitis, Kevin J; ...'\n elif \";\" in author:\n _author = author.split(\";\")[0]\n except Exception:\n _author = \"\"\n try:\n # example : 'Ahmed Moinuddin, Anchukaitis Kevin J, ...'\n _author = author.replace(\" \", \"\")\n if \",\" 
in author:\n _author = author.split(\",\")[0]\n except Exception:\n _author = \"\"\n elif isinstance(author, list):\n try:\n # example: [{'name': 'Ahmed, Moinuddin'}, {'name': 'Anchukaitis, Kevin J.'}, ..]\n # just get the last name of the first author. this could get too long.\n _author = author[0][\"name\"].split(\",\")[0]\n except Exception as e:\n _author = \"\"\n\n return _author"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a string that can be used to get the authors from the first publication.", "response": "def __create_author_investigator_str(self):\n \"\"\"\n When investigators is empty, try to get authors from the first publication instead.\n :return str author: Author names\n \"\"\"\n _author = \"\"\n try:\n for pub in self.noaa_data_sorted[\"Publication\"]:\n if \"author\" in pub:\n if pub[\"author\"]:\n _author_src = pub[\"author\"]\n if isinstance(_author_src, str):\n try:\n if \" and \" in _author_src:\n _author = _author_src.replace(\" and \", \"; \")\n elif \";\" in _author_src:\n # If there is a semi-colon, add a space after it, just in case it didn't have one\n _author = _author_src.replace(\";\", \"; \")\n break\n except Exception as e:\n _author = \"\"\n elif isinstance(_author_src, list):\n try:\n for _entry in _author_src:\n _author += _entry[\"name\"].split(\",\")[0] + \", \"\n except Exception as e:\n _author = \"\"\n except Exception:\n _author = \"\"\n return _author"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget filename from a data table", "response": "def __get_filename(table):\n \"\"\"\n Get filename from a data table\n :param dict table:\n :return str:\n \"\"\"\n try:\n filename = table['filename']\n except KeyError as e:\n filename = \"\"\n logger_lpd_noaa.warning(\"get_filename: KeyError: Table missing filename, {}\".format(e))\n except TypeError:\n try:\n filename = table[0][\"filename\"]\n except Exception as e:\n filename = \"\"\n 
logger_lpd_noaa.warning(\"get_filename: Generic: Unable to get filename from table, {}\".format(e))\n return filename"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nGet DOI from this ONE publication entry.", "response": "def __get_doi(pub):\n \"\"\"\n Get DOI from this ONE publication entry.\n :param dict pub: Single publication entry\n :return:\n \"\"\"\n doi = \"\"\n # Doi location: d[\"pub\"][idx][\"identifier\"][0][\"id\"]\n try:\n doi = pub[\"DOI\"][0][\"id\"]\n doi = clean_doi(doi)\n except KeyError:\n logger_lpd_noaa.info(\"get_dois: KeyError: missing a doi key\")\n except Exception:\n logger_lpd_noaa.info(\"get_dois: Exception: something went wrong\")\n\n # if we received a doi that's a list, we want to concat into a single string\n if isinstance(doi, list):\n if len(doi) == 1:\n doi = doi[0]\n else:\n doi = \", \".join(doi)\n return doi"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __get_noaa_key_w_context(header, lipd_key):\n try:\n noaa_key = LIPD_NOAA_MAP_BY_SECTION[header][lipd_key]\n except KeyError:\n return lipd_key\n return noaa_key", "response": "Get the proper noaa key for a given lipd key."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsearches for either Age or Year to calculate the max, min, and time unit for this table/file. 
Preference: Look for Age first, and then Year second (if needed) :param dict table: Table data :return dict: Max, min, and time unit", "response": "def __get_max_min_time_1(self, table):\n \"\"\"\n Search for either Age or Year to calculate the max, min, and time unit for this table/file.\n Preference: Look for Age first, and then Year second (if needed)\n\n :param dict table: Table data\n :return dict: Max, min, and time unit\n \"\"\"\n try:\n # find the values and units we need to calculate\n vals, units = self.__get_max_min_time_2(table, [\"age\", \"yearbp\", \"yrbp\"], True)\n\n if not vals and not units:\n vals, units = self.__get_max_min_time_2(table, [\"year\", \"yr\"], True)\n\n if not vals and not units:\n vals, units = self.__get_max_min_time_2(table, [\"age\", \"yearbp\", \"yrbp\"], False)\n\n if not vals and not units:\n vals, units = self.__get_max_min_time_2(table, [\"year\", \"yr\"], False)\n\n # now put this data into the noaa data sorted self for writing to file.\n # year farthest in the past\n _max = max(vals)\n _min = min(vals)\n if _min or _min in [0, 0.0]:\n self.noaa_data_sorted[\"Data_Collection\"][\"Earliest_Year\"] = str(_min)\n if _max or _max in [0, 0.0]:\n self.noaa_data_sorted[\"Data_Collection\"][\"Most_Recent_Year\"] = str(_max)\n # AD or... 
yrs BP?\n self.noaa_data_sorted[\"Data_Collection\"][\"Time_Unit\"] = units\n\n except Exception as e:\n logger_lpd_noaa.debug(\"get_max_min_time_2: {}\".format(e))\n\n return"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef __get_max_min_time_2(table, terms, exact):\n vals = []\n unit = \"\"\n try:\n for k, v in table[\"columns\"].items():\n if exact:\n if k.lower() in terms:\n try:\n vals = v[\"values\"]\n unit = v[\"units\"]\n break\n except KeyError:\n pass\n elif not exact:\n for term in terms:\n if term in k:\n try:\n vals = v[\"values\"]\n unit = v[\"units\"]\n break\n except KeyError:\n pass\n\n except Exception as e:\n logger_lpd_noaa.debug(\"get_max_min_time_3: {}\".format(e))\n\n return vals, unit", "response": "Get the max and min time for a given table and list of terms."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nlooping over all publication entries and add data citation to self. data_citation", "response": "def __get_data_citation(self, l):\n \"\"\"\n If originalDataURL / investigators not in root data, check for a dataCitation pub entry.\n :return:\n \"\"\"\n # loop once for each publication entry\n for pub in l:\n try:\n # at the moment, these are the only keys of interest inside of dataCitation. Check each.\n for key in [\"url\", \"investigators\"]:\n if pub[\"type\"] == \"dataCitation\" and key in pub:\n noaa_key = self.__get_noaa_key(key)\n self.data_citation[noaa_key] = pub[key]\n except KeyError:\n # no \"type\" key in pub\n logger_lpd_noaa.info(\"lpd_noaa: get_data_citation: KeyError: pub is missing 'type'\")\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset up the output filenames.", "response": "def __get_output_filenames(self):\n \"\"\"\n Set up the output filenames. 
If more than one file is being written out, we need appended filenames.\n :return:\n \"\"\"\n # if there is only one file, use the normal filename with nothing appended.\n if len(self.noaa_data_sorted[\"Data\"]) == 1:\n self.output_filenames.append(self.filename_txt)\n\n # if there are multiple files that need to be written out, (multiple data table pairs), then append numbers\n elif len(self.noaa_data_sorted[\"Data\"]) > 1:\n for i in range(0, self.output_file_ct):\n tmp_name = \"{}-{}.txt\".format(self.dsn, i+1)\n self.output_filenames.append(tmp_name)\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the paleo tables and the chron tables.", "response": "def __get_table_pairs(self):\n \"\"\"\n Use the tables in self.paleos and self.chrons (sorted by idx) to put in self.noaa_data_sorted.\n :return:\n \"\"\"\n try:\n for _idx, _p in enumerate(self.data_paleos):\n _c = {}\n try:\n _c = self.data_chrons[_idx]\n except IndexError:\n pass\n # create entry in self object collection of data tables\n self.noaa_data_sorted[\"Data\"].append({\"paleo\": _p, \"chron\": _c})\n except KeyError:\n logger_lpd_noaa.warning(\"lpd_noaa: get_meas_table: 0 paleo data tables\")\n\n self.output_file_ct = len(self.noaa_data_sorted[\"Data\"])\n return"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking how many tables are in the json and if so check if we have to make the append_filenames flag", "response": "def __get_table_count(self):\n \"\"\"\n Check how many tables are in the json\n :return int:\n \"\"\"\n _count = 0\n try:\n keys = [\"paleo\", \"paleoData\", \"paleoMeasurementTable\"]\n # get the count for how many tables we have. 
so we know to make appended filenames or not.\n for pd_name, pd_data in self.lipd_data[\"paleoData\"].items():\n for section_name, section_data in pd_data.items():\n _count += len(section_data)\n\n except Exception:\n pass\n\n if _count > 1:\n self.append_filenames = True\n\n return"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef __create_file(self):\n logger_lpd_noaa.info(\"enter create_file\")\n\n self.__get_output_filenames()\n\n for idx, filename in enumerate(self.output_filenames):\n try:\n # self.noaa_txt = open(os.path.join(self.path, filename), \"w+\")\n # self.noaa_file_output[filename] = \"\"\n # self.noaa_txt = self.noaa_file_output[filename]\n self.noaa_txt = \"\"\n print(\"writing: {}\".format(filename))\n logger_lpd_noaa.info(\"write_file: opened output txt file\")\n except Exception as e:\n logger_lpd_noaa.error(\"write_file: failed to open output txt file, {}\".format(e))\n return\n\n self.__get_max_min_time_1(self.noaa_data_sorted[\"Data\"][idx][\"paleo\"])\n self.__check_time_values()\n\n self.__write_top(filename)\n self.__write_generic('Contribution_Date')\n self.__write_generic('File_Last_Modified_Date')\n self.__write_generic('Title')\n self.__write_generic('Investigators')\n self.__write_generic('Description_Notes_and_Keywords')\n self.__write_pub()\n self.__write_funding()\n self.__write_geo()\n self.__write_generic('Data_Collection')\n self.__write_generic('Species')\n self.__write_data(idx)\n\n self.noaa_file_output[filename] = self.noaa_txt\n # logger_lpd_noaa.info(\"closed output text file\")\n # reset the max min time unit to none\n self.max_min_time = {\"min\": \"\", \"max\": \"\", \"time\": \"\"}\n # shutil.copy(os.path.join(os.getcwd(), filename), self.dir_root)\n logger_lpd_noaa.info(\"exit create_file\")\n return", "response": "Create a new noaa file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwriting the top section of the txt file.", "response": "def 
__write_top(self, filename_txt):\n \"\"\"\n Write the top section of the txt file.\n :param int section_num: Section number\n :return none:\n \"\"\"\n logger_lpd_noaa.info(\"writing section: {}\".format(\"top\"))\n self.__create_blanks(\"Top\", self.noaa_data_sorted[\"Top\"])\n\n # Start writing the NOAA file section by section, starting at the very top of the template.\n self.noaa_txt += \"# {}\".format(self.noaa_data_sorted[\"Top\"]['Study_Name'])\n self.__write_template_top()\n # We don't know what the full online resource path will be yet, so leave the base path only\n self.__write_k_v(\"Online_Resource\", \"{}/{}\".format(self.wds_url, filename_txt), top=True)\n self.__write_k_v(\"Online_Resource_Description\", \" This file. NOAA WDS Paleo formatted metadata and data for version {} of this dataset.\".format(self.version), indent=True)\n self.__write_k_v(\"Online_Resource\", \"{}\".format(self.lpd_file_url), top=True)\n self.__write_k_v(\"Online_Resource_Description\", \" Linked Paleo Data (LiPD) formatted file containing the same metadata and data as this file, for version {} of this dataset.\".format(self.version), indent=True)\n self.__write_k_v(\"Original_Source_URL\", self.noaa_data_sorted[\"Top\"]['Original_Source_URL'], top=True)\n self.noaa_txt += \"\\n# Description/Documentation lines begin with #\\n# Data lines have no #\\n#\"\n self.__write_k_v(\"Archive\", self.noaa_data_sorted[\"Top\"]['Archive'])\n self.__write_k_v(\"Parameter_Keywords\", self.noaa_data_sorted[\"Top\"]['Parameter_Keywords'])\n\n # get the doi from the pub section of self.steps_dict[6]\n # doi = self.__get_doi()\n self.__write_k_v(\"Dataset_DOI\", ', '.join(self.doi), False, True, False, False)\n # self.__write_k_v(\"Parameter_Keywords\", self.steps_dict[section_num]['parameterKeywords'])\n self.__write_divider()\n logger_lpd_noaa.info(\"exit write_top\")\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef 
__write_generic(self, header, d=None):\n logger_lpd_noaa.info(\"writing section: {}\".format(header))\n if not d:\n d = self.noaa_data_sorted[header]\n d = self.__create_blanks(header, d)\n self.__write_header_name(header)\n for key in NOAA_KEYS_BY_SECTION[header]:\n key = self.__get_noaa_key_w_context(header, key)\n # NOAA writes value and units on one line. Build the string here.\n # if key == 'coreLength':\n # value, unit = self.__get_corelength(val)\n # val = str(value) + \" \" + str(unit)\n # DOI id is nested in \"identifier\" block. Retrieve it.\n if key == \"DOI\":\n val = self.__get_doi(d)\n # Don't write out an empty DOI list. Write out empty string instead.\n if not val:\n val = \"\"\n elif key == \"Investigators\":\n # Investigators is a section by itself. \"d\" should be a direct list\n val = get_authors_as_str(d)\n # If we don't have investigator data, use the authors from the first publication entry\n if not val:\n val = self.__create_author_investigator_str()\n # lipd uses a singular \"author\" key, while NOAA uses plural \"authors\" key.\n elif key in [\"Author\", \"Authors\"]:\n # \"d\" has all publication data, so only pass through the d[\"Authors\"] piece\n val = get_authors_as_str(d[key])\n elif key == \"Collection_Name\":\n if not d[key]:\n # If there is not a collection name, then use the dsn so _something_ is there.\n val = self.dsn\n else:\n val = d[key]\n\n # Write the output line\n self.__write_k_v(str(self.__get_noaa_key(key)), val, indent=True)\n # Don't write a divider if there isn't a Chron section after species. It'll make a double.\n # if header == \"Species\" and not self.noaa_data_sorted[\"Species\"]:\n # return\n self.__write_divider()\n return", "response": "Write a generic section to the. 
txt file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __write_pub(self):\n try:\n self.__reorganize_author()\n # Check all publications, and remove possible duplicate Full_Citations\n self.__rm_duplicate_citation_1()\n if not self.noaa_data_sorted[\"Publication\"]:\n self.noaa_data_sorted[\"Publication\"].append({\"pubYear\": \"\"})\n for idx, pub in enumerate(self.noaa_data_sorted[\"Publication\"]):\n logger_lpd_noaa.info(\"publication: {}\".format(idx))\n # Do not write out Data Citation publications. Check, and skip if necessary\n is_data_citation = self.__get_pub_type(pub)\n if not is_data_citation or is_data_citation and len(self.noaa_data_sorted[\"Publication\"]) == 1:\n pub = self.__convert_keys_1(\"Publication\", pub)\n self.__write_generic('Publication', pub)\n except KeyError:\n logger_lpd_noaa.info(\"write_pub: KeyError: pub section not found\")\n except TypeError:\n logger_lpd_noaa.debug(\"write_pub: TypeError: pub not a list type\")\n\n return", "response": "Write pub section. There may be multiple, so write a generic section for each one.\n :return none:"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwrites funding section. There are likely multiple entries. :param dict d: :return none:", "response": "def __write_funding(self):\n \"\"\"\n Write funding section. 
There are likely multiple entries.\n :param dict d:\n :return none:\n \"\"\"\n self.__reorganize_funding()\n # if funding is empty, insert a blank entry so that it'll still write the empty section on the template.\n if not self.noaa_data_sorted[\"Funding_Agency\"]:\n self.noaa_data_sorted[\"Funding_Agency\"].append({\"grant\": \"\", \"agency\": \"\"})\n for idx, entry in enumerate(self.noaa_data_sorted[\"Funding_Agency\"]):\n logger_lpd_noaa.info(\"funding: {}\".format(idx))\n self.__write_generic('Funding_Agency', entry)\n return"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef __write_data(self, idx):\n pair = self.noaa_data_sorted[\"Data\"][idx]\n # Run once for each pair (paleo+chron) of tables that was gathered earlier.\n # for idx, pair in enumerate(self.noaa_data_sorted[\"Data\"]):\n lst_pc = [\"chron\", \"paleo\"]\n # loop once for paleo, once for chron\n for pc in lst_pc:\n # safeguard in case the table is an empty set.\n table = pair[pc]\n if pc == \"paleo\":\n self.__write_variables_1(table)\n self.__write_divider()\n self.__write_columns(pc, table)\n elif pc == \"chron\":\n # self.__write_variables_1(table)\n self.__write_columns(pc, table)\n self.__write_divider(nl=False)\n return", "response": "Write out the measurement tables found in paleoData and chronData."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef __write_variables_1(self, table):\n logger_lpd_noaa.info(\"writing section: {}\".format(\"Variables\"))\n\n # Write the template lines first\n self.__write_template_variable()\n\n try:\n self.noaa_txt += '#'\n\n # Special NOAA Request: Write the \"year\" column first always, if available\n # write year data first, when available\n for name, data in table[\"columns\"].items():\n if name == \"year\":\n # write first line in variables section here\n self.__write_variables_2(data)\n # leave the loop, because this is all we needed to 
accomplish\n break\n\n # all other cases\n for name, data in table[\"columns\"].items():\n # we already wrote out the year column, so don't duplicate.\n if name != \"year\":\n self.__write_variables_2(data)\n\n except KeyError as e:\n logger_lpd_noaa.warn(\"write_variables: KeyError: {} not found\".format(e))\n return", "response": "Write the variables section of txt file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __write_variables_2(self, col):\n col = self.__convert_keys_1(\"Variables\", col)\n\n # Write one line for each column. One line has all metadata for one column.\n for entry in NOAA_KEYS_BY_SECTION[\"Variables\"]:\n # May need a better way of handling this in the future. Need a strict list for this section.\n try:\n # First entry: Add extra hash and tab\n if entry == 'shortname':\n # DEPRECATED: Fixed spacing for variable names.\n # self.noaa_txt.write('{:<20}'.format('#' + str(col[entry])))\n # Fluid spacing for variable names. 
Spacing dependent on length of variable names.\n self.noaa_txt += '{}\\t'.format('#' + str(col[entry]))\n # Last entry: No space or comma\n elif entry == \"additional\":\n e = \" \"\n for item in [\"notes\", \"uncertainty\"]:\n try:\n if col[item]:\n e += str(col[item]).replace(\",\", \";\") + \"; \"\n except KeyError:\n pass\n self.noaa_txt += '{} '.format(e)\n\n # elif entry == 'notes':\n # self.noaa_txt.write('{} '.format(str(col[entry])))\n else:\n # This is for any entry that is not first or last in the line ordering\n # Account for nested entries.\n # if entry == \"uncertainty\":\n # try:\n # e = str(col[\"calibration\"][entry])\n # except KeyError:\n # e = \"\"\n if entry == \"seasonality\":\n try:\n e = str(col[\"climateInterpretation\"][entry])\n except KeyError:\n e = \"\"\n elif entry == \"archive\":\n e = self.noaa_data_sorted[\"Top\"][\"Archive\"]\n elif entry == \"dataType\":\n # Lipd uses real data types (floats, ints), NOAA wants C or N (character or numeric)\n if col[entry] == \"float\":\n e = \"N\"\n else:\n e = \"C\"\n else:\n e = str(col[entry])\n try:\n e = e.replace(\",\", \";\")\n except AttributeError as ee:\n logger_lpd_noaa.warn(\"write_variables_2: AttributeError: {}, {}\".format(e, ee))\n self.noaa_txt += '{}, '.format(e)\n except KeyError as e:\n self.noaa_txt += '{:<0}'.format(',')\n logger_lpd_noaa.info(\"write_variables: KeyError: missing {}\".format(e))\n self.noaa_txt += '\\n#'\n return", "response": "Write the variables section."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwrites the numeric data to the bottom section of the txt file.", "response": "def __write_columns(self, pc, table):\n \"\"\"\n Read numeric data from csv and write to the bottom section of the txt file.\n :param dict table: Paleodata dictionary\n :return none:\n \"\"\"\n logger_lpd_noaa.info(\"writing section: data, csv values from file\")\n # get filename for this table's csv data\n # filename = 
self.__get_filename(table)\n # logger_lpd_noaa.info(\"processing csv file: {}\".format(filename))\n # # get missing value for this table\n # # mv = self.__get_mv(table)\n # # write template lines\n # # self.__write_template_paleo(mv)\n if pc == \"paleo\":\n self.__write_template_paleo()\n elif pc == \"chron\":\n self.__write_template_chron()\n\n # continue if csv exists\n if self._values_exist(table):\n # logger_lpd_noaa.info(\"_write_columns: csv data exists: {}\".format(filename))\n\n # sort the dictionary so the year column is first\n _csv_data_by_name = self.__put_year_col_first(table[\"columns\"])\n\n # now split the sorted dictionary back into two lists (easier format to write to file)\n _names, _data = self.__rm_names_on_csv_cols(_csv_data_by_name)\n\n # write column variableNames\n self.__write_data_col_header(_names, pc)\n\n # write data columns index by index\n self.__write_data_col_vals(_data, pc)\n\n return"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef __write_k_v(self, k, v, top=False, bot=False, multi=False, indent=False):\n if top:\n self.noaa_txt += \"\\n#\"\n if multi:\n for item in v:\n if indent:\n self.noaa_txt += \"\\n# {}: {}\".format(str(k), str(item))\n else:\n self.noaa_txt += \"\\n# {}: {}\".format(str(k), str(item))\n else:\n if indent:\n self.noaa_txt += \"\\n# {}: {}\".format(str(k), str(v))\n else:\n self.noaa_txt += \"\\n# {}: {}\".format(str(k), str(v))\n if bot:\n self.noaa_txt += \"\\n#\"\n return", "response": "Write a key - value pair to the output file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __write_divider(self, top=False, bot=False, nl=True):\n if top:\n self.noaa_txt += \"\\n#\"\n if nl:\n self.noaa_txt += \"\\n\"\n self.noaa_txt += \"#------------------\\n\"\n if bot:\n self.noaa_txt += \"\\n#\"\n return", "response": "Write a divider line"} {"SOURCE": "codesearchnet", "instruction": "How would you 
explain what the following Python 3 function does\ndef __write_data_col_header(self, l, pc):\n count = len(l)\n if pc == \"chron\":\n self.noaa_txt += \"# \"\n for name in l:\n # last column - spacing not important\n if count == 1:\n self.noaa_txt += \"{}\\t\".format(name)\n # all [:-1] columns - fixed spacing to preserve alignment\n else:\n self.noaa_txt += \"{:<15}\".format(name)\n count -= 1\n\n self.noaa_txt += '\\n'", "response": "Write the variableNames that are the column header in the Data section."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef __write_data_col_vals(self, ll, pc):\n\n # all columns should have the same amount of values. grab that number\n try:\n _items_in_cols = len(ll[0][\"values\"])\n for idx in range(0, _items_in_cols):\n # amount of columns\n _count = len(ll)\n self.noaa_txt += \"# \"\n for col in ll:\n self.noaa_txt += \"{}\\t\".format(str(col[\"values\"][idx]))\n _count -= 1\n if (idx < _items_in_cols):\n self.noaa_txt += '\\n'\n\n except IndexError:\n logger_lpd_noaa.warning(\"_write_data_col_vals: IndexError: couldn't get length of columns\")\n\n return", "response": "Loop over value arrays and write index by index to correspond to the rows of a txt file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef copy(app, need, needs, option, need_id=None):\n if need_id is not None:\n need = needs[need_id]\n\n return need[option]", "response": "Copies the value of one need option to another need."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks if all linked needs have a given value.", "response": "def check_linked_values(app, need, needs, result, search_option, search_value, filter_string=None, one_hit=False):\n \"\"\"\n Returns a specific value, if for all linked needs a given option has a given value.\n\n The linked needs can be filtered by using the ``filter`` option.\n\n If 
``one_hit`` is set to True, only one linked need must have a positive match for the searched value.\n\n **Examples**\n\n **Needs used as input data**\n\n .. code-block:: jinja\n\n .. req:: Input A\n :id: clv_A\n :status: in progress\n\n .. req:: Input B\n :id: clv_B\n :status: in progress\n\n .. spec:: Input C\n :id: clv_C\n :status: closed\n\n .. req:: Input A\n :id: clv_A\n :status: in progress\n :collapse: False\n\n .. req:: Input B\n :id: clv_B\n :status: in progress\n :collapse: False\n\n .. spec:: Input C\n :id: clv_C\n :status: closed\n :collapse: False\n\n\n **Example 1: Positive check**\n\n Status gets set to *progress*.\n\n .. code-block:: jinja\n\n .. spec:: result 1: Positive check\n :links: clv_A, clv_B\n :status: [[check_linked_values('progress', 'status', 'in progress' )]]\n\n .. spec:: result 1: Positive check\n :id: clv_1\n :links: clv_A, clv_B\n :status: [[check_linked_values('progress', 'status', 'in progress' )]]\n :collapse: False\n\n\n **Example 2: Negative check**\n\n Status gets not set to *progress*, because status of linked need *clv_C* does not match *\"in progress\"*.\n\n .. code-block:: jinja\n\n .. spec:: result 2: Negative check\n :links: clv_A, clv_B, clv_C\n :status: [[check_linked_values('progress', 'status', 'in progress' )]]\n\n .. spec:: result 2: Negative check\n :id: clv_2\n :links: clv_A, clv_B, clv_C\n :status: [[check_linked_values('progress', 'status', 'in progress' )]]\n :collapse: False\n\n\n **Example 3: Positive check thanks of used filter**\n\n status gets set to *progress*, because linked need *clv_C* is not part of the filter.\n\n .. code-block:: jinja\n\n .. spec:: result 3: Positive check thanks of used filter\n :links: clv_A, clv_B, clv_C\n :status: [[check_linked_values('progress', 'status', 'in progress', 'type == \"req\" ' )]]\n\n .. 
spec:: result 3: Positive check thanks of used filter\n :id: clv_3\n :links: clv_A, clv_B, clv_C\n :status: [[check_linked_values('progress', 'status', 'in progress', 'type == \"req\" ' )]]\n :collapse: False\n\n **Example 4: Positive check thanks of one_hit option**\n\n Even *clv_C* has not the searched status, status gets anyway set to *progress*.\n That's because ``one_hit`` is used so that only one linked need must have the searched\n value.\n\n .. code-block:: jinja\n\n .. spec:: result 4: Positive check thanks of one_hit option\n :links: clv_A, clv_B, clv_C\n :status: [[check_linked_values('progress', 'status', 'in progress', one_hit=True )]]\n\n .. spec:: result 4: Positive check thanks of one_hit option\n :id: clv_4\n :links: clv_A, clv_B, clv_C\n :status: [[check_linked_values('progress', 'status', 'in progress', one_hit=True )]]\n :collapse: False\n\n **Result 5: Two checks and a joint status**\n Two checks are performed and both are positive. So their results get joined.\n\n .. code-block:: jinja\n\n .. spec:: result 5: Two checks and a joint status\n :links: clv_A, clv_B, clv_C\n :status: [[check_linked_values('progress', 'status', 'in progress', one_hit=True )]] [[check_linked_values('closed', 'status', 'closed', one_hit=True )]]\n\n .. 
spec:: result 5: Two checks and a joint status\n :id: clv_5\n :links: clv_A, clv_B, clv_C\n :status: [[check_linked_values('progress', 'status', 'in progress', one_hit=True )]] [[check_linked_values('closed', 'status', 'closed', one_hit=True )]]\n :collapse: False\n\n :param result: value, which gets returned if all linked needs have passed the checks\n :param search_option: option name, which is used in linked needs for the search\n :param search_value: value, which an option of a linked need must match\n :param filter_string: Checks are only performed on linked needs, which pass the defined filter\n :param one_hit: If True, only one linked need must have a positive check\n :return: result, if all checks are positive\n \"\"\"\n links = need[\"links\"]\n if not isinstance(search_value, list):\n search_value = [search_value]\n\n for link in links:\n if filter_string is not None:\n try:\n if not filter_single_need(needs[link], filter_string):\n continue\n except Exception as e:\n logger.warning(\"CheckLinkedValues: Filter {0} not valid: Error: {1}\".format(filter_string, e))\n\n if not one_hit and not needs[link][search_option] in search_value:\n return None\n elif one_hit and needs[link][search_option] in search_value:\n return result\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncalculate the sum of the values of a given option in a given linked tree.", "response": "def calc_sum(app, need, needs, option, filter=None, links_only=False):\n \"\"\"\n Sums the values of a given option in filtered needs up to a single number.\n\n Useful e.g. for calculating the amount of needed hours for implementation of all linked\n specification needs.\n\n\n **Input data**\n\n .. spec:: Do this\n :id: sum_input_1\n :hours: 7\n :collapse: False\n\n .. spec:: Do that\n :id: sum_input_2\n :hours: 15\n :collapse: False\n\n .. spec:: Do too much\n :id: sum_input_3\n :hours: 110\n :collapse: False\n\n **Example 1**\n\n .. 
code-block:: jinja\n\n .. req:: Result 1\n :amount: [[calc_sum(\"hours\")]]\n\n .. req:: Result 1\n :amount: [[calc_sum(\"hours\")]]\n :collapse: False\n\n\n **Example 2**\n\n .. code-block:: jinja\n\n .. req:: Result 2\n :amount: [[calc_sum(\"hours\", \"hours.isdigit() and float(hours) > 10\")]]\n\n .. req:: Result 2\n :amount: [[calc_sum(\"hours\", \"hours.isdigit() and float(hours) > 10\")]]\n :collapse: False\n\n **Example 3**\n\n .. code-block:: jinja\n\n .. req:: Result 3\n :links: sum_input_1; sum_input_3\n :amount: [[calc_sum(\"hours\", links_only=\"True\")]]\n\n .. req:: Result 3\n :links: sum_input_1; sum_input_3\n :amount: [[calc_sum(\"hours\", links_only=\"True\")]]\n :collapse: False\n\n **Example 4**\n\n .. code-block:: jinja\n\n .. req:: Result 4\n :links: sum_input_1; sum_input_3\n :amount: [[calc_sum(\"hours\", \"hours.isdigit() and float(hours) > 10\", \"True\")]]\n\n .. req:: Result 4\n :links: sum_input_1; sum_input_3\n :amount: [[calc_sum(\"hours\", \"hours.isdigit() and float(hours) > 10\", \"True\")]]\n :collapse: False\n\n :param option: Options, from which the numbers shall be taken\n :param filter: Filter string, which all needs must passed to get their value added.\n :param links_only: If \"True\", only linked needs are taken into account.\n\n :return: A float number\n \"\"\"\n if not links_only:\n check_needs = needs.values()\n else:\n check_needs = []\n for link in need[\"links\"]:\n check_needs.append(needs[link])\n\n calculated_sum = 0\n\n for check_need in check_needs:\n if filter is not None:\n try:\n if not filter_single_need(check_need, filter):\n continue\n except ValueError as e:\n pass\n except NeedInvalidFilter as ex:\n logger.warning('Given filter is not valid. 
Error: {}'.format(ex))\n try:\n calculated_sum += float(check_need[option])\n except ValueError:\n pass\n\n return calculated_sum"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nprompts user for conversion between NOAA and LiPD file formats.", "response": "def noaa_prompt():\n \"\"\"\n Convert between NOAA and LiPD file formats.\n :return:\n \"\"\"\n logger_noaa.info(\"enter noaa\")\n # Run lpd_noaa or noaa_lpd ?\n print(\"Which conversion?\\n1. LPD to NOAA\\n2. NOAA to LPD\\n\")\n mode = input(\"Option: \")\n logger_noaa.info(\"chose option: {}\".format(mode))\n\n return mode"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef noaa_prompt_1():\n print(\"Enter the project information below. We'll use this to create the WDS URL\")\n print(\"What is the project name?\")\n _project = input(\">\")\n print(\"What is the project version?\")\n _version = input(\">\")\n return _project, _version", "response": "Prompt user for the project and version number of the WDS links."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts NOAA format to LiPD format", "response": "def noaa_to_lpd(files):\n \"\"\"\n Convert NOAA format to LiPD format\n :param dict files: Files metadata\n :return None:\n \"\"\"\n logger_noaa.info(\"enter process_noaa\")\n # only continue if the user selected a mode correctly\n logger_noaa.info(\"Found {} NOAA txt file(s)\".format(str(len(files[\".txt\"]))))\n print(\"Found {} NOAA txt file(s)\".format(str(len(files[\".txt\"]))))\n # Process each available file of the specified .lpd or .txt type\n for file in files[\".txt\"]:\n # try to filter out example files and stuff without real data\n if \"template\" not in file[\"filename_ext\"] and \"example\" not in file[\"filename_ext\"]:\n os.chdir(file[\"dir\"])\n print('processing: {}'.format(file[\"filename_ext\"]))\n logger_noaa.info(\"processing: 
{}\".format(file[\"filename_ext\"]))\n\n # Unzip file and get tmp directory path\n dir_tmp = create_tmp_dir()\n try:\n NOAA_LPD(file[\"dir\"], dir_tmp, file[\"filename_no_ext\"]).main()\n except Exception as e:\n print(\"Error: Unable to convert file: {}, {}\".format(file[\"filename_no_ext\"], e))\n\n # Create the lipd archive in the original file's directory.\n zipper(root_dir=dir_tmp, name=\"bag\", path_name_ext=os.path.join(file[\"dir\"], file[\"filename_no_ext\"] + \".lpd\"))\n # Delete tmp folder and all contents\n os.chdir(file[\"dir\"])\n try:\n shutil.rmtree(dir_tmp)\n except FileNotFoundError:\n # directory is already gone. keep going.\n pass\n\n logger_noaa.info(\"exit noaa_to_lpd\")\n return"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting a LiPD format to NOAA format", "response": "def lpd_to_noaa(D, wds_url, lpd_url, version, path=\"\"):\n \"\"\"\n Convert a LiPD format to NOAA format\n\n :param dict D: Metadata\n :return dict D: Metadata\n \"\"\"\n logger_noaa.info(\"enter process_lpd\")\n d = D\n try:\n dsn = get_dsn(D)\n # Remove all the characters that are not allowed here. 
Since we're making URLs, they have to be compliant.\n dsn = re.sub(r'[^A-Za-z-.0-9]', '', dsn)\n # project = re.sub(r'[^A-Za-z-.0-9]', '', project)\n version = re.sub(r'[^A-Za-z-.0-9]', '', version)\n # Create the conversion object, and start the conversion process\n _convert_obj = LPD_NOAA(D, dsn, wds_url, lpd_url, version, path)\n _convert_obj.main()\n # get our new, modified master JSON from the conversion object\n d = _convert_obj.get_master()\n noaas = _convert_obj.get_noaa_texts()\n __write_noaas(noaas, path)\n # remove any root level urls that are deprecated\n d = __rm_wdc_url(d)\n\n except Exception as e:\n logger_noaa.error(\"lpd_to_noaa: {}\".format(e))\n print(\"Error: lpd_to_noaa: {}\".format(e))\n\n # logger_noaa.info(\"exit lpd_to_noaa\")\n return d"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __write_noaas(dat, path):\n for filename, text in dat.items():\n try:\n with open(os.path.join(path, filename), \"w+\") as f:\n f.write(text)\n except Exception as e:\n print(\"write_noaas: There was a problem writing the NOAA text file: {}: {}\".format(filename, e))\n return", "response": "Write the data as NOAA text files"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _parse_java_version(line: str) -> tuple:\n m = VERSION_RE.search(line)\n version_str = m and m.group(0).replace('\"', '') or '0.0.0'\n if '_' in version_str:\n fst, snd = version_str.split('_', maxsplit=2)\n version = parse_version(fst)\n return (version[1], version[2], int(snd))\n else:\n return parse_version(version_str)", "response": "Parse the java version line."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef find_java_home(cratedb_version: tuple) -> str:\n if MIN_VERSION_FOR_JVM11 <= cratedb_version < (4, 0):\n # Supports 8 to 11+, use whatever is set\n return 
os.environ.get('JAVA_HOME', '')\n if cratedb_version < MIN_VERSION_FOR_JVM11:\n return _find_matching_java_home(lambda ver: ver[0] == 8)\n else:\n return _find_matching_java_home(lambda ver: ver[0] >= 11)", "response": "Find the JAVA_HOME suited for the given CrateDB version."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef zipper(root_dir=\"\", name=\"\", path_name_ext=\"\"):\n logger_zips.info(\"re_zip: name: {}, dir_tmp: {}\".format(path_name_ext, root_dir))\n # creates a zip archive in current directory. \"somefile.lpd.zip\"\n shutil.make_archive(path_name_ext, format='zip', root_dir=root_dir, base_dir=name)\n # drop the .zip extension. only keep .lpd\n os.rename(\"{}.zip\".format(path_name_ext), path_name_ext)\n return", "response": "Zip the entire tree of the current directory."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef unzipper(filename, dir_tmp):\n logger_zips.info(\"enter unzip\")\n # Unzip contents to the tmp directory\n try:\n with zipfile.ZipFile(filename) as f:\n f.extractall(dir_tmp)\n except FileNotFoundError as e:\n logger_zips.debug(\"unzip: FileNotFound: {}, {}\".format(filename, e))\n shutil.rmtree(dir_tmp)\n logger_zips.info(\"exit unzip\")\n return", "response": "Unzip the contents of a single .lpd file into a tmp directory."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalculates the percentile using the nearest rank method.", "response": "def percentile(sorted_values, p):\n \"\"\"Calculate the percentile using the nearest rank method.\n\n >>> percentile([15, 20, 35, 40, 50], 50)\n 35\n\n >>> percentile([15, 20, 35, 40, 50], 40)\n 20\n\n >>> percentile([], 90)\n Traceback (most recent call last):\n ...\n ValueError: Too few data points (0) for 90th percentile\n \"\"\"\n size = len(sorted_values)\n idx = (p / 100.0) * size - 0.5\n if idx < 0 or idx > size:\n raise ValueError('Too few data points ({}) for {}th percentile'.format(size, p))\n return sorted_values[int(idx)]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a sampler constructor for the given sample_mode.", "response": "def get_sampler(sample_mode: str):\n \"\"\"Return a sampler constructor\n\n >>> get_sampler('all')\n \n\n >>> get_sampler('reservoir')\n \n\n >>> get_sampler('reservoir:100')\n functools.partial(, size=100)\n \"\"\"\n if sample_mode == 'all':\n return All\n mode = sample_mode.split(':')\n if mode[0] == 'reservoir':\n if len(mode) == 2:\n return partial(UniformReservoir, size=int(mode[1]))\n else:\n return UniformReservoir\n raise TypeError(f'Invalid sample_mode: {sample_mode}')"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef query(self, criteria, **opts):\n time_now = datetime.datetime.now().replace(second=0, microsecond=0)\n right_now = time_now.isoformat()\n minutes_ago = (time_now - datetime.timedelta(minutes=15)).isoformat()\n\n formats = opts.get('formats', 'json')\n timezone = opts.get('timezone', 'UTC')\n time_from = opts.get('time_from', minutes_ago)\n time_to = opts.get('time_to', right_now)\n\n # setting up options\n t_options = {\n 'q': criteria,\n 'format': formats,\n 'tz': timezone,\n 'from': time_from,\n 'to': 
time_to,\n }\n options = '&'.join(['{}={}'.format(k, v)\n for k, v in t_options.items()])\n\n req = requests.get('%s?%s' %\n (self.url, options), auth=self.auth)\n\n try:\n data = req.json()\n except json.decoder.JSONDecodeError:\n data = []\n\n return {\n 'data': data,\n 'response': req.status_code,\n 'reason': req.reason,\n }", "response": "Query the cache for the given criteria"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef excel_main(file):\n\n os.chdir(file[\"dir\"])\n name_ext = file[\"filename_ext\"]\n\n # Filename without extension\n name = file[\"filename_no_ext\"]\n # remove foreign characters to prevent wiki uploading errors\n name = normalize_name(name)\n print(\"processing: {}\".format(name_ext))\n logger_excel.info(\"processing: {}\".format(name_ext))\n\n pending_csv = []\n final = OrderedDict()\n logger_excel.info(\"variables initialized\")\n\n # Create a temporary folder and set paths\n dir_tmp = create_tmp_dir()\n\n \"\"\"\n EACH DATA TABLE WILL BE STRUCTURED LIKE THIS\n \"paleo_chron\": \"paleo\",\n \"pc_idx\": 1,\n \"model_idx\": \"\",\n \"table_type\": \"measurement\",\n \"table_idx\": 1,\n \"name\": sheet,\n \"filename\": sheet,\n \"data\": { column data }\n \"\"\"\n\n # Open excel workbook with filename\n try:\n workbook = xlrd.open_workbook(name_ext)\n logger_excel.info(\"opened XLRD workbook\")\n\n except Exception as e:\n # There was a problem opening a file with XLRD\n print(\"Failed to open Excel workbook: {}\".format(name))\n workbook = None\n logger_excel.debug(\"excel: xlrd failed to open workbook: {}, {}\".format(name, e))\n\n if workbook:\n\n # Build sheets, but don't make full filenames yet. 
Need to parse metadata sheet first to get datasetname.\n sheets, ct_paleo, ct_chron, metadata_str = _get_sheet_metadata(workbook, name)\n\n # METADATA WORKSHEETS\n # Parse Metadata sheet and add to output dictionary\n if metadata_str:\n logger_excel.info(\"parsing worksheet: {}\".format(metadata_str))\n final = cells_dn_meta(workbook, metadata_str, 0, 0, final)\n\n # Now that we have the dataSetName, we can use it to build filenames\n dsn = __get_datasetname(final, name)\n filename = str(dsn) + \".lpd\"\n sheets = __set_sheet_filenames(sheets, dsn)\n\n dir_bag = os.path.join(dir_tmp, \"bag\")\n dir_data = os.path.join(dir_bag, 'data')\n\n # Make folders in tmp\n os.mkdir(os.path.join(dir_bag))\n os.mkdir(os.path.join(dir_data))\n\n # PALEO AND CHRON SHEETS\n for sheet in sheets:\n logger_excel.info(\"parsing data worksheet: {}\".format(sheet[\"new_name\"]))\n sheet_meta, sheet_csv = _parse_sheet(workbook, sheet)\n if sheet_csv and sheet_meta:\n pending_csv.append(sheet_csv)\n sheet[\"data\"] = sheet_meta\n\n # create the metadata skeleton where we will place the tables. 
dynamically add empty table blocks for data.\n skeleton_paleo, skeleton_chron = _create_skeleton_1(sheets)\n\n # Reorganize sheet metadata into LiPD structure\n d_paleo, d_chron = _place_tables_main(sheets, skeleton_paleo, skeleton_chron)\n\n # Add organized metadata into final dictionary\n final['paleoData'] = d_paleo\n final['chronData'] = d_chron\n\n # OUTPUT\n\n # Create new files and dump data in dir_data\n os.chdir(dir_data)\n\n # WRITE CSV\n _write_data_csv(pending_csv)\n\n # JSON-LD\n # Invoke DOI Resolver Class to update publisher data\n try:\n logger_excel.info(\"invoking doi resolver\")\n final = DOIResolver(file[\"dir\"], name, final).main()\n except Exception as e:\n print(\"Error: doi resolver failed: {}\".format(name))\n logger_excel.debug(\"excel: doi resolver failed: {}, {}\".format(name, e))\n\n # Dump final_dict to a json file.\n final[\"lipdVersion\"] = 1.2\n final[\"createdBy\"] = \"excel\"\n write_json_to_file(final)\n\n # Move files to bag root for re-bagging\n # dir : dir_data -> dir_bag\n logger_excel.info(\"start cleanup\")\n dir_cleanup(dir_bag, dir_data)\n\n # Create a bag for the 3 files\n finish_bag(dir_bag)\n\n # dir: dir_tmp -> dir_root\n os.chdir(file[\"dir\"])\n\n # Check if same lpd file exists. If so, delete so new one can be made\n if os.path.isfile(filename):\n os.remove(filename)\n\n # Zip dir_bag. 
Creates in dir_root directory\n logger_excel.info(\"re-zip and rename\")\n zipper(root_dir=dir_tmp, name=\"bag\", path_name_ext=os.path.join(file[\"dir\"], filename))\n\n # Move back to dir_root for next loop.\n os.chdir(file[\"dir\"])\n\n # Cleanup and remove tmp directory\n shutil.rmtree(dir_tmp)\n\n return dsn", "response": "Main function for parsing data from Excel spreadsheets into LiPD files."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_sheet_metadata(workbook, name):\n ct_paleo = 1\n ct_chron = 1\n metadata_str = \"\"\n sheets = []\n skip_sheets = [\"example\", \"sample\", \"lists\", \"guidelines\"]\n\n # Check what worksheets are available, so we know how to proceed.\n for sheet in workbook.sheet_names():\n\n # Use this for when we are dealing with older naming styles of \"data (qc), data, and chronology\"\n old = \"\".join(sheet.lower().strip().split())\n\n # Don't parse example sheets. If these words are in the sheet, assume we skip them.\n if not any(word in sheet.lower() for word in skip_sheets):\n # Group the related sheets together, so it's easier to place in the metadata later.\n if 'metadata' in sheet.lower():\n metadata_str = sheet\n\n # Skip the 'about' and 'proxy' sheets altogether. Proceed with all other sheets.\n elif \"about\" not in sheet.lower() and \"proxy\" not in sheet.lower():\n logger_excel.info(\"creating sheets metadata\")\n\n # If this is a valid sheet name, we will receive a regex object back.\n m = re.match(re_sheet, sheet.lower())\n\n # Valid regex object. This is a valid sheet name and we can use that to build the sheet metadata.\n if m:\n sheets, paleo_ct, chron_ct = _sheet_meta_from_regex(m, sheets, sheet, name, ct_paleo, ct_chron)\n\n # Older excel template style: backwards compatibility. 
Hard coded for one sheet per table.\n elif old == \"data\" or \"data(qc)\" in old or \"data(original)\" in old:\n sheets.append({\n \"paleo_chron\": \"paleo\",\n \"idx_pc\": ct_paleo,\n \"idx_model\": None,\n \"table_type\": \"measurement\",\n \"idx_table\": 1,\n \"old_name\": sheet,\n \"new_name\": sheet,\n \"filename\": \"paleo{}measurementTable1.csv\".format(ct_paleo),\n \"table_name\": \"paleo{}measurementTable1\".format(ct_paleo),\n \"data\": \"\"\n })\n ct_paleo += 1\n\n # Older excel template style: backwards compatibility. Hard coded for one sheet per table.\n elif old == \"chronology\":\n sheets.append({\n \"paleo_chron\": \"chron\",\n \"idx_pc\": ct_chron,\n \"idx_model\": None,\n \"table_type\": \"measurement\",\n \"idx_table\": 1,\n \"old_name\": sheet,\n \"new_name\": sheet,\n \"filename\": \"chron{}measurementTable1.csv\".format(ct_chron),\n \"table_name\": \"chron{}measurementTable1\".format(ct_chron),\n \"data\": \"\"\n })\n ct_chron += 1\n else:\n # Sheet name does not conform to standard. 
Guide user to create a standardized sheet name.\n print(\"This sheet name does not conform to naming standard: {}\".format(sheet))\n sheets, paleo_ct, chron_ct = _sheet_meta_from_prompts(sheets, sheet, name, ct_paleo, ct_chron)\n\n return sheets, ct_paleo, ct_chron, metadata_str", "response": "Get the metadata for a single worksheet."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a proper standardized sheet name from prompts.", "response": "def _sheet_meta_from_prompts(sheets, old_name, name, ct_paleo, ct_chron):\n \"\"\"\n Guide the user to create a proper, standardized sheet name\n :param list sheets: Running list of sheet metadata\n :param str old_name: Original sheet name\n :param str name: Data set name\n :param int ct_paleo: Running count of paleoData tables\n :param int ct_chron: Running count of chronData tables\n :return sheets paleo_ct chron_ct: Updated sheets and counts\n \"\"\"\n cont = True\n # Loop until valid sheet name is built, or user gives up\n while cont:\n try:\n pc = input(\"Is this a (p)aleo or (c)hronology sheet?\").lower()\n if pc in (\"p\", \"c\", \"paleo\", \"chron\", \"chronology\"):\n tt = input(\"Is this a (d)istribution, (e)nsemble, (m)easurement, or (s)ummary sheet?\").lower()\n if tt in EXCEL_SHEET_TYPES[\"distribution\"] or tt in EXCEL_SHEET_TYPES[\"ensemble\"] \\\n or tt in EXCEL_SHEET_TYPES[\"summary\"] or tt in EXCEL_SHEET_TYPES[\"measurement\"]:\n # valid answer, keep going\n if tt in EXCEL_SHEET_TYPES[\"distribution\"]:\n tt = \"distribution\"\n elif tt in EXCEL_SHEET_TYPES[\"summary\"]:\n tt = \"summary\"\n elif tt in EXCEL_SHEET_TYPES[\"ensemble\"]:\n tt = \"ensemble\"\n elif tt in EXCEL_SHEET_TYPES[\"measurement\"]:\n tt = \"measurement\"\n\n if pc in EXCEL_SHEET_TYPES[\"paleo\"]:\n if tt in [\"ensemble\", \"summary\"]:\n sheet = \"{}{}{}{}\".format(\"paleo\", ct_paleo, tt, 1)\n else:\n sheet = \"{}{}{}\".format(\"paleo\", ct_paleo, tt)\n elif pc in EXCEL_SHEET_TYPES[\"chron\"]:\n 
if tt in [\"ensemble\", \"summary\"]:\n sheet = \"{}{}{}{}\".format(\"chron\", ct_chron, tt, 1)\n else:\n sheet = \"{}{}{}\".format(\"chron\", ct_chron, tt)\n # Test the sheet that was built from the user responses.\n # If it matches the Regex, then continue to build the sheet metadata. If not, try again or skip sheet.\n m = re.match(re_sheet, sheet.lower())\n if m:\n sheets, ct_paleo, ct_chron = _sheet_meta_from_regex(m, sheets, old_name, name, ct_paleo, ct_chron)\n print(\"Sheet created: {}\".format(sheet))\n cont = False\n else:\n resp = input(\"invalid sheet name. try again? (y/n): \")\n if resp == \"n\":\n print(\"No valid sheet name was created. Skipping sheet: {}\".format(sheet))\n cont = False\n except Exception as e:\n logger_excel.debug(\"excel: sheet_meta_from_prompts: error during prompts, {}\".format(e))\n cont = False\n\n print(\"=====================================================\")\n return sheets, ct_paleo, ct_chron"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild metadata for a single sheet from a regex match object.", "response": "def _sheet_meta_from_regex(m, sheets, old_name, name, ct_paleo, ct_chron):\n \"\"\"\n Build metadata for a sheet. 
Receive valid regex match object and use that to create metadata.\n :param obj m: Regex match object\n :param list sheets: Running list of sheet metadata\n :param str old_name: Original sheet name\n :param str name: Data set name\n :param int ct_paleo: Running count of paleoData tables\n :param int ct_chron: Running count of chronData tables\n :return sheets paleo_ct chron_ct: Updated sheets and counts\n \"\"\"\n try:\n idx_model = None\n idx_table = None\n pc = m.group(1)\n # Get the model idx number from string if it exists\n if m.group(3):\n idx_model = int(m.group(4))\n # check if there's an index (for distribution tables)\n if m.group(6):\n idx_table = int(m.group(6))\n # find out table type\n if pc == \"paleodata\" or pc == \"paleo\":\n pc = \"paleo\"\n ct_paleo += 1\n elif pc == \"chrondata\" or pc == \"chron\":\n pc = \"chron\"\n ct_chron += 1\n # build filename and table name strings. build table name first, then make filename from the table_name\n new_name = \"{}{}\".format(pc, m.group(2))\n if idx_model:\n new_name = \"{}model{}\".format(new_name, idx_model)\n new_name = \"{}{}\".format(new_name, m.group(5))\n if idx_table:\n new_name = \"{}{}\".format(new_name, m.group(6))\n filename = \"{}.csv\".format(new_name)\n except Exception as e:\n logger_excel.debug(\"excel: sheet_meta_from_regex: error during setup, {}\".format(e))\n\n # Standard naming. 
This matches the regex and the sheet name is how we want it.\n # paleo/chron - idx - table_type - idx\n # Not sure what to do with m.group(2) yet\n # ex: m.groups() = [ paleo, 1, model, 1, ensemble, 1 ]\n try:\n sheets.append({\n \"old_name\": old_name,\n \"new_name\": new_name,\n \"filename\": filename,\n \"paleo_chron\": pc,\n \"idx_pc\": int(m.group(2)),\n \"idx_model\": idx_model,\n \"idx_table\": idx_table,\n \"table_type\": m.group(5),\n \"data\": \"\"\n })\n except Exception as e:\n print(\"error: build sheets\")\n logger_excel.debug(\"excel: build_sheet: unable to build sheet, {}\".format(e))\n\n return sheets, ct_paleo, ct_chron"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nplacing data into the skeleton for either a measurement or distribution or summary tables.", "response": "def _place_tables_section(skeleton_section, sheet, keys_section):\n \"\"\"\n Place data into skeleton for either a paleo or chron section.\n :param dict skeleton_section: Empty or current progress of skeleton w/ data\n :param dict sheet: Sheet metadata\n :param list keys_section: Paleo or Chron specific keys\n :return dict: Skeleton section full of data\n \"\"\"\n logger_excel.info(\"enter place_tables_section\")\n try:\n logger_excel.info(\"excel: place_tables_section: placing table: {}\".format(sheet[\"new_name\"]))\n new_name = sheet[\"new_name\"]\n logger_excel.info(\"placing_tables_section: {}\".format(new_name))\n # get all the sheet metadata needed for this function\n idx_pc = sheet[\"idx_pc\"] - 1\n idx_model = sheet[\"idx_model\"]\n idx_table = sheet[\"idx_table\"]\n table_type = sheet[\"table_type\"]\n data = sheet[\"data\"]\n # paleoMeas or chronMeas key\n key_1 = keys_section[0]\n # paleoModel or chronModel key\n key_2 = keys_section[1]\n # Is this a measurement, or distribution table?\n if idx_table:\n # Yes, a table idx exists, so decrement it.\n idx_table = sheet[\"idx_table\"] - 1\n # Is this a ensemble, dist, or summary table?\n if 
idx_model:\n # Yes, a model idx exists, so decrement it.\n idx_model -= 1\n except Exception as e:\n logger_excel.debug(\"excel: place_tables_section: error during setup, {}\".format(e))\n\n # If it's measurement table, it goes in first.\n try:\n if table_type == \"measurement\":\n skeleton_section[idx_pc][key_1][idx_table] = data\n # Other types of tables go one step below\n elif table_type in [\"ensemble\", \"distribution\", \"summary\"]:\n if table_type == \"summary\":\n skeleton_section[idx_pc][key_2][idx_model][\"summaryTable\"] = data\n elif table_type == \"ensemble\":\n skeleton_section[idx_pc][key_2][idx_model][\"ensembleTable\"] = data\n elif table_type == \"distribution\":\n skeleton_section[idx_pc][key_2][idx_model][\"distributionTable\"][idx_table] = data\n except Exception as e:\n logger_excel.warn(\"excel: place_tables_section: Unable to place table {}, {}\".format(new_name, e))\n logger_excel.info(\"exit place_tables_section\")\n return skeleton_section"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _place_tables_main(sheets, skeleton_paleo, skeleton_chron):\n logger_excel.info(\"enter place_tables_main\")\n\n for sheet in sheets:\n pc = sheet[\"paleo_chron\"]\n if pc == \"paleo\":\n skeleton_paleo = _place_tables_section(skeleton_paleo, sheet, [\"paleoMeasurementTable\", \"paleoModel\"])\n elif pc == \"chron\":\n skeleton_chron = _place_tables_section(skeleton_chron, sheet, [\"chronMeasurementTable\", \"chronModel\"])\n\n # when returning, these should no longer be skeletons. 
They should be tables filled with data\n logger_excel.info("exit place_tables_main")\n return skeleton_paleo, skeleton_chron", "response": "Place tables into the main LiPD structure."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_table_counts(sheet, num_section):\n tt = sheet[\"table_type\"]\n idx_pc = sheet[\"idx_pc\"]\n idx_table = sheet[\"idx_table\"]\n idx_model = sheet[\"idx_model\"]\n\n # Have we started counters for this idx model yet??\n if idx_pc not in num_section:\n # No, create the counters and start tracking table counts\n num_section[idx_pc] = {\"ct_meas\": 0, \"ct_model\": 0, \"ct_in_model\": {}}\n\n # Compare indices to get the highest index number for this table type\n # If we have a higher number model index, then increment our models count\n\n try:\n # Is this a ens, dist, or summary table?\n if idx_model:\n # Yes it is.\n\n # Is this model idx higher than ours?\n if idx_model > num_section[idx_pc][\"ct_model\"]:\n # Yes. Now, we have N number of model tables.\n num_section[idx_pc][\"ct_model\"] = idx_model\n\n # Have we started counters for this idx model yet??\n if idx_model not in num_section[idx_pc][\"ct_in_model\"]:\n # No, create the counters and start tracking table counts.\n num_section[idx_pc][\"ct_in_model\"][idx_model] = {\"ct_ens\": 0, \"ct_sum\": 0, \"ct_dist\": 0}\n except Exception as e:\n logger_excel.debug(\"excel: get_table_counts: error incrementing model counts, {}\".format(e))\n\n # Incrementer!\n # For the given table type, track the highest index number.\n # That number is how many tables we need to make of that type. Ex. 'measurement4', we need to make 4 empty tables\n try:\n if tt == \"measurement\":\n # Is this meas table a higher idx?\n if idx_table > num_section[idx_pc][\"ct_meas\"]:\n # Yes, set this idx to the counter.\n num_section[idx_pc][\"ct_meas\"] = idx_table\n elif tt == \"distribution\":\n # Is this dist table a higher idx?\n if idx_table > num_section[idx_pc][\"ct_in_model\"][idx_model][\"ct_dist\"]:\n # Yes, set this idx to the counter.\n num_section[idx_pc][\"ct_in_model\"][idx_model][\"ct_dist\"] = idx_table\n elif tt == \"summary\":\n # Summary tables are not indexed. Only one per table.\n num_section[idx_pc][\"ct_in_model\"][idx_model][\"ct_sum\"] = 1\n elif tt == \"ensemble\":\n # Ensemble tables are not indexed. Only one per table.\n num_section[idx_pc][\"ct_in_model\"][idx_model][\"ct_ens\"] = 1\n except Exception as e:\n logger_excel.debug(\"excel: get_table_counts: error incrementing table count: {}\".format(e))\n\n return num_section", "response": "Loop through sheet metadata and count how many of each table type is needed at each index."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates the metadata skeleton for the current mode and the current paleo.", "response": "def _create_skeleton_3(pc, l, num_section):\n \"\"\"\n Bottom level: {\"measurement\": [], \"model\": [{summary, distributions, ensemble}]}\n Fill in measurement and model tables with N number of EMPTY meas, summary, ensemble, and distributions.\n :param str pc: Paleo or Chron \"mode\"\n :param list l:\n :param dict num_section:\n :return dict:\n \"\"\"\n logger_excel.info(\"enter create_skeleton_inner_2\")\n\n # Table Template: Model\n template_model = {\"summaryTable\": {}, \"ensembleTable\": {}, \"distributionTable\": []}\n\n # Build string appropriate for paleo/chron mode\n pc_meas = \"{}MeasurementTable\".format(pc)\n pc_mod = \"{}Model\".format(pc)\n\n # Loop for each table count\n for idx1, table in num_section.items():\n try:\n # 
Create N number of empty measurement lists\n l[idx1 - 1][pc_meas] = [None] * num_section[idx1][\"ct_meas\"]\n\n # Create N number of empty model table lists\n l[idx1 - 1][pc_mod] = [copy.deepcopy(template_model)] * num_section[idx1][\"ct_model\"]\n\n # Create N number of empty model tables at list index\n #\n for idx2, nums in table[\"ct_in_model\"].items():\n dists = []\n try:\n # Create N number of empty distributions at list index\n [dists.append({}) for i in range(0, nums[\"ct_dist\"])]\n except IndexError as e:\n logger_excel.debug(\"excel: create_metadata_skeleton: paleo tables messed up, {}\".format(e))\n\n # Model template complete, insert it at list index\n l[idx1 - 1][pc_mod][idx2-1] = {\"summaryTable\": {}, \"ensembleTable\": {}, \"distributionTable\": dists}\n except IndexError as e:\n logger_excel.warn(\"create_skeleton_inner_tables: IndexError: {}\".format(e))\n except KeyError as e:\n logger_excel.warn(\"create_skeleton_inner_tables: KeyError: {}\".format(e))\n return l"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _create_skeleton_2(l, pc, num_section, template):\n logger_excel.info(\"enter create_skeleton_inner_1\")\n try:\n # Create N number of paleoData/chronData tables.\n l = [copy.deepcopy(template)] * len(num_section)\n # Create the necessary tables inside of \"model\"\n l = _create_skeleton_3(pc, l, num_section)\n except Exception as e:\n logger_excel.warn(\"excel: create_skeleton_inner_main: error duplicating template tables, {}\".format(e))\n\n return l", "response": "Create the main and main tables for the main section of the main section."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a skeleton for the main table.", "response": "def _create_skeleton_1(sheets):\n \"\"\"\n Top level: {\"chronData\", \"paleoData\"}\n Fill in paleoData/chronData tables with N number of EMPTY measurement and models.\n\n :return list: Blank list 
of N indices\n \"\"\"\n logger_excel.info(\"enter create_skeleton_main\")\n\n # Table template: paleoData\n template_paleo = {\"paleoMeasurementTable\": [], \"paleoModel\": []}\n template_chron = {\"chronMeasurementTable\": [], \"chronModel\": []}\n\n num_chron = {}\n num_paleo = {}\n paleo = []\n chron = []\n\n # Get table counts for all table types.\n # Table types: paleoData, chronData, model, meas, dist, summary, ensemble\n for sheet in sheets:\n pc = sheet[\"paleo_chron\"]\n\n # Tree for chron types\n if pc == \"chron\":\n num_chron = _get_table_counts(sheet, num_chron)\n elif pc == \"paleo\":\n num_paleo = _get_table_counts(sheet, num_paleo)\n\n # Create metadata skeleton for using out table counts.\n paleo = _create_skeleton_2(paleo, \"paleo\", num_paleo, template_paleo)\n chron = _create_skeleton_2(chron, \"chron\", num_chron, template_chron)\n\n logger_excel.info(\"exit create_skeleton_main\")\n return paleo, chron"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse an universal spreadsheet sheet.", "response": "def _parse_sheet(workbook, sheet):\n \"\"\"\n The universal spreadsheet parser. 
Parse chron or paleo tables of type ensemble/model/summary.\n :param str name: Filename\n :param obj workbook: Excel Workbook\n :param dict sheet: Sheet path and naming info\n :return dict dict: Table metadata and numeric data\n \"\"\"\n logger_excel.info(\"enter parse_sheet: {}\".format(sheet[\"old_name\"]))\n\n # Markers to track where we are on the sheet\n ensemble_on = False\n var_header_done = False\n metadata_on = False\n metadata_done = False\n data_on = False\n notes = False\n\n # Open the sheet from the workbook\n temp_sheet = workbook.sheet_by_name(sheet[\"old_name\"])\n filename = sheet[\"filename\"]\n\n # Store table metadata and numeric data separately\n table_name = \"{}DataTableName\".format(sheet[\"paleo_chron\"])\n\n # Organize our root table data\n table_metadata = OrderedDict()\n table_metadata[table_name] = sheet[\"new_name\"]\n table_metadata['filename'] = filename\n table_metadata['missingValue'] = 'nan'\n if \"ensemble\" in sheet[\"new_name\"]:\n ensemble_on = True\n\n # Store all CSV in here by rows\n table_data = {filename: []}\n\n # Master list of all column metadata\n column_metadata = []\n\n # Index tracks which cells are being parsed\n num_col = 0\n num_row = 0\n nrows = temp_sheet.nrows\n col_total = 0\n\n # Tracks which \"number\" each metadata column is assigned\n col_add_ct = 1\n\n header_keys = []\n variable_keys = []\n variable_keys_lower = []\n mv = \"\"\n\n try:\n # Loop for every row in the sheet\n for i in range(0, nrows):\n # Hold the contents of the current cell\n cell = temp_sheet.cell_value(num_row, num_col)\n row = temp_sheet.row(num_row)\n\n # Skip all template lines\n if isinstance(cell, str):\n # Note and missing value entries are rogue. 
They are not close to the other data entries.\n if cell.lower().strip() not in EXCEL_TEMPLATE:\n\n if \"notes\" in cell.lower() and not metadata_on:\n # Store at the root table level\n nt = temp_sheet.cell_value(num_row, 1)\n if nt not in EXCEL_TEMPLATE:\n table_metadata[\"notes\"] = nt\n\n elif cell.lower().strip() in ALTS_MV:\n # Store at the root table level and in our function\n mv = temp_sheet.cell_value(num_row, 1)\n # Add if not placeholder value\n if mv not in EXCEL_TEMPLATE:\n table_metadata[\"missingValue\"] = mv\n\n # Variable template header row\n elif cell.lower() in EXCEL_HEADER and not metadata_on and not data_on:\n\n # Grab the header line\n row = temp_sheet.row(num_row)\n header_keys = _get_header_keys(row)\n\n # Turn on the marker\n var_header_done = True\n\n # Data section (bottom of sheet)\n elif data_on:\n\n # Parse the row, clean, and add to table_data\n table_data = _parse_sheet_data_row(temp_sheet, num_row, col_total, table_data, filename, mv)\n\n # Metadata section. (top)\n elif metadata_on:\n\n # Reached an empty cell while parsing metadata. Mark the end of the section.\n if cell in EMPTY:\n metadata_on = False\n metadata_done = True\n\n # Create a list of all the variable names found\n for entry in column_metadata:\n try:\n # var keys is used as the variableName entry in each column's metadata\n variable_keys.append(entry[\"variableName\"].strip())\n # var keys lower is used for comparing and finding the data header row\n variable_keys_lower.append(entry[\"variableName\"].lower().strip())\n except KeyError:\n # missing a variableName key\n pass\n\n # Not at the end of the section yet. 
Parse the metadata\n else:\n # Get the row data\n row = temp_sheet.row(num_row)\n\n # Get column metadata\n col_tmp = _compile_column_metadata(row, header_keys, col_add_ct)\n\n # Append to master list\n column_metadata.append(col_tmp)\n col_add_ct += 1\n\n # Variable metadata, if variable header exists\n elif var_header_done and not metadata_done:\n\n # Start piecing column metadata together with their respective variable keys\n metadata_on = True\n\n # Get the row data\n row = temp_sheet.row(num_row)\n\n # Get column metadata\n col_tmp = _compile_column_metadata(row, header_keys, col_add_ct)\n\n # Append to master list\n column_metadata.append(col_tmp)\n col_add_ct += 1\n\n # Variable metadata, if variable header does not exist\n elif not var_header_done and not metadata_done and cell:\n # LiPD Version 1.1 and earlier: Chronology sheets don't have variable headers\n # We could blindly parse, but without a header row_num we wouldn't know where\n # to save the metadata\n # Play it safe and assume data for first column only: variable name\n metadata_on = True\n\n # Get the row data\n row = temp_sheet.row(num_row)\n\n # Get column metadata\n col_tmp = _compile_column_metadata(row, header_keys, col_add_ct)\n\n # Append to master list\n column_metadata.append(col_tmp)\n col_add_ct += 1\n\n # Data variable header row. Column metadata exists and metadata_done marker is on.\n # This is where we compare top section variableNames to bottom section variableNames to see if\n # we need to start parsing the column values\n else:\n try:\n # Clean up variable_keys_lower so we all variable names change from \"age(yrs BP)\" to \"age\"\n # Units in parenthesis make it too difficult to compare variables. Remove them.\n row = _rm_units_from_var_names_multi(row)\n\n if metadata_done and any(i in row for i in variable_keys_lower):\n data_on = True\n\n # Take the difference of the two lists. 
If anything exists, then that's a problem\n __compare_vars(row, variable_keys_lower, sheet[\"old_name\"])\n\n # Ensemble columns are counted differently.\n if ensemble_on:\n # Get the next row, and count the data cells.\n col_total = len(temp_sheet.row(num_row+1))\n # If there's an empty row in between, then try the next row.\n if col_total < 2:\n col_total = len(temp_sheet.row(num_row + 2))\n try:\n ens_cols = [i + 1 for i in range(0, col_total - 1)]\n column_metadata[1][\"number\"] = ens_cols\n except IndexError:\n logger_excel.debug(\"excel: parse_sheet: unable to add ensemble 'number' key\")\n except KeyError:\n logger_excel.debug(\"excel: parse_sheet: unable to add ensemble 'number' list at key\")\n\n # All other cases, columns are the length of column_metadata\n else:\n col_total = len(column_metadata)\n\n except AttributeError:\n pass\n # cell is not a string, and lower() was not a valid call.\n\n # If this is a numeric cell, 99% chance it's parsing the data columns.\n elif isinstance(cell, float) or isinstance(cell, int):\n if data_on or metadata_done:\n\n # Parse the row, clean, and add to table_data\n table_data = _parse_sheet_data_row(temp_sheet, num_row, col_total, table_data, filename, mv)\n\n # Move on to the next row\n num_row += 1\n table_metadata[\"columns\"] = column_metadata\n except IndexError as e:\n logger_excel.debug(\"parse_sheet: IndexError: sheet: {}, row_num: {}, col_num: {}, {}\".format(sheet, num_row, num_col, e))\n\n # If there isn't any data in this sheet, and nothing was parsed, don't let this\n # move forward to final output.\n if not table_data[filename]:\n table_data = None\n table_metadata = None\n\n logger_excel.info(\"exit parse_sheet\")\n return table_metadata, table_data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _parse_sheet_data_row(temp_sheet, num_row, col_total, table_data, filename, mv):\n # Get row of data\n row = 
temp_sheet.row(num_row)\n\n # In case our row holds more cells than the amount of columns we have, slice the row\n # We don't want to have extra empty cells in our output.\n row = row[:col_total]\n\n # Replace missing values where necessary\n row = _replace_mvs(row, mv)\n\n # Append row to list we will use to write out csv file later.\n table_data[filename].append(row)\n\n return table_data", "response": "Parse a row from the data section of the sheet."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _replace_mvs(row, mv):\n for idx, v in enumerate(row):\n try:\n if v.value.lower() in EMPTY or v.value.lower() == mv:\n row[idx] = \"nan\"\n else:\n row[idx] = v.value\n except AttributeError:\n if v.value == mv:\n row[idx] = \"nan\"\n else:\n row[idx] = v.value\n\n return row", "response": "Replace Missing Values in the data rows where applicable\n "} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the header keys from this special row", "response": "def _get_header_keys(row):\n \"\"\"\n Get the variable header keys from this special row\n :return list: Header keys\n \"\"\"\n # Swap out NOAA keys for LiPD keys\n for idx, key in enumerate(row):\n key_low = key.value.lower()\n\n # Simple case: Nothing fancy here, just map to the LiPD key counterpart.\n if key_low in EXCEL_LIPD_MAP_FLAT:\n row[idx] = EXCEL_LIPD_MAP_FLAT[key_low]\n\n # Nested data case: Check if this is a calibration, interpretation, or some other data that needs to be nested.\n # elif key_low:\n # pass\n\n # Unknown key case: Store the key as-is because we don't have a LiPD mapping for it.\n else:\n try:\n row[idx] = key.value\n except AttributeError as e:\n logger_excel.warn(\"excel_main: get_header_keys: unknown header key, unable to add: {}\".format(e))\n\n # Since we took a whole row of cells, we have to drop off the empty cells at the end of the row.\n header_keys = _rm_cells_reverse(row)\n 
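The `_replace_mvs` routine above operates on xlrd cell objects. The same normalization can be sketched on plain values; note the `EMPTY` set below is an assumed stand-in for the library's placeholder list, not its actual constant:

```python
# Sketch of the missing-value normalization done by _replace_mvs, using
# plain values instead of xlrd cells. EMPTY is an assumed stand-in.
EMPTY = {"", "none", "na", "n/a", "-"}

def replace_mvs_plain(values, mv):
    """Swap placeholder or user-declared missing values for 'nan'."""
    out = []
    for v in values:
        if isinstance(v, str) and (v.lower() in EMPTY or v.lower() == mv):
            out.append("nan")
        elif v == mv:
            out.append("nan")
        else:
            out.append(v)
    return out
```

For example, with a declared missing value of `-999`, both the sentinel and literal "NA" cells normalize to `"nan"` while real readings pass through untouched.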
return header_keys"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _rm_units_from_var_name_single(var):\n # Use the regex to match the cell\n m = re.match(re_var_w_units, var)\n # Should always get a match, but be careful anyways.\n if m:\n # m.group(1): variableName\n # m.group(2): units in parenthesis (may not exist).\n try:\n var = m.group(1).strip().lower()\n # var = m.group(1).strip().lower()\n except Exception:\n # This must be a malformed cell somehow. This regex should match every variableName cell.\n # It didn't work out. Return the original var as a fallback\n pass\n return var", "response": "Removes units from a single variable name in a single section of the metadata and data structures."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _rm_units_from_var_names_multi(row):\n l2 = []\n # Check each var in the row\n for idx, var in enumerate(row):\n l2.append(_rm_units_from_var_name_single(row[idx].value))\n return l2", "response": "Wrapper around _rm_units_from_var_name_single for doing a list instead of a single cell."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncompiling the interpretation data into a list of multiples based on the keys provided.", "response": "def _compile_interpretation(data):\n \"\"\"\n Compile the interpretation data into a list of multiples, based on the keys provided.\n Disassemble the key to figure out how to place the data\n :param dict data: Interpretation data (unsorted)\n :return dict: Interpretation data (sorted)\n \"\"\"\n # KEY FORMAT : \"interpretation1_somekey\"\n _count = 0\n\n # Determine how many entries we are going to need, by checking the interpretation index in the string\n for _key in data.keys():\n _key_low = _key.lower()\n # Get regex match\n m = re.match(re_interpretation, _key_low)\n # If regex match was successful..\n if m:\n # Check if 
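`_rm_units_from_var_name_single` relies on `re_var_w_units`, a pattern defined elsewhere in the module. A plausible stand-in pattern (an assumption, not the library's actual regex) illustrates the split of a name from its parenthesized units:

```python
import re

# Assumed stand-in for the module's re_var_w_units pattern: a variable
# name optionally followed by units in parentheses, e.g. "age (yrs BP)".
re_var_w_units = re.compile(r"([^(]+)\s*(\([^)]*\))?")

def rm_units_single(var):
    """Strip parenthesized units and normalize the name to lowercase."""
    m = re_var_w_units.match(var)
    if m:
        return m.group(1).strip().lower()
    # Malformed cell: fall back to the original value, as the library does.
    return var
```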
this interpretation count is higher than what we have.\n _curr_count = int(m.group(1))\n if _curr_count > _count:\n # New max count, record it.\n _count = _curr_count\n\n # Create the empty list with X entries for the interpretation data\n _tmp = [{} for i in range(0, _count)]\n\n # Loop over all the interpretation keys and data\n for k, v in data.items():\n # Get the resulting regex data.\n # EXAMPLE ENTRY: \"interpretation1_variable\"\n # REGEX RESULT: [\"1\", \"variable\"]\n m = re.match(re_interpretation, k)\n # Get the interpretation index number\n idx = int(m.group(1))\n # Get the field variable\n key = m.group(2)\n # Place this data in the _tmp array. Remember to adjust given index number for 0-indexing\n _tmp[idx-1][key] = v\n\n # Return compiled interpretation data\n return _tmp"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _compile_column_metadata(row, keys, number):\n # Store the variable keys by index in a dictionary\n _column = {}\n _interpretation = {}\n _calibration = {}\n _physical = {}\n\n # Use the header keys to place the column data in the dictionary\n if keys:\n for idx, key in enumerate(keys):\n _key_low = key.lower()\n\n # Special case: Calibration data\n if re.match(re_calibration, _key_low):\n m = re.match(re_calibration, _key_low)\n if m:\n _key = m.group(1)\n _calibration[_key] = row[idx].value\n\n # Special case: PhysicalSample data\n elif re.match(re_physical, _key_low):\n m = re.match(re_physical, _key_low)\n if m:\n _key = m.group(1)\n _physical[_key] = row[idx].value\n\n # Special case: Interpretation data\n elif re.match(re_interpretation, _key_low):\n # Put interpretation data in a tmp dictionary that we'll sort later.\n _interpretation[_key_low] = row[idx].value\n\n else:\n try:\n val = row[idx].value\n except Exception:\n logger_excel.info(\"compile_column_metadata: Couldn't get value from row cell\")\n val = \"n/a\"\n try:\n if key == \"variableName\":\n val = 
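The key disassembly performed by `_compile_interpretation` can be reproduced standalone. The `re_interpretation` pattern below is an assumed equivalent of the module's regex, matching keys of the form `interpretation<N>_<field>`:

```python
import re

# Assumed equivalent of the module's re_interpretation pattern:
# "interpretation<N>_<field>" -> groups ("N", "field").
re_interpretation = re.compile(r"interpretation(\d+)_(\w+)")

def compile_interpretation_plain(data):
    """Sort flat interpretation<N>_<field> keys into a list of dicts."""
    # First pass: find the highest interpretation index to size the list.
    count = 0
    for key in data:
        m = re_interpretation.match(key.lower())
        if m:
            count = max(count, int(m.group(1)))
    tmp = [{} for _ in range(count)]
    # Second pass: place each value, adjusting the 1-based key index
    # for 0-based list access.
    for key, val in data.items():
        m = re_interpretation.match(key.lower())
        if m:
            tmp[int(m.group(1)) - 1][m.group(2)] = val
    return tmp
```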
_rm_units_from_var_name_single(row[idx].value)\n except Exception:\n # when a variableName fails to split, keep the name as-is and move on.\n pass\n _column[key] = val\n _column[\"number\"] = number\n\n if _calibration:\n _column[\"calibration\"] = _calibration\n # Only allow physicalSample on measured variableTypes.\n if _physical and _column.get(\"variableType\") == \"measured\":\n _column[\"physicalSample\"] = _physical\n if _interpretation:\n _interpretation_data = _compile_interpretation(_interpretation)\n _column[\"interpretation\"] = _interpretation_data\n\n # If there are no keys, that means it's a header-less metadata section.\n else:\n # Assume we only have one cell, because we have no keys to know what data is here.\n try:\n val = row[0].value.lower()\n except AttributeError:\n val = row[0].value\n except Exception:\n logger_excel.info(\"compile_column_metadata: Couldn't get value from row cell\")\n val = \"n/a\"\n val = _rm_units_from_var_name_single(val)\n _column[\"variableName\"] = val\n _column[\"number\"] = number\n\n # Add this column to the overall metadata, but skip if there's no data present\n _column = {k: v for k, v in _column.items() if v}\n\n return _column", "response": "Compile column metadata from one excel row"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves the cells that are empty or template in reverse order. Stop when you hit data.", "response": "def _rm_cells_reverse(l):\n \"\"\"\n Remove the cells that are empty or template in reverse order. 
Stop when you hit data.\n :param list l: One row from the spreadsheet\n :return list: Modified row\n \"\"\"\n rm = []\n # Iter the list in reverse, and get rid of empty and template cells\n for idx, key in reversed(list(enumerate(l))):\n if key.lower() in EXCEL_TEMPLATE:\n rm.append(idx)\n elif key in EMPTY:\n rm.append(idx)\n else:\n break\n\n for idx in rm:\n l.pop(idx)\n return l"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwrites CSV data to file by file.", "response": "def _write_data_csv(csv_data):\n \"\"\"\n CSV data has been parsed by this point, so take it and write it file by file.\n :return:\n \"\"\"\n logger_excel.info(\"enter write_data_csv\")\n # Loop for each file and data that is stored\n for file in csv_data:\n for filename, data in file.items():\n # Make sure we're working with the right data types before trying to open and write a file\n if isinstance(filename, str) and isinstance(data, list):\n try:\n with open(filename, 'w+') as f:\n w = csv.writer(f)\n for line in data:\n w.writerow(line)\n except Exception:\n logger_excel.debug(\"write_data_csv: Unable to open/write file: {}\".format(filename))\n\n logger_excel.info(\"exit write_data_csv\")\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a GeoJSON Linestring. Latitude and Longitude have 2 values each.", "response": "def geometry_linestring(lat, lon, elev):\n \"\"\"\n GeoJSON Linestring. 
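The trailing-cell trim in `_rm_cells_reverse` can be sketched on plain strings. The two constant sets here are assumed stand-ins for the module's `EXCEL_TEMPLATE` and `EMPTY` values:

```python
# Sketch of _rm_cells_reverse on plain strings. EXCEL_TEMPLATE and
# EMPTY are assumed stand-ins for the module's constants.
EXCEL_TEMPLATE = {"fill this in", "optional"}
EMPTY = {"", "none", "n/a"}

def rm_cells_reverse_plain(row):
    """Pop trailing template/empty cells; stop at the first real value."""
    while row and (row[-1].lower() in EXCEL_TEMPLATE or row[-1] in EMPTY):
        row.pop()
    return row
```

Like the original, this only trims from the right and stops as soon as it hits data, so interior empty cells are preserved.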
Latitude and Longitude have 2 values each.\n :param list lat: Latitude values\n :param list lon: Longitude values\n :return dict:\n \"\"\"\n logger_excel.info(\"enter geometry_linestring\")\n d = OrderedDict()\n coordinates = []\n temp = [\"\", \"\"]\n\n # Point type, Matching pairs.\n if lat[0] == lat[1] and lon[0] == lon[1]:\n logger_excel.info(\"matching geo coordinate\")\n lat.pop()\n lon.pop()\n d = geometry_point(lat, lon, elev)\n\n else:\n # Creates coordinates list\n logger_excel.info(\"unique geo coordinates\")\n for i in lon:\n temp[0] = i\n for j in lat:\n temp[1] = j\n coordinates.append(copy.copy(temp))\n if elev:\n for i in coordinates:\n i.append(elev)\n # Create geometry block\n d['type'] = 'Linestring'\n d['coordinates'] = coordinates\n logger_excel.info(\"exit geometry_linestring\")\n return d"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef geometry_range(crd_range, elev, crd_type):\n\n d = OrderedDict()\n coordinates = [[] for i in range(len(crd_range))]\n\n # latitude\n if crd_type == \"lat\":\n for idx, i in enumerate(crd_range):\n coordinates[idx] = [crd_range[idx], \"nan\"]\n if elev:\n coordinates[idx].append(elev)\n\n # longitude\n elif crd_type == \"lon\":\n for idx, i in enumerate(crd_range):\n coordinates[idx] = [\"nan\", crd_range[idx]]\n if elev:\n coordinates[idx].append(elev)\n\n d[\"type\"] = \"Range\"\n d[\"coordinates\"] = coordinates\n\n return d", "response": "Returns a dictionary of coordinates for a single object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef geometry_point(lat, lon, elev):\n logger_excel.info(\"enter geometry_point\")\n coordinates = []\n point_dict = OrderedDict()\n for idx, val in enumerate(lat):\n try:\n coordinates.append(lon[idx])\n coordinates.append(lat[idx])\n except IndexError as e:\n print(\"Error: Invalid geo coordinates\")\n logger_excel.debug(\"geometry_point: IndexError: lat: {}, 
lon: {}, {}\".format(lat, lon, e))\n\n coordinates.append(elev)\n point_dict['type'] = 'Point'\n point_dict['coordinates'] = coordinates\n logger_excel.info(\"exit geometry_point\")\n return point_dict", "response": "Returns a GeoJSON point in the format of a series of coordinates."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a dictionary of geometry values for the given latitude and longitude.", "response": "def compile_geometry(lat, lon, elev):\n \"\"\"\n Take in lists of lat and lon coordinates, and determine what geometry to create\n :param list lat: Latitude values\n :param list lon: Longitude values\n :param float elev: Elevation value\n :return dict:\n \"\"\"\n logger_excel.info(\"enter compile_geometry\")\n lat = _remove_geo_placeholders(lat)\n lon = _remove_geo_placeholders(lon)\n\n # 4 coordinate values\n if len(lat) == 2 and len(lon) == 2:\n logger_excel.info(\"found 4 coordinates\")\n geo_dict = geometry_linestring(lat, lon, elev)\n\n # # 4 coordinate values\n # if (lat[0] != lat[1]) and (lon[0] != lon[1]):\n # geo_dict = geometry_polygon(lat, lon)\n # # 3 unique coordinates\n # else:\n # geo_dict = geometry_multipoint(lat, lon)\n #\n\n # 2 coordinate values\n elif len(lat) == 1 and len(lon) == 1:\n logger_excel.info(\"found 2 coordinates\")\n geo_dict = geometry_point(lat, lon, elev)\n\n # coordinate range. 
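One detail worth noting in `geometry_point` above is coordinate order: GeoJSON positions are `[lon, lat(, elev)]`, not `[lat, lon]`. A minimal single-point sketch (taking scalars rather than the library's paired lists, which is an assumption for brevity):

```python
from collections import OrderedDict

def geometry_point_single(lat, lon, elev=None):
    """Build a GeoJSON-style Point; GeoJSON orders coordinates [lon, lat]."""
    coordinates = [lon, lat]
    if elev is not None:
        coordinates.append(elev)
    return OrderedDict([("type", "Point"), ("coordinates", coordinates)])
```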
one value given but not the other.\n elif (None in lon and None not in lat) or (len(lat) > 0 and len(lon) == 0):\n geo_dict = geometry_range(lat, elev, \"lat\")\n\n elif (None in lat and None not in lon) or (len(lon) > 0 and len(lat) == 0):\n geo_dict = geometry_range(lon, elev, \"lon\")\n\n # Too many points, or no points\n else:\n geo_dict = {}\n logger_excel.warn(\"compile_geometry: invalid coordinates: lat: {}, lon: {}\".format(lat, lon))\n logger_excel.info(\"exit compile_geometry\")\n return geo_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncompiles the top-level Geography dictionary.", "response": "def compile_geo(d):\n \"\"\"\n Compile top-level Geography dictionary.\n :param d:\n :return:\n \"\"\"\n logger_excel.info(\"enter compile_geo\")\n d2 = OrderedDict()\n\n # get max number of sites, or number of coordinate points given.\n num_loc = _get_num_locations(d)\n\n # if there's more than one location, put it in a collection\n if num_loc > 1:\n d2[\"type\"] = \"FeatureCollection\"\n features = []\n for idx in range(0, num_loc):\n # Do process for one site\n site = _parse_geo_locations(d, idx)\n features.append(site)\n d2[\"features\"] = features\n\n # if there's only one location\n elif num_loc == 1:\n d2 = _parse_geo_location(d)\n\n logger_excel.info(\"exit compile_geo\")\n return d2"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nFind out how many locations are being parsed. Compare lengths of each coordinate list and return the max number of locations.", "response": "def _get_num_locations(d):\n \"\"\"\n Find out how many locations are being parsed. 
Compare lengths of each\n coordinate list and return the max\n :param dict d: Geo metadata\n :return int: Max number of locations\n \"\"\"\n lengths = []\n for key in EXCEL_GEO:\n try:\n if key != \"siteName\":\n lengths.append(len(d[key]))\n except Exception:\n lengths.append(1)\n\n try:\n num = max(lengths)\n except ValueError:\n num = 0\n return num"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nparse one geo location and return a dict of properties", "response": "def _parse_geo_location(d):\n \"\"\"\n Parse one geo location\n :param d:\n :return:\n \"\"\"\n d2 = OrderedDict()\n filt = {}\n d2['type'] = 'Feature'\n # If the necessary keys are missing, put in placeholders so there's no KeyErrors.\n for key in EXCEL_GEO:\n if key not in d:\n d[key] = \"\"\n\n # Compile the geometry based on the info available.\n d2['geometry'] = compile_geometry([d['latMin'], d['latMax']], [d['lonMin'], d['lonMax']], d['elevation'])\n d2['properties'] = {'siteName': d['siteName']}\n\n return d2"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse one geo location in a dictionary", "response": "def _parse_geo_locations(d, idx):\n \"\"\"\n Parse one geo location\n :param d:\n :return:\n \"\"\"\n d2 = OrderedDict()\n filt = {}\n d2['type'] = 'Feature'\n # If the necessary keys are missing, put in placeholders so there's no KeyErrors.\n for key in EXCEL_GEO:\n if key not in d:\n d[key] = \"\"\n\n for key in EXCEL_GEO:\n try:\n if key == \"siteName\" and isinstance(d[\"siteName\"], str):\n filt[\"siteName\"] = d[\"siteName\"]\n else:\n filt[key] = d[key][idx]\n except KeyError:\n filt[key] = None\n except TypeError:\n filt[key] = None\n\n # Compile the geometry based on the info available.\n d2['geometry'] = compile_geometry([filt['latMin'], filt['latMax']], [filt['lonMin'], filt['lonMax']], filt['elevation'])\n d2['properties'] = {'siteName': filt['siteName']}\n\n return d2"} {"SOURCE": "codesearchnet", "instruction": "How would 
you implement a function in Python 3 that\nsplits the string of author names into the BibJSON format.", "response": "def compile_authors(cell):\n \"\"\"\n Split the string of author names into the BibJSON format.\n :param str cell: Data from author cell\n :return: (list of dicts) Author names\n \"\"\"\n logger_excel.info(\"enter compile_authors\")\n author_lst = []\n s = cell.split(';')\n for w in s:\n author_lst.append(w.lstrip())\n logger_excel.info(\"exit compile_authors\")\n return author_lst"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef compile_temp(d, key, value):\n if not value:\n d[key] = None\n elif len(value) == 1:\n d[key] = value[0]\n else:\n d[key] = value\n return d", "response": "Compiles temporary dictionaries for metadata. Adds a new entry to an existing dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncompiles the funding block.", "response": "def compile_fund(workbook, sheet, row, col):\n \"\"\"\n Compile funding entries. Iter both rows at the same time. Keep adding entries until both cells are empty.\n :param obj workbook:\n :param str sheet:\n :param int row:\n :param int col:\n :return list of dict: l\n \"\"\"\n logger_excel.info(\"enter compile_fund\")\n l = []\n temp_sheet = workbook.sheet_by_name(sheet)\n while col < temp_sheet.ncols:\n col += 1\n try:\n # Make a dictionary for this funding entry.\n _curr = {\n 'agency': temp_sheet.cell_value(row, col),\n 'grant': temp_sheet.cell_value(row+1, col),\n \"principalInvestigator\": temp_sheet.cell_value(row+2, col),\n \"country\": temp_sheet.cell_value(row + 3, col)\n }\n # Make a list for all\n _exist = [temp_sheet.cell_value(row, col), temp_sheet.cell_value(row+1, col),\n temp_sheet.cell_value(row+2, col), temp_sheet.cell_value(row+3, col)]\n\n # Remove all empty items from the list\n _exist = [i for i in _exist if i]\n # If we have all empty entries, then don't continue. 
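The semicolon split in `compile_authors` can be sketched standalone. The library version returns bare strings despite its docstring promising BibJSON dicts; wrapping each name in a `{"name": ...}` dict, as BibJSON expects, is an assumption made here for illustration:

```python
def compile_authors_plain(cell):
    """Split a semicolon-delimited author cell into BibJSON-style entries.

    Assumption: each author is wrapped in a {"name": ...} dict; the
    library function itself returns plain stripped strings.
    """
    return [{"name": name.strip()} for name in cell.split(";") if name.strip()]
```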
Quit funding and return what we have.\n if not _exist:\n return l\n\n # We have funding data. Add this funding block to the growing list.\n l.append(_curr)\n\n except IndexError as e:\n logger_excel.debug(\"compile_fund: IndexError: sheet:{} row:{} col:{}, {}\".format(sheet, row, col, e))\n logger_excel.info(\"exit compile_fund\")\n return l"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __compare_vars(a1, a2, name):\n try:\n a1 = [i for i in a1 if i]\n a2 = [i for i in a2 if i]\n a3 = set(a1).symmetric_difference(set(a2))\n if a3:\n print(\"- Error: Variables are not entered correctly in sheet: {}\\n\\tUnmatched variables: {}\".format(name, a3))\n except Exception as e:\n logger_excel.error(\"compare_vars: {}\".format(e))\n return", "response": "Check that the metadata variable names are the same as the data header variable names in the sheet name."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the filename based on the dataset name in the metadata", "response": "def __get_datasetname(d, filename):\n \"\"\"\n Get the filename based on the dataset name in the metadata\n :param str filename: Filename.lpd\n :return str: Filename\n \"\"\"\n try:\n filename = d[\"dataSetName\"]\n except KeyError:\n logger_excel.info(\"get_datasetname: KeyError: No dataSetName found. 
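The check in `__compare_vars` is a set symmetric difference over the non-empty entries of both lists; anything left over is a variable present in only one section. A compact sketch:

```python
def unmatched_vars(a1, a2):
    """Names present in exactly one of the two lists, ignoring empties,
    mirroring the symmetric-difference check in __compare_vars."""
    return set(i for i in a1 if i) ^ set(i for i in a2 if i)
```

An empty result means the metadata variable names and the data header variable names agree.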
Reverting to: {}\".format(filename))\n return filename"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the filenames in the sheets metadata", "response": "def __set_sheet_filenames(sheets, n):\n \"\"\"\n Use the dataset name to build the filenames in the sheets metadata\n :param list sheets: Sheet metadata\n :param str n: Dataset Name\n :return list: Sheet metadata\n \"\"\"\n try:\n for idx, sheet in enumerate(sheets):\n try:\n sheets[idx][\"filename\"] = \"{}.{}\".format(n, sheet[\"filename\"])\n except Exception as e:\n logger_excel.error(\"set_sheet_filenames: inner: {}\".format(e), exc_info=True)\n except Exception as q:\n logger_excel.error(\"set_sheet_filenames: outer: {}\".format(q), exc_info=True)\n return sheets"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef name_to_jsonld(title_in):\n title_out = ''\n try:\n title_in = title_in.lower()\n title_out = EXCEL_LIPD_MAP_FLAT[title_in]\n except (KeyError, AttributeError) as e:\n if \"(\" in title_in:\n title_in = title_in.split(\"(\")[0].strip()\n # try to find an exact match first.\n try:\n v = EXCEL_LIPD_MAP_FLAT[title_in]\n return v\n except KeyError:\n pass\n\n # if no exact match, find whatever is a closest match\n for k, v in EXCEL_LIPD_MAP_FLAT.items():\n if k in title_in:\n return v\n if not title_out:\n logger_excel.debug(\"name_to_jsonld: No match found: {}\".format(title_in))\n return title_out", "response": "Convert formal titles to camelcase json_ld text that matches our context file"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmatching data type and return string", "response": "def instance_str(cell):\n \"\"\"\n Match data type and return string\n :param any cell:\n :return str:\n \"\"\"\n if isinstance(cell, str):\n return 'str'\n elif isinstance(cell, int):\n return 'int'\n elif isinstance(cell, float):\n return 'float'\n else:\n return 
'unknown'"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nextract units from parenthesis in a string. i. e. elevation meters", "response": "def extract_units(string_in):\n \"\"\"\n Extract units from parenthesis in a string. i.e. \"elevation (meters)\"\n :param str string_in:\n :return str:\n \"\"\"\n start = '('\n stop = ')'\n return string_in[string_in.index(start) + 1:string_in.index(stop)]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a list of cells in a specific row with a specific publication section.", "response": "def cells_rt_meta_pub(workbook, sheet, row, col, pub_qty):\n \"\"\"\n Publication section is special. It's possible there's more than one publication.\n :param obj workbook:\n :param str sheet:\n :param int row:\n :param int col:\n :param int pub_qty: Number of distinct publication sections in this file\n :return list: Cell data for a specific row\n \"\"\"\n logger_excel.info(\"enter cells_rt_meta_pub\")\n col_loop = 0\n cell_data = []\n temp_sheet = workbook.sheet_by_name(sheet)\n while col_loop < pub_qty:\n col += 1\n col_loop += 1\n cell_data.append(temp_sheet.cell_value(row, col))\n logger_excel.info(\"exit cells_rt_meta_pub\")\n return cell_data"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntraversing all cells in a row and return a list of all the data in that row.", "response": "def cells_rt_meta(workbook, sheet, row, col):\n \"\"\"\n Traverse all cells in a row. 
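`extract_units` above raises `ValueError` when the input has no parentheses, because `str.index` fails on a missing substring. A defensive variant (a sketch, not the library's API) returns an empty string instead:

```python
def extract_units_safe(string_in):
    """Variant of extract_units that returns '' instead of raising
    ValueError when the string has no parenthesized units."""
    start = string_in.find("(")
    stop = string_in.find(")", start + 1)
    if start == -1 or stop == -1:
        return ""
    return string_in[start + 1:stop]
```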
If you find new data in a cell, add it to the list.\n :param obj workbook:\n :param str sheet:\n :param int row:\n :param int col:\n :return list: Cell data for a specific row\n \"\"\"\n logger_excel.info(\"enter cells_rt_meta\")\n col_loop = 0\n cell_data = []\n temp_sheet = workbook.sheet_by_name(sheet)\n while col_loop < temp_sheet.ncols:\n col += 1\n col_loop += 1\n try:\n if temp_sheet.cell_value(row, col) != xlrd.empty_cell and temp_sheet.cell_value(row, col) != '':\n cell_data.append(temp_sheet.cell_value(row, col))\n except IndexError as e:\n logger_excel.warn(\"cells_rt_meta: IndexError: sheet: {}, row: {}, col: {}, {}\".format(sheet, row, col, e))\n logger_excel.info(\"exit cells_right_meta\")\n return cell_data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef cells_dn_meta(workbook, sheet, row, col, final_dict):\n logger_excel.info(\"enter cells_dn_meta\")\n row_loop = 0\n pub_cases = ['id', 'year', 'author', 'journal', 'issue', 'volume', 'title', 'pages',\n 'reportNumber', 'abstract', 'alternateCitation']\n geo_cases = ['latMin', 'lonMin', 'lonMax', 'latMax', 'elevation', 'siteName', 'location']\n funding_cases = [\"agency\", \"grant\", \"principalInvestigator\", \"country\"]\n\n # Temp\n pub_qty = 0\n geo_temp = {}\n general_temp = {}\n pub_temp = []\n funding_temp = []\n\n temp_sheet = workbook.sheet_by_name(sheet)\n\n # Loop until we hit the max rows in the sheet\n while row_loop < temp_sheet.nrows:\n try:\n # Get cell value\n cell = temp_sheet.cell_value(row, col)\n\n # If there is content in the cell...\n if cell not in EMPTY:\n\n # Convert title to correct format, and grab the cell data for that row\n title_formal = temp_sheet.cell_value(row, col)\n title_json = name_to_jsonld(title_formal)\n\n # If we don't have a title for it, then it's not information we want to grab\n if title_json:\n\n # Geo\n if title_json in geo_cases:\n cell_data = cells_rt_meta(workbook, sheet, row, 
col)\n geo_temp = compile_temp(geo_temp, title_json, cell_data)\n\n # Pub\n # Create a list of dicts. One for each pub column.\n elif title_json in pub_cases:\n\n # Authors seem to be the only consistent field we can rely on to determine number of Pubs.\n if title_json == 'author':\n cell_data = cells_rt_meta(workbook, sheet, row, col)\n pub_qty = len(cell_data)\n for i in range(pub_qty):\n author_lst = compile_authors(cell_data[i])\n pub_temp.append({'author': author_lst, 'pubDataUrl': 'Manually Entered'})\n else:\n cell_data = cells_rt_meta_pub(workbook, sheet, row, col, pub_qty)\n for pub in range(pub_qty):\n if title_json == 'id':\n pub_temp[pub]['identifier'] = [{\"type\": \"doi\", \"id\": cell_data[pub]}]\n else:\n pub_temp[pub][title_json] = cell_data[pub]\n # Funding\n elif title_json in funding_cases:\n if title_json == \"agency\":\n funding_temp = compile_fund(workbook, sheet, row, col)\n\n # All other cases do not need fancy structuring\n else:\n cell_data = cells_rt_meta(workbook, sheet, row, col)\n general_temp = compile_temp(general_temp, title_json, cell_data)\n\n except IndexError as e:\n logger_excel.debug(\"cells_dn_meta: IndexError: sheet: {}, row: {}, col: {}, {}\".format(sheet, row, col, e))\n row += 1\n row_loop += 1\n\n # Compile the more complicated items\n geo = compile_geo(geo_temp)\n\n logger_excel.info(\"compile metadata dictionary\")\n # Insert into final dictionary\n final_dict['@context'] = \"context.jsonld\"\n final_dict['pub'] = pub_temp\n final_dict['funding'] = funding_temp\n final_dict['geo'] = geo\n\n # Add remaining general items\n for k, v in general_temp.items():\n final_dict[k] = v\n logger_excel.info(\"exit cells_dn_meta\")\n return final_dict", "response": "Traverse all cells in a column moving downward, compiling publication, geo, funding, and general metadata into the final dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef count_chron_variables(temp_sheet):\n total_count = 0\n start_row = 
traverse_to_chron_var(temp_sheet)\n while temp_sheet.cell_value(start_row, 0) != '':\n total_count += 1\n start_row += 1\n return total_count", "response": "Count the number of chronology variables listed in the chron sheet."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nGet all the vars in the chron sheet", "response": "def get_chron_var(temp_sheet, start_row):\n \"\"\"\n Capture all the vars in the chron sheet (for json-ld output)\n :param obj temp_sheet:\n :param int start_row:\n :return: (list of dict) column data\n \"\"\"\n col_dict = OrderedDict()\n out_list = []\n column = 1\n\n while (temp_sheet.cell_value(start_row, 0) != '') and (start_row < temp_sheet.nrows):\n short_cell = temp_sheet.cell_value(start_row, 0)\n units_cell = temp_sheet.cell_value(start_row, 1)\n long_cell = temp_sheet.cell_value(start_row, 2)\n\n # Fill the dictionary for this column\n col_dict['number'] = column\n col_dict['variableName'] = short_cell\n col_dict['description'] = long_cell\n col_dict['units'] = units_cell\n out_list.append(col_dict.copy())\n start_row += 1\n column += 1\n\n return out_list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntraversing to the first row that has chron data", "response": "def traverse_to_chron_data(temp_sheet):\n \"\"\"\n Traverse down to the first row that has chron data\n :param obj temp_sheet:\n :return int: traverse_row\n \"\"\"\n traverse_row = traverse_to_chron_var(temp_sheet)\n reference_var = temp_sheet.cell_value(traverse_row, 0)\n\n # Traverse past all the short_names, until you hit a blank cell (the barrier)\n while temp_sheet.cell_value(traverse_row, 0) != '':\n traverse_row += 1\n # Traverse past the empty cells until we hit the chron data area\n while temp_sheet.cell_value(traverse_row, 0) == '':\n traverse_row += 1\n\n # Check if there is a header row. If there is, move past it. 
We don't want that data\n if temp_sheet.cell_value(traverse_row, 0) == reference_var:\n traverse_row += 1\n logger_excel.info(\"traverse_to_chron_data: row:{}\".format(traverse_row))\n return traverse_row"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntraverse down to the first variable .", "response": "def traverse_to_chron_var(temp_sheet):\n \"\"\"\n Traverse down to the row that has the first variable\n :param obj temp_sheet:\n :return int:\n \"\"\"\n row = 0\n while row < temp_sheet.nrows - 1:\n if 'Parameter' in temp_sheet.cell_value(row, 0):\n row += 1\n break\n row += 1\n logger_excel.info(\"traverse_to_chron_var: row:{}\".format(row))\n return row"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting all data in for a specific chron data row", "response": "def get_chron_data(temp_sheet, row, total_vars):\n \"\"\"\n Capture all data in for a specific chron data row (for csv output)\n :param obj temp_sheet:\n :param int row:\n :param int total_vars:\n :return list: data_row\n \"\"\"\n data_row = []\n missing_val_list = ['none', 'na', '', '-']\n for i in range(0, total_vars):\n cell = temp_sheet.cell_value(row, i)\n if isinstance(cell, str):\n cell = cell.lower()\n if cell in missing_val_list:\n cell = 'nan'\n data_row.append(cell)\n return data_row"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _remove_geo_placeholders(l):\n vals = []\n for i in l:\n if isinstance(i, list):\n for k in i:\n if isinstance(k, float) or isinstance(k, int):\n vals.append(k)\n elif isinstance(i, float) or isinstance(i, int):\n vals.append(i)\n vals.sort()\n return vals", "response": "Remove placeholders from coordinate lists and sort\n "} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads the jsonld file in the LiPD archive and returns it in the Jsonld format.", "response": "def read_jsonld():\n \"\"\"\n Find 
jsonld file in the cwd (or within 2 levels below the cwd), and load it in.\n :return dict: Jsonld data\n \"\"\"\n _d = {}\n\n try:\n # Find a jsonld file in cwd. If none, fall back to a json file. If neither is found, return empty.\n _filename = next((file for file in os.listdir() if file.endswith(\".jsonld\")), None)\n if not _filename:\n _filename = next((file for file in os.listdir() if file.endswith(\".json\")), None)\n\n if _filename:\n try:\n # Load and decode\n _d = demjson.decode_file(_filename, decode_float=float)\n logger_jsons.info(\"Read JSONLD successful: {}\".format(_filename))\n except FileNotFoundError as fnf:\n print(\"Error: metadata file not found: {}\".format(_filename))\n logger_jsons.error(\"read_jsonld: FileNotFound: {}, {}\".format(_filename, fnf))\n except Exception:\n try:\n _d = demjson.decode_file(_filename, decode_float=float, encoding=\"latin-1\")\n logger_jsons.info(\"Read JSONLD successful: {}\".format(_filename))\n except Exception as e:\n print(\"Error: unable to read metadata file: {}\".format(e))\n logger_jsons.error(\"read_jsonld: Exception: {}, {}\".format(_filename, e))\n else:\n print(\"Error: metadata file (.jsonld) not found in LiPD archive\")\n except Exception as e:\n print(\"Error: Unable to find jsonld file in LiPD archive. This may be a corrupt file.\")\n logger_jsons.error(\"read_jsonld: Unable to find jsonld file in LiPD archive: {}\".format(e))\n logger_jsons.info(\"exit read_jsonld\")\n return _d"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read_json_from_file(filename):\n logger_jsons.info(\"enter read_json_from_file\")\n d = OrderedDict()\n try:\n # Load and decode\n d = demjson.decode_file(filename, decode_float=float)\n logger_jsons.info(\"successful read from json file\")\n except FileNotFoundError:\n # Didn't find a jsonld file. 
Maybe it's a json file instead?\n try:\n d = demjson.decode_file(os.path.splitext(filename)[0] + '.json', decode_float=float)\n except FileNotFoundError as e:\n # No json or jsonld file. Exit\n print(\"Error: jsonld file not found: {}\".format(filename))\n logger_jsons.debug(\"read_json_from_file: FileNotFound: {}, {}\".format(filename, e))\n except Exception:\n print(\"Error: unable to read jsonld file\")\n\n if d:\n d = rm_empty_fields(d)\n logger_jsons.info(\"exit read_json_from_file\")\n return d", "response": "Read the JSON data from a file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef idx_num_to_name(L):\n logger_jsons.info(\"enter idx_num_to_name\")\n\n try:\n if \"paleoData\" in L:\n L[\"paleoData\"] = _import_data(L[\"paleoData\"], \"paleo\")\n if \"chronData\" in L:\n L[\"chronData\"] = _import_data(L[\"chronData\"], \"chron\")\n except Exception as e:\n logger_jsons.error(\"idx_num_to_name: {}\".format(e))\n print(\"Error: idx_name_to_num: {}\".format(e))\n\n logger_jsons.info(\"exit idx_num_to_name\")\n return L", "response": "Switch from index - by - number to index - by - name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nimporting the metadata and change it to index - by - name.", "response": "def _import_data(sections, crumbs):\n \"\"\"\n Import the section metadata and change it to index-by-name.\n\n :param list sections: Metadata\n :param str pc: paleo or chron\n :return dict _sections: Metadata\n \"\"\"\n logger_jsons.info(\"enter import_data: {}\".format(crumbs))\n _sections = OrderedDict()\n try:\n for _idx, section in enumerate(sections):\n _tmp = OrderedDict()\n\n # Process the paleo measurement table\n if \"measurementTable\" in section:\n _tmp[\"measurementTable\"] = _idx_table_by_name(section[\"measurementTable\"], \"{}{}{}\".format(crumbs, _idx, \"measurement\"))\n\n # Process the paleo model\n if \"model\" in section:\n _tmp[\"model\"] = 
_import_model(section[\"model\"], \"{}{}{}\".format(crumbs, _idx, \"model\"))\n\n # Get the table name from the first measurement table, and use that as the index name for this table\n _table_name = \"{}{}\".format(crumbs, _idx)\n\n # If we only have generic table names, and one exists already, don't overwrite. Create dynamic name\n if _table_name in _sections:\n _table_name = \"{}_{}\".format(_table_name, _idx)\n\n # Put the final product into the output dictionary. Indexed by name\n _sections[_table_name] = _tmp\n\n except Exception as e:\n logger_jsons.error(\"import_data: Exception: {}\".format(e))\n print(\"Error: import_data: {}\".format(e))\n\n logger_jsons.info(\"exit import_data: {}\".format(crumbs))\n return _sections"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nimport nested paleoModel data into a single metadata dictionary.", "response": "def _import_model(models, crumbs):\n \"\"\"\n Change the nested items of the paleoModel data. Overwrite the data in-place.\n\n :param list models: Metadata\n :param str crumbs: Crumbs\n :return dict _models: Metadata\n \"\"\"\n logger_jsons.info(\"enter import_model\".format(crumbs))\n _models = OrderedDict()\n try:\n for _idx, model in enumerate(models):\n # Keep the original dictionary, but replace the three main entries below\n\n # Do a direct replacement of chronModelTable columns. No table name, no table work needed.\n if \"summaryTable\" in model:\n model[\"summaryTable\"] = _idx_table_by_name(model[\"summaryTable\"], \"{}{}{}\".format(crumbs, _idx, \"summary\"))\n # Do a direct replacement of ensembleTable columns. 
No table name, no table work needed.\n if \"ensembleTable\" in model:\n model[\"ensembleTable\"] = _idx_table_by_name(model[\"ensembleTable\"], \"{}{}{}\".format(crumbs, _idx, \"ensemble\"))\n if \"distributionTable\" in model:\n model[\"distributionTable\"] = _idx_table_by_name(model[\"distributionTable\"], \"{}{}{}\".format(crumbs, _idx, \"distribution\"))\n\n _table_name = \"{}{}\".format(crumbs, _idx)\n _models[_table_name] = model\n except Exception as e:\n logger_jsons.error(\"import_model: {}\".format(e))\n print(\"Error: import_model: {}\".format(e))\n logger_jsons.info(\"exit import_model: {}\".format(crumbs))\n return _models"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _idx_table_by_name(tables, crumbs):\n _tables = OrderedDict()\n try:\n for _idx, _table in enumerate(tables):\n # Use \"name\" as tableName\n _name = \"{}{}\".format(crumbs, _idx)\n # Call idx_table_by_name\n _tmp = _idx_col_by_name(_table)\n if _name in _tables:\n _name = \"{}_{}\".format(_name, _idx)\n _tmp[\"tableName\"] = _name\n _tables[_name] = _tmp\n except Exception as e:\n logger_jsons.error(\"idx_table_by_name: {}\".format(e))\n print(\"Error: idx_table_by_name: {}\".format(e))\n\n return _tables", "response": "Import summary ensemble or distribution data."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_csv_from_json(d):\n logger_jsons.info(\"enter get_csv_from_json\")\n csv_data = OrderedDict()\n\n if \"paleoData\" in d:\n csv_data = _get_csv_from_section(d, \"paleoData\", csv_data)\n\n if \"chronData\" in d:\n csv_data = _get_csv_from_section(d, \"chronData\", csv_data)\n\n logger_jsons.info(\"exit get_csv_from_json\")\n return csv_data", "response": "Get CSV values when mixed into json data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting csv from paleo and chron sections", "response": "def 
_get_csv_from_section(d, pc, csv_data):\n \"\"\"\n Get csv from paleo and chron sections\n :param dict d: Metadata\n :param str pc: Paleo or chron\n :return dict: running csv data\n \"\"\"\n logger_jsons.info(\"enter get_csv_from_section: {}\".format(pc))\n\n for table, table_content in d[pc].items():\n # Create entry for this table/CSV file (i.e. Asia-1.measTable.PaleoData.csv)\n # Note: Each table has a respective CSV file.\n csv_data[table_content['filename']] = OrderedDict()\n for column, column_content in table_content['columns'].items():\n # Set the \"values\" into csv dictionary in order of column \"number\"\n csv_data[table_content['filename']][column_content['number']] = column_content['values']\n\n logger_jsons.info(\"exit get_csv_from_section: {}\".format(pc))\n return csv_data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef remove_csv_from_json(d):\n logger_jsons.info(\"enter remove_csv_from_json\")\n\n # Check both sections\n if \"paleoData\" in d:\n d = _remove_csv_from_section(d, \"paleoData\")\n\n if \"chronData\" in d:\n d = _remove_csv_from_section(d, \"chronData\")\n\n logger_jsons.info(\"exit remove_csv_from_json\")\n return d", "response": "Remove all CSV data entries from paleoData table in the JSON structure."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nremoves CSV from metadata in this section", "response": "def _remove_csv_from_section(d, pc):\n \"\"\"\n Remove CSV from metadata in this section\n :param dict d: Metadata\n :param str pc: Paleo or chron\n :return dict: Modified metadata\n \"\"\"\n logger_jsons.info(\"enter remove_csv_from_json: {}\".format(pc))\n\n for table, table_content in d[pc].items():\n for column, column_content in table_content['columns'].items():\n try:\n # try to delete the values key entry\n del column_content['values']\n except KeyError as e:\n # if the key doesn't exist, keep going\n 
logger_jsons.debug(\"remove_csv_from_json: KeyError: {}, {}\".format(pc, e))\n\n logger_jsons.info(\"exit remove_csv_from_json: {}\".format(pc))\n return d"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write_json_to_file(json_data, filename=\"metadata\"):\n logger_jsons.info(\"enter write_json_to_file\")\n json_data = rm_empty_fields(json_data)\n # Use demjson to maintain unicode characters in output\n json_bin = demjson.encode(json_data, encoding='utf-8', compactly=False)\n # Write json to file\n try:\n open(\"{}.jsonld\".format(filename), \"wb\").write(json_bin)\n logger_jsons.info(\"wrote data to json file\")\n except FileNotFoundError as e:\n print(\"Error: Writing json to file: {}\".format(filename))\n logger_jsons.debug(\"write_json_to_file: FileNotFound: {}, {}\".format(filename, e))\n logger_jsons.info(\"exit write_json_to_file\")\n return", "response": "Write all JSON in python dictionary to a new json file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nswitching from index - by - name to index - by - number.", "response": "def idx_name_to_num(L):\n \"\"\"\n Switch from index-by-name to index-by-number.\n :param dict L: Metadata\n :return dict: Modified metadata\n \"\"\"\n logger_jsons.info(\"enter idx_name_to_num\")\n\n # Process the paleoData section\n if \"paleoData\" in L:\n L[\"paleoData\"] = _export_section(L[\"paleoData\"], \"paleo\")\n\n # Process the chronData section\n if \"chronData\" in L:\n L[\"chronData\"] = _export_section(L[\"chronData\"], \"chron\")\n\n logger_jsons.info(\"exit idx_name_to_num\")\n return L"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nswitch chron data to index - by - number", "response": "def _export_section(sections, pc):\n \"\"\"\n Switch chron data to index-by-number\n :param dict sections: Metadata\n :return list _sections: Metadata\n \"\"\"\n 
logger_jsons.info(\"enter export_data: {}\".format(pc))\n _sections = []\n\n for name, section in sections.items():\n\n # Process chron models\n if \"model\" in section:\n section[\"model\"] = _export_model(section[\"model\"])\n\n # Process the chron measurement table\n if \"measurementTable\" in section:\n section[\"measurementTable\"] = _idx_table_by_num(section[\"measurementTable\"])\n\n # Add only the table to the output list\n _sections.append(section)\n\n logger_jsons.info(\"exit export_data: {}\".format(pc))\n return _sections"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _export_model(models):\n logger_jsons.info(\"enter export_model\")\n _models = []\n try:\n for name, model in models.items():\n\n if \"summaryTable\" in model:\n model[\"summaryTable\"] = _idx_table_by_num(model[\"summaryTable\"])\n\n # Process ensemble table (special two columns)\n if \"ensembleTable\" in model:\n model[\"ensembleTable\"] = _idx_table_by_num(model[\"ensembleTable\"])\n\n if \"distributionTable\" in model:\n model[\"distributionTable\"] = _idx_table_by_num(model[\"distributionTable\"])\n\n _models.append(model)\n\n except Exception as e:\n logger_jsons.error(\"export_model: {}\".format(e))\n print(\"Error: export_model: {}\".format(e))\n logger_jsons.info(\"exit export_model\")\n return _models", "response": "Switch model tables to index - by - number"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nswitch tables to index - by - number", "response": "def _idx_table_by_num(tables):\n \"\"\"\n Switch tables to index-by-number\n\n :param dict tables: Metadata\n :return list _tables: Metadata\n \"\"\"\n logger_jsons.info(\"enter idx_table_by_num\")\n _tables = []\n for name, table in tables.items():\n try:\n # Get the modified table data\n tmp = _idx_col_by_num(table)\n # Append it to the growing calibrated age list of tables\n _tables.append(tmp)\n except Exception as e:\n 
logger_jsons.error(\"idx_table_by_num: {}\".format(e))\n logger_jsons.info(\"exit idx_table_by_num\")\n return _tables"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nindex columns by number instead of by name. Use number key in column to maintain order", "response": "def _idx_col_by_num(table):\n \"\"\"\n Index columns by number instead of by name. Use \"number\" key in column to maintain order\n\n :param dict table: Metadata\n :return list _table: Metadata\n \"\"\"\n _columns = []\n try:\n # Create an empty list that matches the length of the column dictionary\n _columns = [None for i in range(0, len(table[\"columns\"]))]\n\n # Loop and start placing data in the output list based on its \"number\" entry\n for _name, _dat in table[\"columns\"].items():\n try:\n # Special case for ensemble table \"numbers\" list\n if isinstance(_dat[\"number\"], list):\n _columns.append(_dat)\n # Place at list index based on its column number\n else:\n # cast number to int, just in case it's stored as a string.\n n = int(_dat[\"number\"])\n _columns[n - 1] = _dat\n except KeyError as ke:\n print(\"Error: idx_col_by_num: {}\".format(ke))\n logger_jsons.error(\"idx_col_by_num: KeyError: missing number key: {}, {}\".format(_name, ke))\n except Exception as e:\n print(\"Error: idx_col_by_num: {}\".format(e))\n logger_jsons.error(\"idx_col_by_num: Exception: {}\".format(e))\n\n table[\"columns\"] = _columns\n except Exception as e:\n logger_jsons.error(\"idx_col_by_num: {}\".format(e))\n print(\"Error: idx_col_by_num: {}\".format(e))\n\n return table"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_bag(bag_dir, bag_info=None, processes=1, checksum=None):\n bag_dir = os.path.abspath(bag_dir)\n logger.info(\"creating bag for directory %s\", bag_dir)\n # assume md5 checksum if not specified\n if not checksum:\n checksum = ['md5']\n\n if not os.path.isdir(bag_dir):\n logger.error(\"no 
such bag directory %s\", bag_dir)\n raise RuntimeError(\"no such bag directory %s\" % bag_dir)\n\n old_dir = os.path.abspath(os.path.curdir)\n os.chdir(bag_dir)\n\n try:\n unbaggable = _can_bag(os.curdir)\n if unbaggable:\n logger.error(\"no write permissions for the following directories and files: \\n%s\", unbaggable)\n raise BagError(\"Not all files/folders can be moved.\")\n unreadable_dirs, unreadable_files = _can_read(os.curdir)\n if unreadable_dirs or unreadable_files:\n if unreadable_dirs:\n logger.error(\"The following directories do not have read permissions: \\n%s\", unreadable_dirs)\n if unreadable_files:\n logger.error(\"The following files do not have read permissions: \\n%s\", unreadable_files)\n raise BagError(\"Read permissions are required to calculate file fixities.\")\n else:\n logger.info(\"creating data dir\")\n\n cwd = os.getcwd()\n temp_data = tempfile.mkdtemp(dir=cwd)\n\n for f in os.listdir('.'):\n if os.path.abspath(f) == temp_data:\n continue\n new_f = os.path.join(temp_data, f)\n logger.info(\"moving %s to %s\", f, new_f)\n os.rename(f, new_f)\n\n logger.info(\"moving %s to %s\", temp_data, 'data')\n os.rename(temp_data, 'data')\n\n # permissions for the payload directory should match those of the\n # original directory\n os.chmod('data', os.stat(cwd).st_mode)\n\n for c in checksum:\n logger.info(\"writing manifest-%s.txt\", c)\n Oxum = _make_manifest('manifest-%s.txt' % c, 'data', processes, c)\n\n logger.info(\"writing bagit.txt\")\n txt = \"\"\"BagIt-Version: 0.97\\nTag-File-Character-Encoding: UTF-8\\n\"\"\"\n with open(\"bagit.txt\", \"w\") as bagit_file:\n bagit_file.write(txt)\n\n logger.info(\"writing bag-info.txt\")\n if bag_info is None:\n bag_info = {}\n\n # allow 'Bagging-Date' and 'Bag-Software-Agent' to be overidden\n if 'Bagging-Date' not in bag_info:\n bag_info['Bagging-Date'] = date.strftime(date.today(), \"%Y-%m-%d\")\n if 'Bag-Software-Agent' not in bag_info:\n bag_info['Bag-Software-Agent'] = 'bagit.py '\n 
bag_info['Payload-Oxum'] = Oxum\n _make_tag_file('bag-info.txt', bag_info)\n\n for c in checksum:\n _make_tagmanifest_file(c, bag_dir)\n except Exception:\n logger.exception(\"An error occurred creating the bag\")\n raise\n finally:\n os.chdir(old_dir)\n\n return Bag(bag_dir)", "response": "Create a bag from a given directory."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncalculates the hash of the file at the provided path.", "response": "def _calculate_file_hashes(full_path, f_hashers):\n \"\"\"\n Returns a dictionary of (algorithm, hexdigest) values for the provided\n filename\n \"\"\"\n if not os.path.exists(full_path):\n raise BagValidationError(\"%s does not exist\" % full_path)\n\n try:\n with open(full_path, 'rb') as f:\n while True:\n block = f.read(1048576)\n if not block:\n break\n for i in list(f_hashers.values()):\n i.update(block)\n except IOError as e:\n raise BagValidationError(\"could not read %s: %s\" % (full_path, str(e)))\n except OSError as e:\n raise BagValidationError(\"could not read %s: %s\" % (full_path, str(e)))\n\n return dict(\n (alg, h.hexdigest()) for alg, h in list(f_hashers.items())\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses a tag file and yields a list of tuples.", "response": "def _parse_tags(tag_file):\n \"\"\"Parses a tag file, according to RFC 2822. 
This\n includes line folding, permitting extra-long\n field values.\n\n See http://www.faqs.org/rfcs/rfc2822.html for\n more information.\n \"\"\"\n\n tag_name = None\n tag_value = None\n\n # Line folding is handled by yielding values only after we encounter\n # the start of a new tag, or if we pass the EOF.\n for num, line in enumerate(tag_file):\n # If byte-order mark ignore it for now.\n if num == 0:\n if line.startswith(BOM):\n line = line.lstrip(BOM)\n # Skip over any empty or blank lines.\n if len(line) == 0 or line.isspace():\n continue\n elif line[0].isspace() and tag_value is not None: # folded line\n tag_value += line\n else:\n # Starting a new tag; yield the last one.\n if tag_name:\n yield (tag_name, tag_value.strip())\n\n if ':' not in line:\n raise BagValidationError(\"invalid line '%s' in %s\" % (line.strip(),\n os.path.basename(tag_file.name)))\n\n parts = line.strip().split(':', 1)\n tag_name = parts[0].strip()\n tag_value = parts[1]\n\n # Passed the EOF. All done after this.\n if tag_name:\n yield (tag_name, tag_value.strip())"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns (unwriteable files/folders)", "response": "def _can_bag(test_dir):\n \"\"\"returns (unwriteable files/folders)\n \"\"\"\n unwriteable = []\n for inode in os.listdir(test_dir):\n if not os.access(os.path.join(test_dir, inode), os.W_OK):\n unwriteable.append(os.path.join(os.path.abspath(test_dir), inode))\n return tuple(unwriteable)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a tuple of tuples containing all directories and files that cannot be read from the test directory.", "response": "def _can_read(test_dir):\n \"\"\"\n returns ((unreadable_dirs), (unreadable_files))\n \"\"\"\n unreadable_dirs = []\n unreadable_files = []\n for dirpath, dirnames, filenames in os.walk(test_dir):\n for dn in dirnames:\n if not os.access(os.path.join(dirpath, dn), os.R_OK):\n 
unreadable_dirs.append(os.path.join(dirpath, dn))\n for fn in filenames:\n if not os.access(os.path.join(dirpath, fn), os.R_OK):\n unreadable_files.append(os.path.join(dirpath, fn))\n return (tuple(unreadable_dirs), tuple(unreadable_files))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncompare the files actually in the payload and returns a list of all the files that need to be fetched.", "response": "def compare_fetch_with_fs(self):\n \"\"\"Compares the fetch entries with the files actually\n in the payload, and returns a list of all the files\n that still need to be fetched.\n \"\"\"\n\n files_on_fs = set(self.payload_files())\n files_in_fetch = set(self.files_to_be_fetched())\n\n return list(files_in_fetch - files_on_fs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save(self, processes=1, manifests=False):\n # Error checking\n if not self.path:\n raise BagError(\"Bag does not have a path.\")\n\n # Change working directory to bag directory so helper functions work\n old_dir = os.path.abspath(os.path.curdir)\n os.chdir(self.path)\n\n # Generate new manifest files\n if manifests:\n unbaggable = _can_bag(self.path)\n if unbaggable:\n logger.error(\"no write permissions for the following directories and files: \\n%s\", unbaggable)\n raise BagError(\"Not all files/folders can be moved.\")\n unreadable_dirs, unreadable_files = _can_read(self.path)\n if unreadable_dirs or unreadable_files:\n if unreadable_dirs:\n logger.error(\"The following directories do not have read permissions: \\n%s\", unreadable_dirs)\n if unreadable_files:\n logger.error(\"The following files do not have read permissions: \\n%s\", unreadable_files)\n raise BagError(\"Read permissions are required to calculate file fixities.\")\n\n oxum = None\n self.algs = list(set(self.algs)) # Dedupe\n for alg in self.algs:\n logger.info('updating manifest-%s.txt', alg)\n oxum = 
_make_manifest('manifest-%s.txt' % alg, 'data', processes, alg)\n\n # Update Payload-Oxum\n logger.info('updating %s', self.tag_file_name)\n if oxum:\n self.info['Payload-Oxum'] = oxum\n\n _make_tag_file(self.tag_file_name, self.info)\n\n # Update tag-manifest for changes to manifest & bag-info files\n for alg in self.algs:\n _make_tagmanifest_file(alg, self.path)\n\n # Reload the manifests\n self._load_manifests()\n\n os.chdir(old_dir)", "response": "Save the current bag to the file system."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef missing_optional_tagfiles(self):\n for tagfilepath in list(self.tagfile_entries().keys()):\n if not os.path.isfile(os.path.join(self.path, tagfilepath)):\n yield tagfilepath", "response": "Yields the optional tagfiles that are missing from the manifest."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate(self, processes=1, fast=False):\n self._validate_structure()\n self._validate_bagittxt()\n self._validate_contents(processes=processes, fast=fast)\n return True", "response": "Checks that the structure and contents are valid."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn validation success or failure as boolean.", "response": "def is_valid(self, fast=False):\n \"\"\"Returns validation success or failure as boolean.\n Optional fast parameter passed directly to validate().\n \"\"\"\n try:\n self.validate(fast=fast)\n except BagError:\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _validate_entries(self, processes):\n errors = list()\n\n # First we'll make sure there's no mismatch between the filesystem\n # and the list of files in the manifest(s)\n only_in_manifests, only_on_fs = self.compare_manifests_with_fs()\n for path in only_in_manifests:\n e = FileMissing(path)\n logger.warning(str(e))\n 
errors.append(e)\n for path in only_on_fs:\n e = UnexpectedFile(path)\n logger.warning(str(e))\n errors.append(e)\n\n # To avoid the overhead of reading the file more than once or loading\n # potentially massive files into memory we'll create a dictionary of\n # hash objects so we can open a file, read a block and pass it to\n # multiple hash objects\n\n available_hashers = set()\n for alg in self.algs:\n try:\n hashlib.new(alg)\n available_hashers.add(alg)\n except ValueError:\n logger.warning(\"Unable to validate file contents using unknown %s hash algorithm\", alg)\n\n if not available_hashers:\n raise RuntimeError(\"%s: Unable to validate bag contents: none of the hash algorithms in %s are supported!\" % (self, self.algs))\n\n def _init_worker():\n signal.signal(signal.SIGINT, signal.SIG_IGN)\n\n args = ((self.path, rel_path, hashes, available_hashers) for rel_path, hashes in list(self.entries.items()))\n\n try:\n if processes == 1:\n hash_results = list(map(_calc_hashes, args))\n else:\n try:\n pool = multiprocessing.Pool(processes if processes else None, _init_worker)\n hash_results = pool.map(_calc_hashes, args)\n finally:\n try:\n pool.terminate()\n except:\n # we really don't care about any exception in terminate()\n pass\n # Any unhandled exceptions are probably fatal\n except:\n logger.exception(\"unable to calculate file hashes for %s\", self)\n raise\n\n for rel_path, f_hashes, hashes in hash_results:\n for alg, computed_hash in list(f_hashes.items()):\n stored_hash = hashes[alg]\n if stored_hash.lower() != computed_hash:\n e = ChecksumMismatch(rel_path, alg, stored_hash.lower(), computed_hash)\n logger.warning(str(e))\n errors.append(e)\n\n if errors:\n raise BagValidationError(\"invalid bag\", errors)", "response": "Validate that the actual file contents matches the recorded hashes stored in the manifest files\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _validate_bagittxt(self):\n 
bagit_file_path = os.path.join(self.path, \"bagit.txt\")\n with open(bagit_file_path, 'r') as bagit_file:\n first_line = bagit_file.readline()\n if first_line.startswith(BOM):\n raise BagValidationError(\"bagit.txt must not contain a byte-order mark\")", "response": "Verify that bagit. txt conforms to specification\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _fix_numeric_types(c):\n try:\n for var, data in c.items():\n for k, v in data.items():\n if k in [\"hasMeanValue\", \"hasMaxValue\", \"hasMinValue\", \"hasMedianValue\"]:\n if math.isnan(v):\n c[var][k] = \"nan\"\n elif not isinstance(v, (int, float)):\n try:\n c[var][k] = float(v)\n except Exception as e:\n logger_inferred_data.info(\"fix_numeric_types: converting float: {}\".format(e))\n elif k == \"hasResolution\":\n for b, g in v.items():\n if b in [\"hasMeanValue\", \"hasMaxValue\", \"hasMinValue\", \"hasMedianValue\"]:\n if math.isnan(g):\n c[var][k][b] = \"nan\"\n elif not isinstance(g, (int, float)):\n try:\n f = float(g)\n c[var][k][b] = f\n except Exception as e:\n logger_inferred_data.info(\"fix_numeric_types: converting float: {}\".format(e))\n except Exception as e:\n logger_inferred_data.error(\"fix_numeric_types: {}\".format(e))\n return c", "response": "Fix any numpy data types that didn t map back to python data types properly\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_age(columns):\n\n # Need to check multiple places for age, year, or yrbp\n # 1. Check the column variable name\n # 2. 
Check for an \"inferredVariableType\n\n age = []\n try:\n # Step 1:\n # Check for age first (exact match)\n if \"age\" in columns:\n # Save the values\n age = columns[\"age\"][\"values\"]\n # Check for year second (exact match)\n elif \"year\" in columns:\n # Save the values\n age = columns[\"year\"][\"values\"]\n elif \"yrbp\" in columns:\n # Save the values\n age = columns[\"yrbp\"][\"values\"]\n\n # Step 2 No exact matches, check for an \"inferredVariableType\" : \"Age\" or \"Year\"\n if not age:\n # Loop through column variableNames\n for k, v in columns.items():\n try:\n if v[\"inferredVariableType\"].lower() == \"age\":\n age = v[\"values\"]\n elif v[\"inferredVariableType\"].lower() == \"year\":\n age = v[\"values\"]\n except Exception:\n # Not too concerned if we error here.\n pass\n\n # Step 3: No year or age found, start searching for a loose match with \"age\" or \"year\" in the variableName.\n if not age:\n # Loop through column variableNames\n for k, v in columns.items():\n k_low = k.lower()\n # Check for age in the variableName (loose match)\n if \"age\" in k_low:\n # Save the values\n age = v[\"values\"]\n # Check for year in variableName (loose match)\n elif \"year\" in k_low:\n # Save the values\n age = v[\"values\"]\n elif \"yrbp\" in k_low:\n # Save the values\n age = v[\"values\"]\n\n # If we expected a dictionary, and didn't get one\n except AttributeError as e:\n logger_inferred_data.warn(\"get_age: AttributeError: {}\".format(e))\n # If we were looking for values, and didn't get one\n except KeyError as e:\n logger_inferred_data.warn(\"get_age: KeyError: {}\".format(e))\n # Fail-safe for other problems\n except Exception as e:\n logger_inferred_data.warn(\"get_age: Exception: {}\".format(e))\n\n return age", "response": "Return the age of the candidate in the table column data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_resolution(age, values):\n res = []\n 
try:\n # Get the nan index from the values and remove from age\n age2 = age[np.where(~np.isnan(values))[0]]\n res = np.diff(age2)\n\n except IndexError as e:\n print(\"get_resolution: IndexError: {}\".format(e))\n except Exception as e:\n logger_inferred_data.warn(\"get_resolution: Exception: {}\".format(e))\n\n return res", "response": "Calculate the resolution: the spacing between the ages of consecutive non-NaN values."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef __get_inferred_data_res_2(v=None, calc=True):\n # Base: If something goes wrong, or if there are no values, then use \"NaN\" placeholders.\n d = {\n \"hasMinValue\": \"nan\", \"hasMaxValue\": \"nan\",\n \"hasMeanValue\": \"nan\", \"hasMedianValue\": \"nan\",\n }\n try:\n if calc:\n _min = np.nanmin(v)\n _max = np.nanmax(v)\n _mean = np.nanmean(v)\n _med = np.nanmedian(v)\n\n if np.isnan(_min):\n _min = \"nan\"\n else:\n _min = abs(_min)\n if np.isnan(_max):\n _max = \"nan\"\n else:\n _max = abs(_max)\n if np.isnan(_mean):\n _mean = \"nan\"\n else:\n _mean = abs(_mean)\n if np.isnan(_med):\n _med = \"nan\"\n else:\n _med = abs(_med)\n\n d = {\n \"hasMinValue\": _min,\n \"hasMaxValue\": _max,\n \"hasMeanValue\": _mean,\n \"hasMedianValue\": _med\n }\n except Exception as e:\n logger_inferred_data.error(\"get_inferred_data_res_2: {}\".format(e))\n\n return d", "response": "Calculate min/max/mean/median statistics for the array v, using \"nan\" placeholders when a statistic is unavailable."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_inferred_data_res(column, age):\n try:\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n # Get the values for this column\n values 
= column[\"values\"]\n # Make sure that age and values are numpy arrays\n _values = np.array(copy.copy(values), dtype=float)\n # _values = _values[np.where(~np.isnan(_values))[0]]\n _age = np.array(age, dtype=float)\n # _age = _age[np.where(~np.isnan(_age))[0]]\n\n # If we have values, keep going\n if len(_values) != 0:\n # Get the resolution for this age and column values data\n res = _get_resolution(_age, _values)\n # If we have successful resolution data, keep going\n if len(res) != 0:\n column[\"hasResolution\"] = __get_inferred_data_res_2(res)\n\n # Remove the NaNs from the values list.\n _values = _values[np.where(~np.isnan(_values))[0]]\n # Calculate column non-resolution data, update the column with the results.\n column.update(__get_inferred_data_res_2(_values))\n\n except KeyError as e:\n logger_inferred_data.debug(\"get_inferred_data_res: KeyError: {}\".format(e))\n except Exception as e:\n logger_inferred_data.debug(\"get_inferred_data_res: Exception: {}\".format(e))\n\n return column", "response": "Calculate resolution and min/max/mean/median for column values."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_inferred_data_column(column):\n try:\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n # Get the values for this column\n values = column[\"values\"]\n # Make sure that values are a numpy array\n _values = np.array(copy.copy(values), dtype=float)\n # If we have values, keep going\n if len(_values) != 0:\n # Remove the NaNs from the values list.\n _values = _values[np.where(~np.isnan(_values))[0]]\n # Use the values to create new entries and data\n column.update(__get_inferred_data_res_2(_values))\n\n # Even though we're not calculating resolution, still add it with \"NaN\" placeholders.\n column[\"hasResolution\"] = __get_inferred_data_res_2(None, calc=False)\n\n except KeyError as e:\n logger_inferred_data.debug(\"get_inferred_data_column: KeyError: 
{}\".format(e))\n except Exception as e:\n logger_inferred_data.debug(\"get_inferred_data_column: Exception: {}\".format(e))\n\n return column", "response": "Calculate the min/max/mean/median for the values of the column."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncalculate the inferred data for a specific variable in the metadata table.", "response": "def get_inferred_data_table(table, pc):\n \"\"\"\n Table level: Dive down, calculate data, then return the new table with the inferred data.\n\n :param str pc: paleo or chron\n :param dict table: Metadata\n :return dict table: Metadata\n \"\"\"\n age = None\n if pc == \"paleo\":\n # Get the age values data first, since it's needed to calculate the other column data.\n age = _get_age(table[\"columns\"])\n\n try:\n # If age values were not found, then skip resolution.\n if age:\n # Loop for all the columns in the table\n for var, col in table[\"columns\"].items():\n # Special cases\n # We do not calculate data for each of the keys below, and we cannot calculate any \"string\" data\n if \"age\" in var or \"year\" in var:\n # Calculate m/m/m/m, but not resolution\n table[\"columns\"][var] = _get_inferred_data_column(col)\n elif not all(isinstance(i, str) for i in col[\"values\"]):\n # Calculate m/m/m/m and resolution\n table[\"columns\"][var] = _get_inferred_data_res(col, age)\n else:\n # Fall through case. No calculations made.\n logger_inferred_data.info(\"get_inferred_data_table: \"\n \"Not calculating inferred data for variableName: {}\".format(var))\n\n # If there isn't an age, still calculate the m/m/m/m for the column values.\n else:\n for var, col in table[\"columns\"].items():\n if not all(isinstance(i, str) for i in col[\"values\"]):\n # Calculate m/m/m/m, but not resolution\n table[\"columns\"][var] = _get_inferred_data_column(col)\n else:\n # Fall through case. 
No calculations made.\n logger_inferred_data.info(\"get_inferred_data_table: \"\n \"Not calculating inferred data for variableName: {}\".format(var))\n\n except AttributeError as e:\n logger_inferred_data.warn(\"get_inferred_data_table: AttributeError: {}\".format(e))\n except Exception as e:\n logger_inferred_data.warn(\"get_inferred_data_table: Exception: {}\".format(e))\n\n table[\"columns\"] = _fix_numeric_types(table[\"columns\"])\n return table"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nregistering a new sphinx - needs function for the given sphinx environment.", "response": "def register_func(env, need_function):\n \"\"\"\n Registers a new sphinx-needs function for the given sphinx environment.\n :param env: Sphinx environment\n :param need_function: Python method\n :return: None\n \"\"\"\n\n if not hasattr(env, 'needs_functions'):\n env.needs_functions = {}\n\n func_name = need_function.__name__\n\n if func_name in env.needs_functions.keys():\n raise SphinxError('sphinx-needs: Function name {} already registered.'.format(func_name))\n\n env.needs_functions[func_name] = {\n 'name': func_name,\n 'function': need_function\n }"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nexecuting a given function string.", "response": "def execute_func(env, need, func_string):\n \"\"\"\n Executes a given function string.\n :param env: Sphinx environment\n :param need: Actual need, which contains the found function string\n :param func_string: string of the found function. Without [[ ]]\n :return: return value of executed function\n \"\"\"\n func_name, func_args, func_kwargs = _analyze_func_string(func_string)\n\n if func_name not in env.needs_functions.keys():\n raise SphinxError('Unknown dynamic sphinx-needs function: {}. 
Found in need: {}'.format(func_name, need['id']))\n\n func = env.needs_functions[func_name]['function']\n func_return = func(env, need, env.needs_all_needs, *func_args, **func_kwargs)\n\n if not isinstance(func_return, (str, int, float, list, unicode)) and func_return is not None:\n raise SphinxError('Return value of function {} is of type {}. Allowed are str, int, float'.format(\n func_name, type(func_return)))\n\n if isinstance(func_return, list):\n for element in func_return:\n if not isinstance(element, (str, int, float, unicode)):\n raise SphinxError('Element of return list of function {} is of type {}. '\n 'Allowed are str, int, float'.format(func_name, type(func_return)))\n return func_return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfinding a node and its children and replace it with the content of the node.", "response": "def find_and_replace_node_content(node, env, need):\n \"\"\"\n Search inside a given node and its children for nodes of type Text,\n if found check if it contains a function string and run/replace it.\n\n :param node: Node to analyse\n :return: None\n \"\"\"\n new_children = []\n if not node.children:\n if isinstance(node, nodes.Text):\n func_match = func_pattern.findall(node)\n new_text = node\n for func_string in func_match:\n if not is_python3:\n func_string = func_string.encode('utf-8')\n # sphinx is replacing ' and \" with language specific quotation marks (up and down), which makes\n # it impossible for the later used AST render engine to detect a python function call in the given\n # string. 
Therefore a replacement is needed for the execution of the found string.\n func_string_org = func_string[:]\n func_string = func_string.replace('\u201e', '\"')\n func_string = func_string.replace('\u201c', '\"')\n func_string = func_string.replace('\u201d', '\"')\n\n func_string = func_string.replace('\u2018', '\"')\n func_string = func_string.replace('\u2019', '\"')\n func_return = execute_func(env, need, func_string)\n if not is_python3:\n new_text = new_text.replace(u'[[{}]]'.format(func_string_org.decode('utf-8')), func_return)\n else:\n new_text = new_text.replace(u'[[{}]]'.format(func_string_org), func_return)\n node = nodes.Text(new_text, new_text)\n return node\n else:\n for child in node.children:\n new_child = find_and_replace_node_content(child, env, need)\n new_children.append(new_child)\n node.children = new_children\n return node"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nresolving dynamic values inside need data.", "response": "def resolve_dynamic_values(env):\n \"\"\"\n Resolve dynamic values inside need data.\n\n Rough workflow:\n\n #. Parse all needs and their data for a string like [[ my_func(a,b,c) ]]\n #. Extract function name and call parameters\n #. Execute registered function name with extracted call parameters\n #. 
Replace original string with return value\n\n :param env: Sphinx environment\n :return: return value of given function\n \"\"\"\n # Only perform calculation if not already done yet\n if env.needs_workflow['dynamic_values_resolved']:\n return\n\n needs = env.needs_all_needs\n for key, need in needs.items():\n for need_option in need:\n if need_option in ['docname', 'lineno', 'target_node', 'content']:\n # dynamic values in this data are not allowed.\n continue\n if not isinstance(need[need_option], (list, set)):\n func_call = True\n while func_call:\n try:\n func_call, func_return = _detect_and_execute(need[need_option], need, env)\n except FunctionParsingException:\n raise SphinxError(\"Function definition of {option} in file {file}:{line} has \"\n \"unsupported parameters. \"\n \"supported are str, int/float, list\".format(option=need_option,\n file=need['docname'],\n line=need['lineno']))\n\n if func_call is None:\n continue\n # Replace original function string with return value of function call\n if func_return is None:\n need[need_option] = need[need_option].replace('[[{}]]'.format(func_call), '')\n else:\n need[need_option] = need[need_option].replace('[[{}]]'.format(func_call), str(func_return))\n\n if need[need_option] == '':\n need[need_option] = None\n else:\n new_values = []\n for element in need[need_option]:\n try:\n func_call, func_return = _detect_and_execute(element, need, env)\n except FunctionParsingException:\n raise SphinxError(\"Function definition of {option} in file {file}:{line} has \"\n \"unsupported parameters. 
\"\n \"supported are str, int/float, list\".format(option=need_option,\n file=need['docname'],\n line=need['lineno']))\n if func_call is None:\n new_values.append(element)\n else:\n # Replace original function string with return value of function call\n if isinstance(need[need_option], (str, int, float)):\n new_values.append(element.replace('[[{}]]'.format(func_call), str(func_return)))\n else:\n if isinstance(need[need_option], (list, set)):\n new_values += func_return\n\n need[need_option] = new_values\n\n # Finally set a flag so that this function gets not executed several times\n env.needs_workflow['dynamic_values_resolved'] = True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _analyze_func_string(func_string):\n func = ast.parse(func_string)\n try:\n func_call = func.body[0].value\n func_name = func_call.func.id\n except AttributeError:\n raise SphinxError(\"Given dynamic function string is not a valid python call. 
Got: {}\".format(func_string))\n\n    func_args = []\n for arg in func_call.args:\n if isinstance(arg, ast.Num):\n func_args.append(arg.n)\n elif isinstance(arg, ast.Str):\n func_args.append(arg.s)\n elif isinstance(arg, ast.BoolOp):\n func_args.append(arg.s)\n elif isinstance(arg, ast.List):\n arg_list = []\n for element in arg.elts:\n if isinstance(element, ast.Num):\n arg_list.append(element.n)\n elif isinstance(element, ast.Str):\n arg_list.append(element.s)\n func_args.append(arg_list)\n else:\n raise FunctionParsingException()\n func_kargs = {}\n for keyword in func_call.keywords:\n kvalue = keyword.value\n kkey = keyword.arg\n if isinstance(kvalue, ast.Num):\n func_kargs[kkey] = kvalue.n\n elif isinstance(kvalue, ast.Str):\n func_kargs[kkey] = kvalue.s\n elif isinstance(kvalue, ast_boolean): # Check if Boolean\n if is_python3:\n func_kargs[kkey] = kvalue.value\n else:\n func_kargs[kkey] = kvalue.id\n elif isinstance(kvalue, ast.List):\n arg_list = []\n for element in kvalue.elts:\n if isinstance(element, ast.Num):\n arg_list.append(element.n)\n elif isinstance(element, ast.Str):\n arg_list.append(element.s)\n func_kargs[kkey] = arg_list\n else:\n raise FunctionParsingException()\n\n return func_name, func_args, func_kargs", "response": "Analyze the given function string and extract the function name, positional arguments, and keyword arguments."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef run():\n # GLOBALS\n global cwd, files, logger_start, logger_benchmark, settings, _timeseries_data\n _timeseries_data = {}\n # files = {\".lpd\": [ {\"full_path\", \"filename_ext\", \"filename_no_ext\", \"dir\"} ], \".xls\": [...], \".txt\": [...]}\n settings = {\"note_update\": True, \"note_validate\": True, \"verbose\": True}\n cwd = os.getcwd()\n # logger created in whatever directory lipd is called from\n logger_start = create_logger(\"start\")\n logger_benchmark = 
create_benchmark(\"benchmarks\", \"benchmark.log\")\n files = {\".txt\": [], \".lpd\": [], \".xls\": []}\n\n return", "response": "Initialize and start objects. This is called automatically when importing the package."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef readLipd(usr_path=\"\"):\n global cwd, settings, files\n if settings[\"verbose\"]:\n __disclaimer(opt=\"update\")\n start = clock()\n files[\".lpd\"] = []\n __read(usr_path, \".lpd\")\n _d = __read_lipd_contents()\n end = clock()\n logger_benchmark.info(log_benchmark(\"readLipd\", start, end))\n return _d", "response": "Read a LiPD file and return a dict of metadata."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef readExcel(usr_path=\"\"):\n global cwd, files\n start = clock()\n files[\".xls\"] = []\n __read(usr_path, \".xls\")\n end = clock()\n logger_benchmark.info(log_benchmark(\"readExcel\", start, end))\n return cwd", "response": "Read Excel files and return the current working directory."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef excel():\n global files, cwd, settings\n _d = {}\n # Turn off verbose. 
We don't want to clutter the console with extra reading/writing output statements\n settings[\"verbose\"] = False\n # Find excel files\n print(\"Found \" + str(len(files[\".xls\"])) + \" Excel files\")\n logger_start.info(\"found excel files: {}\".format(len(files[\".xls\"])))\n # Start the clock\n start = clock()\n # Loop for each excel file\n for file in files[\".xls\"]:\n # Convert excel file to LiPD\n dsn = excel_main(file)\n try:\n # Read the new LiPD file back in, to get fixes, inferred calculations, updates, etc.\n _d[dsn] = readLipd(os.path.join(file[\"dir\"], dsn + \".lpd\"))\n # Write the modified LiPD file back out again.\n writeLipd(_d[dsn], cwd)\n except Exception as e:\n logger_start.error(\"excel: Unable to read new LiPD file, {}\".format(e))\n print(\"Error: Unable to read new LiPD file: {}, {}\".format(dsn, e))\n # Time!\n end = clock()\n logger_benchmark.info(log_benchmark(\"excel\", start, end))\n # Start printing stuff again.\n settings[\"verbose\"] = True\n return _d", "response": "Convert Excel files to LiPD files."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef noaa(D=\"\", path=\"\", wds_url=\"\", lpd_url=\"\", version=\"\"):\n global files, cwd\n # When going from NOAA to LPD, use the global \"files\" variable.\n # When going from LPD to NOAA, use the data from the LiPD Library.\n\n # Choose the mode\n _mode = noaa_prompt()\n start = clock()\n # LiPD mode: Convert LiPD files to NOAA files\n if _mode == \"1\":\n # _project, _version = noaa_prompt_1()\n if not version or not lpd_url:\n print(\"Missing parameters: Please try again and provide all parameters.\")\n return\n if not D:\n print(\"Error: LiPD data must be provided for LiPD -> NOAA conversions\")\n else:\n if \"paleoData\" in D:\n _d = copy.deepcopy(D)\n D = lpd_to_noaa(_d, wds_url, lpd_url, version, path)\n else:\n # For each LiPD file in the LiPD Library\n for dsn, dat in D.items():\n _d = copy.deepcopy(dat)\n # Process this data 
through the converter\n _d = lpd_to_noaa(_d, wds_url, lpd_url, version, path)\n # Overwrite the data in the LiPD object with our new data.\n D[dsn] = _d\n # If no wds url is provided, then remove instances from jsonld metadata\n if not wds_url:\n D = rm_wds_url(D)\n # Write out the new LiPD files, since they now contain the new NOAA URL data\n if(path):\n writeLipd(D, path)\n else:\n print(\"Path not provided. Writing to CWD...\")\n writeLipd(D, cwd)\n\n # NOAA mode: Convert NOAA files to LiPD files\n elif _mode == \"2\":\n # Pass through the global files list. Use NOAA files directly on disk.\n noaa_to_lpd(files)\n\n else:\n print(\"Invalid input. Try again.\")\n end = clock()\n logger_benchmark.info(log_benchmark(\"noaa\", start, end))\n return", "response": "Convert between NOAA and LiPD files"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate(D, detailed=True):\n start = clock()\n print(\"\\n\")\n # Fetch new results by calling lipd.net/api/validator (costly, may take a while)\n print(\"Fetching results from validator at lipd.net/validator... 
this may take a few moments.\\n\")\n try:\n results = []\n # Get the validator-formatted data for each dataset.\n if \"paleoData\" in D:\n _api_data = get_validator_format(D)\n # A list of lists of LiPD-content metadata\n results.append(call_validator_api(D[\"dataSetName\"], _api_data))\n else:\n for dsn, dat in D.items():\n _api_data = get_validator_format(dat)\n # A list of lists of LiPD-content metadata\n results.append(call_validator_api(dsn, _api_data))\n display_results(results, detailed)\n except Exception as e:\n print(\"Error: validate: {}\".format(e))\n\n end = clock()\n logger_benchmark.info(log_benchmark(\"validate\", start, end))\n return", "response": "Validate a single or multiple LiPD files."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef tsToDf(tso):\n dfs = {}\n try:\n dfs = ts_to_df(tso)\n except Exception as e:\n print(\"Error: Unable to create data frame\")\n logger_start.warn(\"ts_to_df: tso malformed: {}\".format(e))\n return dfs", "response": "Create Pandas DataFrame from TimeSeries object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nextracts a single time series from a LiPD data file.", "response": "def extractTs(d, whichtables=\"meas\", mode=\"paleo\"):\n \"\"\"\n Create a time series using LiPD data (uses paleoData by default)\n\n | Example : (default) paleoData and meas tables\n | 1. D = lipd.readLipd()\n | 2. ts = lipd.extractTs(D)\n\n | Example : chronData and all tables\n | 1. D = lipd.readLipd()\n | 2. ts = lipd.extractTs(D, \"all\", \"chron\")\n\n :param dict d: Metadata\n :param str whichtables: \"all\", \"summ\", \"meas\", \"ens\" - The tables that you would like in the timeseries\n :param str mode: \"paleo\" or \"chron\" mode\n :return list l: Time series\n \"\"\"\n # instead of storing each raw dataset per tso, store it once in the global scope. 
saves memory\n global _timeseries_data\n _l = []\n start = clock()\n try:\n if not d:\n print(\"Error: LiPD data not provided. Pass LiPD data into the function.\")\n else:\n print(mode_ts(\"extract\", mode))\n if \"paleoData\" in d:\n # One dataset: Process directly on file, don't loop\n try:\n _dsn = get_dsn(d)\n _timeseries_data[start] = {}\n _timeseries_data[start][_dsn] = d\n # Use the LiPD data given to start time series extract\n print(\"extracting: {}\".format(_dsn))\n # Copy, so we don't affect the original data\n _v = copy.deepcopy(d)\n # Start extract...\n _l = (extract(_v, whichtables, mode, start))\n except Exception as e:\n print(\"Error: Unable to extractTs for dataset: {}: {}\".format(_dsn, e))\n logger_start.debug(\"extractTs: Exception: {}, {}\".format(_dsn, e))\n\n else:\n _timeseries_data[start] = d\n # Multiple datasets: Loop and append for each file\n for k, v in d.items():\n try:\n # Use the LiPD data given to start time series extract\n print(\"extracting: {}\".format(k))\n # Copy, so we don't affect the original data\n _v = copy.deepcopy(v)\n # Start extract...\n _l += (extract(_v, whichtables, mode, start))\n except Exception as e:\n print(\"Error: Unable to extractTs for dataset: {}: {}\".format(k, e))\n logger_start.debug(\"extractTs: Exception: {}\".format(e))\n print(\"Created time series: {} entries\".format(len(_l)))\n except Exception as e:\n print(\"Error: Unable to extractTs: {}\".format(e))\n logger_start.error(\"extractTs: Exception: {}\".format(e))\n end = clock()\n logger_benchmark.info(log_benchmark(\"extractTs\", start, end))\n return _l"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncollapsing a time series into LiPD record form.", "response": "def collapseTs(ts=None):\n \"\"\"\n Collapse a time series back into LiPD record form.\n\n | Example\n | 1. D = lipd.readLipd()\n | 2. ts = lipd.extractTs(D)\n | 3. 
New_D = lipd.collapseTs(ts)\n\n _timeseries_data is sorted by time_id, and then by dataSetName\n _timeseries_data[10103341][\"ODP1098B\"] = {data}\n\n :param list ts: Time series\n :return dict: Metadata\n \"\"\"\n # Retrieve the associated raw data according to the \"time_id\" found in each object. Match it in _timeseries_data\n global _timeseries_data\n _d = {}\n if not ts:\n print(\"Error: Time series data not provided. Pass time series into the function.\")\n else:\n # Send time series list through to be collapsed.\n try:\n _raw = _timeseries_data[ts[0][\"time_id\"]]\n print(mode_ts(\"collapse\", mode=\"\", ts=ts))\n _d = collapse(ts, _raw)\n _d = rm_empty_fields(_d)\n except Exception as e:\n print(\"Error: Unable to collapse the time series: {}\".format(e))\n logger_start.error(\"collapseTs: unable to collapse the time series: {}\".format(e))\n return _d"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfilters the given time series by the given list of expressions.", "response": "def filterTs(ts, expressions):\n \"\"\"\n Create a new time series that only contains entries that match the given expression.\n\n | Example:\n | D = lipd.loadLipd()\n | ts = lipd.extractTs(D)\n | new_ts = filterTs(ts, \"archiveType == marine sediment\")\n | new_ts = filterTs(ts, [\"paleoData_variableName == sst\", \"archiveType == marine sediment\"])\n | Expressions should use underscores to denote data nesting.\n | Ex: paleoData_hasResolution_hasMedian or\n\n :param list OR str expressions: Expressions\n :param list ts: Time series\n :return list new_ts: Filtered time series that matches the expression\n \"\"\"\n # Make a copy of the ts. 
We're going to work directly on it.\n new_ts = ts[:]\n\n # User provided a single query string\n if isinstance(expressions, str):\n # Use some magic to turn the given string expression into a machine-usable comparative expression.\n expr_lst = translate_expression(expressions)\n # Only proceed if the translation resulted in a usable expression.\n if expr_lst:\n # Return the new filtered time series. This will use the same time series\n # that filters down each loop.\n new_ts, _idx = get_matches(expr_lst, new_ts)\n\n # User provided a list of multiple queries\n elif isinstance(expressions, list):\n # Loop for each query\n for expr in expressions:\n # Use some magic to turn the given string expression into a machine-usable comparative expression.\n expr_lst = translate_expression(expr)\n # Only proceed if the translation resulted in a usable expression.\n if expr_lst:\n # Return the new filtered time series. This will use the same time series\n # that filters down each loop.\n new_ts, _idx = get_matches(expr_lst, new_ts)\n\n return new_ts"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef queryTs(ts, expressions):\n # Make a copy of the ts. We're going to work directly on it.\n _idx = []\n new_ts = ts[:]\n\n # User provided a single query string\n if isinstance(expressions, str):\n # Use some magic to turn the given string expression into a machine-usable comparative expression.\n expr_lst = translate_expression(expressions)\n # Only proceed if the translation resulted in a usable expression.\n if expr_lst:\n # Return the new filtered time series. 
This will use the same time series\n # that filters down each loop.\n new_ts, _idx = get_matches(expr_lst, new_ts)\n\n # User provided a list of multiple queries\n elif isinstance(expressions, list):\n # Loop for each query\n for expr in expressions:\n # Use some magic to turn the given string expression into a machine-usable comparative expression.\n expr_lst = translate_expression(expr)\n # Only proceed if the translation resulted in a usable expression.\n if expr_lst:\n # Return the new filtered time series. This will use the same time series\n # that filters down each loop.\n new_ts, _idx = get_matches(expr_lst, new_ts)\n return _idx", "response": "Query the time series for the given expression."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nviewing the contents of one time series entry in a nicely formatted way", "response": "def viewTs(ts):\n \"\"\"\n View the contents of one time series entry in a nicely formatted way\n\n | Example\n | 1. D = lipd.readLipd()\n | 2. ts = lipd.extractTs(D)\n | 3. viewTs(ts[0])\n\n :param dict ts: One time series entry\n :return none:\n \"\"\"\n _ts = ts\n if isinstance(ts, list):\n _ts = ts[0]\n print(\"It looks like you input a full time series. 
It's best to view one entry at a time.\\n\"\n \"I'll show you the first entry...\")\n _tmp_sort = OrderedDict()\n _tmp_sort[\"ROOT\"] = {}\n _tmp_sort[\"PUBLICATION\"] = {}\n _tmp_sort[\"GEO\"] = {}\n _tmp_sort[\"OTHERS\"] = {}\n _tmp_sort[\"DATA\"] = {}\n\n # Organize the data by section\n for k,v in _ts.items():\n if not any(i == k for i in [\"paleoData\", \"chronData\", \"mode\", \"@context\"]):\n if k in [\"archiveType\", \"dataSetName\", \"googleSpreadSheetKey\", \"metadataMD5\", \"tagMD5\", \"googleMetadataWorksheet\", \"lipdVersion\"]:\n _tmp_sort[\"ROOT\"][k] = v\n elif \"pub\" in k:\n _tmp_sort[\"PUBLICATION\"][k] = v\n elif \"geo\" in k:\n _tmp_sort[\"GEO\"][k] = v\n elif \"paleoData_\" in k or \"chronData_\" in k:\n if isinstance(v, list) and len(v) > 2:\n _tmp_sort[\"DATA\"][k] = \"[{}, {}, {}, ...]\".format(v[0], v[1], v[2])\n else:\n _tmp_sort[\"DATA\"][k] = v\n else:\n if isinstance(v, list) and len(v) > 2:\n _tmp_sort[\"OTHERS\"][k] = \"[{}, {}, {}, ...]\".format(v[0], v[1], v[2])\n else:\n _tmp_sort[\"OTHERS\"][k] = v\n\n # Start printing the data to console\n for k1, v1 in _tmp_sort.items():\n print(\"\\n{}\\n===============\".format(k1))\n for k2, v2 in v1.items():\n print(\"{} : {}\".format(k2, v2))\n\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef showLipds(D=None):\n\n if not D:\n print(\"Error: LiPD data not provided. 
Pass LiPD data into the function.\")\n else:\n print(json.dumps(list(D.keys()), indent=2))\n\n return", "response": "Display the dataset names of the given LiPD data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndisplay the metadata specified LiPD in pretty print", "response": "def showMetadata(dat):\n \"\"\"\n Display the specified LiPD metadata in pretty print\n\n | Example\n | showMetadata(D[\"Africa-ColdAirCave.Sundqvist.2013\"])\n\n :param dict dat: Metadata\n :return none:\n \"\"\"\n _tmp = rm_values_fields(copy.deepcopy(dat))\n print(json.dumps(_tmp, indent=2))\n return"}