and a bottom wall forming an acoustic chamber. A flexible adhesive-backed flange is disposed on the periphery of the ear coupler. The flange attaches to the subject's head, firmly holding the ear coupler in place over the ear. The annular side wall has a port for the placement of a transducer assembly, and also has ribs to help lock the transducer assembly in place. The transducer assembly can be placed in an up or down position, and can be switched between positions while the coupler is attached to the subject's head. The ear coupler advantageously conforms to the subject's head, thereby minimizing the likelihood that the ear coupler will become detached during testing. The coupler can be inexpensively manufactured, since its one-piece design allows the use of relatively low-cost processes such as injection molding and thermoforming.''' expected = ['ear', 'coupler', 'hear', 'evaluat', 'transducer', 'head', 'acoustic chamber', 'flange', 'transducer assembly', 'ear coupler'] idx = self.find_idx(text) actual = self.tfidf.detect_popular_ngrams_in_docs_set(docs_set=[idx]) self.assertGreaterOrEqualDiceScore(expected, actual) # Code fails def test_patent_US09039285_20150526(self): text = '''A synthetic resin-made thrust sliding bearing 1 includes a synthetic resin-made upper casing 2 which is fixed to a vehicle body side via a mounting member; a synthetic resin-made lower casing 3 which is superposed on the upper casing 2 so as to be rotatable about an axis O in a circumferential direction R relative to the upper casing 2 ; and a synthetic resin-made sliding bearing piece 5 disposed in a space 4 between the upper casing 2 and the lower casing 3.''' expected = ['synthetic', 'resin', 'thrust', 'slid', 'bearing', 'vehicle body', 'mounting member', 'circumferential direction'] idx = self.find_idx(text) actual = self.tfidf.detect_popular_ngrams_in_docs_set(docs_set=[idx]) self.assertGreaterOrEqualDiceScore(expected, actual) # FP: {'slide', '3', 'direct', 'lower', 'upper', 'dispo', '2', 'space', 'case', 'circumferenti'} (too general, positional terms) def test_patent_US07603640_20091013(self): text = '''To generate a floorplan for an integrated circuit to be formed by a collection of modules interconnected by nets, the floorspace to be occupied by the integrated circuit is partitioned into regions and all of the modules are allocated among those regions. The regions are then iteratively partitioning into smaller progressively smaller regions with modules previously allocated any partitioned region allocated among the regions into which it was partitioned, until each region of the floorplan has been allocated no more than a predetermined maximum number of modules. A separate floorplan is then generated for each region. 
Neighboring regions are then iteratively merged to create progressively larger regions, until only a single region remains, wherein upon merging any neighboring regions to form a larger merged region, the floorplans of the neighboring regions are merged and refined to create a floorplan for the merged region.''' expected = ['circuit', 'floorplan', 'module', 'integrated circuit'] idx = self.find_idx(text) actual = self.tfidf.detect_popular_ngrams_in_docs_set(docs_set=[idx]) self.assertGreaterOrEqualDiceScore(expected, actual) # FP: {'iter', 'larger', 'neighbor', 'region', 'merg', 'alloc', 'progress', 'partit'} (too general, positional terms) def test_patent_US08389238_20130305(self): text = '''Efficient and prolonged hCFTR expression is one of the major obstacles for cystic fibrosis lung therapy. hCFTR mRNA expression levels depend on eukaryotic expression cassette components, prokaryotic backbone elements, and the gene transfer method may also influence transcriptional silencing mechanisms. A codon-optimized and CpG-reduced human CFTR gene (CO-CFTR) was made. Various vector modifications were tested to facilitate extended duration of CO-CFTR expression. Insertion of an extended 3′BGH transcribed sequence (712 bp) in an inverted orientation produced prolonged expression of CO-CFTR expression at biologically relevant levels. Further studies revealed that prolonged CO-CFTR expression is dependant on the orientation of the extended BGH 3′ BGH transcribed sequence and its transcription, is not specific to the UbC promoter, and is less dependent on other vector backbone elements.''' expected = ['hCFTR', 'expression', 'cystic fibrosis', 'lung', 'therapy', 'lung therapy', 'mRNA', 'eukaryotic', 'cassette', 'prokaryotic', 'backbone', 'gene', 'transfer', 'transcript', 'codon', 'CpG', 'CTFR', 'CO-CTFR', 'vector', "BGH", 'UbC', 'promoter', 'sequence', 'modification'] idx = self.find_idx(text) actual = self.tfidf.detect_popular_ngrams_in_docs_set(docs_set=[idx]) self.assertGreaterOrEqualDiceScore(expected, actual) # FN: {'therapi', 'lung', 'modif', 'ubc', 'backbon', 'promot', '3', 'cassett', 'prokaryot', 'bp', '712', 'fibrosi', 'cpg', 'bgh', 'codon', 'transcript', 'transfer', 'cystic', 'mrna', 'eukaryot', 'co-ctfr', 'hcftr', 'ctfr', "'"} # In this case, incorporating a lot of biotech technical words, I selected many such words not picked up by the computer def test_patent_US07377274_20080527(self): text = '''An automatic, rapid-firing toy gun is powered by a fast moving air stream. The toy gun is simple in design and does not require a lot of effort and time to fire the projectiles or to load the projectiles between firing. The toy gun includes a barrel, a fan, a loading chamber, and a trigger. The barrel has a forward end, a rear end, and an inner passage between the two ends. The fan is arranged with respect to the barrel to direct an air stream through the inner passage from the rear end to the forward end. The loading chamber is mounted on the barrel and has an opening directed into the inner passage. The loading chamber is sized and shaped to hold a plurality of projectiles and the opening is sized and shaped to sequentially release the plurality of projectiles into the inner passage of the barrel one at a time. The trigger is electrically connected to the fan. Pulling the trigger causes the fan to drive a large volume of air through the gun barrel and the air stream to accelerate as it travels through a narrow passage of the gun barrel. 
Projectiles sequentially fall into the air stream one at a time and are quickly released from the gun as the air stream accelerates through the gun barrel and exits the gun barrel.''' expected = ['automatic', 'toy', 'gun', 'air', 'air stream', 'fan', 'rapid firing', 'toy gun', 'projectiles', 'barrel'] idx = self.find_idx(text) actual = self.tfidf.detect_popular_ngrams_in_docs_set(docs_set=[idx]) self.assertGreaterOrEqualDiceScore(expected, actual) # The computer picks up terms such as barrel that are already common to guns, and therefore not required def test_patent_US08744823_20140603(self): text = '''A method for creating, from a first axisymmetrical surface, a second surface belonging to a sub-system of a complex system, in which the second surface observes at least one constraint, is disclosed. The method includes: modeling the first axisymmetrical surface, while observing the constraints with at least one parameter, the modeling step including a sub-step for discretizing the first axisymmetrical surface in several points, the parameter being a coordinate of one of these points in a reference system associated with at least one portion of this sub-system, and a sub-step for reconstructing the first axisymmetrical surface from the at least one point and from the at least one constraint; modifying the at least one parameter in the reference system for modeling the second surface; and recording the second surface in a memory of the computer.''' expected = ['computer', 'memory', 'axis', 'surface', 'reconstruct', 'axisymmetrical surface', 'coordinate'] idx = self.find_idx(text) actual = self.tfidf.detect_popular_ngrams_in_docs_set(docs_set=[idx]) self.assertGreaterOrEqualDiceScore(expected, actual) # I think this is about computer disk drive platters, but described using other terms def test_patent_US07444853_20081104(self): text = '''An impulse event separating method, and an apparatus to perform the method, the method including dividing an input signal into frame units and dividing each frame into a plurality of frequency sub-bands; obtaining a power variation and phase variation of the signal of each of the frequency sub-bands, and detecting a plurality of local onsets using the power variation and the phase variation; obtaining a global onset from the local onsets and triggering a plurality of event components using the local onsets and the global onset; tracking and combining the event components in each of the frequency sub-bands to form events; and determining whether the events comprise an impulse event with reference to an impulse event property.''' expected = ['impulse', 'event', 'separat', 'input', 'signal', 'input
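Each test above ends with assertGreaterOrEqualDiceScore(expected, actual), i.e. it requires a minimum Dice overlap between the hand-picked expected terms and the n-grams the TF-IDF detector returns. A minimal sketch of such a helper is shown below; it assumes plain case-insensitive matching and an illustrative threshold argument, whereas the real assertion in this test class may compare stemmed terms and use its own cut-off.

# Sketch only: a Dice-coefficient check between expected and detected terms.
# Assumes exact case-insensitive matching; the real assertion may stem terms first.
def dice_score(expected, actual):
    expected_set = {term.lower() for term in expected}
    actual_set = {term.lower() for term in actual}
    if not expected_set or not actual_set:
        return 0.0
    overlap = len(expected_set & actual_set)
    return 2.0 * overlap / (len(expected_set) + len(actual_set))

def assertGreaterOrEqualDiceScore(self, expected, actual, threshold=0.5):
    # Intended to live on the TestCase subclass; threshold is illustrative,
    # the real test class defines its own minimum score.
    self.assertGreaterEqual(dice_score(expected, actual), threshold)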
import csv from pathlib import Path from collections import Counter from gender_novels import common from gender_novels.novel import Novel class Corpus(common.FileLoaderMixin): """The corpus class is used to load the metadata and full texts of all novels in a corpus Once loaded, each corpus contains a list of Novel objects >>> from gender_novels.corpus import Corpus >>> c = Corpus('sample_novels') >>> type(c.novels), len(c) (<class 'list'>, 99) >>> c.novels[0].author '<NAME>' You can use 'test_corpus' to load a test corpus of 10 novels: >>> test_corpus = Corpus('test_corpus') >>> len(test_corpus) 10 """ def __init__(self, corpus_name=None): self.corpus_name = corpus_name if self.corpus_name == 'gutenberg': self._download_gutenberg_if_not_locally_available() self.load_test_corpus = False if self.corpus_name == 'test_corpus': self.load_test_corpus = True self.corpus_name = 'sample_novels' self.novels = [] if corpus_name is not None: self.relative_corpus_path = Path('corpora', self.corpus_name) self.novels = self._load_novels() def _download_gutenberg_if_not_locally_available(self): """ Checks if the gutenberg corpus is available locally. If not, downloads the corpus and extracts it to corpora/gutenberg No tests because the function depends on user input :return: """ import os gutenberg_path = Path(common.BASE_PATH, 'corpora', 'gutenberg') # if there are more than 4000 text files available, we know that the corpus was downloaded # and can return try: no_gutenberg_novels= len(os.listdir(Path(gutenberg_path, 'texts'))) if no_gutenberg_novels > 4000: gutenberg_available = True else: gutenberg_available = False # if the texts path was not found, we know that we need to download the corpus except FileNotFoundError: gutenberg_available = False if not gutenberg_available: print("The Project Gutenberg corpus is currently not available on your system.", "It consists of more than 4000 novels and 1.8 GB of data.") download_prompt = input( "If you want to download the corpus, please enter (y). Any other input will " "terminate the program: ") if not download_prompt in ['y', '(y)']: raise ValueError("Project Gutenberg corpus will not be downloaded.") import zipfile import urllib url = 'https://s3-us-west-2.amazonaws.com/gutenberg-cache/gutenberg_corpus.zip' urllib.request.urlretrieve(url, 'gutenberg_corpus.zip') zipf = zipfile.ZipFile('gutenberg_corpus.zip') if not os.path.isdir(gutenberg_path): os.mkdir(gutenberg_path) zipf.extractall(gutenberg_path) os.remove('gutenberg_corpus.zip') # check that we now have 4000 novels available try: no_gutenberg_novels = len(os.listdir(Path(gutenberg_path, 'texts'))) print(f'Successfully downloaded {no_gutenberg_novels} novels.') except FileNotFoundError: raise FileNotFoundError("Something went wrong while downloading the gutenberg" "corpus.") def __len__(self): """ For convenience: returns the number of novels in the corpus. >>> from gender_novels.corpus import Corpus >>> c = Corpus('sample_novels') >>> len(c) 99 >>> female_corpus = c.filter_by_gender('female') >>> len(female_corpus) 39 :return: int """ return len(self.novels) def __iter__(self): """ Yield each of the novels from the .novels list. For convenience. >>> from gender_novels.corpus import Corpus >>> c = Corpus('sample_novels') >>> titles = [] >>> for this_novel in c: ... titles.append(this_novel.title) >>> titles #doctest: +ELLIPSIS ['<NAME>', 'Flatland', ... 
'The Heir of Redclyffe'] """ for this_novel in self.novels: yield this_novel def __eq__(self, other): """ Returns true if both corpora contain the same novels Note: ignores differences in the corpus name as that attribute is not used apart from initializing a corpus. Presumes the novels to be sorted. (They get sorted by the initializer) >>> from gender_novels.corpus import Corpus >>> sample_corpus = Corpus('sample_novels') >>> sample_corpus.novels = sample_corpus.novels[:20] >>> male_corpus = sample_corpus.filter_by_gender('male') >>> female_corpus = sample_corpus.filter_by_gender('female') >>> merged_corpus = male_corpus + female_corpus >>> merged_corpus == sample_corpus True >>> sample_corpus == merged_corpus + male_corpus False :return: bool """ if not isinstance(other, Corpus): raise NotImplementedError("Only a Corpus can be added to another Corpus.") if len(self) != len(other): return False for i in range(len(self)): if self.novels[i] != other.novels[i]: return False return True def __add__(self, other): """ Adds two corpora together and returns a copy of the result Note: retains the name of the first corpus >>> from gender_novels.corpus import Corpus >>> sample_corpus = Corpus('sample_novels') >>> sample_corpus.novels = sample_corpus.novels[:20] >>> male_corpus = sample_corpus.filter_by_gender('male') >>> female_corpus = sample_corpus.filter_by_gender('female') >>> merged_corpus = male_corpus + female_corpus >>> merged_corpus == sample_corpus True :return: Corpus """ if not isinstance(other, Corpus): raise NotImplementedError("Only a Corpus can be added to another Corpus.") output_corpus = self.clone() for novel in other: output_corpus.novels.append(novel) output_corpus.novels = sorted(output_corpus.novels) return output_corpus def clone(self): """ Return a copy of this Corpus >>> from gender_novels.corpus import Corpus >>> sample_corpus = Corpus('sample_novels') >>> corpus_copy = sample_corpus.clone() >>> len(corpus_copy) == len(sample_corpus) True :return: Corpus """ corpus_copy = Corpus() corpus_copy.corpus_name = self.corpus_name corpus_copy.novels = self.novels[:] return corpus_copy def _load_novels(self): novels = [] relative_csv_path = (self.relative_corpus_path / f'{self.corpus_name}.csv') try: csv_file = self.load_file(relative_csv_path) except FileNotFoundError: err = "Could not find the metadata csv file for the " err += "'{self.corpus_name}' corpus in the expected location " err += f"({relative_csv_path})." raise FileNotFoundError(err) csv_reader = csv.DictReader(csv_file) for novel_metadata in csv_reader: novel_metadata['corpus_name'] = self.corpus_name this_novel = Novel(novel_metadata_dict=novel_metadata) novels.append(this_novel) if self.load_test_corpus and len(novels) == 10: break return sorted(novels) def count_authors_by_gender(self, gender): """ This function returns the number of authors with the specified gender (male, female, non-binary, unknown) >>> from gender_novels.corpus import Corpus >>> c = Corpus('sample_novels') >>> c.count_authors_by_gender('female') 39 Accepted inputs are 'male', 'female', 'non-binary' and 'unknown' but no abbreviations. >>> c.count_authors_by_gender('m') Traceback (most recent call last): ValueError: Gender must be male, female, non-binary, unknown but not m. :rtype: int """ filtered_corpus = self.filter_by_gender(gender) return len(filtered_corpus) def filter_by_gender(self, gender): """ Return a new Corpus object that contains only authors whose gender matches gender_filter. 
Accepted inputs are 'male', 'female', 'non-binary' and 'unknown' but no abbreviations. >>> from gender_novels.corpus import Corpus >>> c = Corpus('sample_novels') >>> female_corpus = c.filter_by_gender('female') >>> len(female_corpus) 39 >>> female_corpus.novels[0].title 'The Indiscreet Letter' >>> male_corpus = c.filter_by_gender('male') >>> len(male_corpus) 59 >>> male_corpus.novels[0].title '<NAME>' :param gender: gender name :return: Corpus """ supported_genders = ('male', 'female', 'non-binary', 'unknown') if gender not in supported_genders: raise ValueError( f'Gender must be {", ".join(supported_genders)} ' + f'but not {gender}.') corpus_copy = self.clone() corpus_copy.novels = [] for this_novel in self.novels: # check if all novels have an author_gender attribute if not hasattr(this_novel, 'author_gender'): err = f'Cannot count author genders in {self.corpus_name} ' err += 'corpus. The novel ' err += f'{this_novel.title} by {this_novel.author} lacks ' err += 'the attribute "author_gender."' raise AttributeError(err) if this_novel.author_gender == gender: corpus_copy.novels.append(this_novel) return corpus_copy def get_wordcount_counter(self): """ This function returns a Counter telling how many times a word appears in an entire corpus >>> from gender_novels.corpus import Corpus >>> c = Corpus('sample_novels') >>> c.get_wordcount_counter()['fire'] 2269 """ corpus_counter = Counter() for current_novel in self.novels: novel_counter = current_novel.get_wordcount_counter() corpus_counter += novel_counter return corpus_counter def get_corpus_metadata(self): """ This function returns a sorted list of all metadata fields in the corpus as strings. This is different from the get_metadata_fields; this returns the fields which are specific to the corpus it is being called on. >>> from gender_novels.corpus import Corpus >>> c = Corpus('sample_novels') >>> c.get_corpus_metadata() ['author', 'author_gender', 'corpus_name', 'country_publication', 'date', 'filename', 'notes', 'title'] :return: list """ metadata_fields = set() for novel in self.novels: for field in getmembers(novel): metadata_fields.add(field) return sorted(list(metadata_fields)) def get_field_vals(self,field): """ This function returns a sorted list of all values for a particular metadata field as strings. >>> from gender_novels.corpus import Corpus >>> c = Corpus('sample_novels') >>> c.get_field_vals('corpus_name') ['sample_novels'] :param field: str :return: list """ metadata_fields = self.get_corpus_metadata() if field not in metadata_fields: raise ValueError( f'\'{field}\' is not a valid metadata field for this corpus' ) values = set() for novel in self.novels: values.add(getattr(novel,field)) return sorted(list(values)) def subcorpus(self,metadata_field,field_value): """ This method takes a metadata field and value of that field and returns a new Corpus object which includes the subset of movels in the original Corpus that have the specified value for the specified field. :param metadata_field: str :param field_value: str :return: Corpus """ pass def multi_filter(self,characteristic_dict): """ This method takes a dictionary of metadata fields and corresponding values and returns a Corpus object which is the subcorpus of the input corpus which satisfies all the specified constraints. 
#>>> from gender_novels.corpus import Corpus #>>> c = Corpus('sample_novels') #>>> characteristics = {'author_gender':'female', 'country_publication':'England'} #>>> subcorpus_multi_filtered = c.multi_filter(characteristics) #>>> female_subcorpus = c.filter_by_gender('female') #>>> subcorpus_repeated_method = female_subcorpus.subcorpus('country_publication','England') #>>> subcorpus_multi_filtered == subcorpus_repeated_method True :param characteristic_dict: dict :return: Corpus """ new_corp = self.clone() metadata_fields = self.get_corpus_metadata() for field in characteristic_dict: if field not in metadata_fields: raise ValueError( f'\'{field}\' is not a valid metadata field for this corpus' ) new_corp = new_corp.subcorpus(field,characteristic_dict[field]) return new_corp #TODO:
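The subcorpus method above is currently a stub (its body is pass). A minimal sketch of what it could look like, modeled on the filter_by_gender pattern already used in this class, follows; this is an illustration rather than the project's actual implementation.

def subcorpus(self, metadata_field, field_value):
    # Sketch only, modeled on filter_by_gender: clone the corpus and keep
    # just the novels whose metadata_field equals field_value.
    corpus_copy = self.clone()
    corpus_copy.novels = []
    for this_novel in self.novels:
        if getattr(this_novel, metadata_field, None) == field_value:
            corpus_copy.novels.append(this_novel)
    return corpus_copy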
first :math:`K_2` will be used. If any columns of ``sigma`` are fixed at zero, only the first few columns of these nodes will be used. The convenience function :func:`build_integration` can be useful when constructing custom nodes and weights. .. note:: If ``nodes`` has multiple columns, it can be specified as a matrix or broken up into multiple one-dimensional fields with column index suffixes that start at zero. For example, if there are three columns of nodes, a ``nodes`` field with three columns can be replaced by three one-dimensional fields: ``nodes0``, ``nodes1``, and ``nodes2``. It may be convenient to define IDs for different agents: - **agent_ids** (`object, optional`) - IDs that identify individual agents within markets. There can be multiple of the same ID within a market. Along with ``market_ids`` and ``agent_ids``, the names of any additional fields can typically be used as variables in ``agent_formulation``. The exception is the name ``'demographics'``, which is reserved for use by :class:`Agents`. In addition to standard demographic variables :math:`d_{it}`, it is also possible to specify product-specific demographics :math:`d_{ijt}`. A typical example is geographic distance of agent :math:`i` from product :math:`j`. If ``agent_formulation`` has, for example, ``'distance'``, instead of including a single ``'distance'`` field in ``agent_data``, one should instead include ``'distance0'``, ``'distance1'``, ``'distance2'`` and so on, where the index corresponds to the order in which products appear within market in ``product_data``. For example, ``'distance5'`` should measure the distance of agents to the fifth product within the market, as ordered in ``product_data``. The last index should be the number of products in the largest market, minus one. For markets with fewer products than this maximum number, latter columns will be ignored. integration : `Integration, optional` :class:`Integration` configuration for how to build nodes and weights for integration over agent choice probabilities, which will replace any ``nodes`` and ``weights`` fields in ``agent_data``. This configuration is required if ``nodes`` and ``weights`` in ``agent_data`` are not specified. It should not be specified if :math:`X_2` is not formulated in ``product_formulations``. If this configuration is specified, :math:`K_2` columns of nodes (the number of demand-side nonlinear product characteristics) will be built. However, if ``sigma`` is left unspecified or is specified with columns fixed at zero, fewer columns will be used. xi : `array-like, optional` Unobserved demand-side product characteristics, :math:`\xi`. By default, if :math:`X_3` is formulated, each pair of unobserved characteristics in this vector and :math:`\omega` is drawn from a mean-zero bivariate normal distribution. This must be specified if :math:`X_3` is not formulated or if ``omega`` is specified. omega : `array-like, optional` Unobserved supply-side product characteristics, :math:`\omega`. By default, if :math:`X_3` is formulated, each pair of unobserved characteristics in this vector and :math:`\xi` is drawn from a mean-zero bivariate normal distribution. This must be specified if :math:`X_3` is formulated and ``xi`` is specified. It is ignored if :math:`X_3` is not formulated. xi_variance : `float, optional` Variance of :math:`\xi`. The default value is ``1.0``. This is ignored if ``xi`` or ``omega`` is specified. omega_variance : `float, optional` Variance of :math:`\omega`. The default value is ``1.0``. 
This is ignored if ``xi`` or ``omega`` is specified. correlation : `float, optional` Correlation between :math:`\xi` and :math:`\omega`. The default value is ``0.9``. This is ignored if ``xi`` or ``omega`` is specified. rc_types : `sequence of str, optional` Random coefficient types: - ``'linear'`` (default) - The random coefficient is as defined in :eq:`mu`. - ``'log'`` - The random coefficient's column in :eq:`mu` is exponentiated before being pre-multiplied by :math:`X_2`. The list should have as many strings as there are columns in :math:`X_2`. Each string determines the type of the random coefficient on the corresponding product characteristic in :math:`X_2`. A typical example of when to use ``'log'`` is to have a lognormal coefficient on prices. Implementing this typically involves having an ``I(-prices)`` in the formulation for :math:`X_2`, and instead of including ``prices`` in :math:`X_1`, including a ``1`` in the ``agent_formulation``. Then the corresponding coefficient in :math:`\Pi` will serve as the mean parameter for the lognormal random coefficient on negative prices, :math:`-p_{jt}`. epsilon_scale : `float, optional` Factor by which the Type I Extreme Value idiosyncratic preference term, :math:`\epsilon_{ijt}`, is scaled. By default, :math:`\epsilon_{ijt}` is not scaled. The typical use of this parameter is to approximate the pure characteristics model of :ref:`references:Berry and Pakes (2007)` by choosing a value smaller than ``1.0``. As this scaling factor approaches zero, the model approaches the pure characteristics model in which there is no idiosyncratic preference term. For more information about choosing this parameter and estimating models where it is smaller than ``1.0``, refer to the same argument in :meth:`Problem.solve`. In some situations, it may be easier to solve simulations with small epsilon scaling factors by using :meth:`Simulation.replace_exogenous` rather than :meth:`Simulation.replace_endogenous`. costs_type : `str, optional` Specification of the marginal cost function :math:`\tilde{c} = f(c)` in :eq:`costs`. The following specifications are supported: - ``'linear'`` (default) - Linear specification: :math:`\tilde{c} = c`. - ``'log'`` - Log-linear specification: :math:`\tilde{c} = \log c`. seed : `int, optional` Passed to :class:`numpy.random.RandomState` to seed the random number generator before data are simulated. By default, a seed is not passed to the random number generator. Attributes ---------- product_formulations : `tuple` :class:`Formulation` configurations for :math:`X_1`, :math:`X_2`, and :math:`X_3`, respectively. agent_formulation : `tuple` :class:`Formulation` configuration for :math:`d`. product_data : `recarray` Synthetic product data that were loaded or simulated during initialization. Typically, :meth:`Simulation.replace_endogenous` is used replace prices and shares with equilibrium values that are consistent with true parameters. The :func:`data_to_dict` function can be used to convert this into a more usable data type. agent_data : `recarray` Synthetic agent data that were loaded or simulated during initialization. The :func:`data_to_dict` function can be used to convert this into a more usable data type. integration : `Integration` :class:`Integration` configuration for how any nodes and weights were built during initialization. 
products : `Products` Product data structured as :class:`Products`, which consists of data taken from :attr:`Simulation.product_data` along with matrices built according to :attr:`Simulation.product_formulations`. The :func:`data_to_dict` function can be used to convert this into a more usable data type. agents : `Agents` Agent data structured as :class:`Agents`, which consists of data taken from :attr:`Simulation.agent_data` or built by :attr:`Simulation.integration` along with any demographics formulated by :attr:`Simulation.agent_formulation`. The :func:`data_to_dict` function can be used to convert this into a more usable data type. unique_market_ids : `ndarray` Unique market IDs in product and agent data. unique_firm_ids : `ndarray` Unique firm IDs in product data. unique_nesting_ids : `ndarray` Unique nesting IDs in product data. unique_product_ids : `ndarray` Unique product IDs in product data. unique_agent_ids : `ndarray` Unique agent IDs in agent data. beta : `ndarray` Demand-side linear parameters, :math:`\beta`. sigma : `ndarray` Cholesky root of the covariance matrix for unobserved taste heterogeneity, :math:`\Sigma`. gamma : `ndarray` Supply-side linear parameters, :math:`\gamma`. pi : `ndarray` Parameters that measure how agent tastes vary with demographics, :math:`\Pi`. rho : `ndarray` Parameters that measure within nesting group correlation, :math:`\rho`. xi : `ndarray` Unobserved demand-side product characteristics, :math:`\xi`. omega : `ndarray` Unobserved supply-side product characteristics, :math:`\omega`. rc_types : `list of str` Random coefficient types. epsilon_scale : `float` Factor by which the Type I Extreme Value idiosyncratic preference term, :math:`\epsilon_{ijt}`, is scaled. costs_type : `str` Functional form of the marginal cost function :math:`\tilde{c} = f(c)`. T : `int` Number of markets, :math:`T`. N : `int` Number of products across all markets, :math:`N`. F : `int` Number of firms across all markets, :math:`F`. I : `int` Number of agents across all markets, :math:`I`. K1 : `int` Number of demand-side linear product characteristics, :math:`K_1`. K2 : `int` Number of demand-side nonlinear product characteristics, :math:`K_2`. K3 : `int` Number of supply-side characteristics, :math:`K_3`. D : `int` Number of demographic variables, :math:`D`. MD : `int` Number of demand-side instruments, :math:`M_D`, which is always zero because instruments are added or constructed in :meth:`SimulationResults.to_problem`. MS : `int` Number of supply-side instruments,
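As a concrete illustration of the rc_types discussion above (a lognormal coefficient on prices via an ``I(-prices)`` term in :math:`X_2` and a constant in the agent formulation), a minimal sketch of a Simulation set up this way follows. The formulations, the parameter values, and the ``product_data`` variable are illustrative assumptions, not values taken from this documentation.

import pyblp

# Sketch only: lognormal random coefficient on prices, as described above.
# product_data is assumed to already exist with market_ids, firm_ids, and a characteristic x.
x1_formulation = pyblp.Formulation('1 + x')            # prices deliberately left out of X1
x2_formulation = pyblp.Formulation('0 + I(-prices)')   # negative prices enter X2
x3_formulation = pyblp.Formulation('1 + x')            # supply-side cost characteristics
simulation = pyblp.Simulation(
    product_formulations=(x1_formulation, x2_formulation, x3_formulation),
    product_data=product_data,
    agent_formulation=pyblp.Formulation('1'),          # the constant's Pi entry is the lognormal mean
    beta=[1.0, 2.0],                                    # illustrative parameter values
    sigma=[[0.2]],
    pi=[[2.0]],
    gamma=[1.0, 1.0],
    rc_types=['log'],
    integration=pyblp.Integration('product', size=5),
    seed=0,
)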
<reponame>Tesla2fox/MPDA_Preliminary<filename>Decode/decodeNew.py # -*- coding: utf-8 -*- """ Created on Tue Sep 4 10:12:55 2018 this decode can save the decode status @author: robot """ import os,sys AbsolutePath = os.path.abspath(__file__) #将相对路径转换成绝对路径 SuperiorCatalogue = os.path.dirname(AbsolutePath) #相对路径的上级路径 BaseDir = os.path.dirname(SuperiorCatalogue) #在“SuperiorCatalogue”的基础上在脱掉一层路径,得到我们想要的路径。 if BaseDir in sys.path: print('have been added') pass else: sys.path.append(BaseDir) #sys.path. #sys.path.insert(0,BaseDir) print(sys.path) # ##将我们取出来的路径加入到Python的命名空间去,并将该目录插入在第一个位置中。 #print(__file__) class TaskIDException(Exception): pass from readCfg.read_cfg import Read_Cfg import readCfg.read_cfg as rd #from read_cfg import * from Decode.task import * #task import * import numpy as np import random from Decode.robot import * from enum import Enum import sys from timeit import timeit import time import copy as cp class CalType(Enum): arriveCond = 1 leaveCond = 2 endCond = 3 backCond = 4 stateInvalidCond = 5 class Decode: def __init__(self,insFileName): readCfg = Read_Cfg(insFileName) # ins_pic = Pic() self.robNum = int(readCfg.getSingleVal('robNum')) self.taskNum = int(readCfg.getSingleVal('tskNum')) self.threhold = readCfg.getSingleVal('comp_threhold') self.robAbiLst = [] self.robVelLst = [] self.taskStateLst = [] self.taskRateLst = [] readCfg.get('rob_abi',self.robAbiLst) readCfg.get('rob_vel',self.robVelLst) readCfg.get('tsk_rat',self.taskRateLst) readCfg.get('tsk_state',self.taskStateLst) self.rob2taskDisMat = np.zeros((self.robNum,self.taskNum)) disLst = [] readCfg.get('rob2tskDis',disLst) for i in range(self.robNum): for j in range(self.taskNum): self.rob2taskDisMat[i][j] = disLst[i*self.robNum+j] self.taskDisMat = np.zeros((self.taskNum,self.taskNum)) disLst = [] readCfg.get('tskDis',disLst) for i in range(self.taskNum): for j in range(self.taskNum): self.taskDisMat[i][j] = disLst[i*self.taskNum+j] self.encode = np.zeros((self.robNum,self.taskNum),dtype =int) # print(self.taskDisMat) # print(self.taskStateLst) self.taskLst = [] self.robotLst = [] self.cmpltLst = [] self.decodeTime = 0 self.insFileName = insFileName self.robInfoLst = [] self.taskInfoLst = [] self.decodeTimeLst = [] # ============================================================================= # deg2 is for the decode process # ============================================================================= degFileDir = BaseDir + '//debug//' ins = self.insFileName.split('data//') degFileName = degFileDir + 'deg_' + ins[1] self.deg = open(degFileName,'w') degFileDir = BaseDir + '//debug//' ins = self.insFileName.split('data//') degFileName = degFileDir + 'deg2_' + ins[1] self.deg2 = open(degFileName,'w') def generateRandEncode(self): # random.shuffle(permLst) for i in range(self.robNum): # for j in range(self.taskNum): permLst = [x for x in range(self.taskNum)] random.shuffle(permLst) # print(permLst) self.encode[i][:] = permLst # print(self.encode) def decode(self): circleTime = 0 # self.saveEncode() while True: print('whileCircle = ',circleTime) circleTime += 1 start = time.clock() # self.initStates() cal_type = self.decodeProcessor() # self.saveEncode() end = time.clock() print(end - start) if cal_type == CalType['backCond']: # break continue else: break print(self.cmpltLst) # self.saveEncode() if cal_type == CalType.stateInvalidCond: # print('invalidState') makespan = sys.float_info.max else: # print('validState') makespan = self.calMakespan() # pass # while True: # print('0-0') return makespan def newDecode(self): 
circleTime = 0 # self.saveEncode() self.initStates() while True: print('whileCircle = ',circleTime) circleTime += 1 # if circleTime > 100: # break start = time.clock() # print(self.cmpltLst) cal_type = self.decodeProcessor() # self.saveEncode() end = time.clock() print(end - start) if cal_type == CalType['backCond']: # break while True: if self.decodeTimeLst[-1] > self.backArriveTime: self.decodeTimeLst.pop() self.robInfoLst.pop() self.taskInfoLst.pop() else: break # print('lastDecode',self.decodeTimeLst[-1]) # print('backArriveTime',self.backArriveTime) # print('backRobId',self.backRobID) # print('') self.eventRecover() # self.deg.write('__backItem__\n') # self.saveRobotInfo() # break continue else: break # print('end = ',self.cmpltLst) if cal_type == CalType.stateInvalidCond: makespan = sys.float_info.max else: makespan = self.calMakespan() return makespan def initStates(self): self.taskLst.clear() self.robotLst.clear() self.cmpltLst.clear() self.cmpltLst = [False] * self.taskNum for i in range(self.robNum): rob = Robot() rob.ability = self.robAbiLst[i] rob.vel = self.robVelLst[i] rob.encodeIndex = 0 # if rob.taskID,rob.encodeIndex = self.getRobTask(robID = i, encodeIndex = 0) if rob.taskID < 0: print('wtf') # rob.taskID = self.encode[i][0] dis = self.rob2taskDisMat[i][rob.taskID] dis_time = dis/rob.vel rob.stateType = RobotState['onRoad'] rob.arriveTime = dis_time rob.leaveTime = 0 self.robotLst.append(rob) for i in range(self.taskNum): task = Task() task.cState = self.taskStateLst[i] task.initState = self.taskStateLst[i] task.cRate = self.taskRateLst[i] task.initRate = self.taskRateLst[i] task.threhod = self.threhold task.cmpltTime = sys.float_info.max self.taskLst.append(task) self.robInfoLst = [] self.taskInfoLst = [] self.decodeTimeLst = [] self.decodeTime = 0 # print(self.decodeTime) # for rob in self.robotLst: # rob.display() # print('__') # for task in self.taskLst: # task.display() # print('') def decodeProcessor(self): invalidFitness = False backBool = False validStateBool = True circleTime = 0 while not self.allTaskCmplt(): cal_type,actionID = self.findActionID() # self.deg.write('actionID '+ str(actionID)+ '\n') # self.deg.write('cal_type '+ str(cal_type) + '\n') # self.deg.flush() if cal_type == CalType['arriveCond']: rob = self.robotLst[actionID] arriveTime = rob.arriveTime # print('old arriveTime', rob.arriveTime) encodeInd = rob.encodeIndex taskID = self.encode[actionID][encodeInd] # robTaskID = rob.taskID # taskID = rob.taskID # if taskID != robTaskID: # self.saveRobotInfo() # print(actionID) # print('taskID = ',taskID,' robTaskID = ',robTaskID) # print(self.encode[actionID]) # print('encodeInd = ',rob.encodeIndex) # return # if taskID < 0: # print('bug') # self.deg.write('__bugItem__\n') # self.saveRobotInfo() # print(actionID) # print(self.encode[actionID]) # print(encodeInd) cmplt = False if taskID >= 0: if self.cmpltLst[taskID]: cmplt = True else: cmplt = True # ============================================================================= # the task has been cmplt # ============================================================================= if cmplt: # print(self.encode[actionID]) # preTaskID = rob.taskID # for i in range(encodeInd,self.taskNum - 1): # self.encode[actionID][i] = self.encode[actionID][i + 1] self.encode[actionID][encodeInd] = -1 # print(self.encode[actionID]) # for debug # oldDur = self.calRoadDur() oldTaskID = taskID # self.encode[actionID][self.taskNum] # self.deg.write('pre_actionID = '+ str(actionID)+ '\n') # self.deg.write('ecodeInd = '+ 
str(encodeInd) + '\n') # self.deg.write('new_actionID = '+ str(self.encode[actionID][encodeInd])) # print('taskID',taskID) # print('encodeInd', encodeInd) while True: if len(rob.cmpltTaskLst) == 0: taskID = self.encode[actionID][encodeInd] if taskID < 0: encodeInd += 1 continue dis = self.rob2taskDisMat[actionID][taskID] dis_time = dis/rob.vel rob.arriveTime = dis_time rob.encodeIndex = encodeInd break else: if encodeInd == self.taskNum - 1: rob.stopBool = True print('___0-0__') break taskID = self.encode[actionID][encodeInd] if taskID < 0: encodeInd += 1 continue preTaskID = rob.cmpltTaskLst[-1] # print('preTaskID',preTaskID) # wrf = [] # oldRoadDur = self.calRoadDur(oldTaskID,taskID,actionID) roadDur = self.calRoadDur(preTaskID,taskID,actionID) rob.arriveTime = rob.leaveTime + roadDur rob.encodeIndex = encodeInd # print('oldRoadDur ', oldRoadDur) # print('roadDur ',roadDur) break # print('new taskID',taskID) rob.taskID = taskID # print('new arriveTime = ',rob.arriveTime) # print(self.encode[actionID]) self.backOneStep() # ============================================================================= # the task has not been cmplt # ============================================================================= else: task = self.taskLst[taskID] # self.deg.write('taskID '+ str(taskID) +'\n') rob.taskID = taskID validStateBool = task.calCurrentState(arriveTime) if not validStateBool : break task.cRate = task.cRate - rob.ability # can not be cmplted if task.cRate >= 0: leaveTime = sys.float_info.max # can be completed else: rob.executeDur = task.calExecuteDur() rob.executeBool = False leaveTime = rob.arriveTime + rob.executeDur coordLst = self.findCoordRobot(actionID) for coordID in coordLst: coordRob = self.robotLst[coordID] # self.deg.write(str(coordID) + '\n') coordRob.leaveTime = leaveTime coordRob.executeDur = coordRob.leaveTime - coordRob.arriveTime rob.leaveTime = leaveTime rob.stateType = RobotState['onTask'] self.decodeTime = rob.arriveTime # self.deg.write('arriveChanged\n') # self.saveRobotInfo() # print('arriveTime ',self.decodeTime) # ============================================================================= # no coordinated robot # ============================================= ================================ # if not coordLst: # ============================================================================= # begin the leave condition for # ============================================================================= if cal_type == CalType['leaveCond']: rob = self.robotLst[actionID] # print(actionID) taskID = rob.taskID try: if taskID < 0: raise Exception('taskID < 0') except Exception as e: print(e) # print(taskID) task = self.taskLst[taskID] if self.cmpltLst[taskID] == True: print(self.cmpltLst) # validStateBool = True while True: print('bug',taskID) # return else: validStateBool = task.calCurrentState(rob.leaveTime) if not validStateBool : break if(task.isCmplt()): self.cmpltLst[taskID] = True try: self.updateEncode(taskID) except Exception as e: print(e) coordLst = self.findCoordRobot(actionID) for coordID in coordLst: self.updateRobLeaveCond(coordID) self.robotLst[coordID].cmpltTaskLst.append(taskID) self.updateRobLeaveCond(actionID) self.robotLst[actionID].cmpltTaskLst.append(taskID) self.robotLst[actionID].cmpltTaskID = taskID # self.deg.write('taskID '+ str(taskID) +' has been cmplt\n') # self.deg.write('leaveChanged\n') # self.saveRobotInfo() task.cmpltTime = rob.leaveTime self.decodeTime = rob.leaveTime # print(taskID,' cmpltTime ', task.cmpltTime) if cal_type == 
CalType['endCond']: invalidFitness = True break if cal_type == CalType['backCond']: backBool = True self.backRobID = actionID self.backTaskID = self.robotLst[actionID].taskID self.backArriveTime = self.robotLst[actionID].arriveTime self.backInfo = self.robotLst[actionID].variableInfo() break # # print(task.cRate) # print(task.cRate) # ============================================================================= # the state is too big the decode process is wrong # ============================================================================= if not validStateBool: break # print('circleTime = ', circleTime) # print('decodeTime = ', self.decodeTime) # circleTime += 1 # if circleTime > 3000: # break # print(self.cmpltLst) if not validStateBool: cal_type = CalType['stateInvalidCond'] # print(cal_type) return cal_type # break def allTaskCmplt(self): if False in self.cmpltLst: return False else: return True def findActionID(self): cal_type = CalType['endCond'] actionID = -1 minTime = sys.float_info.max for i in range(self.robNum): rob = self.robotLst[i] if rob.stopBool != True: if rob.stateType == RobotState['onRoad']: if rob.arriveTime < minTime: minTime = rob.arriveTime cal_type = CalType['arriveCond'] actionID = i if rob.stateType == RobotState['onTask']: if rob.leaveTime < minTime: minTime = rob.leaveTime cal_type = CalType['leaveCond'] actionID = i self.saveEventInMemory() if minTime < self.decodeTime: cal_type = CalType['backCond'] print(minTime) print(self.decodeTime) # taskID = self.robotLst[actionI].taskID # self.saveRobotInfo() return cal_type,actionID def findCoordRobot(self,robID): coordLst = [] rob = self.robotLst[robID] taskID = rob.taskID for i in range(self.robNum): if i == robID: continue # crob = self.robotLst[i] if self.robotLst[i].stateType == RobotState['onRoad']: continue if self.robotLst[i].stopBool == True: continue if self.robotLst[i].taskID == taskID: coordLst.append(i) return coordLst def calRoadDur(self,taskID1,taskID2,robID): dis = self.taskDisMat[taskID1][taskID2] rob = self.robotLst[robID] roadDur = dis/rob.vel return roadDur def updateEncode(self,cmpltTaskID): # self.saveEncode() if cmpltTaskID < 0 : raise Exception('cmpltTaskID < 0 ') # self.deg2.write('cmpltTaskID = '+ str(cmpltTaskID) +'\n') for i in range(self.robNum): rob = self.robotLst[i] # endInd = rob.encodeIndex for j in range(rob.encodeIndex + 1, self.taskNum): if self.encode[i][j] == cmpltTaskID: self.encode[i][j] = -1 # self.deg2.write('\nchange\n') # self.saveEncode() def updateRobLeaveCond(self,robID): rob = self.robotLst[robID] preTaskID = rob.taskID while True: if rob.encodeIndex == (self.taskNum - 1): rob.stopBool = True break rob.encodeIndex += 1 taskID
plane X dist_origin_x = px - ox dist_edge_x = ex - px dx = 0 if dist_origin_x < dist_edge_x: dx = dist_origin_x # 1 p_xyz[0] = 1 else: dx = dist_edge_x # -1 p_xyz[0] = -1 if dx < cutoff and dx != 0: pass else: p_xyz[0] = 0 # distance plane Y doy = py - oy dey = ey - py dy = 0 if doy < dey: dy = doy # 1 p_xyz[1] = 1 else: dy = dey # -1 p_xyz[1] = -1 if dy < cutoff and dy != 0.0: pass else: p_xyz[1] = 0 # distance plane Z doz = pz - oz dez = ez - pz dz = 0 if doz < dez: dz = doz # 1 p_xyz[2] = 1 else: dz = dez # -1 p_xyz[2] = -1 if dz < cutoff and dz != 0: pass else: p_xyz[2] = 0 p_xyz = numpy.array(p_xyz) * biased # for 2D we need 3 corner tiles # for 3D we need 7 corner tiles corner = numpy.zeros((4, 3)) indices_non_zero = numpy.nonzero(p_xyz)[0] for i in indices_non_zero: # i is the axis that is close to the point tr.append(pt3d + (self.periodic_table["left"][i] * p_xyz[i])) # 0,1,2 corner[0] += self.periodic_table["left"][i] * p_xyz[i] # 1 # the corner are # X+Y+Z corner[0] # X+Y+0 corner[1] # X+0+Z corner[2] # 0+Y+Z corner[3] if len(indices_non_zero) == 2: # two axis cross-> three pos tr.append(pt3d + corner[0]) if len(indices_non_zero) == 3: # in a corner need total 7 pos, never happen in 2D corner[1] = ( self.periodic_table["left"][0] * p_xyz[0] + self.periodic_table["left"][1] * p_xyz[1] ) corner[2] = ( self.periodic_table["left"][0] * p_xyz[0] + self.periodic_table["left"][2] * p_xyz[2] ) corner[3] = ( self.periodic_table["left"][1] * p_xyz[1] + self.periodic_table["left"][2] * p_xyz[2] ) for i in range(4): # 4+1=5 tr.append(pt3d + corner[i]) return tr def checkPointInside(self, pt3d, dist=None, jitter=[1, 1, 1], bb=None): """ Check if the given 3d points is inside the grid """ if bb is None: bb = self.boundingBox origin = numpy.array(bb[0]) edge = numpy.array(bb[1]) for i in range(len(edge)): if edge[i] < self.gridSpacing: edge[i] = self.gridSpacing packing_location = numpy.array(pt3d) # *jitter test1 = packing_location < origin test2 = packing_location > edge if True in test1 or True in test2: # outside return False else: if dist is not None: # distance to closest wall d1 = (packing_location - origin) * jitter s1 = min(x for x in d1[d1 != 0] if x != 0) d2 = (edge - packing_location) * jitter s2 = min(x for x in d2[d2 != 0] if x != 0) if s1 <= dist or s2 <= dist: self.log.info("s1 s2 smaller than dist %d %d %d", s1, s2, dist) return False return True def getCenter(self): """ Get the center of the grid """ if self.center is None: self.center = [0.0, 0.0, 0.0] for i in range(3): self.center[i] = (self.boundingBox[0][i] + self.boundingBox[1][i]) / 2.0 return self.center def getRadius(self): """ Get the radius the grid """ d = numpy.array(self.boundingBox[0]) - numpy.array(self.boundingBox[1]) s = numpy.sum(d * d) return math.sqrt(s) def getPointsInSphere(self, pt, radius): if self.tree is None: self.tree = spatial.cKDTree(self.masterGridPositions, leafsize=10) # add surface points ptIndices = self.tree.query_ball_point(pt, radius) # , n_jobs=-1) return ptIndices def getPointsInCubeFillBB(self, bb, pt, radius, addSP=True, info=False): """ Return all grid points indices inside the given bounding box. 
NOTE : need to fix with grid build with numpy arrange """ spacing1 = 1.0 / self.gridSpacing NX, NY, NZ = self.nbGridPoints OX, OY, OZ = self.boundingBox[ 0 ] # origin of fill grid-> bottom lef corner not origin ox, oy, oz = bb[0] ex, ey, ez = bb[1] i0 = int(max(0, floor((ox - OX) * spacing1))) i1 = int(min(NX, int((ex - OX) * spacing1)) + 1) j0 = int(max(0, floor((oy - OY) * spacing1))) j1 = int(min(NY, int((ey - OY) * spacing1)) + 1) k0 = int(max(0, floor((oz - OZ) * spacing1))) k1 = int(min(NZ, int((ez - OZ) * spacing1)) + 1) i0 = int(min(NX - 1, max(0, round((ox - OX) * spacing1)))) j0 = int(min(NY - 1, max(0, round((oy - OY) * spacing1)))) k0 = int(min(NZ - 1, max(0, round((oz - OZ) * spacing1)))) i1 = int(min(NX, max(0, round((ex - OX) * spacing1)))) j1 = int(min(NY, max(0, round((ey - OY) * spacing1)))) k1 = int(min(NZ, max(0, round((ez - OZ) * spacing1)))) if NZ == 1: k0 = 0 k1 = 1 elif NY == 1: j0 = 0 j1 = 1 elif NX == 1: i0 = 0 i1 = 1 ptIndices = [] pid = numpy.mgrid[i0:i1, j0:j1, k0:k1] ijk = numpy.vstack(pid).reshape(3, -1).T # in case 2D, meaning one of the dimension is 1 if NZ == 1: ptIndices = [p[2] + p[1] + NX * p[0] for p in ijk] elif NY == 1: ptIndices = [p[2] + p[1] + NX * p[0] for p in ijk] elif NX == 1: ptIndices = [p[2] + NY * p[1] + p[0] for p in ijk] else: 0.02451198 # add surface points if addSP and self.nbSurfacePoints != 0: result = numpy.zeros((self.nbSurfacePoints,), "i") nb = self.surfPtsBht.closePoints(tuple(pt), radius, result) # nb = self.surfPtsBht.query(tuple(pt),k=self.nbSurfacePoints) ptIndices.extend( list(map(lambda x, length=self.gridVolume: x + length, result[:nb])) ) return ptIndices def test_points_in_bb(self, bb, pt): # given a bounding box, does the point is contains in it origin = numpy.array(bb[0]) E = numpy.array(bb[1]) P = numpy.array(pt) # *jitter test1 = P < origin test2 = P > E inside = False if True in test1 or True in test2: # outside inside = False return inside def getPointsInCube(self, bb, pt, radius, addSP=True, info=False): """ Return all grid points indicesinside the given bounding box. """ spacing1 = 1.0 / self.gridSpacing NX, NY, NZ = self.nbGridPoints OX, OY, OZ = self.boundingBox[ 0 ] # origin of Pack grid-> bottom lef corner not origin ox, oy, oz = bb[0] ex, ey, ez = bb[1] i0 = int(max(0, floor((ox - OX) * spacing1))) i1 = int(min(NX, int((ex - OX) * spacing1) + 1)) j0 = int(max(0, floor((oy - OY) * spacing1))) j1 = int(min(NY, int((ey - OY) * spacing1) + 1)) k0 = int(max(0, floor((oz - OZ) * spacing1))) k1 = int(min(NZ, int((ez - OZ) * spacing1) + 1)) zPlaneLength = NX * NY ptIndices = [] for z in range(int(k0), int(k1)): offz = z * zPlaneLength for y in range(int(j0), int(j1)): off = y * NX + offz for x in range(int(i0), int(i1)): ptIndices.append(x + off) # add surface points if addSP and self.nbSurfacePoints != 0: result = numpy.zeros((self.nbSurfacePoints,), "i") nb = self.surfPtsBht.closePoints(tuple(pt), radius, result) dimx, dimy, dimz = self.nbGridPoints ptIndices.extend( list(map(lambda x, length=self.gridVolume: x + length, result[:nb])) ) return ptIndices def computeGridNumberOfPoint(self, boundingBox, space): """ Return the grid size : total number of point and number of point per axes """ xl, yl, zl = boundingBox[0] xr, yr, zr = boundingBox[1] encapsulatingGrid = self.encapsulatingGrid # Graham Added on Oct17 to allow for truly 2D grid for test fills... may break everything! 
nx = int(ceil((xr - xl) / space)) + encapsulatingGrid ny = int(ceil((yr - yl) / space)) + encapsulatingGrid nz = int(ceil((zr - zl) / space)) + encapsulatingGrid # nx = nx if (nx == 1) else nx-1 # ny = ny if (ny == 1) else ny-1 # nz = nz if (nz == 1) else nz-1 return nx * ny * nz, (nx, ny, nz) def set_surfPtsBht(self, verts): self.surfPtsBht = None if verts is not None and len(verts): self.surfPtsBht = bhtreelib.BHtree(verts, None, 10) self.nbSurfacePoints = len(verts) def set_surfPtscht(self, verts): self.surfPtsBht = None if verts is not None and len(verts): self.surfPtsBht = spatial.cKDTree(verts, leafsize=10) self.nbSurfacePoints = len(verts) def computeExteriorVolume(self, compartments=None, space=None, fbox_bb=None): # compute exterior volume, totalVolume without compartments volume unitVol = self.gridSpacing ** 3 totalVolume = self.gridVolume * unitVol if fbox_bb is not None: V, nbG = self.computeGridNumberOfPoint(fbox_bb, space) totalVolume = V * unitVol if compartments is not None: for
"(nn::applet::AppletResourceUserId,long)", "pid": True, "outbytes": 0, "name": "SetNpadHandheldActivationMode"}, "68": {"inbytes": 16, "args": "(nn::sf::Out<bool,void>,nn::applet::AppletResourceUserId,nn::hid::SixAxisSensorHandle)", "pid": True, "outbytes": 1, "name": "IsSixAxisSensorFusionEnabled"}, "201": {"inbytes": 32, "args": "(nn::applet::AppletResourceUserId,nn::hid::VibrationDeviceHandle,nn::hid::VibrationValue const&)", "pid": True, "outbytes": 0, "name": "SendVibrationValue"}, "200": {"inbytes": 4, "args": "(nn::sf::Out<nn::hid::VibrationDeviceInfoForIpc,void>,nn::hid::VibrationDeviceHandle)", "outbytes": 8, "name": "GetVibrationDeviceInfo"}, "203": {"inbytes": 0, "args": "(nn::sf::Out<nn::sf::SharedPointer<nn::hid::IActiveVibrationDeviceList>,void>)", "outbytes": 0, "name": "CreateActiveVibrationDeviceList", "outinterfaces": ["nn::hid::IActiveVibrationDeviceList"]}, "202": {"inbytes": 16, "args": "(nn::sf::Out<nn::hid::VibrationValue,void>,nn::applet::AppletResourceUserId,nn::hid::VibrationDeviceHandle)", "pid": True, "outbytes": 16, "name": "GetActualVibrationValue"}, "205": {"inbytes": 0, "args": "(nn::sf::Out<bool,void>)", "outbytes": 1, "name": "IsVibrationPermitted"}, "204": {"inbytes": 1, "args": "(bool)", "outbytes": 0, "name": "PermitVibration"}, "206": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId,nn::sf::InArray<nn::hid::VibrationDeviceHandle> const&,nn::sf::InArray<nn::hid::VibrationValue> const&)", "outbytes": 0, "buffers": [9, 9], "name": "SendVibrationValues"}, "300": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId)", "pid": True, "outbytes": 0, "name": "ActivateConsoleSixAxisSensor"}, "301": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,nn::hid::ConsoleSixAxisSensorHandle)", "pid": True, "outbytes": 0, "name": "StartConsoleSixAxisSensor"}, "302": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,nn::hid::ConsoleSixAxisSensorHandle)", "pid": True, "outbytes": 0, "name": "StopConsoleSixAxisSensor"}, "77": {"inbytes": 16, "args": "(nn::sf::Out<unsigned int,void>,nn::applet::AppletResourceUserId,nn::hid::SixAxisSensorHandle)", "pid": True, "outbytes": 4, "name": "GetAccelerometerPlayMode"}, "76": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,nn::hid::SixAxisSensorHandle,unsigned int)", "pid": True, "outbytes": 0, "name": "SetAccelerometerPlayMode"}, "108": {"inbytes": 4, "args": "(nn::sf::Out<unsigned long,void>,unsigned int)", "outbytes": 8, "name": "GetPlayerLedPattern"}, "123": {"inbytes": 24, "args": "(nn::applet::AppletResourceUserId,unsigned int,long)", "pid": True, "outbytes": 0, "name": "SetNpadJoyAssignmentModeSingle"}, "73": {"inbytes": 24, "args": "(nn::applet::AppletResourceUserId,nn::hid::SixAxisSensorHandle,float,float)", "pid": True, "outbytes": 0, "name": "SetAccelerometerParameters"}, "125": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,unsigned int,unsigned int)", "pid": True, "outbytes": 0, "name": "MergeSingleJoyAsDualJoy"}, "126": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId)", "pid": True, "outbytes": 0, "name": "StartLrAssignmentMode"}, "127": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId)", "pid": True, "outbytes": 0, "name": "StopLrAssignmentMode"}, "91": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,int)", "pid": True, "outbytes": 0, "name": "ActivateGesture"}, "129": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId,nn::sf::Out<long,void>)", "pid": True, "outbytes": 8, "name": "GetNpadHandheldActivationMode"}, "100": 
{"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,nn::util::BitFlagSet<32,nn::hid::NpadStyleTag>)", "pid": True, "outbytes": 0, "name": "SetSupportedNpadStyleSet"}, "101": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId,nn::sf::Out<nn::util::BitFlagSet<32,nn::hid::NpadStyleTag>,void>)", "pid": True, "outbytes": 4, "name": "GetSupportedNpadStyleSet"}, "106": {"outhandles": [1], "outbytes": 0, "inbytes": 24, "args": "(nn::applet::AppletResourceUserId,nn::sf::Out<nn::sf::NativeHandle,void>,unsigned int,unsigned long)", "pid": True, "name": "AcquireNpadStyleSetUpdateEventHandle"}, "107": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,unsigned int)", "pid": True, "outbytes": 0, "name": "DisconnectNpad"}, "104": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId)", "pid": True, "outbytes": 0, "name": "DeactivateNpad"}, "78": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,nn::hid::SixAxisSensorHandle)", "pid": True, "outbytes": 0, "name": "ResetAccelerometerPlayMode"}, "11": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId)", "pid": True, "outbytes": 0, "name": "ActivateTouchScreen"}, "120": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,long)", "pid": True, "outbytes": 0, "name": "SetNpadJoyHoldType"}, "59": {"inbytes": 0, "args": "(nn::sf::Out<long,void>,nn::sf::OutArray<nn::hid::JoyXpadId> const&)", "outbytes": 8, "buffers": [10], "name": "GetJoyXpadIds"}, "58": {"inbytes": 4, "outhandles": [1], "outbytes": 0, "args": "(nn::sf::Out<nn::sf::NativeHandle,void>,nn::hid::JoyXpadId)", "name": "GetJoyXpadLifoHandle"}, "121": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId,nn::sf::Out<long,void>)", "pid": True, "outbytes": 8, "name": "GetNpadJoyHoldType"}, "55": {"inbytes": 0, "args": "(nn::sf::Out<long,void>,nn::sf::OutArray<nn::hid::BasicXpadId> const&)", "outbytes": 8, "buffers": [10], "name": "GetXpadIds"}, "31": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId)", "pid": True, "outbytes": 0, "name": "ActivateKeyboard"}, "56": {"inbytes": 4, "args": "(nn::hid::JoyXpadId)", "outbytes": 0, "name": "ActivateJoyXpad"}, "61": {"inbytes": 4, "args": "(nn::hid::BasicXpadId)", "outbytes": 0, "name": "DeactivateSixAxisSensor"}, "122": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,unsigned int)", "pid": True, "outbytes": 0, "name": "SetNpadJoyAssignmentModeSingleByDefault"}, "74": {"inbytes": 16, "args": "(nn::sf::Out<float,void>,nn::sf::Out<float,void>,nn::applet::AppletResourceUserId,nn::hid::SixAxisSensorHandle)", "pid": True, "outbytes": 8, "name": "GetAccelerometerParameters"}, "124": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,unsigned int)", "pid": True, "outbytes": 0, "name": "SetNpadJoyAssignmentModeDual"}, "103": {"inbytes": 8, "args": "(nn::applet::AppletResourceUserId)", "pid": True, "outbytes": 0, "name": "ActivateNpad"}, "1000": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,long)", "pid": True, "outbytes": 0, "name": "SetNpadCommunicationMode"}, "1001": {"inbytes": 0, "args": "(nn::sf::Out<long,void>)", "outbytes": 8, "name": "GetNpadCommunicationMode"}, "72": {"inbytes": 16, "args": "(nn::applet::AppletResourceUserId,nn::hid::SixAxisSensorHandle)", "pid": True, "outbytes": 0, "name": "ResetSixAxisSensorFusionParameters"} }, "nn::ro::detail::IRoInterface": { "1": {"inbytes": 16, "pid": True, "outbytes": 0}, "0": {"inbytes": 40, "pid": True, "outbytes": 8}, "3": {"inbytes": 16, "pid": True, "outbytes": 0}, "2": {"inbytes": 24, "pid": True, "outbytes": 0}, "4": 
{"inbytes": 8, "inhandles": [1], "pid": True, "outbytes": 0} }, "nn::nifm::detail::IGeneralService": { "24": {"inbytes": 0, "args": "(void)", "outbytes": 0, "name": "WakeUp"}, "25": {"inbytes": 0, "args": "(nn::sf::Out<nn::nifm::SsidListVersion,void>)", "outbytes": 16, "name": "GetSsidListVersion"}, "26": {"inbytes": 0, "args": "(nn::nifm::ClientId)", "outbytes": 0, "buffers": [25], "name": "SetExclusiveClient"}, "27": {"inbytes": 0, "args": "(nn::sf::Out<nn::nifm::IpSettingData,void>)", "outbytes": 0, "buffers": [26], "name": "GetDefaultIpSetting"}, "20": {"inbytes": 0, "args": "(nn::sf::Out<bool,void>)", "outbytes": 1, "name": "IsEthernetCommunicationEnabled"}, "21": {"inbytes": 0, "args": "(nn::sf::Out<bool,void>,nn::nifm::ClientId)", "outbytes": 1, "buffers": [25], "name": "IsAnyInternetRequestAccepted"}, "22": {"inbytes": 0, "args": "(nn::sf::Out<bool,void>)", "outbytes": 1, "name": "IsAnyForegroundRequestAccepted"}, "23": {"inbytes": 0, "args": "(void)", "outbytes": 0, "name": "PutToSleep"}, "28": {"inbytes": 0, "args": "(nn::nifm::IpSettingData const&)", "outbytes": 0, "buffers": [25], "name": "SetDefaultIpSetting"}, "29": {"inbytes": 1, "args": "(bool)", "outbytes": 0, "name": "SetWirelessCommunicationEnabledForTest"}, "1": {"inbytes": 0, "args": "(nn::sf::Out<nn::nifm::ClientId,void>)", "outbytes": 0, "buffers": [26], "name": "GetClientId"}, "2": {"inbytes": 0, "args": "(nn::sf::Out<nn::sf::SharedPointer<nn::nifm::detail::IScanRequest>,void>)", "outbytes": 0, "name": "CreateScanRequest", "outinterfaces": ["nn::nifm::detail::IScanRequest"]}, "5": {"inbytes": 0, "args": "(nn::sf::Out<nn::nifm::detail::sf::NetworkProfileData,void>)", "outbytes": 0, "buffers": [26], "name": "GetCurrentNetworkProfile"}, "4": {"inbytes": 4, "args": "(nn::sf::Out<nn::sf::SharedPointer<nn::nifm::detail::IRequest>,void>,int)", "outbytes": 0, "name": "CreateRequest", "outinterfaces": ["nn::nifm::detail::IRequest"]}, "7": {"inbytes": 1, "args": "(nn::sf::OutArray<nn::nifm::detail::sf::NetworkProfileBasicInfo> const&,nn::sf::Out<int,void>,unsigned char)", "outbytes": 4, "buffers": [6], "name": "EnumerateNetworkProfiles"}, "6": {"inbytes": 4, "args": "(nn::sf::OutArray<nn::nifm::detail::sf::NetworkInterfaceInfo> const&,nn::sf::Out<int,void>,unsigned int)", "outbytes": 4, "buffers": [10], "name": "EnumerateNetworkInterfaces"}, "9": {"inbytes": 0, "args": "(nn::sf::Out<nn::util::Uuid,void>,nn::nifm::detail::sf::NetworkProfileData const&)", "outbytes": 16, "buffers": [25], "name": "SetNetworkProfile"}, "8": {"inbytes": 16, "args": "(nn::sf::Out<nn::nifm::detail::sf::NetworkProfileData,void>,nn::util::Uuid const&)", "outbytes": 0, "buffers": [26], "name": "GetNetworkProfile"}, "11": {"inbytes": 0, "args": "(nn::sf::OutArray<nn::nifm::detail::sf::AccessPointData> const&,nn::sf::Out<int,void>)", "outbytes": 4, "buffers": [6], "name": "GetScanData"}, "10": {"inbytes": 16, "args": "(nn::util::Uuid const&)", "outbytes": 0, "name": "RemoveNetworkProfile"}, "13": {"inbytes": 0, "args": "(nn::sf::Out<nn::nifm::detail::sf::AccessPointData,void>)", "outbytes": 0, "buffers": [26], "name": "GetCurrentAccessPoint"}, "12": {"inbytes": 0, "args": "(nn::sf::Out<nn::nifm::IpV4Address,void>)", "outbytes": 4, "name": "GetCurrentIpAddress"}, "15": {"inbytes": 0, "args": "(nn::sf::Out<nn::nifm::IpAddressSetting,void>,nn::sf::Out<nn::nifm::DnsSetting,void>)", "outbytes": 22, "name": "GetCurrentIpConfigInfo"}, "14": {"name": "CreateTemporaryNetworkProfile", "inbytes": 0, "args": 
"(nn::sf::Out<nn::sf::SharedPointer<nn::nifm::detail::INetworkProfile>,void>,nn::sf::Out<nn::util::Uuid,void>,nn::nifm::detail::sf::NetworkProfileData const&)", "outinterfaces": ["nn::nifm::detail::INetworkProfile"], "buffers": [25], "outbytes": 16}, "17": {"inbytes": 0, "args": "(nn::sf::Out<bool,void>)", "outbytes": 1, "name": "IsWirelessCommunicationEnabled"}, "16": {"inbytes": 1, "args": "(bool)", "outbytes": 0, "name": "SetWirelessCommunicationEnabled"}, "19": {"inbytes": 1, "args": "(bool)", "outbytes": 0, "name": "SetEthernetCommunicationEnabled"}, "18": {"inbytes": 0, "args": "(nn::sf::Out<nn::nifm::detail::sf::InternetConnectionStatus,void>)", "outbytes": 3, "name": "GetInternetConnectionStatus"}, "31": {"inbytes": 0, "outhandles": [1], "outbytes": 0, "args": "(nn::sf::Out<nn::sf::NativeHandle,void>)", "name": "GetTelemetorySystemEventReadableHandle"}, "30": {"inbytes": 1, "args": "(bool)", "outbytes": 0, "name": "SetEthernetCommunicationEnabledForTest"}, "33": {"inbytes": 0, "args": "(void)", "outbytes": 0, "name": "ConfirmSystemAvailability"}, "32": {"inbytes": 0, "args": "(nn::sf::Out<nn::nifm::TelemetryInfo,void>)", "outbytes": 0, "buffers": [22], "name": "GetTelemetryInfo"} }, "nn::usb::pm::IPmService": { "1": {"inbytes": 0, "outbytes": 0, "buffers": [6]}, "0": {"inbytes": 0, "outhandles": [1], "outbytes": 0}, "3": {"inbytes": 0, "outbytes": 4}, "2": {"inbytes": 0, "outhandles": [1], "outbytes": 0}, "5": {"inbytes": 4, "outbytes": 4}, "4": {"inbytes": 8, "outbytes": 0} }, "nn::fgm::sf::ISession": { "0": {"inbytes": 0, "args": "(nn::sf::Out<nn::sf::SharedPointer<nn::fgm::sf::IRequest>,void>)", "outbytes": 0, "name": "Initialize", "outinterfaces": ["nn::fgm::sf::IRequest"]} }, "nn::es::IETicketService": { "11": {"inbytes": 0, "outbytes": 4, "buffers": [6]}, "10": {"inbytes": 0, "outbytes": 4}, "13": {"inbytes": 0, "outbytes": 4, "buffers": [6, 5]}, "12": {"inbytes": 0, "outbytes": 4, "buffers": [6]}, "15": {"inbytes": 16, "outbytes": 8}, "14": {"inbytes": 16, "outbytes": 8}, "17": {"inbytes": 16, "outbytes": 8, "buffers": [6]}, "16": {"inbytes": 16, "outbytes": 8, "buffers": [6]}, "19": {"inbytes": 0, "outbytes": 4, "buffers": [6, 5]}, "18": {"inbytes": 0, "outbytes": 0, "buffers": [6, 5]}, "21": {"inbytes": 0, "outbytes": 0, "buffers": [22, 22, 5]}, "1": {"inbytes": 0, "outbytes": 0, "buffers": [5, 5]}, "3": {"inbytes": 0, "outbytes": 0, "buffers": [5]}, "2": {"inbytes": 0, "outbytes": 0, "buffers": [5]}, "5": {"inbytes": 0, "outbytes": 0}, "4": {"inbytes": 4, "outbytes": 0}, "7": {"inbytes": 0, "outbytes": 0, "buffers": [5]}, "6": {"inbytes": 0, "outbytes": 0}, "9": {"inbytes": 0, "outbytes": 4}, "20": {"inbytes": 16, "outbytes": 4, "buffers": [6]}, "8": {"inbytes": 20, "outbytes": 0, "buffers": [22]} }, "nn::lm::ILogger": { "1": {"inbytes": 4, "outbytes": 0}, "0": {"inbytes": 0, "outbytes": 0, "buffers": [33]} }, "nn::bcat::detail::ipc::IServiceCreator": { "1": {"outbytes": 0, "inbytes": 8, "args": "(nn::sf::Out<nn::sf::SharedPointer<nn::bcat::detail::ipc::IDeliveryCacheStorageService>,void>,unsigned long)", "pid": True, "outinterfaces": ["nn::bcat::detail::ipc::IDeliveryCacheStorageService"], "name": "CreateDeliveryCacheStorageService"}, "0": {"outbytes": 0, "inbytes": 8, "args": "(nn::sf::Out<nn::sf::SharedPointer<nn::bcat::detail::ipc::IBcatService>,void>,unsigned long)", "pid": True, "outinterfaces": ["nn::bcat::detail::ipc::IBcatService"], "name": "CreateBcatService"}, "2": {"inbytes": 8, "args": 
"(nn::sf::Out<nn::sf::SharedPointer<nn::bcat::detail::ipc::IDeliveryCacheStorageService>,void>,nn::ApplicationId)", "outbytes": 0, "name": "CreateDeliveryCacheStorageServiceWithApplicationId", "outinterfaces": ["nn::bcat::detail::ipc::IDeliveryCacheStorageService"]} }, "nn::pcv::detail::IPcvService": { "24": {"inbytes": 4, "args": "(nn::sf::OutArray<nn::pcv::ModuleState> const&,nn::sf::Out<int,void>,int)", "outbytes": 4, "buffers": [10], "name": "GetModuleStateTable"}, "25": {"inbytes": 4, "args": "(nn::sf::OutArray<nn::pcv::PowerDomainState> const&,nn::sf::Out<int,void>,int)", "outbytes": 4, "buffers": [10], "name": "GetPowerDomainStateTable"}, "26": {"inbytes": 4, "args": "(nn::sf::OutArray<unsigned int> const&,nn::sf::Out<int,void>,int)", "outbytes": 4, "buffers": [10], "name": "GetFuseInfo"}, "20": {"inbytes": 8, "args": "(nn::pcv::PowerControlTarget,int)", "outbytes": 0, "name": "ChangeVoltage"}, "21": {"inbytes": 0, "outhandles": [1], "outbytes": 0, "args": "(nn::sf::Out<nn::sf::NativeHandle,void>)", "name": "GetPowerClockInfoEvent"}, "22": {"inbytes": 0, "args": "(nn::sf::Out<unsigned int,void>)", "outbytes": 4, "name": "GetOscillatorClock"}, "23": {"inbytes": 8, "args": "(nn::sf::OutArray<unsigned int> const&,nn::sf::OutArray<int> const&,nn::sf::Out<int,void>,int,int)", "outbytes": 4, "buffers": [10, 10], "name": "GetDvfsTable"}, "1": {"inbytes": 8, "args": "(int,bool)", "outbytes": 0, "name": "SetClockEnabled"}, "0": {"inbytes": 8, "args": "(int,bool)", "outbytes": 0, "name": "SetPowerEnabled"}, "3": {"inbytes": 4, "args": "(nn::sf::Out<unsigned int,void>,int)", "outbytes": 4, "name": "GetClockRate"}, "2": {"inbytes": 8, "args": "(int,unsigned int)", "outbytes": 0, "name": "SetClockRate"}, "5": {"inbytes": 8, "args": "(nn::sf::Out<int,void>,nn::sf::OutArray<unsigned int> const&,nn::sf::Out<int,void>,int,int)", "outbytes": 8, "buffers": [10], "name": "GetPossibleClockRates"}, "4": {"inbytes": 4, "args": "(nn::sf::Out<nn::pcv::ModuleState,void>,int)", "outbytes": 12, "name": "GetState"}, "7": {"inbytes": 8, "args": "(int,bool)", "outbytes": 0, "name": "SetReset"}, "6": {"inbytes": 8, "args": "(int,unsigned int)", "outbytes": 0, "name": "SetMinVClockRate"}, "9": {"inbytes": 4, "args": "(nn::sf::Out<bool,void>,int)", "outbytes": 1, "name": "GetVoltageEnabled"}, "8": {"inbytes": 8, "args": "(int,bool)", "outbytes": 0, "name":
# -*- coding: UTF-8 -*- # Copyright 2011-2018 Rumma & Ko Ltd # License: BSD (see file COPYING for details) import six import sys from datetime import datetime from django.db.models import Q from django.conf import settings from django.db.models import Count from django.contrib.humanize.templatetags.humanize import naturaltime from lino_xl.lib.cal.utils import when_text from lino.api import dd, rt, _ from lino_xl.lib.tickets.ui import Tickets, Projects from lino.utils import ONE_DAY from etgen.html import E, join_elems from lino.utils.quantities import Duration from lino.modlib.system.choicelists import ObservedEvent from lino.mixins.periods import ObservedDateRange from lino_xl.lib.tickets.roles import Triager, TicketsStaff from lino_xl.lib.tickets.choicelists import ( TicketEvents, ProjectEvents, TicketStates) from .roles import Worker from .choicelists import ReportingTypes class TOTAL_KEY(object): pass MIN_DURATION = Duration('0:01') def ensureUtf(s): if sys.version_info < (3,): return unicode(s) else: return str(s) class TicketHasSessions(ObservedEvent): text = _("Has been worked on") def add_filter(self, qs, pv): if pv.start_date: qs = qs.filter(sessions_by_ticket__start_date__gte=pv.start_date) if pv.end_date: qs = qs.filter(sessions_by_ticket__start_date__lte=pv.end_date) qs = qs.annotate(num_sessions=Count('sessions_by_ticket')) qs = qs.filter(num_sessions__gt=0) return qs TicketEvents.add_item_instance(TicketHasSessions("working")) class ProjectHasSessions(ObservedEvent): text = _("Has been worked on") def add_filter(self, qs, pv): if pv.start_date: qs = qs.filter( tickets_by_project__sessions_by_ticket__start_date__gte= pv.start_date) if pv.end_date: qs = qs.filter( tickets_by_project__sessions_by_ticket__end_date__lte= pv.end_date) qs = qs.annotate(num_sessions=Count( 'tickets_by_project__sessions_by_ticket')) qs = qs.filter(num_sessions__gt=0) return qs ProjectEvents.add_item_instance(ProjectHasSessions("working")) class SessionTypes(dd.Table): required_roles = dd.login_required(dd.SiteStaff) model = 'working.SessionType' column_names = 'name *' class Sessions(dd.Table): required_roles = dd.login_required(Worker) model = 'working.Session' column_names = 'ticket user start_date start_time end_date end_time '\ 'break_time summary duration ticket_no *' detail_layout = """ ticket:40 user:20 faculty:20 reporting_type:10 start_date start_time end_date end_time break_time duration summary:60 is_fixing workflow_buttons:20 description """ insert_layout = """ ticket summary session_type """ order_by = ['-start_date', '-start_time', 'id'] # order_by = ['start_date', 'start_time'] # stay_in_grid = True parameters = ObservedDateRange( company=dd.ForeignKey( 'contacts.Company', null=True, blank=True), project=dd.ForeignKey( 'tickets.Project', null=True, blank=True), site=dd.ForeignKey( 'tickets.Site', null=True, blank=True), # ticket=dd.ForeignKey( # dd.plugins.working.ticket_model, null=True, blank=True), # user=dd.ForeignKey('users.User', null=True, blank=True), session_type=dd.ForeignKey( 'working.SessionType', null=True, blank=True), observed_event=dd.PeriodEvents.field( blank=True, default=dd.PeriodEvents.as_callable('active')), ) @classmethod def get_simple_parameters(cls): s = list(super(Sessions, cls).get_simple_parameters()) s += ['session_type', 'ticket'] return s params_layout = """ start_date end_date observed_event company project user session_type ticket site """ auto_fit_column_widths = True @classmethod def get_request_queryset(self, ar): qs = super(Sessions, 
self).get_request_queryset(ar) pv = ar.param_values ce = pv.observed_event if ce is not None: qs = ce.add_filter(qs, pv) if pv.project: qs = qs.filter(ticket__project__in=pv.project.whole_clan()) if pv.site: qs = qs.filter(ticket__site=pv.site) if pv.company: qs = qs.filter(ticket__site__company=pv.company) # if dd.is_installed('deploy'): # qs = qs.filter( # ticket__deployments_by_ticket__milestone__room__company=pv.company) # else: # qs = qs.filter(ticket__project__company=pv.company) return qs class SessionsByTicket(Sessions): master_key = 'ticket' column_names = 'start_date summary start_time end_time '\ 'break_time duration user is_fixing *' display_mode = 'summary' @classmethod def get_table_summary(self, obj, ar): if ar is None: return '' elems = [] # Button for starting a session from ticket sar = obj.start_session.request_from(ar) # if ar.renderer.is_interactive and sar.get_permission(): if sar.get_permission(): btn = sar.ar2button(obj) elems += [E.p(btn)] # Active sessions: active_sessions = [] session_summaries = E.ul() qs = rt.models.working.Session.objects.filter(ticket=obj) tot = Duration() for ses in qs: d = ses.get_duration() if d is not None: tot += d if ses.end_time is None: txt = "{0} since {1}".format(ses.user, ses.start_time) lnk = ar.obj2html(ses, txt) sar = ses.end_session.request_from(ar) if sar.get_permission(): lnk = E.span(lnk, " ", sar.ar2button(ses)) active_sessions.append(lnk) if ses.summary: session_summaries.insert(0, E.li( "%s %s: %s"%(ses.user, naturaltime(datetime.combine( ses.start_date, ses.start_time)) ,ses.summary) ) ) # elems.append(E.p(_("Total {0} hours.").format(tot))) elems.append(E.p(_("Total %s hours.") % tot)) if len(active_sessions) > 0: elems.append(E.p( ensureUtf(_("Active sessions")), ": ", *join_elems(active_sessions, ', '))) if len(session_summaries) > 0: elems.append(session_summaries) return ar.html_text(E.div(*elems)) class SessionsBySite(Sessions): master_key = 'ticket__site' column_names = 'start_date summary start_time end_time '\ 'break_time duration user is_fixing *' class MySessions(Sessions): column_names = 'start_date start_time end_time '\ 'break_time duration ticket_no ticket__site summary *' @classmethod def param_defaults(self, ar, **kw): kw = super(MySessions, self).param_defaults(ar, **kw) kw.update(user=ar.get_user()) return kw class MySessionsByDate(MySessions): order_by = ['start_date', 'start_time'] label = _("My sessions by date") column_names = ( 'start_time end_time break_time duration summary ticket ' 'workflow_buttons *') @classmethod def param_defaults(self, ar, **kw): kw = super(MySessionsByDate, self).param_defaults(ar, **kw) kw.update(start_date=dd.today()) kw.update(end_date=dd.today()) return kw @classmethod def create_instance(self, ar, **kw): kw.update(start_date=ar.param_values.start_date) return super(MySessions, self).create_instance(ar, **kw) def load_sessions(self, sar): self._root2tot = {} self._tickets = set() grand_tot = Duration() for ses in sar: self._tickets.add(ses.ticket) d = ses.get_duration() or MIN_DURATION grand_tot += d # root = ses.get_root_project() root = ses.get_reporting_type() # if ses.ticket: # root = ses.ticket.reporting_type # else: # root = None tot = self._root2tot.get(root, Duration()) + d self._root2tot[root] = tot self._root2tot[TOTAL_KEY] = grand_tot def compute_invested_time(obj, **spv): # spv = dict(start_date=pv.start_date, end_date=pv.end_date) spv.update(observed_event=dd.PeriodEvents.started) sar = SessionsByTicket.request(master_instance=obj, param_values=spv) tot = 
Duration() for obj in sar: d = obj.get_duration() if d is not None: tot += d return tot class InvestedTime(dd.Table): @dd.virtualfield(dd.DurationField(_("Time"))) def invested_time(cls, obj, ar): return obj._invested_time @dd.displayfield(_("Description")) def my_description(cls, obj, ar): mi = ar.master_instance if mi is None: return lst = [obj.summary] tpl = u"{0}: {1}" # if obj.site is not None and obj.site == mi.interesting_for: # lst.append(_("site-specific")) if obj.site is not None: # and obj.site != mi.interesting_for: lst.append(tpl.format( ensureUtf(_("Site")), ensureUtf(obj.site))) if obj.user is not None: lst.append(tpl.format( ensureUtf(_("Author")), ensureUtf(obj.user))) if obj.project is not None: lst.append(tpl.format( ensureUtf(_("Project")), ensureUtf(obj.project))) if obj.topic is not None: lst.append(tpl.format( ensureUtf(_("Topic")), ensureUtf(obj.topic))) return E.p(*join_elems(lst, '. ')) def rpttype2vf(func, rpttype, verbose_name): return dd.VirtualField(dd.DurationField(verbose_name), func) MySessionsByDate.column_names = ( 'start_time end_time break_time duration summary ticket ' 'ticket__site workflow_buttons *') from lino.core.tables import VentilatedColumns class WorkedHours(dd.VentilatingTable): """ A table showing one row per day with a summary view of the sesions on that day. """ required_roles = dd.login_required(Worker) label = _("Worked hours") hide_zero_rows = True parameters = ObservedDateRange( user=dd.ForeignKey('users.User', null=True, blank=True)) params_layout = "start_date end_date user" # editable = False auto_fit_column_widths = True class Row(object): def __init__(self, ar, day): self.day = day pv = dict(start_date=day, end_date=day) pv.update(observed_event=dd.PeriodEvents.started) pv.update(user=ar.param_values.user) self.sar = ar.spawn(MySessionsByDate, param_values=pv) load_sessions(self, self.sar) def __unicode__(self): return when_text(self.day) def __repr__(self): return when_text(self.day) @dd.displayfield(_("Description")) def description(self, obj, ar): # pv = dict(start_date=obj.day, end_date=obj.day) # pv.update(observed_event=dd.PeriodEvents.active) # pv.update(user=ar.param_values.user) # sar = ar.spawn(MySessionsByDate, param_values=pv) elems = [obj.sar.ar2button(label=six.text_type(obj))] tickets = [ ar.obj2html(t, "#{0}".format(t.id), title=t.summary) for t in obj._tickets] if len(tickets) > 0: elems.append(" (") elems += join_elems(tickets, ', ') elems.append(")") return E.span(*elems) @classmethod def get_data_rows(cls, ar): pv = ar.param_values start_date = pv.start_date or dd.today(-7) end_date = pv.end_date or dd.today(7) d = end_date while d > start_date: yield cls.Row(ar, d) d -= ONE_DAY @dd.displayfield("Date") def date(cls, row, ar): return dd.fdl(row.day) @classmethod def param_defaults(cls, ar, **kw): kw = super(WorkedHours, cls).param_defaults(ar, **kw) kw.update(start_date=dd.today(-7)) kw.update(end_date=dd.today()) kw.update(user=ar.get_user()) return kw @classmethod def get_ventilated_columns(cls): def w(rpttype, verbose_name): def func(fld, obj, ar): return obj._root2tot.get(rpttype, None) return dd.VirtualField(dd.DurationField(verbose_name), func) for rpttype in ReportingTypes.objects(): yield w(rpttype, six.text_type(rpttype)) # yield w(None, _("N/A")) yield w(TOTAL_KEY, _("Total")) @classmethod def unused_get_ventilated_columns(cls): Project = rt.models.tickets.Project def w(prj, verbose_name): # return a getter function for a RequestField on the given # EntryType. 
def func(fld, obj, ar): return obj._root2tot.get(prj, None) return dd.VirtualField(dd.DurationField(verbose_name), func) for p in Project.objects.filter(parent__isnull=True).order_by('ref'): yield w(p, six.text_type(p)) yield w(None, _("Total")) class DurationReport(VentilatedColumns): abstract = True @classmethod def get_ventilated_columns(cls): # yield the fields to be insered at {vcolumns} in template def w(rpttype, verbose_name): def func(fld, obj, ar): return obj._root2tot.get(rpttype, None) return dd.VirtualField(dd.DurationField(verbose_name), func) # def w(rpttype, verbose_name): # def func(fld, obj, ar): # if obj.get_reporting_type() == rpttype: # return obj.get_duration() # return None # return dd.VirtualField(dd.DurationField(verbose_name), func) for rpttype in ReportingTypes.objects(): yield w(rpttype, six.text_type(rpttype)) # yield w(None, _("N/A")) class SessionsByReport(Sessions, DurationReport): master = 'working.ServiceReport' column_names_template = "start_date start_time end_time break_time " \ "my_description:50 user {vcolumns}" order_by = ['start_date', 'start_time', 'id'] # @classmethod # def get_data_rows(cls, ar): # for ses in cls.get_request_queryset(ar): # load_sessions(ses, [ses]) # yield ses @classmethod def get_row_by_pk(cls, ar, pk): # fixes #2434 obj = super(SessionsByReport, cls).get_row_by_pk(ar, pk) if obj is not None: load_sessions(obj, [obj]) return obj @classmethod def get_request_queryset(self, ar): mi = ar.master_instance if mi is None: return spv = dict(start_date=mi.start_date, end_date=mi.end_date) # spv = mi.get_tickets_parameters() spv.update(company=mi.interesting_for) spv.update(observed_event=dd.PeriodEvents.started) spv.update(user=mi.user) ar.param_values.update(spv) qs = super(SessionsByReport, self).get_request_queryset(ar) for obj in qs: load_sessions(obj, [obj]) # obj._invested_time = compute_invested_time( # obj, start_date=mi.start_date, end_date=mi.end_date, # user=mi.user) if obj._root2tot.get(TOTAL_KEY): yield obj @dd.displayfield(_("Description")) def my_description(self, obj, ar): elems = [obj.summary] t = obj.ticket elems += [" ", ar.obj2html(t, "#{0}".format(t.id), title=t.summary)] return E.p(*elems) class TicketsByReport(Tickets, DurationReport): """The list of tickets mentioned in a service report.""" master = 'working.ServiceReport' # column_names = "summary id reporter project product site state # invested_time" column_names_template = "id overview site state {vcolumns}" order_by = ['id'] @classmethod def get_request_queryset(self, ar): mi = ar.master_instance if mi is None: return pv = ar.param_values pv.update(start_date=mi.start_date,
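Editor's note: in the module above, load_sessions accumulates one total per reporting type plus a grand total under TOTAL_KEY, which the ventilated columns of WorkedHours and the report tables then read back. Below is a simplified, framework-free sketch of that accumulation, with lino's Duration replaced by datetime.timedelta and the session objects treated as hypothetical stand-ins exposing get_duration() and get_reporting_type().

from collections import defaultdict
from datetime import timedelta

TOTAL_KEY = object()                 # sentinel key, mirroring the TOTAL_KEY class above
MIN_DURATION = timedelta(minutes=1)  # stands in for Duration('0:01')

def summarize_sessions(sessions):
    # Accumulate one total per reporting type plus a grand total,
    # mirroring what load_sessions stores in self._root2tot.
    totals = defaultdict(timedelta)
    for ses in sessions:
        d = ses.get_duration() or MIN_DURATION
        totals[ses.get_reporting_type()] += d
        totals[TOTAL_KEY] += d
    return dict(totals)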
<gh_stars>1-10 #!/usr/local/bin/python3 # Functions related to abdominal cavity segmentation import shutil import sys import subprocess import argparse import numpy as np from pathlib import Path import SimpleITK as sitk from cinemri.utils import get_patients from data_extraction import extract_frames, merge_frames from postprocessing import fill_in_holes from utils import patients_from_full_ids from data_extraction import save_frame FRAMES_FOLDER = "frames" MASKS_FOLDER = "masks" METADATA_FOLDER = "images_metadata" MERGED_MASKS_FOLDER = "merged_masks" container_input_dir = Path("/tmp/nnunet/input") container_output_dir = Path("/tmp/nnunet/output") def _patients_from_args(args): # Extract patients with includes studies and slices to run inference for if file with slices IDs is given if args.slices_filter: with open(args.slices_filter) as file: lines = file.readlines() full_ids = [line.strip() for line in lines] patients = patients_from_full_ids(full_ids) else: patients = None return patients def _find_segmented_frame_index(segmentation_path): """ Finds the index of the frame on cine-MRI slice for which segmentation was made. In the last version of annotations it can be any frame and only one frame per slice is annotated Parameters ---------- segmentation_path : Path A path to .mha file with annotation of abdominal cavity Returns ------- index : int The index of the segmented frame """ image = sitk.GetArrayFromImage(sitk.ReadImage(str(segmentation_path))) for index, frame in enumerate(image): # Find first frame with non zero values if len(np.nonzero(frame)[0]) > 0: return index # If not found return None return None def extract_segmentation_data(archive_path, destination_path, images_folder="images", segmentations_folder="cavity_segmentations", target_images_folder="images", target_segmentations_folder="masks"): """Extracts a subset of the archive only related to segmentation Parameters ---------- archive_path : Path A path to the full cine-MRI data archive destination_path : Path A path to save the extracted segmentation subset images_folder : str, default="images" A name of the images folder in the archive segmentations_folder : str, default="cavity_segmentations" A name of the folder with cavity segmentations in the archive target_images_folder : str, default="images" A name of the images folder in the segmentation subset target_segmentations_folder : str, default="masks" A name of the folder with cavity segmentations in the segmentation subset """ # create target paths and folders destination_path = Path(destination_path) destination_path.mkdir(exist_ok=True) target_images_path = destination_path / target_images_folder target_images_path.mkdir(exist_ok=True) target_segmentations_path = destination_path / target_segmentations_folder target_segmentations_path.mkdir(exist_ok=True) images_path = archive_path / images_folder segmentation_path = archive_path / segmentations_folder patients = get_patients(segmentation_path) for patient in patients: # Create patient folder in both images and masks target directories patient.build_path(target_images_path).mkdir(exist_ok=True) patient.build_path(target_segmentations_path).mkdir(exist_ok=True) for study in patient.studies: # Skip studies without slices if len(study.slices) == 0: continue # Create study folder in both images and masks target directories study_images_path = study.build_path(target_images_path) study_images_path.mkdir(exist_ok=True) study_segmentations_path = study.build_path(target_segmentations_path) 
study_segmentations_path.mkdir(exist_ok=True) for slice in study.slices: # read a segmentation mask, find the segmented frame index mask_path = slice.build_path(segmentation_path) segmented_frame_index = _find_segmented_frame_index(mask_path) if segmented_frame_index is not None: save_frame(mask_path, study_segmentations_path, segmented_frame_index, True) # read an image and extract the segmented frame slice_path = slice.build_path(images_path) save_frame(slice_path, study_images_path, segmented_frame_index) def extract_segmentation(argv): """A command line wrapper of extract_segmentation_data Parameters ---------- argv : list of str """ parser = argparse.ArgumentParser() parser.add_argument('archive_path', type=str, help="a path to the full cine-MRI data archive") parser.add_argument('destination_path', type=str, help="a path to save the extracted segmentation subset") parser.add_argument('--images', type=str, default="images", help="a name of the images folder in the archive") parser.add_argument('--masks', type=str, default="cavity_segmentations", help="a name of the folder with cavity segmentations in the archive") parser.add_argument('--target_images', type=str, default="images", help="a name of the images folder in the segmentation subset") parser.add_argument('--target_masks', type=str, default="masks", help="a name of the folder with cavity segmentations in the segmentation subset") args = parser.parse_args(argv) archive_path = Path(args.archive_path) destination_path = Path(args.destination_path) images_folder = args.images segmentations_folder = args.masks target_images_folder = args.target_images target_segmentations_folder = args.target_masks extract_segmentation_data(archive_path, destination_path, images_folder, segmentations_folder, target_images_folder, target_segmentations_folder) def extract_complete_segmentation_data(images_path, target_path, target_frames_folder=FRAMES_FOLDER, target_metadata_folder=METADATA_FOLDER, patients=None): """ Extracts frames for segmentation and the corresponding metadata from the whole cine-MRI archive Parameters ---------- images_path : Path A path to a folder with cine-MRI images target_path : Path A path where to save the extracted frames and metadata target_frames_folder : Path, default=FRAMES_FOLDER A subfolder of target_path to save the extracted frames target_metadata_folder : Path, default=METADATA_FOLDER A subfolder of target_path to save the metadata patients : list of Patient, optional A list of patients to extract the data for. 
If not provided the data are extracted for all patients at images_path """ target_path.mkdir() target_images_path = target_path / target_frames_folder target_images_path.mkdir() target_metadata_path = target_path / target_metadata_folder target_metadata_path.mkdir() patients = patients if patients is not None else get_patients(images_path) for patient in patients: for cinemri_slice in patient.cinemri_slices: slice_path = cinemri_slice.build_path(images_path) slice = sitk.ReadImage(str(slice_path)) extract_frames(slice, cinemri_slice.full_id, target_images_path, target_metadata_path) def extract_complete_data(argv): """A command line wrapper of extract_complete_segmentation_data Parameters ---------- argv: list of str Command line arguments """ parser = argparse.ArgumentParser() parser.add_argument("archive_path", type=str, help="a path to the full cine-MRI data archive") parser.add_argument("target_path", type=str, help="a path to save the extracted frames") parser.add_argument("--slices_filter", type=str, required=False, help="a path to a file with full id of slices " "which to extract the data for") args = parser.parse_args(argv) archive_path = Path(args.archive_path) target_path = Path(args.target_path) patients = _patients_from_args(args) extract_complete_segmentation_data(archive_path, target_path, patients=patients) def delete_folder_contents(folder): """ An auxiliary function to delete content of a folder Parameters ---------- folder : str A path string of a folder which content to delete """ for path in Path(folder).iterdir(): if path.is_file(): path.unlink() elif path.is_dir(): shutil.rmtree(path) def _predict_and_save(nnUNet_model_path, nnUNet_input_dir, nnUNet_output_dir, output_path, network, task_id, folds=None): """ Runs inference with nn-UNet on the subset of input files and copies prediction to the specified location. When prediction is copies, nn-UNet input and output folder are emptied. Parameters ---------- nnUNet_model_path : Path A path to nn-UNet model to run inference with nnUNet_input_dir : Path A path to a folder that contains images to run inference for nnUNet_output_dir : Path A path to a folder where to save nn-UNet prediction output_path : Path A path to a folder where to copy nn-UNet prediction network : str A type of nnU-Net network task_id : str An id of a task for nnU-Net folds : str, optional A string, specifying which folds to use for prediction, e.g "0,1,2" """ cmd = [ "nnunet", "predict", task_id, "--results", str(nnUNet_model_path), "--input", str(nnUNet_input_dir), "--output", str(nnUNet_output_dir), "--network", network ] print("First cmd {}".format(cmd)) if folds: print("Adding folds") cmd.append('--folds') cmd.append(folds) print("Second cmd {}".format(cmd)) subprocess.check_call(cmd) masks_files = container_output_dir.glob("*.nii.gz") for mask_path in masks_files: print("Saving a mask for {}".format(mask_path.name)) shutil.copyfile(mask_path, output_path / mask_path.name) delete_folder_contents(nnUNet_input_dir) delete_folder_contents(nnUNet_output_dir) # Currently shutil.copy() does not work on cluster because it is not allowed to change permissions of a file on a mount # The workaround is to split input files in batches, run inference for each batch and then copy the prediction # for a batch to Chansey. 
This way it less likely to lose the results if a job gets interrupted def segment_abdominal_cavity(nnUNet_model_path, input_path, output_path, task_id="Task101_AbdomenSegmentation", network="2d", batch_size=1000, folds=None): """Runs inference of segmentation with the saved nnU-Net model Parameters ---------- nnUNet_model_path : Path A path to the "results" folder generated during nnU-Net training input_path : Path A path to a folder that contain the images to run inference for output_path : Path A path to a folder where to save the predicted segmentation task_id : str, default="Task101_AbdomenSegmentation" An id of a task for nnU-Net network : str, default="2d" A type of nnU-Net network batch_size : int, default=1000 A number of images in a batch for a single prediction iteration folds : str, optional A string, specifying which folds to use for prediction, e.g "0,1,2" """ # Create temporary input and output folders for nn-UNet inside the container container_input_dir.mkdir(exist_ok=True, parents=True) container_output_dir.mkdir(exist_ok=True, parents=True) output_path.mkdir(parents=True, exist_ok=True) print("Segmenting inspiration and expiration frames with nnU-Net") input_files = input_path.glob("*.nii.gz") for (index, input_frame_path) in enumerate(input_files): shutil.copy(input_frame_path, container_input_dir) if index > 0 and index % batch_size == 0: _predict_and_save(nnUNet_model_path, container_input_dir, container_output_dir, output_path, network, task_id, folds) # Process remaining images _predict_and_save(nnUNet_model_path, container_input_dir, container_output_dir, output_path, network, task_id, folds) print("Running post processing") fill_in_holes(output_path) def segment(argv): """A command line wrapper of segment_abdominal_cavity Parameters ---------- argv: list of str Command line arguments """ parser = argparse.ArgumentParser() parser.add_argument("input", type=str, help="a path to the folder which contains a nnUNet input") parser.add_argument('--output', type=str, help="a path to the folder to save a nnUNet output") parser.add_argument("--nnUNet_results", type=str, required=True, help="a path to the \"results\" folder generated during nnU-Net training") parser.add_argument("--task", type=str, default="Task101_AbdomenSegmentation", help="an id of a task for nnU-Net") parser.add_argument('--folds', type=str, required=False, help="folds which use for prediction") args = parser.parse_args(argv) input_path = Path(args.input) output_path = Path(args.output) nnUNet_model_path = Path(args.nnUNet_results) task_id
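Editor's note: the comment above segment_abdominal_cavity explains that input frames are staged into the container in batches and each batch's predictions are copied out before the next batch starts, so an interrupted cluster job keeps its partial results. A generic, hypothetical sketch of that chunking loop follows; iter_batches and predict_batch are illustrative names, not part of the module's API.

from itertools import islice

def iter_batches(paths, batch_size):
    # Yield lists of at most batch_size paths at a time.
    it = iter(paths)
    while True:
        chunk = list(islice(it, batch_size))
        if not chunk:
            return
        yield chunk

def run_batched(input_paths, batch_size, predict_batch):
    # predict_batch is a hypothetical callable: stage the chunk, run inference,
    # copy the resulting masks out, then clear the temporary folders.
    for chunk in iter_batches(input_paths, batch_size):
        predict_batch(chunk)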
""" Developed by ThaumicMekanism [<NAME>.] - all credit goes to him! """ import contextlib import sys from typing import Callable, List from tqdm.contrib import DummyTqdmFile import examtool.api.download from examtool.api.gradescope_upload import APIClient from examtool.api.extract_questions import ( extract_groups, extract_questions, extract_public, ) from fullGSapi.api.client import GradescopeClient from fullGSapi.api.assignment_grader import ( GS_Crop_info, GS_Outline, GS_assignment_Grader, GS_Outline_Question, GS_Question, GroupTypes, RubricItem, QuestionRubric, ) import os import time from tqdm import tqdm def_tqdm_args = {"dynamic_ncols": True} @contextlib.contextmanager def std_out_err_redirect_tqdm(): orig_out_err = sys.stdout, sys.stderr try: sys.stdout, sys.stderr = map(DummyTqdmFile, orig_out_err) yield orig_out_err[0] # Relay exceptions except Exception as exc: raise exc # Always restore sys.stdout/err if necessary finally: sys.stdout, sys.stderr = orig_out_err class GradescopeGrader: def __init__( self, email: str = None, password: str = None, gs_client: GradescopeClient = None, gs_api_client: APIClient = None, ): print(f"Setting up the Gradescope Grader...") if gs_client is None: gs_client = GradescopeClient() if gs_api_client is None: gs_api_client = APIClient() if (not email or not password) and ( not gs_client.is_logged_in() or not gs_api_client.is_logged_in() ): raise ValueError( "You must supply the username and password if you are not already logged into the passed in clients!" ) self.gs_client = gs_client self.gs_api_client = gs_api_client if email and password: if not gs_client.is_logged_in(): print(f"Logging into the normal Gradescope API...") self.gs_client.log_in(email, password) if not self.gs_api_client.is_logged_in(): print(f"Logging into the full Gradescope API...") self.gs_api_client.log_in(email, password) print(f"Finished setting up the Gradescope Grader") def main( self, exams: [str], out: str, name_question_id: str, sid_question_id: str, gs_class_id: str, gs_assignment_id: str = None, # If none, we will create a class. gs_assignment_title: str = "Examtool Exam", emails: [str] = None, blacklist_emails: [str] = None, email_mutation_list: {str: str} = {}, question_numbers: [str] = None, blacklist_question_numbers: [str] = None, custom_grouper_map: { str: Callable[[str, GS_Question, dict, dict], "QuestionGrouper"] } = None, ): if gs_assignment_title is None: gs_assignment_title = "Examtool Exam" if not exams: raise ValueError( "You must specify at least one exam you would like to upload!" ) out = out or "out/export/" + exams[0] exam_json, email_to_data_map = self.fetch_and_export_examtool_exam_data( exams, out, name_question_id, sid_question_id, emails=emails, email_mutation_list=email_mutation_list, ) # Remove blacklisted emails if blacklist_emails is not None: for bemail in blacklist_emails: email_to_data_map.pop(bemail, None) # Create assignment if one is not already created. if gs_assignment_id is None: print("Creating the gradescope assignment...") outline_path = f"{out}/OUTLINE.pdf" gs_assignment_id = self.create_assignment( gs_class_id, gs_assignment_title, outline_path ) if not gs_assignment_id: raise ValueError( "Did not receive a valid assignment id. Did assignment creation fail?" 
) print(f"Created gradescope assignment with id {gs_assignment_id}!") else: print(f"Using assignment ({gs_assignment_id}) which was already created!") # Lets now get the assignment grader grader: GS_assignment_Grader = self.get_assignment_grader( gs_class_id, gs_assignment_id ) # Now that we have the assignment and outline pdf, lets generate the outline. print("Generating the examtool outline...") examtool_outline = ExamtoolOutline( grader, exam_json, [name_question_id, sid_question_id] ) # Finally we need to upload and sync the outline. print("Uploading the generated outline...") self.upload_outline(grader, examtool_outline) # We can now upload the student submission since we have an outline print("Uploading student submissions...") failed_uploads = self.upload_student_submissions( out, gs_class_id, gs_assignment_id, emails=email_to_data_map.keys() ) # Removing emails which failed to upload if failed_uploads: print( f"Removing emails which failed to upload. Note: These will NOT be graded! {failed_uploads}" ) for email in tqdm(failed_uploads, **def_tqdm_args): email_to_data_map.pop(email) # For each question, group, add rubric and grade print("Setting the grade type for grouping for each question...") gs_outline = examtool_outline.get_gs_outline() self.set_group_types(gs_outline) # Fetch the student email to question id map print("Fetching the student email to submission id's mapping...") email_to_question_sub_id = grader.email_to_qids() # Check to see which emails may not be in the Gradescope roster and attempt to correct self.attempt_fix_unknown_gs_email( email_to_question_sub_id, email_to_data_map, name_question_id=name_question_id, sid_question_id=sid_question_id, ) # Finally we can process each question print("Grouping and grading questions...") for qid, question in tqdm( list(gs_outline.questions_iterator()), desc="Questions Graded", unit="Question", **def_tqdm_args, ): if ( question_numbers is not None and qid not in question_numbers or blacklist_question_numbers is not None and qid in blacklist_question_numbers ): tqdm.write(f"[{qid}]: Skipping!") continue tqdm.write(f"[{qid}]: Processing question...") try: self.process_question( qid, question.get_gs_question(), email_to_data_map, email_to_question_sub_id, name_question_id, sid_question_id, custom_grouper_map, ) except Exception as e: import traceback traceback.print_exc(file=tqdm) tqdm.write(str(e)) def add_additional_exams( self, exams: [str], out: str, name_question_id: str, sid_question_id: str, gs_class_id: str, gs_assignment_id: str, emails: [str] = None, blacklist_emails: [str] = None, email_mutation_list: {str: str} = {}, question_numbers: [str] = None, blacklist_question_numbers: [str] = None, custom_grouper_map: { str: Callable[[str, GS_Question, dict, dict], "QuestionGrouper"] } = None, ): """ If emails is None, we will import the entire exam, if it has emails in it, it will only upload submissions from the students in the emails list contained in the exams list. If the student has submissions in multiple exams, the tool will warn you and ask which exam you would like to use as the student submission. """ if not exams: raise ValueError( "You must specify at least one exam you would like to upload!" 
) if email_mutation_list is None: email_mutation_list = {} out = out or "out/export/" + exams[0] exam_json, email_to_data_map = self.fetch_and_export_examtool_exam_data( exams, out, name_question_id, sid_question_id, emails=emails, email_mutation_list=email_mutation_list, ) # Remove blacklisted emails if blacklist_emails is not None: for bemail in blacklist_emails: email_to_data_map.pop(bemail, None) # Lets now get the assignment grader grader: GS_assignment_Grader = self.get_assignment_grader( gs_class_id, gs_assignment_id ) # Now that we have the assignment and outline pdf, lets generate the outline. print("Generating the examtool outline...") examtool_outline = ExamtoolOutline( grader, exam_json, [name_question_id, sid_question_id] ) # Merge the outline with the existing one outline = grader.get_outline() if not outline: raise ValueError("Failed to fetch the existing outline") examtool_outline.merge_gs_outline_ids(outline) # We can now upload the student submission since we have an outline print("Uploading student submissions...") failed_uploads = self.upload_student_submissions( out, gs_class_id, gs_assignment_id, emails=email_to_data_map.keys() ) # Removing emails which failed to upload if failed_uploads: print( f"Removing emails which failed to upload. Note: These will NOT be graded! {failed_uploads}" ) for email in failed_uploads: email_to_data_map.pop(email) # Fetch the student email to question id map print("Fetching the student email to submission id's mapping...") email_to_question_sub_id = grader.email_to_qids() # Check to see which emails may not be in the Gradescope roster and attempt to correct self.attempt_fix_unknown_gs_email( email_to_question_sub_id, email_to_data_map, name_question_id=name_question_id, sid_question_id=sid_question_id, ) # Finally we can process each question print("Grouping and grading questions...") gs_outline = examtool_outline.get_gs_outline() for qid, question in tqdm( list(gs_outline.questions_iterator()), desc="Questions Graded", unit="Question", **def_tqdm_args, ): if ( question_numbers is not None and qid not in question_numbers or blacklist_question_numbers is not None and qid in blacklist_question_numbers ): tqdm.write(f"[{qid}]: Skipping!") continue tqdm.write(f"[{qid}]: Processing question...") try: self.process_question( qid, question.get_gs_question(), email_to_data_map, email_to_question_sub_id, name_question_id, sid_question_id, custom_grouper_map, ) except Exception as e: import traceback traceback.print_exc(file=tqdm) tqdm.write(str(e)) def fetch_and_export_examtool_exam_data( self, exams: [str], out: str, name_question_id: str, sid_question_id: str, emails: [str] = None, email_mutation_list: {str: str} = {}, ): """ Fetches the submissions from the exams in the exams list. If the emails list is None, it will fetch all emails, if it has emails in it, it will only return data for those emails. The mutation step occurres after the specific emails selection stage if applicable. The mutation list comes in the form of current email to new email. Returns: exam_json - The json of the exam email_to_data_map - the mapping of emails to their data. """ if not exams: raise ValueError( "You must specify at least one exam you would like to upload!" 
) if email_mutation_list is None: email_mutation_list = {} print("Downloading exams data...") exam_json = None email_to_data_map = {} email_to_exam_map = {} first_exam = True for exam in exams: tmp_exam_json, tmp_template_questions, tmp_email_to_data_map, tmp_total = examtool.api.download.download( exam ) # Choose only the emails we want to keep. if emails: for email in list(tmp_email_to_data_map.keys()): if email not in emails: tmp_email_to_data_map.pop(email, None) # Next, we want to mutate any emails for orig_email, new_email in email_mutation_list.items(): if orig_email not in tmp_email_to_data_map: print( f"WARNING: Could not perform mutation on email {orig_email} (to {new_email}) because it does not exist in the data map!" ) continue if new_email in tmp_email_to_data_map: print( f"Could not mutate email {new_email} (from {orig_email}) as the original email is already in the data map!" ) continue tmp_email_to_data_map[new_email] = tmp_email_to_data_map.pop(orig_email) # Finally, we should merge together the student responses. for email, data in tmp_email_to_data_map.items(): if email in email_to_data_map: print( f"WARNING: Student with email {email} submitted
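Editor's note: fetch_and_export_examtool_exam_data above first restricts the downloaded responses to the requested emails and then applies the email mutation list, warning and skipping when a mutation's source email is missing or its target already exists. A dictionary-only sketch of that filter-then-rename step follows; select_and_mutate is an illustrative name, not the grader's API.

def select_and_mutate(email_to_data, emails=None, mutations=None):
    # Keep only the requested emails, if a selection was given.
    if emails is not None:
        email_to_data = {e: d for e, d in email_to_data.items() if e in emails}
    # Rename keys per the mutation list, skipping the same cases the code above warns about.
    for old_email, new_email in (mutations or {}).items():
        if old_email not in email_to_data or new_email in email_to_data:
            continue
        email_to_data[new_email] = email_to_data.pop(old_email)
    return email_to_data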
<reponame>renaudll/maya-mock """ Session class which hold informations about current nodes, ports and connections. """ import collections import itertools import logging import re import string import six from maya_mock.base import naming from maya_mock.base.connection import MockedConnection from maya_mock.base.constants import ( SHAPE_CLASS, DEFAULT_PREFIX_BY_SHAPE_TYPE, CONVERSION_FACTOR_BY_TYPE, IMPOSSIBLE_CONNECTIONS, ) from maya_mock.base.naming import ( pattern_to_regex, conform_node_name, is_valid_node_name, ) from maya_mock.base.node import MockedNode from maya_mock.base.port import MockedPort from maya_mock.base.schema import MockedSessionSchema from maya_mock.base.signal import Signal LOG = logging.getLogger(__name__) class MockedSession( collections.MutableMapping ): # pylint: disable=too-many-public-methods """ Collection of nodes, ports and connections. :param schema: The schema to use for the session. Optional :type schema: maya_mock.MockedSessionSchema or None """ onNodeAdded = Signal(MockedNode) onNodeRemoved = Signal(MockedNode) onPortAdded = Signal(MockedPort) onPortRemoved = Signal(MockedPort) onConnectionAdded = Signal(MockedConnection) onConnectionRemoved = Signal(MockedConnection) def __init__(self, schema=None): super(MockedSession, self).__init__() self.nodes = set() self.namespaces = set() self.ports = set() self.connections = set() self.selection = set() self.ports_by_node = collections.defaultdict(set) self.schema = schema if schema: if not isinstance(schema, MockedSessionSchema): raise ValueError("Unexpected schema type for %s" % schema) for name, type_ in schema.default_state.items(): self.create_node(type_, name) def __str__(self): return "<MockedSession %s nodes>" % len(self) def __iter__(self): return iter(self.nodes) def __len__(self): return len(self.nodes) def __getitem__(self, item): port = self.get_port_by_match(item) if port: return port node = self.get_node_by_match(item) if node: return node raise KeyError("Found no node or port matching %r" % item) def __setitem__(self, key, value): raise NotImplementedError def __delitem__(self, key): raise NotImplementedError # --- Public methods def node_exist(self, dagpath): """ Determine if dagpath match an existing node or not. :param str dagpath: A dagpath to match. :param parent: If provided, the dagpath will be resolved relative to this node. :type parent: MockedNode or None :return: True if the dagpath match an existing object. False otherwise. :rtype: bool """ return bool(self.get_node_by_match(dagpath, strict=False)) def _unique_name(self, prefix, parent=None): """ Resolve a unique name from a provided base suffix. :param str prefix: :param MockedNode parent: :return: """ counter = itertools.count(1) while True: name = "{}{}".format(prefix, next(counter)) dagpath = naming.join(parent.dagpath, name) if parent else "|" + name if is_valid_node_name(name) and not self.node_exist(dagpath): return name def get_node_by_name(self, name): """ Retrieve a node by it's name. Note that multiple nodes can have the same name. :param str name: The name of the MockedNode. :return: A node or None if no match was found. :rtype: MockedNode or None """ for node in self.nodes: if node.name == name: return node return None def get_nodes_by_match(self, pattern, strict=True): """ :param str pattern: The pattern to match. :param bool strict: If True, a ValueError will be raised if no node are found. :return: A node or None if no match was found. 
:rtype: MockedNode or None :raise ValueError: If no node are found matching provided pattern AND strict is True. """ result = sorted(self.iter_node_by_match(pattern)) if strict and not result: raise ValueError("No object matches name: {}".format(pattern)) return result def is_pattern_clashing(self, node, pattern): """ Determine if the MEL repr of a node clash with another node dagpath. :param node: :param pattern: :return: True if provided pattern match with another node that provided. False otherwise. :rtype: bool """ matches = self.iter_node_by_match(pattern) for guess in matches: if guess is not node: return True return False def get_node_by_match(self, pattern, **kwargs): """ Retrieve a node by matching it' against a provided pattern. Note that multiple nodes can match the same pattern. """ return next(iter(self.get_nodes_by_match(pattern, **kwargs)), None) def iter_node_by_match(self, pattern): """ Yield all the node which dagpath match the provided pattern. :param pattern: The node we want the MEL representation. :return: A node generator :rtype: Generator[MockedNode] """ regex = pattern_to_regex(pattern) for node in self.nodes: if re.match(regex, node.dagpath): yield node def get_port_by_match(self, pattern): """ Retrieve a port by matching it against a provided pattern. Note that multiple ports can match a same pattern. :param str pattern: The pattern to match. :return: A port or None if no match was found. :rtype: MockedPort or None """ for port in self.ports: if port.match(pattern): return port return None def get_connection_by_ports(self, src, dst): """ Get an existing connection from two ports :param MockedPort src: The source port :param MockedPort dst: The destination port :return: An existing connection. None otherwise. :rtype: MockedConnection or None """ return next( (conn for conn in self.connections if conn.src is src and conn.dst is dst), None, ) @staticmethod def warning(msg): """ Print a warning message. Similar to cmds.warning :param str msg: The message to display """ print("Warning: %s" % msg) def create_node(self, node_type, name=None, parent=None, emit=True): """ Create a new node in the scene. :param str node_type: The type of the node. :param str name: The name of the node. :param parent: The parent of the node if applicable. :type parent: MockedNode or None :param bool emit: If True, the `onPortAdded` signal will be emitted. :return: The created node :rtype: MockedNode """ is_shape_type = node_type in SHAPE_CLASS # Validate if the provided name if any is valid. # Remove any characters from name. # Start by removing any invalid characters from the name. if name: name_conformed = conform_node_name(name) if name != name_conformed: self.warning("Removing invalid characters from name.") name = name_conformed # Handle the case where the resulting name is empty # which can happen if invalid characters are found. # ex: cmds.createNode('transform', name='0') if name == "": raise RuntimeError(u"New name has no legal characters.\n") # If name is not provided, we'll name the object automatically if not name: # If node is a shape, add 'Shape' before the number. if node_type in SHAPE_CLASS: name = "%sShape" % DEFAULT_PREFIX_BY_SHAPE_TYPE.get( node_type, node_type ) # Otherwise, name the node against it's type. else: name = node_type name = self._unique_name(name, parent=parent) else: # Next, if the name is invalid or clash with another node dagpath, # we'll need to add a number suffix. 
dagpath = "%s|%s" % (parent.dagpath, name) if parent else "|" + name if not is_valid_node_name(name) or self.node_exist(dagpath): name = name.rstrip(string.digits) name = self._unique_name(name, parent=parent) # If we are sure that we can create the node and it is a shape, create it's transform first. if is_shape_type: transform_name_prefix = DEFAULT_PREFIX_BY_SHAPE_TYPE.get( node_type, node_type ) transform_name = self._unique_name(transform_name_prefix) parent = self.create_node("transform", name=transform_name) node = MockedNode(self, node_type, name, parent=parent) if emit: signal = self.onNodeAdded LOG.debug("%s emitted with %s", signal, node) signal.emit(node) self.nodes.add(node) # Add port from configuration if needed if self.schema: node_def = self.schema.get(node_type) if node_def: node_def.apply(self, node) return node def remove_node(self, node, emit=True): """ Remove a node from the graph. :param node: :param bool emit: If True, the `onPortAdded` signal will be emitted. """ # Remove any port that where used by the node. ports = [port for port in self.ports if port.node is node] for port in ports: self.remove_port(port, emit=emit) if emit: self.onNodeRemoved.emit(node) self.nodes.remove(node) def create_port(self, node, name, emit=True, **kwargs): """ Create a new port in the scene. :param MockedNode node: The port parent node :param name: The name of the port :return: The create port :rtype: MockedPort """ port = MockedPort(node, name, **kwargs) self.ports_by_node[node].add(port) self.ports.add(port) if emit: self.onPortAdded.emit(port) return port def remove_port(self, port, emit=True): """ Remove a port from the graph. :param port: :param emit: :return: """ # Remove any connection that used the port connections = [ conn for conn in self.connections if conn.src is port or conn.dst is port ] for conn in connections: self.remove_connection(conn, emit=emit) if emit: self.onPortRemoved.emit(port) self.ports.remove(port) def remove_node_port(self, node, name, emit=True): """ :param MockedNode node: :param name: :param emit: :return: """ port = self.get_node_port_by_name(node, name) self.remove_port(port, emit=emit) def create_connection(self, src, dst, emit=True): """ Create a new connection in the scene. :param MockedPort src: The connection source port. :param MockedPort dst: The connection destination port. :param bool emit: If True, the `onConnectionAdded` signal will be emitted. :return: A connection :rtype: MockedConnection """ # TODO: What do we return if multiple connections are created? assert src.type assert dst.type key = src.type, dst.type if key in IMPOSSIBLE_CONNECTIONS: msg = "The attribute %r cannot be connected to %r." % ( src.dagpath, dst.dagpath, ) raise RuntimeError(msg) # When connecting some port types together, Maya can create a unitConversion node. factor = CONVERSION_FACTOR_BY_TYPE.get((src.type, dst.type)) if factor: node_conversion
expected_new_tensor[2, 0:2, :] = tensor[2, 1:3, :] assert_array_almost_equal(new_tensor.data.numpy(), expected_new_tensor.data.numpy()) expected_new_mask = torch.from_numpy(numpy.array([[0, 0, 0], [1, 1, 1], [1, 1, 0]])).bool() assert (new_mask.data.numpy() == expected_new_mask.data.numpy()).all() def test_add_positional_features(self): # This is hard to test, so we check that we get the same result as the # original tensorflow implementation: # https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/layers/common_attention.py#L270 tensor2tensor_result = numpy.asarray( [ [0.00000000e00, 0.00000000e00, 1.00000000e00, 1.00000000e00], [8.41470957e-01, 9.99999902e-05, 5.40302277e-01, 1.00000000e00], [9.09297407e-01, 1.99999980e-04, -4.16146845e-01, 1.00000000e00], ] ) tensor = torch.zeros([2, 3, 4]) result = util.add_positional_features(tensor, min_timescale=1.0, max_timescale=1.0e4) numpy.testing.assert_almost_equal(result[0].detach().cpu().numpy(), tensor2tensor_result) numpy.testing.assert_almost_equal(result[1].detach().cpu().numpy(), tensor2tensor_result) # Check case with odd number of dimensions. tensor2tensor_result = numpy.asarray( [ [ 0.00000000e00, 0.00000000e00, 0.00000000e00, 1.00000000e00, 1.00000000e00, 1.00000000e00, 0.00000000e00, ], [ 8.41470957e-01, 9.99983307e-03, 9.99999902e-05, 5.40302277e-01, 9.99949992e-01, 1.00000000e00, 0.00000000e00, ], [ 9.09297407e-01, 1.99986659e-02, 1.99999980e-04, -4.16146815e-01, 9.99800026e-01, 1.00000000e00, 0.00000000e00, ], ] ) tensor = torch.zeros([2, 3, 7]) result = util.add_positional_features(tensor, min_timescale=1.0, max_timescale=1.0e4) numpy.testing.assert_almost_equal(result[0].detach().cpu().numpy(), tensor2tensor_result) numpy.testing.assert_almost_equal(result[1].detach().cpu().numpy(), tensor2tensor_result) def test_combine_tensors_and_multiply(self): tensors = [torch.Tensor([[[2, 3]]]), torch.Tensor([[[5, 5]]])] weight = torch.Tensor([4, 5]) combination = "x" assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight), [[8 + 15]] ) combination = "y" assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight), [[20 + 25]] ) combination = "x,y" weight2 = torch.Tensor([4, 5, 4, 5]) assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight2), [[8 + 20 + 15 + 25]] ) combination = "x-y" assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight), [[-3 * 4 + -2 * 5]] ) combination = "y-x" assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight), [[3 * 4 + 2 * 5]] ) combination = "y+x" assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight), [[7 * 4 + 8 * 5]] ) combination = "y*x" assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight), [[10 * 4 + 15 * 5]] ) combination = "y/x" assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight), [[(5 / 2) * 4 + (5 / 3) * 5]], decimal=4, ) combination = "x/y" assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight), [[(2 / 5) * 4 + (3 / 5) * 5]], decimal=4, ) with pytest.raises(ConfigurationError): util.combine_tensors_and_multiply("x+y+y", tensors, weight) with pytest.raises(ConfigurationError): util.combine_tensors_and_multiply("x%y", tensors, weight) def test_combine_tensors_and_multiply_with_same_batch_size_and_embedding_dim(self): # This test just makes sure we handle some potential edge cases where the lengths of all # dimensions are the 
same, making sure that the multiplication with the weight vector # happens along the right dimension (it should be the last one). tensors = [torch.Tensor([[[5, 5], [4, 4]], [[2, 3], [1, 1]]])] # (2, 2, 2) weight = torch.Tensor([4, 5]) # (2,) combination = "x" assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight), [[20 + 25, 16 + 20], [8 + 15, 4 + 5]], ) tensors = [ torch.Tensor([[[5, 5], [2, 2]], [[4, 4], [3, 3]]]), torch.Tensor([[[2, 3]], [[1, 1]]]), ] weight = torch.Tensor([4, 5]) combination = "x*y" assert_almost_equal( util.combine_tensors_and_multiply(combination, tensors, weight), [ [5 * 2 * 4 + 5 * 3 * 5, 2 * 2 * 4 + 2 * 3 * 5], [4 * 1 * 4 + 4 * 1 * 5, 3 * 1 * 4 + 3 * 1 * 5], ], ) def test_combine_tensors_and_multiply_with_batch_size_one(self): seq_len_1 = 10 seq_len_2 = 5 embedding_dim = 8 combination = "x,y,x*y" t1 = torch.randn(1, seq_len_1, embedding_dim) t2 = torch.randn(1, seq_len_2, embedding_dim) combined_dim = util.get_combined_dim(combination, [embedding_dim, embedding_dim]) weight = torch.Tensor(combined_dim) result = util.combine_tensors_and_multiply( combination, [t1.unsqueeze(2), t2.unsqueeze(1)], weight ) assert_almost_equal(result.size(), [1, seq_len_1, seq_len_2]) def test_combine_tensors_and_multiply_with_batch_size_one_and_seq_len_one(self): seq_len_1 = 10 seq_len_2 = 1 embedding_dim = 8 combination = "x,y,x*y" t1 = torch.randn(1, seq_len_1, embedding_dim) t2 = torch.randn(1, seq_len_2, embedding_dim) combined_dim = util.get_combined_dim(combination, [embedding_dim, embedding_dim]) weight = torch.Tensor(combined_dim) result = util.combine_tensors_and_multiply( combination, [t1.unsqueeze(2), t2.unsqueeze(1)], weight ) assert_almost_equal(result.size(), [1, seq_len_1, seq_len_2]) def test_has_tensor(self): has_tensor = util.has_tensor tensor = torch.tensor([1, 2, 3]) assert has_tensor(["a", 10, tensor]) assert not has_tensor(["a", 10]) assert has_tensor(("a", 10, tensor)) assert not has_tensor(("a", 10)) assert has_tensor({"a": tensor, "b": 1}) assert not has_tensor({"a": 10, "b": 1}) assert has_tensor(tensor) assert not has_tensor(3) assert has_tensor({"x": [0, {"inside": {"double_inside": [3, [10, tensor]]}}]}) def test_combine_initial_dims(self): tensor = torch.randn(4, 10, 20, 17, 5) tensor2d = util.combine_initial_dims(tensor) assert list(tensor2d.size()) == [4 * 10 * 20 * 17, 5] def test_uncombine_initial_dims(self): embedding2d = torch.randn(4 * 10 * 20 * 17 * 5, 12) embedding = util.uncombine_initial_dims(embedding2d, torch.Size((4, 10, 20, 17, 5))) assert list(embedding.size()) == [4, 10, 20, 17, 5, 12] def test_inspect_model_parameters(self): model_archive = str( self.FIXTURES_ROOT / "decomposable_attention" / "serialization" / "model.tar.gz" ) parameters_inspection = str( self.FIXTURES_ROOT / "decomposable_attention" / "parameters_inspection.json" ) model = load_archive(model_archive).model with open(parameters_inspection) as file: parameters_inspection_dict = json.load(file) assert parameters_inspection_dict == util.inspect_parameters(model) def test_move_to_device(self): # We're faking the tensor here so that we can test the calls to .cuda() without actually # needing a GPU. 
class FakeTensor(torch.Tensor): def __init__(self): self._device = None def cuda(self, device): self._device = device return self class A(NamedTuple): a: int b: torch.Tensor structured_obj = { "a": [A(1, FakeTensor()), A(2, FakeTensor())], "b": FakeTensor(), "c": (1, FakeTensor()), } new_device = 4 moved_obj = util.move_to_device(structured_obj, new_device) assert moved_obj["a"][0].a == 1 assert moved_obj["a"][0].b._device == new_device assert moved_obj["a"][1].b._device == new_device assert moved_obj["b"]._device == new_device assert moved_obj["c"][0] == 1 assert moved_obj["c"][1]._device == new_device def test_extend_layer(self): lin_layer = torch.nn.Linear(10, 5) new_dim = 8 old_weights = lin_layer.weight.data.clone() old_bias = lin_layer.bias.data.clone() util.extend_layer(lin_layer, new_dim) assert lin_layer.weight.data.shape == (8, 10) assert lin_layer.bias.data.shape == (8,) assert (lin_layer.weight.data[:5] == old_weights).all() assert (lin_layer.bias.data[:5] == old_bias).all() assert lin_layer.out_features == new_dim def test_masked_topk_selects_top_scored_items_and_respects_masking(self): items = torch.randn([3, 4, 5]).clamp(min=0.0, max=1.0) items[0, :2, :] = 1 items[1, 2:, :] = 1 items[2, 2:, :] = 1 scores = items.sum(-1) mask = torch.ones([3, 4]).bool() mask[1, 0] = 0 mask[1, 3] = 0 pruned_scores, pruned_mask, pruned_indices = util.masked_topk(scores, mask, 2) # Second element in the batch would have indices 2, 3, but # 3 and 0 are masked, so instead it has 1, 2. numpy.testing.assert_array_equal( pruned_indices.data.numpy(), numpy.array([[0, 1], [1, 2], [2, 3]]) ) numpy.testing.assert_array_equal(pruned_mask.data.numpy(), numpy.ones([3, 2])) # scores should be the result of index_selecting the pruned_indices. correct_scores = util.batched_index_select(scores.unsqueeze(-1), pruned_indices).squeeze(-1) self.assert_array_equal_with_mask(correct_scores, pruned_scores, pruned_mask) def test_masked_topk_works_for_completely_masked_rows(self): items = torch.randn([3, 4, 5]).clamp(min=0.0, max=1.0) items[0, :2, :] = 1 items[1, 2:, :] = 1 items[2, 2:, :] = 1 scores = items.sum(-1) mask = torch.ones([3, 4]).bool() mask[1, 0] = 0 mask[1, 3] = 0 mask[2, :] = 0 # fully masked last batch element. pruned_scores, pruned_mask, pruned_indices = util.masked_topk(scores, mask, 2) # We can't check the last row here, because it's completely masked. # Instead we'll check that the scores for these elements are very small. numpy.testing.assert_array_equal( pruned_indices[:2].data.numpy(), numpy.array([[0, 1], [1, 2]]) ) numpy.testing.assert_array_equal( pruned_mask.data.numpy(), numpy.array([[1, 1], [1, 1], [0, 0]]) ) # scores should be the result of index_selecting the pruned_indices. correct_scores = util.batched_index_select(scores.unsqueeze(-1), pruned_indices).squeeze(-1) self.assert_array_equal_with_mask(correct_scores, pruned_scores, pruned_mask) def test_masked_topk_selects_top_scored_items_and_respects_masking_different_num_items(self): items = torch.randn([3, 4, 5]).clamp(min=0.0, max=1.0) items[0, 0, :] = 1.5 items[0, 1, :] = 2 items[0, 3, :] = 1 items[1, 1:3, :] = 1 items[2, 0, :] = 1 items[2, 1, :] = 2 items[2, 2, :] = 1.5 scores = items.sum(-1) mask = torch.ones([3, 4]).bool() mask[1, 3] = 0 k = torch.tensor([3, 2, 1], dtype=torch.long) pruned_scores, pruned_mask, pruned_indices = util.masked_topk(scores, mask, k) # Second element in the batch would have indices 2, 3, but # 3 and 0 are masked, so instead it has 1, 2. 
numpy.testing.assert_array_equal( pruned_indices.data.numpy(), numpy.array([[0, 1, 3], [1, 2, 2], [1, 2, 2]]) ) numpy.testing.assert_array_equal( pruned_mask.data.numpy(), numpy.array([[1, 1, 1], [1, 1, 0], [1, 0, 0]]) ) # scores should be the result of index_selecting the pruned_indices. correct_scores = util.batched_index_select(scores.unsqueeze(-1), pruned_indices).squeeze(-1) self.assert_array_equal_with_mask(correct_scores, pruned_scores, pruned_mask) def test_masked_topk_works_for_row_with_no_items_requested(self): # Case where `num_items_to_keep` is a tensor rather than an int. Make sure it does the right # thing when no items are requested for one of the rows. items = torch.randn([3, 4, 5]).clamp(min=0.0, max=1.0) items[0, :3, :] = 1 items[1, 2:, :] = 1 items[2, 2:, :] = 1 scores = items.sum(-1) mask = torch.ones([3, 4]).bool() mask[1, 0] = 0 mask[1, 3] = 0 k = torch.tensor([3, 2, 0], dtype=torch.long) pruned_scores, pruned_mask, pruned_indices = util.masked_topk(scores, mask, k) # First element just picks top three entries. Second would pick entries 2 and 3, but 0 and 3 # are masked, so it takes 1 and 2 (repeating the second index). The third element is # entirely masked and just repeats
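For reference, a minimal sketch of the masked top-k behaviour these tests exercise (assumed semantics only; simple_masked_topk is a hypothetical helper, not the util.masked_topk implementation, which additionally sorts the returned indices and supports a per-row k tensor):

import torch

def simple_masked_topk(scores: torch.Tensor, mask: torch.Tensor, k: int):
    # Push masked positions to -inf so they are only selected when a row has
    # fewer than k unmasked entries; those slots are then marked invalid.
    masked_scores = scores.masked_fill(~mask, float("-inf"))
    top_scores, top_indices = masked_scores.topk(k, dim=-1)
    top_mask = top_scores > float("-inf")
    return top_scores, top_mask, top_indices

scores = torch.tensor([[4.0, 3.0, 2.0, 1.0]])
mask = torch.tensor([[True, False, True, True]])
_, top_mask, top_indices = simple_masked_topk(scores, mask, 2)
# Index 1 is masked, so indices 0 and 2 are chosen and both returned slots are valid.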
<gh_stars>1-10 import copy from math import ceil from typing import List from matplotlib import pyplot as plt import numpy as np import scipy.stats from baselines.ga.multi_pop_ga.multi_population_ga_pcg import MultiPopGAPCG, SingleElementFitnessFunction, SingleElementGAIndividual from games.game import Game from games.level import Level from games.mario.mario_game import MarioGame from games.mario.mario_level import MarioLevel from novelty_neat.novelty.distance_functions.distance import visual_diversity_normalised from novelty_neat.novelty.novelty_metric import DistanceMetric, NoveltyArchive class SparsenessFitnessFunction(SingleElementFitnessFunction): """ This calculates the sparseness fitness function, i.e. the average distance between all pairs of items. The fitness is actually calculated as 1 / (desired_sparseness - sparse), normalised to between 0 and 1. Details in: <NAME> and <NAME>. Multi-faceted evolution of simple arcade games. In Computational Intelligence and Games (CIG), 2011 IEEE Conference on, pages 289–296, 2011 """ def __init__(self, desired_sparseness: float = 0, block_size: int = 10) -> None: super().__init__() self.desired_sparseness = desired_sparseness self.block_size = block_size def sparseness(self, one_indiv: SingleElementGAIndividual) -> float: """What this does is the following: We split the level up into chunks (Ai) of size `self.block_size`, and mean(compute sparse(Ai) for all i) sparse(Ai) is simply 2 * total / (n * n - 1), where n is the number of nonzero elements, and total is the total pairwise distance (absolute difference between index) for each pair of non zero items Args: one_indiv (SingleElementGAIndividual): Single individual Returns: float: Sparseness """ def sparse(array): pos = np.argwhere(array > 0)[:, 0] # with only one element, the sparseness is still 0. if len(pos) <= 1: return 0 total = 0 positions = pos n = len(positions) for i, p in enumerate(positions): for j in range(0, len(positions)): # normalise distance to between 0 and 1. dist = abs(p - positions[j]) / len(array) total += dist return 2 * total / (n * n - 1) L = len(one_indiv.genome) nums = ceil(L / self.block_size) total_sparse = 0 for i in range(nums): temp_arr = one_indiv.genome[i * self.block_size: (i+1)*self.block_size] total_sparse += sparse(temp_arr) return total_sparse / nums def calc_fitness(self, individuals: List[SingleElementGAIndividual]) -> List[float]: X = [abs(self.sparseness(i) - self.desired_sparseness) for i in individuals] X = [1 / max(x, 0.1) / 10 for x in X] return X class EntropyFitnessFunction(SingleElementFitnessFunction): """ Calculates the Entropy fitness, similarly to the sparseness above We split the level up into chunks and calculate the average distance to desired entropy where entropy is the entropy of [x, y, z, ...] where x, y, z, etc are the proportion of that type of block. 
""" def __init__(self, desired_entropy: float = 1, block_size: int = 114) -> None: super().__init__() self.desired_entropy = desired_entropy self.block_size = block_size def entropy(self, one_indiv: SingleElementGAIndividual) -> float: ans = np.array(one_indiv.genome) L = len(ans) nums = ceil(L / self.block_size) total = 0 for i in range(nums): temp_arr = ans[i * self.block_size: (i+1)*self.block_size] counts = [] for i in np.unique(temp_arr): counts.append((temp_arr == i).sum()) ps = np.array(counts) / temp_arr.size e = scipy.stats.entropy(ps, base=2) if len(ps) >= 2: e /= abs(np.log2(len(ps))) assert -0.01 <= e <= 1.01, f"Entropy is invalid, {e}" total += e return e / nums def calc_fitness(self, individuals: List[SingleElementGAIndividual]) -> List[float]: X = [abs(self.entropy(i) - self.desired_entropy) for i in individuals] X = [1 / max(x, 0.1) / 10 for x in X] return X class NoveltyFitnessFunctionSingleElement(SingleElementFitnessFunction): """ Basically novelty metric for the single population fitness function. """ def __init__(self, distance_function: DistanceMetric, max_dist: float, number_of_neighbours: int, lambd: int, archive_mode: NoveltyArchive): """See NoveltyMetric for more details Args: distance_function (DistanceMetric): This should give the distance between two arrays. max_dist (float): The maximum distance that can be achieved between two levels. This is used to normalise the distances between 0 and 1. number_of_neighbours (int, optional): The amount of closest neighbours to consider when calculating the novelty metric. Defaults to 10. lambd (int, optional): The number of individuals to add to the archive at each step. archive_mode (NoveltyArchive, optional): How we choose which individuals need to get added. RANDOM chooses lambd random individuals, and NOVEL chooses the lambd most novel individuals. """ super().__init__() self.archive: List[SingleElementGAIndividual] = [] self.previously_novel_individuals = None self.number_of_neighbours = number_of_neighbours self.lambd = lambd self.archive_mode = archive_mode self.distance_function = distance_function self.max_dist = max_dist def calc_fitness(self, individuals: List[SingleElementGAIndividual]) -> List[float]: assert self.number_of_neighbours < len(individuals), "Number of neighbours must be less than the number of levels" dist_matrix = np.zeros((len(individuals), len(individuals) + len(self.archive))) def dist(level1: SingleElementGAIndividual, level2: SingleElementGAIndividual) -> float: d = self.distance_function(level1.genome, level2.genome) / self.max_dist assert 0 <= d <= 1 return d # Now calculate pairwise distance: for index1, level1 in enumerate(individuals): dist_matrix[index1, index1] = float('inf') for index2, level2 in list(enumerate(individuals))[index1+1:]: d = dist(level1, level2) dist_matrix[index1, index2] = d dist_matrix[index2, index2] = d # And from archive for index_archive, archived_level in enumerate(self.archive): d = dist(level1, archived_level) dist_matrix[index1, len(individuals) + index_archive] = d final_novelty_metrics = [] # Now we need to calculate the closest K neighbours. for index, row in enumerate(dist_matrix): # Choose K closest neighbours row = sorted(row)[:self.number_of_neighbours] final_novelty_metrics.append(np.mean(row)) # Now add to archive if good enough, or randomly depending on the mode. 
indices = np.arange(len(individuals)) if self.archive_mode == NoveltyArchive.RANDOM: # Shuffle np.random.shuffle(indices) elif self.archive_mode == NoveltyArchive.NOVEL: # Most novel individuals sorted_list = sorted(zip(final_novelty_metrics, indices), reverse=True) indices = [index for score, index in sorted_list] else: raise Exception( f"{self.archive_mode} is not a valid NovelArchive mode") self.archive.extend([ copy.deepcopy(individuals[index]) for index in indices[:self.lambd] ]) return final_novelty_metrics def __repr__(self) -> str: return f"NoveltyFitnessFunctionSingleElement(nneighbours={self.number_of_neighbours}, lambd={self.lambd}, mode={self.archive_mode})" def reset(self): self.archive = [] return super().reset() class CombinationFitnessFunctionSingleElement(SingleElementFitnessFunction): def __init__(self, fitnesses: List[SingleElementFitnessFunction], weights: List[int]) -> None: super().__init__() assert len(fitnesses) == len(weights) self.fitnesses = fitnesses self.weights = np.array(weights) / sum(weights) def calc_fitness(self, individuals: List[SingleElementGAIndividual]) -> List[float]: ans = 0 for f, w in zip(self.fitnesses, self.weights): ans += np.array(f.calc_fitness(individuals)) * w return list(ans) def reset(self): for f in self.fitnesses: f.reset() class MarioGAPCG(MultiPopGAPCG): """Mario GA PCG from: <NAME>., <NAME>., & <NAME>. (2014, July). A multi-population genetic algorithm for procedural generation of levels for platform games. In Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation (pp. 45-46). We have separate populations to evolve the ground height, type of block, enemies and coin height. The first one is scored based on entropy and the others on sparseness. """ def __init__(self, game: Game, init_level: Level, pop_size: int = 100, number_of_generations: int = 100, desired_entropy: float = 0, desired_sparseness_enemies: float = 0, desired_sparseness_coins: float = 0.5, desired_sparseness_blocks: float = 1, entropy_block_size: int = 114, enemies_block_size: int = 20, coins_block_size: int = 10, blocks_block_size: int = 10, ground_maximum_height: int = 2, coin_maximum_height: int = 2, use_novelty: bool=False ) -> None: """ Args: game (Game): Game init_level (Level): The initial level pop_size (int, optional): Size of population. Defaults to 100. number_of_generations (int, optional): Number of gens to run. Defaults to 100. These desired values desired_entropy (for the ground) (float, optional): Defaults to 0. desired_sparseness_enemies (float, optional): Defaults to 0. desired_sparseness_coins (float, optional): Defaults to 0.5. desired_sparseness_blocks (float, optional): Defaults to 1. These block sizes control how large the blocks are for which entropy and sparseness is calculated. entropy_block_size (int, optional): Defaults to 114. enemies_block_size (int, optional): Defaults to 20. coins_block_size (int, optional): Defaults to 10. blocks_block_size (int, optional): Defaults to 10. The maximum values for the heights ground_maximum_height (int, optional) . Defaults to 2 coin_maximum_height (int, optional) . Defaults to 2 use_novelty (bool, optional). Uses novelty if this is true. Uses visual diversity. Defaults to False. 
""" indiv_funcs = [ # ground height, 0 means there is a gap lambda l: SingleElementGAIndividual(l, 0, ground_maximum_height, init=1), # enemies - types, either an enemy or not lambda l: SingleElementGAIndividual(l, 0, 1, init=0), # coins - heights: 0 means no coin there lambda l: SingleElementGAIndividual(l, 0, coin_maximum_height, init=0), # blocks - different types 0 is nothing, 1 is brick, 2 is question, 3 is tube lambda l: SingleElementGAIndividual(l, 0, 3, init=0), ] fitness_funcs = [ EntropyFitnessFunction(desired_entropy, entropy_block_size), SparsenessFitnessFunction(desired_sparseness_enemies, enemies_block_size), SparsenessFitnessFunction(desired_sparseness_coins, coins_block_size), SparsenessFitnessFunction(desired_sparseness_blocks, blocks_block_size) ] self.use_novelty = use_novelty if self.use_novelty: new_funcs = [] for f in fitness_funcs: new_funcs.append( CombinationFitnessFunctionSingleElement([ f,
+ ", ".join(tracking_cols) sql_code = f"SELECT \n{select_block_s} \n" \ f"FROM {name}" add_to_col_trans(selection_map=selection_map, code_ref=code_ref, sql_source=name) # cte_name, sql_code = singleton.sql_logic.finish_sql_call(sql_code, op_id, res_for_map, # tracking_cols=tracking_cols, # non_tracking_cols=all_cols, # operation_type=OperatorType.TRANSFORMER, # cte_name=f"block_impute_mlinid{op_id}") cte_name, sql_code = store_current_sklearn_op(sql_code, op_id, f"block_impute_mlinid{op_id}", result, tracking_cols, all_cols, target_cols) singleton.pipeline_container.add_statement_to_pipe(cte_name, sql_code) if not just_transform: backend_result = singleton.update_hist.sql_update_backend_result(res_for_map, backend_result, curr_sql_expr_name=cte_name, curr_sql_expr_columns=all_cols, keep_previous_res=False, not_materialize=True) fit_data.fully_set = True # TO_SQL DONE! ########################################################################################## dag_node = DagNode(op_id, BasicCodeLocation(self.mlinspect_caller_filename, self.mlinspect_lineno), operator_context, DagNodeDetails("Simple Imputer", columns), get_optional_code_info_or_none(self.mlinspect_optional_code_reference, self.mlinspect_optional_source_code)) add_dag_node(dag_node, [input_info.dag_node], backend_result) return new_return_value return result @gorilla.name('transform') @gorilla.settings(allow_hit=True) def patched_transform(self, *args, **kwargs): original = gorilla.get_original_attribute(impute.SimpleImputer, 'transform') return transform_logic(original, self, *args, **kwargs) # @gorilla.name('fit') # @gorilla.settings(allow_hit=True) # def patched_fit(self, *args, **kwargs): # # pylint: disable=no-method-argument # original = gorilla.get_original_attribute(impute.SimpleImputer, 'fit') # return original(self, *args, **kwargs) @gorilla.patches(preprocessing.OneHotEncoder) class SklearnOneHotEncoderPatching: """ Patches for sklearn OneHotEncoder""" # pylint: disable=too-few-public-methods @gorilla.name('__init__') @gorilla.settings(allow_hit=True) def patched__init__(self, *, categories='auto', drop=None, sparse=True, dtype=numpy.float64, handle_unknown='error', mlinspect_caller_filename=None, mlinspect_lineno=None, mlinspect_optional_code_reference=None, mlinspect_optional_source_code=None): """ Patch for ('sklearn.preprocessing._encoders', 'OneHotEncoder') """ # pylint: disable=no-method-argument, attribute-defined-outside-init original = gorilla.get_original_attribute(preprocessing.OneHotEncoder, '__init__') self.mlinspect_caller_filename = mlinspect_caller_filename self.mlinspect_lineno = mlinspect_lineno self.mlinspect_optional_code_reference = mlinspect_optional_code_reference self.mlinspect_optional_source_code = mlinspect_optional_source_code def execute_inspections(_, caller_filename, lineno, optional_code_reference, optional_source_code): """ Execute inspections, add DAG node """ original(self, categories=categories, drop=drop, sparse=sparse, dtype=dtype, handle_unknown=handle_unknown) self.mlinspect_caller_filename = caller_filename self.mlinspect_lineno = lineno self.mlinspect_optional_code_reference = optional_code_reference self.mlinspect_optional_source_code = optional_source_code return execute_patched_func_no_op_id(original, execute_inspections, self, categories=categories, drop=drop, sparse=sparse, dtype=dtype, handle_unknown=handle_unknown) @gorilla.name('fit_transform') @gorilla.settings(allow_hit=True) def patched_fit_transform(self, *args, **kwargs): """ Patch for 
('sklearn.preprocessing._encoders.OneHotEncoder', 'fit_transform') """ # TO_SQL: ############################################################################################### fit_data, just_transform = differentiate_fit_transform(self, args[0]) # TO_SQL DONE! ########################################################################################## if not just_transform: original = gorilla.get_original_attribute(preprocessing.OneHotEncoder, 'fit_transform') function_info = FunctionInfo('sklearn.preprocessing._encoders', 'OneHotEncoder') input_info = get_input_info(args[0], self.mlinspect_caller_filename, self.mlinspect_lineno, function_info, self.mlinspect_optional_code_reference, self.mlinspect_optional_source_code) operator_context = OperatorContext(OperatorType.TRANSFORMER, function_info) input_infos = SklearnBackend.before_call(operator_context, [input_info.annotated_dfobject]) result = original(self, input_infos[0].result_data, *args[1:], **kwargs) backend_result = SklearnBackend.after_call(operator_context, input_infos, result) new_return_value = backend_result.annotated_dfobject.result_data op_id = singleton.get_next_op_id() else: original = gorilla.get_original_attribute(preprocessing.OneHotEncoder, 'transform') result = original(self, *args, **kwargs) op_id = singleton.sql_logic.get_unique_id() # TO_SQL: ############################################################################################### code_ref = self.mlinspect_optional_code_reference name, ti, target_cols, res_for_map, cols_to_drop = find_target(code_ref, args[0], result) tracking_cols = ti.tracking_cols all_cols = [x for x in ti.non_tracking_cols if not x in set(cols_to_drop)] selection_map = {} select_block = [] from_block = [] where_block = [] for col in all_cols: if col in target_cols: # Access fitted data if possible: if not fit_data.fully_set: fit_lookup_table, fit_lookup_code = singleton.sql_logic.column_one_hot_encoding(name, col) fit_data.col_to_fit_block_name[col] = fit_lookup_table singleton.pipeline_container.add_statement_to_pipe(fit_lookup_table, fit_lookup_code) if singleton.sql_obj.mode == SQLObjRep.VIEW: singleton.dbms_connector.run(fit_lookup_code) else: fit_lookup_table = fit_data.col_to_fit_block_name[col] select_block.append(f"\t{col[:-1]}_one_hot\" AS {col}") from_block.append(f"{fit_lookup_table}") where_block.append(f"\t{name}.{col} = {fit_lookup_table}.{col}") selection_map[col] = select_block[-1] else: select_block.append(f"\t{col}") select_block_s = ",\n".join(select_block) + ",\n\t" + ", ".join(tracking_cols) from_block_s = ", ".join(from_block + [name]) where_block_s = " AND \n".join(where_block) sql_code = f"SELECT \n{select_block_s} \n" \ f"FROM {from_block_s}\n" \ f"WHERE\n {where_block_s}" add_to_col_trans(selection_map=selection_map, from_block=from_block, where_block=where_block, code_ref=code_ref, sql_source=name) # cte_name, sql_code = singleton.sql_logic.finish_sql_call(sql_code, op_id, res_for_map, # tracking_cols=tracking_cols, # non_tracking_cols=all_cols, # operation_type=OperatorType.TRANSFORMER, # cte_name=f"block_onehot_mlinid{op_id}") cte_name, sql_code = store_current_sklearn_op(sql_code, op_id, f"block_onehot_mlinid{op_id}", result, tracking_cols, all_cols, target_cols) singleton.pipeline_container.add_statement_to_pipe(cte_name, sql_code) if not just_transform: backend_result = singleton.update_hist.sql_update_backend_result(res_for_map, backend_result, curr_sql_expr_name=cte_name, curr_sql_expr_columns=all_cols, keep_previous_res=True, not_materialize=True) 
fit_data.fully_set = True # TO_SQL DONE! ########################################################################################## dag_node = DagNode(op_id, BasicCodeLocation(self.mlinspect_caller_filename, self.mlinspect_lineno), operator_context, DagNodeDetails("One-Hot Encoder", ['array']), get_optional_code_info_or_none(self.mlinspect_optional_code_reference, self.mlinspect_optional_source_code)) add_dag_node(dag_node, [input_info.dag_node], backend_result) return new_return_value return result @gorilla.name('transform') @gorilla.settings(allow_hit=True) def patched_transform(self, *args, **kwargs): # pylint: disable=no-method-argument original = gorilla.get_original_attribute(preprocessing.OneHotEncoder, 'transform') return transform_logic(original, self, *args, **kwargs) # @gorilla.name('fit') # @gorilla.settings(allow_hit=True) # def patched_fit(self, *args, **kwargs): # # pylint: disable=no-method-argument # original = gorilla.get_original_attribute(preprocessing.OneHotEncoder, 'fit') # return original(self, *args, **kwargs) @gorilla.patches(preprocessing.StandardScaler) class SklearnStandardScalerPatching: """ Patches for sklearn StandardScaler""" # pylint: disable=too-few-public-methods @gorilla.name('__init__') @gorilla.settings(allow_hit=True) def patched__init__(self, *, copy=True, with_mean=True, with_std=True, mlinspect_caller_filename=None, mlinspect_lineno=None, mlinspect_optional_code_reference=None, mlinspect_optional_source_code=None): """ Patch for ('sklearn.preprocessing._data', 'StandardScaler') """ # pylint: disable=no-method-argument, attribute-defined-outside-init original = gorilla.get_original_attribute(preprocessing.StandardScaler, '__init__') self.mlinspect_caller_filename = mlinspect_caller_filename self.mlinspect_lineno = mlinspect_lineno self.mlinspect_optional_code_reference = mlinspect_optional_code_reference self.mlinspect_optional_source_code = mlinspect_optional_source_code def execute_inspections(_, caller_filename, lineno, optional_code_reference, optional_source_code): """ Execute inspections, add DAG node """ original(self, copy=copy, with_mean=with_mean, with_std=with_std) self.mlinspect_caller_filename = caller_filename self.mlinspect_lineno = lineno self.mlinspect_optional_code_reference = optional_code_reference self.mlinspect_optional_source_code = optional_source_code return execute_patched_func_no_op_id(original, execute_inspections, self, copy=copy, with_mean=with_mean, with_std=with_std) @gorilla.name('fit_transform') @gorilla.settings(allow_hit=True) def patched_fit_transform(self, *args, **kwargs): """ Patch for ('sklearn.preprocessing._data.StandardScaler', 'fit_transform') """ # pylint: disable=no-method-argument # TO_SQL: ############################################################################################### fit_data, just_transform = differentiate_fit_transform(self, args[0]) # TO_SQL DONE! 
########################################################################################## if not just_transform: original = gorilla.get_original_attribute(preprocessing.StandardScaler, 'fit_transform') function_info = FunctionInfo('sklearn.preprocessing._data', 'StandardScaler') input_info = get_input_info(args[0], self.mlinspect_caller_filename, self.mlinspect_lineno, function_info, self.mlinspect_optional_code_reference, self.mlinspect_optional_source_code) operator_context = OperatorContext(OperatorType.TRANSFORMER, function_info) input_infos = SklearnBackend.before_call(operator_context, [input_info.annotated_dfobject]) result = original(self, input_infos[0].result_data, *args[1:], **kwargs) backend_result = SklearnBackend.after_call(operator_context, input_infos, result) new_return_value = backend_result.annotated_dfobject.result_data assert isinstance(new_return_value, MlinspectNdarray) op_id = singleton.get_next_op_id() else: original = gorilla.get_original_attribute(preprocessing.StandardScaler, 'transform') result = original(self, *args, **kwargs) op_id = singleton.sql_logic.get_unique_id() # TO_SQL: ############################################################################################### code_ref = self.mlinspect_optional_code_reference name, ti, target_cols, res_for_map, cols_to_drop = find_target(code_ref, args[0], result) tracking_cols = ti.tracking_cols all_cols = [x for x in ti.non_tracking_cols if not x in set(cols_to_drop)] selection_map = {} if not (self.with_mean and self.with_std): raise NotImplementedError select_block = [] for col in all_cols: if col in target_cols: # Access fitted data if possible: if not fit_data.fully_set: fit_lookup_table, fit_lookup_code = singleton.sql_logic.std_scalar_values(name, col) fit_data.col_to_fit_block_name[col] = fit_lookup_table singleton.pipeline_container.add_statement_to_pipe(fit_lookup_table, fit_lookup_code) if singleton.sql_obj.mode == SQLObjRep.VIEW: singleton.dbms_connector.run(fit_lookup_code) else: fit_lookup_table = fit_data.col_to_fit_block_name[col] select_block.append( f"\t(({col} - (SELECT avg_col_std_scal FROM {fit_lookup_table})) / " f"(SELECT std_dev_col_std_scal FROM {fit_lookup_table})) " f"AS {col}") selection_map[col] = select_block[-1] else: select_block.append(f"\t{col}") select_block_s = ',\n'.join(select_block) + ",\n\t" + ", ".join(tracking_cols) sql_code = f"SELECT \n{select_block_s} \n" \ f"FROM {name}" add_to_col_trans(selection_map=selection_map, code_ref=code_ref, sql_source=name) # cte_name, sql_code = singleton.sql_logic.finish_sql_call(sql_code, op_id, res_for_map, # tracking_cols=tracking_cols, # non_tracking_cols=all_cols, # operation_type=OperatorType.TRANSFORMER, # cte_name=f"block_stdscaler_mlinid{op_id}") cte_name, sql_code = store_current_sklearn_op(sql_code, op_id, f"block_stdscaler_mlinid{op_id}", result, tracking_cols, all_cols, target_cols) singleton.pipeline_container.add_statement_to_pipe(cte_name, sql_code) if not just_transform: fit_data.fully_set = True # TO_SQL DONE! 
########################################################################################## dag_node = DagNode(singleton.get_next_op_id(), BasicCodeLocation(self.mlinspect_caller_filename, self.mlinspect_lineno), operator_context, DagNodeDetails("Standard Scaler", ['array']), get_optional_code_info_or_none(self.mlinspect_optional_code_reference, self.mlinspect_optional_source_code)) backend_result = singleton.update_hist.sql_update_backend_result(res_for_map, backend_result, curr_sql_expr_name=cte_name, curr_sql_expr_columns=all_cols, keep_previous_res=True, not_materialize=True) add_dag_node(dag_node, [input_info.dag_node], backend_result) return new_return_value return result @gorilla.name('transform') @gorilla.settings(allow_hit=True) def patched_transform(self, *args, **kwargs): # pylint: disable=no-method-argument original = gorilla.get_original_attribute(preprocessing.StandardScaler, 'transform') return transform_logic(original, self, *args, **kwargs) # @gorilla.name('fit') # @gorilla.settings(allow_hit=True) # def patched_fit(self, *args, **kwargs): # # pylint: disable=no-method-argument # original = gorilla.get_original_attribute(preprocessing.StandardScaler, 'fit') # return original(self, *args, **kwargs) @gorilla.patches(preprocessing.KBinsDiscretizer) class SklearnKBinsDiscretizerPatching: """ Patches for sklearn KBinsDiscretizer""" # pylint: disable=too-few-public-methods @gorilla.name('__init__') @gorilla.settings(allow_hit=True) def patched__init__(self, n_bins=5, *, encode='onehot', strategy='quantile', mlinspect_caller_filename=None, mlinspect_lineno=None, mlinspect_optional_code_reference=None, mlinspect_optional_source_code=None): """ Patch for ('sklearn.preprocessing._discretization', 'KBinsDiscretizer') """ # pylint: disable=no-method-argument, attribute-defined-outside-init original = gorilla.get_original_attribute(preprocessing.KBinsDiscretizer, '__init__') self.mlinspect_caller_filename = mlinspect_caller_filename self.mlinspect_lineno = mlinspect_lineno self.mlinspect_optional_code_reference = mlinspect_optional_code_reference self.mlinspect_optional_source_code = mlinspect_optional_source_code def execute_inspections(_, caller_filename, lineno, optional_code_reference, optional_source_code): """ Execute inspections, add DAG node """ original(self, n_bins=n_bins, encode=encode, strategy=strategy) self.mlinspect_caller_filename = caller_filename self.mlinspect_lineno = lineno self.mlinspect_optional_code_reference = optional_code_reference self.mlinspect_optional_source_code = optional_source_code return execute_patched_func_no_op_id(original, execute_inspections, self, n_bins=n_bins, encode=encode, strategy=strategy) @gorilla.name('fit_transform') @gorilla.settings(allow_hit=True) def patched_fit_transform(self, *args, **kwargs): """ Patch for ('sklearn.preprocessing._discretization.KBinsDiscretizer', 'fit_transform') """ # pylint: disable=no-method-argument # TO_SQL: ############################################################################################### fit_data, just_transform = differentiate_fit_transform(self, args[0]) # TO_SQL DONE! 
########################################################################################## if not just_transform: original = gorilla.get_original_attribute(preprocessing.KBinsDiscretizer, 'fit_transform') function_info = FunctionInfo('sklearn.preprocessing._discretization', 'KBinsDiscretizer') input_info = get_input_info(args[0], self.mlinspect_caller_filename, self.mlinspect_lineno, function_info, self.mlinspect_optional_code_reference, self.mlinspect_optional_source_code) operator_context = OperatorContext(OperatorType.TRANSFORMER, function_info) input_infos = SklearnBackend.before_call(operator_context, [input_info.annotated_dfobject]) result = original(self, input_infos[0].result_data, *args[1:], **kwargs) backend_result = SklearnBackend.after_call(operator_context, input_infos, result) new_return_value = backend_result.annotated_dfobject.result_data assert isinstance(new_return_value, MlinspectNdarray) op_id = singleton.get_next_op_id() else: original = gorilla.get_original_attribute(preprocessing.KBinsDiscretizer, 'transform') result = original(self, *args, **kwargs) op_id = singleton.sql_logic.get_unique_id() # TO_SQL: ############################################################################################### code_ref = self.mlinspect_optional_code_reference name, ti, target_cols, res_for_map, cols_to_drop = find_target(code_ref, args[0], result) tracking_cols = ti.tracking_cols all_cols = [x for x in ti.non_tracking_cols if not x in set(cols_to_drop)] num_bins = self.n_bins if not (self.encode == "ordinal", self.strategy == "uniform"): raise NotImplementedError selection_map = {} select_block = [] for col in all_cols: if col in target_cols: # Access fitted data if possible: if not fit_data.fully_set: # create table for min: min_lookup_table = f"block_kbin_fit_{singleton.sql_logic.get_unique_id()}_min" min_lookup_code = f"SELECT MIN({col}) AS min_val from {name} " min_lookup_table, min_lookup_code = singleton.sql_logic.wrap_in_sql_obj(min_lookup_code, block_name=min_lookup_table) min_lookup_table, min_lookup_code = singleton.sql_logic.materialize_if_possible(min_lookup_table, min_lookup_code) singleton.pipeline_container.add_statement_to_pipe(min_lookup_table, min_lookup_code) if singleton.sql_obj.mode == SQLObjRep.VIEW: singleton.dbms_connector.run(min_lookup_code) fit_data.extra_info[col] = min_lookup_table # create table for step_sizes: fit_lookup_table, fit_lookup_code = singleton.sql_logic.step_size_kbin(name, col, num_bins) fit_data.col_to_fit_block_name[col] = fit_lookup_table singleton.pipeline_container.add_statement_to_pipe(fit_lookup_table, fit_lookup_code) if singleton.sql_obj.mode == SQLObjRep.VIEW: singleton.dbms_connector.run(fit_lookup_code) else: fit_lookup_table = fit_data.col_to_fit_block_name[col] min_lookup_table = fit_data.extra_info[col] # select_block_sub = "\t(CASE\n" \ # f"\t\tWHEN {col} < (SELECT min_val FROM {min_lookup_table}) +" \ # f" (SELECT step FROM {fit_lookup_table}) THEN 0\n" \ # f"\t\tWHEN {col} < (SELECT min_val FROM {min_lookup_table}) +" \ # f" {num_bins - 1} * (SELECT step FROM {fit_lookup_table}) " \ # f"THEN FLOOR(({col} - (SELECT min_val FROM {min_lookup_table}))/(SELECT step FROM {fit_lookup_table}))\n" \ # f"\t\tELSE {num_bins - 1}\n\tEND) AS {col}" select_block_sub = f"(\n" \ f"\tLEAST({num_bins - 1},
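The LEAST/FLOOR expression being assembled above expresses KBinsDiscretizer's ordinal, uniform-width binning in SQL. A hypothetical pure-Python sketch of the same rule, assuming the fitted step is the column range divided by the number of bins (the actual step comes from step_size_kbin):

from math import floor

def uniform_bin(value: float, col_min: float, step: float, num_bins: int) -> int:
    # Clamp into num_bins equal-width bins starting at the fitted column minimum.
    return min(num_bins - 1, max(0, floor((value - col_min) / step)))

# With col_min=0.0 and step=2.5 (i.e. a fitted range of 10 split into 4 bins):
assert uniform_bin(0.0, 0.0, 2.5, 4) == 0
assert uniform_bin(9.9, 0.0, 2.5, 4) == 3
assert uniform_bin(11.0, 0.0, 2.5, 4) == 3   # out-of-range values stay in the top bin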
method.""" cmd = "show statistics" lines = self.device.send_command(cmd) lines = lines.split('\n') counters = {} for line in lines: port_block = re.match('\s*PORT (\S+) Counters:.*', line) if port_block: interface = port_block.group(1) counters.setdefault(interface, {}) elif len(line) == 0: continue else: octets = re.match(r"\s+InOctets\s+(\d+)\s+OutOctets\s+(\d+)\.*", line) if octets: counters[interface]['rx_octets'] = octets.group(1) counters[interface]['tx_octets'] = octets.group(2) continue packets = re.match(r"\s+InUnicastPkts\s+(\d+)\s+OutUnicastPkts\s+(\d+)\.*", line) if packets: counters[interface]['rx_unicast_packets'] = packets.group(1) counters[interface]['tx_unicast_packets'] = packets.group(2) continue broadcast = re.match(r"\s+InBroadcastPkts\s+(\d+)\s+OutBroadcastPkts\s+(\d+)\.*", line) if broadcast: counters[interface]['rx_broadcast_packets'] = broadcast.group(1) counters[interface]['tx_broadcast_packets'] = broadcast.group(2) continue multicast = re.match(r"\s+InMulticastPkts\s+(\d+)\s+OutMulticastPkts\s+(\d+)\.*", line) if multicast: counters[interface]['rx_multicast_packets'] = multicast.group(1) counters[interface]['tx_multicast_packets'] = multicast.group(2) continue error = re.match(r"\s+InErrors\s+(\d+)\s+OutErrors\s+(\d+)\.*", line) if error: counters[interface]['rx_errors'] = error.group(1) counters[interface]['tx_errors'] = error.group(2) continue discard = re.match(r"\s+InDiscards\s+(\d+)\s+OutDiscards\s+(\d+)\.*", line) if discard: counters[interface]['rx_discards'] = discard.group(1) counters[interface]['tx_discards'] = discard.group(2) return counters def get_environment(self): """ Note this only partially implemented. Currently only Returns a dictionary where: * fans is a dictionary of dictionaries where the key is the location and the values: * status (True/False) - True if it's ok, false if it's broken * temperature is a dict of dictionaries where the key is the location and the values: * temperature (float) - Temperature in celsius the sensor is reporting. 
* is_alert (True/False) - True if the temperature is above the alert threshold * is_critical (True/False) - True if the temp is above the critical threshold * power is a dictionary of dictionaries where the key is the PSU id and the values: * status (True/False) - True if it's ok, false if it's broken * capacity (float) - Capacity in W that the power supply can support * output (float) - Watts drawn by the system * cpu is a dictionary of dictionaries where the key is the ID and the values * %usage * memory is a dictionary with: * available_ram (int) - Total amount of RAM installed in the device * used_ram (int) - RAM in use in the device """ # todo: add cpu, memory environment = { 'memory': {'used_ram': 0, 'available_ram': 0}, 'temperature': {}, 'cpu': [{'%usage': 0.0}], 'power': {}, 'fans': {}, 'memory_detail': {}, 'cpu_detail': {} } lines = self.device.send_command('show cpu-utilization average all 300 | include idle') for line in lines.split('\n'): r1 = re.match(r'^idle\s+.*(\d+)$', line) if r1: environment['cpu'][0]['%usage'] = 100 - int(r1.group(1)) _data = napalm_base.helpers.textfsm_extractor( self, "show_cpu_lp", self.device.send_command('show cpu-utilization lp')) if _data: for d in _data: _slot = d.get('slot') _pct = napalm_base.helpers.convert(int, d.get('util'), 0) if _slot: environment['cpu_detail']['LP{}'.format(_slot)] = {'%usage': _pct} # process memory _data = napalm_base.helpers.textfsm_extractor(self, 'show_memory', self.device.send_command('show memory')) # print(json.dumps(_data, indent=2)) if _data: for d in _data: _name = d.get('name') _module = d.get('module') _state = d.get('state') _avail = napalm_base.helpers.convert(int, d.get('avail_ram'), 0) _total = napalm_base.helpers.convert(int, d.get('total_ram'), 0) _used = _avail/_total if _avail > 0 else 0 _pct = d.get('avail_ram_pct') if _name and _module: environment['memory_detail'][_module] = { 'used_ram': _used, 'available_ram': _avail } if 'MP' in _module and _state and _state == 'active': environment['memory'] = { 'available_ram': _avail, 'used_ram': _avail } # todo replace with 'show chassis' tpl command = 'show chassis' lines = self.device.send_command(command) _data = napalm_base.helpers.textfsm_extractor(self, 'show_chassis', lines) _chassis_modules = {'TEMP': 'temperature', 'FAN': 'fans', 'POWER': 'power'} if _data: for d in _data: _module = d.get('module') _mod_name = _chassis_modules.get(_module) if not _mod_name: continue _name = d.get('name') _status = d.get('status') if _module and _name: if _module == 'TEMP': environment[_mod_name][_name] = {'temperature': d.get('temp', '0')} elif _module == 'FAN': environment[_mod_name][_name] = { 'status': _status, 'speed': d.get('speed', '') } elif _module == 'POWER': environment[_mod_name][_name] = { 'status': _status, 'capacity': d.get('value', 'N/A'), 'output': 'N/A'} ''' print(json.dumps(_data, indent=2)) lines = lines.split("\n") lines = lines[3:] for line in lines: # Power 2: Installed (Failed or Disconnected) r1 = re.match(r'^Power\s+(\d+):\s+Installed \(Failed or Disconnected\)', line) # Power 7: (23-yyyyyyyy xxxxxxxxx - AC 1800W): Installed (OK) r2 = re.match(r'^Power\s+(\d+):\s+.*AC\s+(\S+)\): Installed \(OK\)', line) # CER: Power 1 ( 3I50 - AC 504W): Installed (OK) r3 = re.match(r'^Power\s+(\d+)\s+.*AC\s+(\S+)\): Installed \(OK\)', line) if r1: psu = r1.group(1) environment['power'][psu] = dict() environment[psu] = {'status': False, 'capacity': 'N/A', 'output': 'N/A'} elif r2: psu = r2.group(1) environment['power'][psu] = dict() environment['power'][psu] 
= {'status': True, 'capacity': r2.group(2), 'output': 'N/A'} elif r3: psu = r3.group(1) environment['power'][psu] = dict() environment['power'][psu] = {'status': True, 'capacity': r3.group(2), 'output': 'N/A'} # Back Fan A-1: Status = OK, Speed = MED (60%) r3 = re.match(r'^(.*):\s+Status = (\S+),\s+Speed\s+=\s+(\S+)\s+\((\d+)%\)', line) if r3: fan = r3.group(1) status = False if r3.group(2) == "OK": status = True environment['fans'][fan] = {'status': status} ''' return environment def get_arp_table(self, vrf=""): """ Returns a list of dictionaries having the following set of keys: * interface (string) * mac (string) * ip (string) * age (float) 'vrf' of null-string will default to all VRFs. Specific 'vrf' will return the ARP table entries for that VRFs (including potentially 'default' or 'global'). In all cases the same data structure is returned and no reference to the VRF that was used is included in the output. Example:: [ { 'interface' : 'MgmtEth0/RSP0/CPU0/0', 'mac' : '5C:5E:AB:DA:3C:F0', 'ip' : '172.17.17.1', 'age' : 1454496274.84 }, { 'interface' : 'MgmtEth0/RSP0/CPU0/0', 'mac' : '5C:5E:AB:DA:3C:FF', 'ip' : '172.17.17.2', 'age' : 1435641582.49 } ] """ arp_table = list() arp_cmd = 'show arp {}'.format(vrf) output = self.device.send_command(arp_cmd) output = output.split('\n') output = output[7:] for line in output: fields = line.split() if len(fields) == 6: num, address, mac, typ, age, interface = fields try: if age == 'None': age = 0 age = float(age) except ValueError: logger.warn("Unable to convert age value to float: {}".format(age)) # Do not include 'Pending' entries if typ == 'Dynamic' or typ == 'Static': entry = { 'interface': interface, 'mac': napalm_base.helpers.mac(mac), 'ip': address, 'age': age } arp_table.append(entry) return arp_table def cli(self, commands): """ Execute a list of commands and return the output in a dictionary format using the command as the key. Example input: ['show clock', 'show calendar'] Output example: { 'show calendar': u'22:02:01 UTC Thu Feb 18 2016', 'show clock': u'*22:01:51.165 UTC Thu Feb 18 2016'} """ cli_output = dict() if type(commands) is not list: raise TypeError('Please enter a valid list of commands!') for command in commands: output = self._send_command(command) if 'Invalid input detected' in output: raise ValueError('Unable to execute command "{}"'.format(command)) cli_output.setdefault(command, {}) cli_output[command] = output return cli_output def get_ntp_servers(self): """ Returns the NTP servers configuration as dictionary. The keys of the dictionary represent the IP Addresses of the servers. Inner dictionaries do not have yet any available keys. Example:: { '192.168.0.1': {}, '192.168.127.12': {}, '172.16.31.10': {}, '172.16.31.10': {} } """ _ntp_servers = {} # as a quick implementation; call get_ntp_stats to get a list of ntp servers _ntp_info = self.get_ntp_stats() if _ntp_info: for n in _ntp_info: _ntp_servers[n.get('remote')] = {} return _ntp_servers def get_ntp_stats(self): """ Note this was copied from the ios driver. Need to revisit type. Returns a list of NTP synchronization statistics. 
* remote (string) * referenceid (string) * synchronized (True/False) * stratum (int) * type (string) * when (string) * hostpoll (int) * reachability (int) * delay (float) * offset (float) * jitter (float) Example:: [ { 'remote' : u'172.16.58.3', 'referenceid' : u'172.16.17.32', 'synchronized' : True, 'stratum' : 4, 'type' : u'-', 'when' : u'107', 'hostpoll' : 256, 'reachability' : 377, 'delay' : 164.228, 'offset' : -13.866, 'jitter' : 2.695 } ] """ ntp_stats = [] command = 'show ntp associations' output = self._send_command(command) for line in output.splitlines(): # Skip first two lines and last line of command output if line == "" or 'address' in line or 'sys.peer' in line: continue if '%NTP is not enabled' in line: return [] elif len(line.split()) == 9: address, ref_clock, st, when, poll, reach, delay, offset, disp = line.split() address_regex = re.match(r'(\W*)([0-9.*]*)', address) try: ntp_stats.append({ 'remote': py23_compat.text_type(address_regex.group(2)), 'synchronized': ('*' in address_regex.group(1)), 'referenceid': py23_compat.text_type(ref_clock), 'stratum': int(st), 'type': u'-', 'when': py23_compat.text_type(when), 'hostpoll': int(poll), 'reachability': int(reach), 'delay': float(delay), 'offset': float(offset), 'jitter': float(disp) }) except Exception: continue return ntp_stats def get_mac_address_table(self): """get_mac_address_table method.""" cmd = "show mac-address" lines = self.device.send_command(cmd) lines = lines.split('\n') mac_address_table = [] # Headers may change whether there are static entries,
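The counter, memory, and chassis parsers above all follow the same per-line regular-expression pattern over raw CLI output. A small self-contained illustration of that pattern (the sample line is invented, not real device output):

import re

sample = "  InOctets      12345   OutOctets     67890"
octets = re.match(r"\s+InOctets\s+(\d+)\s+OutOctets\s+(\d+)", sample)
if octets:
    rx_octets, tx_octets = octets.group(1), octets.group(2)
    assert (rx_octets, tx_octets) == ("12345", "67890")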
# SVMs for Food Experiment Data import matplotlib import numpy as np import matplotlib.pyplot as pp import optparse import unittest import random import itertools from sklearn import decomposition from sklearn import svm from sklearn.neighbors import KNeighborsClassifier from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import cross_val_score from sklearn.model_selection import LeaveOneOut from sklearn.model_selection import KFold from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from scipy.interpolate import InterpolatedUnivariateSpline from scipy.ndimage.filters import gaussian_filter1d import sys import lib_get_subset as lib_get_subset import lib_overlapping as lib_overlapping import matplotlib.pyplot as plt try: import cPickle as pkl def load_pickle(filename): with open(filename, 'rb') as f: return pkl.load(f) except: import pickle as pkl def load_pickle(filename): with open(filename, 'rb') as f: return pkl.load(f, encoding = 'latin1') TRIAL_ORDER = [[1, 4, 6, 2, 3, 7, 0, 5, 8], [0, 3, 6, 2, 5, 7, 1, 4, 8], [2, 3, 7, 1, 4, 6, 0, 5, 8], [2, 4, 6, 0, 5, 8, 1, 3, 7], [0, 5, 6, 1, 4, 7, 2, 3, 8], [1, 3, 8, 0, 4, 6, 2, 5, 7], [0, 4, 7, 1, 3, 8, 2, 5, 6], [2, 3, 8, 0, 4, 6, 1, 5, 7], [0, 3, 8, 2, 4, 6, 1, 5, 7], [1, 3, 6, 0, 5, 7, 2, 4, 8], [0, 5, 7, 1, 3, 8, 2, 4, 6], [1, 3, 8, 0, 4, 7, 2, 5, 6], [1, 5, 6, 0, 4, 7, 2, 3, 8], [1, 5, 8, 2, 4, 6, 0, 3, 7], [1, 5, 8, 2, 4, 6, 0, 3, 7], [1, 4, 6, 2, 5, 7, 0, 3, 8], [0, 4, 6, 2, 5, 8, 1, 3, 7], [2, 4, 7, 0, 5, 6, 1, 3, 8], [1, 5, 7, 0, 3, 6, 2, 4, 8], [2, 5, 7, 0, 4, 8, 1, 3, 6], [2, 5, 7, 1, 4, 6, 0, 3, 8], [0, 5, 7, 2, 3, 6, 1, 4, 8], [2, 4, 8, 0, 5, 6, 1, 3, 7], [2, 5, 7, 0, 4, 8, 1, 3, 6], [1, 3, 7, 2, 5, 8, 0, 4, 6], [2, 3, 8, 1, 5, 7, 0, 4, 6], [2, 3, 7, 0, 5, 8, 1, 4, 6], [1, 4, 7, 0, 5, 6, 2, 3, 8], [2, 5, 8, 1, 3, 6, 0, 4, 7], [1, 4, 8, 0, 5, 7, 2, 3, 6]] NUM_TRIAL_SETS = len(TRIAL_ORDER) def create_dataset(train_type, alignment_type): R_W1, R_W2, R_W3 = [], [], [] R_CW1, R_CW2, R_CW3 = [], [], [] R_M1, R_M2, R_M3 = [], [], [] for trial_set in range(len(TRIAL_ORDER)): if trial_set == NUM_TRIAL_SETS: break directory = "./test/R_"+str(trial_set + 1) print(directory+"/M_count_"+str(TRIAL_ORDER[trial_set][6])+"_sensor_29.5.pkl") R_M1.append(load_pickle(directory+"/M_count_"+str(TRIAL_ORDER[trial_set][6])+"_sensor_29.5.pkl")) R_M2.append(load_pickle(directory+"/M_count_"+str(TRIAL_ORDER[trial_set][7])+"_sensor_29.5.pkl")) R_M3.append(load_pickle(directory+"/M_count_"+str(TRIAL_ORDER[trial_set][8])+"_sensor_29.5.pkl")) R_W1.append(load_pickle(directory+"/W_count_"+str(TRIAL_ORDER[trial_set][0])+"_sensor_29.5.pkl")) R_W2.append(load_pickle(directory+"/W_count_"+str(TRIAL_ORDER[trial_set][1])+"_sensor_29.5.pkl")) R_W3.append(load_pickle(directory+"/W_count_"+str(TRIAL_ORDER[trial_set][2])+"_sensor_29.5.pkl")) R_CW1.append(load_pickle(directory+"/CW_count_"+str(TRIAL_ORDER[trial_set][3])+"_sensor_29.5.pkl")) R_CW2.append(load_pickle(directory+"/CW_count_"+str(TRIAL_ORDER[trial_set][4])+"_sensor_29.5.pkl")) R_CW3.append(load_pickle(directory+"/CW_count_"+str(TRIAL_ORDER[trial_set][5])+"_sensor_29.5.pkl")) x_full = R_CW1[0][:, 0] #match_set_LOW = R_CW3[0][250:1250,2] #match_set_HIGH = R_W1[0][280:1280,2] match_set_W = load_pickle("./test/R_1/W_count_"+str(TRIAL_ORDER[0][0])+"_sensor_29.5.pkl")[184:1184,2] match_set_CW = load_pickle("./test/R_1/CW_count_"+str(TRIAL_ORDER[0][3])+"_sensor_29.5.pkl")[32:1032,2] match_set_M = 
load_pickle("./test/R_1/M_count_"+str(TRIAL_ORDER[0][6])+"_sensor_29.5.pkl")[130:1130,2] print(R_M1[0][:,4]) data_dict = {} for trial_set in range(NUM_TRIAL_SETS): if train_type == 'all_active' or train_type == 'noCW_active': if alignment_type == 'firstsec': data_dict['trial_set_'+str(trial_set+1)+'_X'] = np.vstack(( lib_get_subset.get_x_matched_set(x_full, match_set_M, R_M1[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_set(x_full, match_set_M, R_M2[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_set(x_full, match_set_M, R_M3[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_set(x_full, match_set_W, R_W1[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_set(x_full, match_set_W, R_W2[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_set(x_full, match_set_W, R_W3[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_set(x_full, match_set_CW, R_CW1[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_set(x_full, match_set_CW, R_CW2[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_set(x_full, match_set_CW, R_CW3[trial_set][:, 1:3], is_heated=True))) elif alignment_type == 'curvemax': data_dict['trial_set_'+str(trial_set+1)+'_X'] = np.vstack(( lib_get_subset.get_x_matched_deriv_set(x_full, R_M1[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_deriv_set(x_full, R_M2[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_deriv_set(x_full, R_M3[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_deriv_set(x_full, R_W1[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_deriv_set(x_full, R_W2[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_deriv_set(x_full, R_W3[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_deriv_set(x_full, R_CW1[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_deriv_set(x_full, R_CW2[trial_set][:, 1:3], is_heated=True), lib_get_subset.get_x_matched_deriv_set(x_full, R_CW3[trial_set][:, 1:3], is_heated=True))) elif alignment_type == 'force': data_dict['trial_set_'+str(trial_set+1)+'_X'] = np.vstack(( lib_get_subset.get_x_matched_force_set(x_full, R_M1[trial_set][:, 1:5], is_heated=True), lib_get_subset.get_x_matched_force_set(x_full, R_M2[trial_set][:, 1:5], is_heated=True), lib_get_subset.get_x_matched_force_set(x_full, R_M3[trial_set][:, 1:5], is_heated=True), lib_get_subset.get_x_matched_force_set(x_full, R_W1[trial_set][:, 1:5], is_heated=True), lib_get_subset.get_x_matched_force_set(x_full, R_W2[trial_set][:, 1:5], is_heated=True), lib_get_subset.get_x_matched_force_set(x_full, R_W3[trial_set][:, 1:5], is_heated=True), lib_get_subset.get_x_matched_force_set(x_full, R_CW1[trial_set][:, 1:5], is_heated=True), lib_get_subset.get_x_matched_force_set(x_full, R_CW2[trial_set][:, 1:5], is_heated=True), lib_get_subset.get_x_matched_force_set(x_full, R_CW3[trial_set][:, 1:5], is_heated=True))) elif train_type == 'all_active_passive' or train_type == 'noCW_active_passive': if alignment_type == 'firstsec': data_dict['trial_set_'+str(trial_set+1)+'_X'] = np.vstack(( lib_get_subset.get_x_matched_set(x_full, match_set_M, R_M1[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_set(x_full, match_set_M, R_M1[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_set(x_full, match_set_M, R_M2[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_set(x_full, match_set_M, R_M2[trial_set][:, 1:3], is_heated=False), 
lib_get_subset.get_x_matched_set(x_full, match_set_M, R_M3[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_set(x_full, match_set_M, R_M3[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_set(x_full, match_set_W, R_W1[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_set(x_full, match_set_W, R_W1[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_set(x_full, match_set_W, R_W2[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_set(x_full, match_set_W, R_W2[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_set(x_full, match_set_W, R_W3[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_set(x_full, match_set_W, R_W3[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_set(x_full, match_set_CW, R_CW1[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_set(x_full, match_set_CW, R_CW1[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_set(x_full, match_set_CW, R_CW2[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_set(x_full, match_set_CW, R_CW2[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_set(x_full, match_set_CW, R_CW3[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_set(x_full, match_set_CW, R_CW3[trial_set][:, 1:3], is_heated=False))) elif alignment_type == 'curvemax': data_dict['trial_set_'+str(trial_set+1)+'_X'] = np.vstack(( lib_get_subset.get_x_matched_deriv_set(x_full, R_M1[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_deriv_set(x_full, R_M1[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_deriv_set(x_full, R_M2[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_deriv_set(x_full, R_M2[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_deriv_set(x_full, R_M3[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_deriv_set(x_full, R_M3[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_deriv_set(x_full, R_W1[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_deriv_set(x_full, R_W1[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_deriv_set(x_full, R_W2[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_deriv_set(x_full, R_W2[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_deriv_set(x_full, R_W3[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_deriv_set(x_full, R_W3[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_deriv_set(x_full, R_CW1[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_deriv_set(x_full, R_CW1[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_deriv_set(x_full, R_CW2[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_deriv_set(x_full, R_CW2[trial_set][:, 1:3], is_heated=False), lib_get_subset.get_x_matched_deriv_set(x_full, R_CW3[trial_set][:, 1:3], is_heated=True)+lib_get_subset.get_x_matched_deriv_set(x_full, R_CW3[trial_set][:, 1:3], is_heated=False))) #print(data_dict['trial_set_1_X'].shape) #print(data_dict['trial_set_'+str(trial_set+1)+'_X'].shape) #data_dict['trial_set_'+str(trial_set+1)+'_y'] = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0]) data_dict['trial_set_'+str(trial_set+1)+'_y'] = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2]) #print(data_dict['trial_set_'+str(trial_set+1)+'_y'].shape) if train_type == 'all_active':# or train_type == 'all_active_passive': #data_dict = lib_overlapping.get_most_overlapping(x_full, data_dict, num_pairs = 10) 
#data_dict = lib_overlapping.get_overlapping_reference(x_full, data_dict, num_referenced = 20) data_dict = lib_overlapping.get_meanstd_overlapping2(data_dict) #x_full, cross_val_dict = {} for trial_set in range(NUM_TRIAL_SETS): X_train = [] y_train = [] for other_trial_set in range(NUM_TRIAL_SETS): if other_trial_set != trial_set: if train_type == 'all_active' or train_type == 'all_active_passive': try: X_train = np.concatenate((X_train, data_dict['trial_set_'+str(other_trial_set+1)+'_X']), axis = 0) except: X_train = data_dict['trial_set_'+str(other_trial_set+1)+'_X'] try: y_train = np.concatenate((y_train, data_dict['trial_set_'+str(other_trial_set+1)+'_y']), axis = 0) except: y_train = data_dict['trial_set_'+str(other_trial_set+1)+'_y'] elif train_type == 'noCW_active' or train_type == 'noCW_active_passive': try: X_train = np.concatenate((X_train, data_dict['trial_set_' + str(other_trial_set + 1) + '_X'][0:6, :]), axis=0) except: X_train = data_dict['trial_set_' + str(other_trial_set + 1) + '_X'][0:6, :] try: y_train = np.concatenate((y_train, data_dict['trial_set_' + str(other_trial_set + 1) + '_y'][0:6]), axis=0) except: y_train = data_dict['trial_set_' + str(other_trial_set + 1) + '_y'][0:6] cross_val_dict['fold'+str(trial_set+1)+'_X_train'] = X_train #shape is (8x9) x (1000) cross_val_dict['fold'+str(trial_set+1)+'_y_train'] = y_train #shape is (8x9) cross_val_dict['fold'+str(trial_set+1)+'_X_test'] = data_dict['trial_set_'+str(trial_set+1)+'_X'] #shape is (1x9) x (1000) cross_val_dict['fold'+str(trial_set+1)+'_y_test'] = data_dict['trial_set_'+str(trial_set+1)+'_y'] #shape is (1x9) print("created dataset") return cross_val_dict, data_dict def run_crossvalidation_all(cross_val_dict, trial_set, svm_type): #print("cross val on trial set", trial_set) scores_list = [] X_train = cross_val_dict['fold'+str(trial_set+1)+'_X_train'] y_train = cross_val_dict['fold'+str(trial_set+1)+'_y_train'] X_test = cross_val_dict['fold'+str(trial_set+1)+'_X_test'] y_test = cross_val_dict['fold'+str(trial_set+1)+'_y_test'] #print(np.shape(X_train), np.shape(X_test), np.shape(y_train), np.shape(y_test)) #print(X_test, y_test) #print(X_train, y_train) svc = svm.SVC(kernel=svm_type) #svc = svm.SVC(kernel='rbf') #svc = svm.SVC(kernel='poly', degree=3) clf = svc.fit(X_train, y_train) preds = svc.predict(X_test) #print("ground tr", y_test) #print("predicted", preds) #knn = KNeighborsClassifier(n_neighbors=10) #clf = knn.fit(X_train, y_train) #preds = knn.predict(X_test) cm = confusion_matrix(y_test, preds) scores = clf.score(X_test, y_test) #print("Confusion matrix created") #cmat = cmat + cm total = float(cm.sum()) true = float(np.trace(cm)) scores_list.append(true/total) #print(scores_list) print(y_test) if y_test[3] == 0: y_test[3] += 2 if y_test[4] == 0: y_test[4] += 2 if y_test[5] == 0: y_test[5] += 2 preds_cm = preds.copy() if preds_cm[3] == 0: preds_cm[3] += 2 if preds_cm[4] == 0: preds_cm[4] += 2 if preds_cm[5] == 0: preds_cm[5] += 2 cm = confusion_matrix(y_test, preds_cm) scores = clf.score(X_test, y_test) print(y_test, preds_cm) #print(cm) print(np.array(preds), cm) return np.array(preds), cm def run_leave_one_block_out_crossvalidation_all(train_type, data_dict, svm_type): # print("cross val on trial set", trial_set) scores_list = [] X_data = [] y_data = [] for trial_set in range(NUM_TRIAL_SETS): for block in range(len(data_dict['trial_set_' + str(trial_set + 1) + '_X'])): # print(trial_set, block) # X_train.append(data_dict['trial_set_'+str(trial_set+1)+'_X'][block, :]) if train_type == 'all_active' 
or train_type == 'all_active_passive': X_data.append(data_dict['trial_set_' + str(trial_set + 1) + '_X'][block, :]) y_data.append(data_dict['trial_set_' + str(trial_set + 1) + '_y'][block]) elif train_type == 'noCW_active' or train_type == 'noCW_active_passive': if block < 6: X_data.append(data_dict['trial_set_' + str(trial_set + 1) + '_X'][block, :]) y_data.append(data_dict['trial_set_' + str(trial_set + 1) + '_y'][block]) print(np.shape(X_data)) print(np.shape(y_data)) X_data = np.array(X_data) y_data = np.array(y_data) cm_total = np.zeros((3, 3)) # skf = StratifiedKFold(y_data, n_folds=3, shuffle=True) loo = LeaveOneOut(n=9) #block_sets_test = [[0, 1, 2], [0, 1, 5], [0, 1, 8], [0, 2, 4], [0, 2, 7], [0, 4, 5], [0, 4, 8], [0, 5, 7],[0, 7, 8], # [3, 1, 2], [3, 1, 5], [3, 1, 8], [3, 2, 4], [3, 2, 7], [3, 4, 5], [3, 4, 8], [3, 5, 7],[3, 7, 8], # [6, 1, 2], [6, 1, 5], [6, 1, 8], [6, 2, 4], [6, 2, 7], [6, 4, 5], [6, 4, 8], [6, 5, 7],[6,
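
The fold construction above trains on every trial set except one and tests on the held-out set, accumulating a confusion matrix per fold. A minimal sketch of the same leave-one-trial-set-out scheme using scikit-learn's LeaveOneGroupOut is shown below; the array names X, y, groups, the three-class labels, and the linear kernel default are illustrative assumptions, not part of the original pipeline.

```python
import numpy as np
from sklearn import svm
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import LeaveOneGroupOut

def cross_validate_trial_sets(X, y, groups, kernel='linear'):
    # X: one row per block, y: class label per block (0=M, 1=W, 2=CW),
    # groups: trial-set index per block; each fold holds out one trial set.
    logo = LeaveOneGroupOut()
    cm_total = np.zeros((3, 3))
    for train_idx, test_idx in logo.split(X, y, groups):
        clf = svm.SVC(kernel=kernel).fit(X[train_idx], y[train_idx])
        preds = clf.predict(X[test_idx])
        cm_total += confusion_matrix(y[test_idx], preds, labels=[0, 1, 2])
    accuracy = np.trace(cm_total) / cm_total.sum()
    return cm_total, accuracy
```
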
<filename>apps/tcp/tests/run_tests.py """ <Program> run_tests.py <Author> <NAME> <Started> July 3rd, 2008 <Edits> <NAME> - Updated so that it works with the new way of specifying ports dynamically. Also changed how it is run, it now assumes you are running it in a directory generated by running preparetest.py -t. See usage for details. <NAME> 1-5-09 - Added the ability to run node manager tests by specifying the -n flag when running the script. <Usage> To run the repy unit tests locally, first navigate to trunk, then use these commands: 1. python preparetest.py -t <directory> 2. cd <directory> 3. python run_tests.py To run the node manager unit tests locally, open two shells (or command prompts). Navigate to trunk in each. 1. On the FIRST command prompt type the following command python preparetest.py -t <directory> The -t flag must be included. 2. On the SECOND command prompt enter the following sequence of commands: cd <directory> python nminit.py python nmmain.py 3. On the FIRST command prompt, enter the follwing sequence of commands: cd <directory> python run_tests.py -n <Description> python script to run the repy test cases locally... Adapted from a bash script When the -n option is specified, the node manager tests are run instead of the repy tests. The types of repy tests are: s_*.py -- The correct result is the same as when run with python e_*.py -- The restricted program produces output on stderr (most likely because it throws an exception) z_*.py -- The restricted program produces no output n_*.py -- The restricted program produces some output on stdout and no output on stderr b_*.py -- The restricted program produces output on both stderr and stdout u_*.py -- The result of running these programs is undefined. They are not tested but may be useful as examples l_*.py -- Use circular logging while testing. These test cases indicate an error by using exitall instead of letting the program terminate normally... any of these types of tests may be preceeded by a 'r', which indicates that there is a specific restriction file for the test. For example: re_foo_bar.py -- run the test (expecting an exception) with foo as the restriction file. The node manager tests are: nmtest*.py -- nmmain.py must be running on a separate shell, or these tests will fail. These tests will throw an exception or produce output in the event they fail. All these tests use the restrictions.test restrictions file. """ import glob import subprocess import os import sys # import testportfiller # what we print at the end... endput = '' def run_test(testname): global passcount global failcount if testname.startswith('rs_') or testname.startswith('re_') or \ testname.startswith('rz_') or testname.startswith('rn_') or \ testname.startswith('rb_') or testname.startswith('rl_'): # must have three parts: r?_restrictionfn_testname.py if len(testname.split('_')) != 3: raise Exception, "Test name '"+testname+"' does not have 3 parts!" # take the 2nd character of the testname 'rs_' -> 's' testtype = testname[1] restrictfn = testname.split('_')[1] elif testname.startswith('s_') or testname.startswith('e_') or \ testname.startswith('z_') or testname.startswith('n_') or \ testname.startswith('b_') or testname.startswith('l_'): # take the 1st character of the testname 's_' -> 's' testtype = testname[0] restrictfn = "restrictions.default" elif testname.startswith('ru_') or testname.startswith('u_'): # Do not run undefined tests... 
return elif testname.startswith('nmtest'): testtype = 'z' restrictfn = "restrictions.test" else: raise Exception, "Test name '"+testname+"' of an unknown type!" logstream.write("Running test %-50s [" % testname) logstream.flush() result = do_actual_test(testtype, restrictfn, testname) if result: passcount = passcount + 1 logstream.write(" PASS ]\n") else: failcount = failcount + 1 logstream.write("FAILED]\n") logstream.flush() def exec_command(command): # Windows does not like close_fds and we shouldn't need it so... p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # get the output and close theout = p.stdout.read() p.stdout.close() # get the errput and close theerr = p.stderr.read() p.stderr.close() # FreeBSD prints on stdout, when it gets a signal... # I want to look at the last line. it ends in \n, so I use index -2 if len(theout.split('\n')) > 1 and theout.split('\n')[-2].strip() == 'Terminated': # lose the last line theout = '\n'.join(theout.split('\n')[:-2]) # however we threw away an extra '\n' if anything remains, let's replace it if theout != '': theout = theout + '\n' # everyone but FreeBSD uses stderr if theerr.strip() == 'Terminated': theerr = '' # Windows isn't fond of this either... # clean up after the child # os.waitpid(p.pid,0) return (theout, theerr) def do_actual_test(testtype, restrictionfn, testname): global endput # match python if testtype == 's': (pyout, pyerr) = exec_command('python '+testname) (testout, testerr) = exec_command('python repy.py --simple '+restrictionfn+" "+testname) same = True if pyout != testout: # stdout differs endput = endput+testname+"\n"+ "standard out mismatch '"+pyout+"' != '"+testout+"'\n\n" same = False if pyerr != testerr: # stderr differs endput = endput+testname+"\n"+ "standard err mismatch '"+pyerr+"' != '"+testerr+"'\n\n" same = False return same # any out, no err... elif testtype == 'n': (testout, testerr) = exec_command('python repy.py --status foo '+restrictionfn+" "+testname) if testout != '' and testerr == '': return True else: endput = endput+testname+"\nout:"+testout+"err:"+ testerr+"\n\n" return False # any err, no out... elif testtype == 'e': (testout, testerr) = exec_command('python repy.py --status foo '+restrictionfn+" "+testname) if testout == '' and testerr != '': return True else: endput = endput+testname+"\nout:"+testout+"err:"+ testerr+"\n\n" return False # no err, no out... elif testtype == 'z': (testout, testerr) = exec_command('python repy.py --status foo '+restrictionfn+" "+testname) if testout == '' and testerr == '': return True else: endput = endput+testname+"\nout:"+testout+"err:"+ testerr+"\n\n" return False # any err, any out... elif testtype == 'b': (testout, testerr) = exec_command('python repy.py --status foo '+restrictionfn+" "+testname) if testout != '' and testerr != '': return True else: endput = endput+testname+"\nout:"+testout+"err:"+ testerr+"\n\n" return False # no err, no out (logging)... 
elif testtype == 'l': # remove any existing log try: os.remove("experiment.log.old") os.remove("experiment.log.new") except OSError: pass # run the experiment (testout, testerr) = exec_command('python repy.py --logfile experiment.log --status foo '+restrictionfn+" "+testname) # first, check to make sure there was no output or error if testout == '' and testerr == '': try: myfo = file("experiment.log.old","r") logdata = myfo.read() myfo.close() if os.path.exists("experiment.log.new"): myfo = file("experiment.log.new","r") logdata = logdata + myfo.read() myfo.close() # use only the last 16KB logdata = logdata[-16*1024:] except: endput = endput+testname+"\nCan't read log!\n\n" return False if "Fail" in logdata: endput = endput+testname+"\nString 'Fail' in logdata\n\n" return False elif "Success" not in logdata: endput = endput+testname+"\nString 'Success' not in logdata\n\n" return False else: return True else: endput = endput+testname+"\nHad output or errput! out:"+testout+"err:"+ testerr+"\n\n" return False else: raise Exception, "Unknown test type '"+str(testout)+"'" def do_oddballtests(): global passcount global failcount global endput # oddball "stop" tests... logstream.write("Running test %-50s [" % "Stop Test 1") logstream.flush() (testout, testerr) = exec_command('python repy.py --stop nonexist --status foo restrictions.default stop_testsleep.py') if testout == '' and testerr == '': passcount = passcount + 1 logstream.write(" PASS ]\n") else: failcount = failcount + 1 endput = endput+"Stop Test 1\noutput or errput! out:"+testout+"err:"+ testerr+"\n\n" logstream.write("FAILED]\n") # oddball "stop" test2... logstream.write("Running test %-50s [" % "Stop Test 2") logstream.flush() (testout, testerr) = exec_command('python repy.py --stop repy.py --status foo restrictions.default stop_testsleep.py') if testout == '' and testerr != '': passcount = passcount + 1 logstream.write(" PASS ]\n") else: failcount = failcount + 1 logstream.write("FAILED]\n") endput = endput+"Stop Test 2\noutput or no errput! out:"+testout+"err:"+ testerr+"\n\n" # remove the junk file... try: os.remove("junk_test.out") except: pass # oddball "stop" test3... logstream.write("Running test %-50s [" % "Stop Test 3") logstream.flush() (testout, testerr) = exec_command('python repy.py --stop junk_test.out --status foo restrictions.default stop_testsleepwrite.py') if testout == '' and testerr == '': passcount = passcount + 1 logstream.write(" PASS ]\n") else: failcount = failcount + 1 logstream.write("FAILED]\n") endput = endput+"Stop Test 3\noutput or errput! out:"+testout+"err:"+ testerr+"\n\n" if len(sys.argv) > 1 and sys.argv[1] == '-q': logstream = file("test.output","w") else: logstream = sys.stdout # these are updated in run_test passcount=0 failcount=0 # Have the testportfiller fill in all of the messport/connport # tags with default values so that the tests can be successfully #
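
exec_command above is the workhorse that every test type relies on: run a command, capture stdout and stderr, and scrub the "Terminated" marker that some platforms print when a process is stopped. The script targets Python 2 (print and raise statements, file()); the snippet below is a hedged sketch of the same helper on Python 3's subprocess.run, given only as an illustration and not as a drop-in replacement for the suite.

```python
import subprocess

def exec_command(command):
    # Run the command through the shell and capture both output streams as text.
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    out, err = proc.stdout, proc.stderr
    # FreeBSD reports termination on stdout; look at the last complete line.
    lines = out.split('\n')
    if len(lines) > 1 and lines[-2].strip() == 'Terminated':
        out = '\n'.join(lines[:-2])
        if out:
            out += '\n'
    # Everyone else reports termination on stderr.
    if err.strip() == 'Terminated':
        err = ''
    return out, err
```
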
help="Place mass pts along spokes uniform in volume (if omitted placement will be random and uniform in volume") parser.add_argument("--linear-spoked", action="store_true", help="Place mass pts along spokes linear in radial distance (if omitted placement will be random and uniform in volume") parser.add_argument("--grid-cartesian", action="store_true", help="Place mass points using a cartesian grid") parser.add_argument("--grid-cartesian-npts", default=100, type=int) parser.add_argument("--skip-overlap",action='store_true', help="If true, the grid is generated without actually performing overlaps. Very helpful for uncertain configurations or low SNR") parser.add_argument("--reset-grid-via-match",action='store_true',help="Reset the parameter_range results so each parameter's range is limited by match_value. Use this ONLY for estimating the fisher matrix quickly!") parser.add_argument("--no-reset-parameter",action='append',help="Don't reset the range of this parameter via tuning. Important for spin parameters, which can be over-tuned due to strong correlations") parser.add_argument("--use-fisher-resampling",action='store_true',help="Resample the grid using the fisher matrix. Requires fisher matrix") # Cutoff options parser.add_argument("--match-value", type=float, default=0.01, help="Use this as the minimum match value. Default is 0.01 (i.e., keep almost everything)") # Overlap options parser.add_argument("--fisher-psd",type=str,default="SimNoisePSDaLIGOZeroDetHighPower",help="psd name (attribute in lalsimulation). SimNoisePSDiLIGOSRD, lalsim.SimNoisePSDaLIGOZeroDetHighPower, lalsimutils.Wrapper_AdvLIGOPsd, .SimNoisePSDiLIGOSRD... ") parser.add_argument("--psd-file", help="File name for PSD (assumed hanford). Overrides --fisher-psd if provided") parser.add_argument("--srate",type=int,default=16384,help="Sampling rate") parser.add_argument("--seglen", type=float,default=256*2., help="Default window size for processing.") parser.add_argument("--fref",type=float,default=0.); # External grid parser.add_argument("--use-eos", default=None, help="Equation of state to determine lambdas for given mass ranges. Filename, not EOS name (no internal database)") parser.add_argument("--external-grid-xml", default=None,help="Inspiral XML file (injection form) for alternate grid") parser.add_argument("--external-grid-txt", default=None, help="Cartesian grid. Must provide parameter names in header. Exactly like output of code. Last column not used.") # Base point parser.add_argument("--inj", dest='inj', default=None,help="inspiral XML file containing the base point.") parser.add_argument("--event",type=int, dest="event_id", default=None,help="event ID of injection XML to use.") parser.add_argument("--fmin", default=35,type=float,help="Mininmum frequency in Hz, default is 40Hz to make short enough waveforms. Focus will be iLIGO to keep comutations short") parser.add_argument("--fmax",default=2000,type=float,help="Maximum frequency in Hz, used for PSD integral.") parser.add_argument("--mass1", default=1.50,type=float,help="Mass in solar masses") # 150 turns out to be ok for Healy et al sims parser.add_argument("--mass2", default=1.35,type=float,help="Mass in solar masses") parser.add_argument("--s1z", default=0.,type=float,help="Spin1z") #parser.add_argument("--lambda1",default=590,type=float) #parser.add_argument("--lambda2", default=590,type=float) parser.add_argument("--eff-lambda", type=float, help="Value of effective tidal parameter. 
Optional, ignored if not given") parser.add_argument("--deff-lambda", type=float, help="Value of second effective tidal parameter. Optional, ignored if not given") parser.add_argument("--lmax", default=2, type=int) parser.add_argument("--approx",type=str,default=None) # Output options parser.add_argument("--fname", default="overlap-grid", help="Base output file for ascii text (.dat) and xml (.xml.gz)") parser.add_argument("--verbose", action="store_true",default=False, help="Required to build post-frame-generating sanity-test plots") parser.add_argument("--save-plots",default=False,action='store_true', help="Write plots to file (only useful for OSX, where interactive is default") opts= parser.parse_args() if opts.verbose: True #lalsimutils.rosDebugMessagesContainer[0]=True # enable error logging inside lalsimutils ### ### Handle NR arguments ### if hasNR and not ( opts.NR_signal_group in nrwf.internal_ParametersAvailable.keys()): if opts.NR_signal_group: print(" ===== UNKNOWN NR PARAMETER ====== ") print(opts.NR_signal_group, opts.NR_signal_param) elif hasNR: if opts.NR_signal_param: opts.NR_signal_param = eval(str(opts.NR_signal_param)) # needs to be evaluated if not ( opts.NR_signal_param in nrwf.internal_ParametersAvailable[opts.NR_signal_group]): print(" ===== UNKNOWN NR PARAMETER ====== ") print(opts.NR_signal_group, opts.NR_signal_param) if hasNR and not ( opts.NR_template_group in nrwf.internal_ParametersAvailable.keys()): if opts.NR_template_group: print(" ===== UNKNOWN NR PARAMETER ====== ") print(opts.NR_template_group, opts.NR_template_param) elif hasNR: if opts.NR_template_param: opts.NR_template_param = eval(opts.NR_template_param) # needs to be evaluated if not ( opts.NR_template_param in nrwf.internal_ParametersAvailable[opts.NR_template_group]): print(" ===== UNKNOWN NR PARAMETER ====== ") print(opts.NR_template_group, opts.NR_template_param) ### ### Define grid overlap functions ### - Python's 'multiprocessing' module seems to cause process lock ### use_external_EOB=opts.use_external_EOB Lmax = 2 def eval_overlap(grid,P_list, IP,indx): global opts # if opts.verbose: # print " Evaluating for ", indx global use_external_EOB global Lmax global opts P2 = P_list[indx] T_here = 1./IP.deltaF P2.deltaF=1./T_here # P2.print_params() if not opts.skip_overlap: if not use_external_EOB: hf2 = lalsimutils.complex_hoff(P2) else: print(" Waiting for EOB waveform ....", indx, " with duration ", T_here) wfP = eobwf.WaveformModeCatalog(P2,lmax=Lmax) # only include l=2 for us. hf2 = wfP.complex_hoff(force_T=T_here) nm2 = IP.norm(hf2); hf2.data.data *= 1./nm2 # if opts.verbose: # print " Waveform normalized for ", indx ip_val = IP.ip(hfBase,hf2) line_out = [] line_out = list(grid[indx]) if not opts.skip_overlap: line_out.append(ip_val) else: line_out.append(-1) if opts.verbose: print(" Answer ", indx, line_out) return line_out def calc_lambda_from_m(m, eos_fam): if m<10**15: m=m*lal.MSUN_SI #eos=et.read_eos(eos) #eos_fam=lalsim.CreateSimNeutronStarFamily(eos) k2=lalsim.SimNeutronStarLoveNumberK2(m, eos_fam) r=lalsim.SimNeutronStarRadius(m, eos_fam) m=m*lal.G_SI/lal.C_SI**2 lam=2./(3*lal.G_SI)*k2*r**5 dimensionless_lam=lal.G_SI*lam*(1/m)**5 return dimensionless_lam def evaluate_overlap_on_grid(hfbase,param_names, grid): global downselect_dict # Validate grid is working: Create a loop and print for each one. # WARNING: Assumes grid for mass-unit variables hass mass units (!) 
P_list = [] grid_revised = [] for line in grid: Pgrid = P.manual_copy() Pgrid.ampO=opts.amplitude_order # include 'full physics' Pgrid.phaseO = opts.phase_order # Set attributes that are being changed as necessary, leaving all others fixed for indx in np.arange(len(param_names)): Pgrid.assign_param(param_names[indx], line[indx]) # Downselect include_item =True if not(opts.enforce_duration_bound is None): if lalsimutils.estimateWaveformDuration(Pgrid)> opts.enforce_duration_bound: include_item = False for param in downselect_dict: if Pgrid.extract_param(param) < downselect_dict[param][0] or Pgrid.extract_param(param) > downselect_dict[param][1]: include_item =False if include_item: grid_revised.append(line) if Pgrid.m2 <= Pgrid.m1: # do not add grid elements with m2> m1, to avoid possible code pathologies ! P_list.append(Pgrid) else: Pgrid.swap_components() # IMPORTANT. This should NOT change the physical functionality FOR THE PURPOSES OF OVERLAP (but will for PE - beware phiref, etc!) P_list.append(Pgrid) else: # print "skipping" # Pgrid.print_params() True # print " skipping " # print "Length check", len(P_list), len(grid) ### ### Loop over grid and make overlaps : see effective fisher code for wrappers ### # FIXME: More robust multiprocessing implementation -- very heavy! # p=Pool(n_threads) # PROBLEM: Pool code doesn't work in new configuration. if len(grid_revised) ==0 : return [],[] grid_out = np.array(map(functools.partial(eval_overlap, grid_revised, P_list,IP), np.arange(len(grid_revised)))) # Remove mass units at end for p in ['mc', 'm1', 'm2', 'mtot']: if p in param_names: indx = param_names.index(p) grid_out[:,indx] /= lal.MSUN_SI # remove distance units at end for p in ['distance', 'dist']: if p in param_names: indx = param_names.index(p) grid_out[:,indx] /= lal.PC_SI*1e6 # Truncate grid so overlap with the base point is > opts.min_match. 
Make sure to CONSISTENTLY truncate all lists (e.g., the P_list) grid_out_new = [] P_list_out_new = [] for indx in np.arange(len(grid_out)): if opts.skip_overlap or grid_out[indx,-1] > opts.match_value: grid_out_new.append(grid_out[indx]) P_list_out_new.append(P_list[indx]) grid_out = np.array(grid_out_new) return grid_out, P_list_out_new ### ### Define base point ### # Handle PSD # FIXME: Change to getattr call, instead of 'eval' eff_fisher_psd = lalsim.SimNoisePSDiLIGOSRD if not opts.psd_file: #eff_fisher_psd = eval(opts.fisher_psd) eff_fisher_psd = getattr(lalsim, opts.fisher_psd) # --fisher-psd SimNoisePSDaLIGOZeroDetHighPower now analyticPSD_Q=True else: print(" Importing PSD file ", opts.psd_file) eff_fisher_psd = lalsimutils.load_resample_and_clean_psd(opts.psd_file, 'H1', 1./opts.seglen) analyticPSD_Q = False # from matplotlib import pyplot as plt # plt.plot(eff_fisher_psd.f0+np.arange(eff_fisher_psd.data.length)*eff_fisher_psd.deltaF,np.log10(eff_fisher_psd.data.data)) # plt.show() P=lalsimutils.ChooseWaveformParams() if opts.inj: from glue.ligolw import lsctables, table, utils # check all are needed filename = opts.inj event = opts.event_id xmldoc = utils.load_filename(filename, verbose = True,contenthandler =lalsimutils.cthdler) sim_inspiral_table = table.get_table(xmldoc, lsctables.SimInspiralTable.tableName) P.copy_sim_inspiral(sim_inspiral_table[int(event)]) if opts.approx: P.approx = lalsim.GetApproximantFromString(opts.approx) if not (P.approx in [lalsim.TaylorT1,lalsim.TaylorT2, lalsim.TaylorT3, lalsim.TaylorT4]): # Do not use tidal parameters in approximant which does not implement them print(" Do not use tidal parameters in approximant which does not implement them ") P.lambda1 = 0 P.lambda2 = 0 else: P.m1 = opts.mass1 *lal.MSUN_SI P.m2 = opts.mass2 *lal.MSUN_SI P.s1z = opts.s1z P.dist = 150*1e6*lal.PC_SI if opts.eff_lambda and Psig: lambda1, lambda2 = 0, 0 if opts.eff_lambda is not None: lambda1, lambda2 = lalsimutils.tidal_lambda_from_tilde(m1, m2, opts.eff_lambda, opts.deff_lambda or 0) Psig.lambda1 = lambda1 Psig.lambda2 = lambda2 P.fmin=opts.fmin # Just for comparison! Obviously only good for iLIGO P.ampO=opts.amplitude_order # include 'full physics' P.phaseO = opts.phase_order if opts.approx: P.approx = lalsim.GetApproximantFromString(opts.approx) if not (P.approx in [lalsim.TaylorT1,lalsim.TaylorT2, lalsim.TaylorT3, lalsim.TaylorT4]): # Do not use tidal parameters in approximant which does not implement them print(" Do not use tidal parameters in approximant which does not implement them ") P.lambda1 = 0 P.lambda2 = 0 else: P.approx = lalsim.GetApproximantFromString("TaylorT4") P.deltaT=1./16384 P.taper = lalsim.SIM_INSPIRAL_TAPER_START P.deltaF = 1./opts.seglen #lalsimutils.findDeltaF(P) P.fref = opts.fref P.print_params() Pbase = P.copy() # Define base COMPLEX signal. ASSUME length long enough via seglen for this to work always # Define base COMPLEX overlap hfBase = None if opts.skip_overlap: print(" ---- NO WAVEFORM GENERATION ---- ") hfBase = None IP=lalsimutils.InnerProduct() # Default, so IP.deltaF code etc does not need to be wrapped else: if hasEOB and opts.use_external_EOB_source: print(" -------INTERFACE ------") print(" Using external EOB interface (Bernuzzi) with window ", opts.seglen) # Code WILL FAIL IF LAMBDA=0 if P.lambda1<1: P.lambda1=1 if P.lambda2<1: P.lambda2=1 if P.deltaT > 1./16384: print() wfP = eobwf.WaveformModeCatalog(P,lmax=Lmax) # only include l=2 for us. 
if opts.verbose: print(" Duration of stored signal (cut if necessary) ", wfP.estimateDurationSec()) hfBase = wfP.complex_hoff(force_T=opts.seglen) print("EOB waveform length ", hfBase.data.length) print("EOB waveform duration", -hfBase.epoch) elif opts.use_external_EOB_source and not hasEOB: # do not do something else silently! print(" Failure: EOB requested but impossible ") sys.exit(0) elif opts.use_external_NR_source and hasNR: m1Msun = P.m1/lal.MSUN_SI; m2Msun = P.m2/lal.MSUN_SI if m1Msun < 50 or m2Msun < 50: print(" Invalid NR mass ") sys.exit(0) print(" Using NR ", opts.NR_signal_group, opts.NR_signal_param) T_window = 16. # default wfP = nrwf.WaveformModeCatalog(opts.NR_signal_group, opts.NR_signal_param, clean_initial_transient=True,clean_final_decay=True,
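
Two quantities in the script above read more easily when written out. eval_overlap normalizes the template waveform and compares it against the base point, and calc_lambda_from_m converts the Love number and radius returned by lalsimulation into the dimensionless tidal deformability. In effect, the match used for the --match-value cut and the quantity calc_lambda_from_m returns are

$$ \mathcal{O}(h_1, h_2) \;=\; \frac{\langle h_1, h_2\rangle}{\lVert h_1\rVert\,\lVert h_2\rVert}, \qquad \lambda \;=\; \frac{2}{3}\,\frac{k_2\,R^5}{G}, \qquad \Lambda \;=\; \frac{G\lambda}{\left(G m / c^{2}\right)^{5}} \;=\; \frac{2}{3}\,k_2\!\left(\frac{c^{2} R}{G m}\right)^{\!5}, $$

where $k_2$ and $R$ are the values returned by SimNeutronStarLoveNumberK2 and SimNeutronStarRadius for the mass $m$ and chosen equation of state, and grid points whose overlap $\mathcal{O}$ falls below --match-value are discarded (unless --skip-overlap is set).
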
#!/usr/bin/env python3 # Copyright 2018 Mitsubishi Electric Research Labs (<NAME>) # Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) import torch import numpy as np import six class CTCPrefixScoreTH(object): """Batch processing of CTCPrefixScore which is based on Algorithm 2 in WATANABE et al. "HYBRID CTC/ATTENTION ARCHITECTURE FOR END-TO-END SPEECH RECOGNITION," but extended to efficiently compute the probablities of multiple labels simultaneously """ def __init__(self, x, xlens, blank, eos, beam, scoring_ratio=1.5, margin=0): """Construct CTC prefix scorer :param torch.Tensor x: input label posterior sequences (B, T, O) :param torch.Tensor xlens: input lengths (B,) :param int blank: blank label id :param int eos: end-of-sequence id :param int beam: beam size :param float scoring_ratio: ratio of #scored hypos to beam size :param int margin: margin parameter for windowing (0 means no windowing) """ # In the comment lines, we assume T: input_length, B: batch size, W: beam width, O: output dim. self.logzero = -10000000000.0 self.blank = blank self.eos = eos self.batch = x.size(0) self.input_length = x.size(1) self.odim = x.size(2) self.beam = beam self.n_bb = self.batch * beam self.device = torch.device('cuda:%d' % x.get_device()) if x.is_cuda else torch.device('cpu') # Pad the rest of posteriors in the batch # TODO(takaaki-hori): need a better way without for-loops for i, l in enumerate(xlens): if l < self.input_length: x[i, l:, :] = self.logzero x[i, l:, blank] = 0 # Expand input posteriors for fast computation self.scoring_num = int(beam * scoring_ratio) if self.scoring_num > 0 and self.scoring_num < self.odim: xn = x.transpose(0, 1) else: xn = x.transpose(0, 1).unsqueeze(2).repeat(1, 1, beam, 1).view(-1, self.n_bb, self.odim) xb = xn[:, :, self.blank].unsqueeze(2).expand(-1, -1, self.odim) self.x = torch.stack([xn, xb]) # (2, T, B, O) or (2, T, BW, O) # Setup CTC windowing self.margin = margin if margin > 0: self.frame_ids = torch.arange(self.input_length, dtype=torch.float32, device=self.device) # Precompute end frames (BW,) self.end_frames = (torch.as_tensor(xlens) - 1).view(self.batch, 1).repeat(1, beam).view(-1) # Precompute base indices to convert label ids to corresponding element indices self.pad_b = (torch.arange(self.batch, device=self.device) * beam).view(-1, 1) self.pad_bo = (torch.arange(self.batch, device=self.device) * (beam * self.odim)).view(-1, 1) self.pad_o = (torch.arange(self.batch, device=self.device) * self.odim).unsqueeze(1).repeat(1, beam).view(-1, 1) self.bb_idx = torch.arange(self.n_bb, device=self.device).view(-1, 1) def __call__(self, y, state, pre_scores=None, att_w=None): """Compute CTC prefix scores for next labels :param list y: prefix label sequences :param tuple state: previous CTC state :param torch.Tensor pre_scores: scores for pre-selection of hypotheses (BW, O) :param torch.Tensor att_w: attention weights to decide CTC window :return new_state, ctc_local_scores (BW, O) """ output_length = len(y[0]) - 1 # ignore sos last_ids = [yi[-1] for yi in y] # last output label ids # prepare state info if state is None: if self.scoring_num > 0: r_prev = torch.full((self.input_length, 2, self.batch, self.beam), self.logzero, dtype=torch.float32, device=self.device) r_prev[:, 1] = torch.cumsum(self.x[0, :, :, self.blank], 0).unsqueeze(2) r_prev = r_prev.view(-1, 2, self.n_bb) else: r_prev = torch.full((self.input_length, 2, self.n_bb), self.logzero, dtype=torch.float32, device=self.device) r_prev[:, 1] = torch.cumsum(self.x[0, :, :, self.blank], 0) s_prev 
= 0.0 f_min_prev = 0 f_max_prev = 1 else: r_prev, s_prev, f_min_prev, f_max_prev = state # select input dimensions for scoring if self.scoring_num > 0 and pre_scores is not None: pre_scores[:, self.blank] = self.logzero # ignore blank from pre-selection scoring_ids = torch.topk(pre_scores, self.scoring_num, 1)[1] scoring_idmap = torch.full((self.n_bb, self.odim), -1, dtype=torch.long, device=self.device) snum = scoring_ids.size(1) scoring_idmap[self.bb_idx, scoring_ids] = torch.arange(snum, device=self.device) scoring_idx = (scoring_ids + self.pad_o).view(-1) x_ = torch.index_select(self.x.view(2, -1, self.batch * self.odim), 2, scoring_idx).view(2, -1, self.n_bb, snum) else: scoring_ids = None scoring_idmap = None snum = self.odim x_ = self.x # new CTC forward probs are prepared as a (T x 2 x BW x S) tensor # that corresponds to r_t^n(h) and r_t^b(h) in a batch. r = torch.full((self.input_length, 2, self.n_bb, snum), self.logzero, dtype=torch.float32, device=self.device) if output_length == 0: r[0, 0] = x_[0, 0] r_sum = torch.logsumexp(r_prev, 1) log_phi = r_sum.unsqueeze(2).repeat(1, 1, snum) if scoring_ids is not None: for idx in range(self.n_bb): pos = scoring_idmap[idx, last_ids[idx]] if pos >= 0: log_phi[:, idx, pos] = r_prev[:, 1, idx] else: for idx in range(self.n_bb): log_phi[:, idx, last_ids[idx]] = r_prev[:, 1, idx] # decide start and end frames based on attention weights if att_w is not None and self.margin > 0: f_arg = torch.matmul(att_w, self.frame_ids) f_min = max(int(f_arg.min().cpu()), f_min_prev) f_max = max(int(f_arg.max().cpu()), f_max_prev) start = min(f_max_prev, max(f_min - self.margin, output_length, 1)) end = min(f_max + self.margin, self.input_length) else: f_min = f_max = 0 start = max(output_length, 1) end = self.input_length # compute forward probabilities log(r_t^n(h)) and log(r_t^b(h)) for t in range(start, end): rp = r[t - 1] rr = torch.stack([rp[0], log_phi[t - 1], rp[0], rp[1]]).view(2, 2, self.n_bb, snum) r[t] = torch.logsumexp(rr, 1) + x_[:, t] # compute log prefix probabilites log(psi) log_phi_x = torch.cat((log_phi[0].unsqueeze(0), log_phi[:-1]), dim=0) + x_[0] if scoring_ids is not None: log_psi = torch.full((self.n_bb, self.odim), self.logzero, device=self.device) log_psi_ = torch.logsumexp(torch.cat((log_phi_x[start:end], r[start - 1, 0].unsqueeze(0)), dim=0), dim=0) for si in range(self.n_bb): log_psi[si, scoring_ids[si]] = log_psi_[si] else: log_psi = torch.logsumexp(torch.cat((log_phi_x[start:end], r[start - 1, 0].unsqueeze(0)), dim=0), dim=0) for si in range(self.n_bb): log_psi[si, self.eos] = r_sum[self.end_frames[si], si] return (r, log_psi, f_min, f_max, scoring_idmap), log_psi - s_prev def index_select_state(self, state, best_ids): """Select CTC states according to best ids :param state : CTC state :param best_ids : index numbers selected by beam pruning (B, W) :return selected_state """ r, s, f_min, f_max, scoring_idmap = state # convert ids to BWO space vidx = (best_ids + self.pad_bo).view(-1) # select hypothesis scores s_new = torch.index_select(s.view(-1), 0, vidx) s_new = s_new.view(-1, 1).repeat(1, self.odim).view(self.n_bb, self.odim) # convert ids to BWS space (S: scoring_num) if scoring_idmap is not None: snum = self.scoring_num beam_idx = (torch.div(best_ids, self.odim) + self.pad_b).view(-1) label_ids = torch.fmod(best_ids, self.odim).view(-1) score_idx = scoring_idmap[beam_idx, label_ids] score_idx[score_idx == -1] = 0 vidx = score_idx + beam_idx * snum else: snum = self.odim # select forward probabilities r_new = 
torch.index_select(r.view(-1, 2, self.n_bb * snum), 2, vidx).view(-1, 2, self.n_bb) return r_new, s_new, f_min, f_max class CTCPrefixScore(object): """Compute CTC label sequence scores which is based on Algorithm 2 in WATANABE et al. "HYBRID CTC/ATTENTION ARCHITECTURE FOR END-TO-END SPEECH RECOGNITION," but extended to efficiently compute the probablities of multiple labels simultaneously """ def __init__(self, x, blank, eos, xp): self.xp = xp self.logzero = -10000000000.0 self.blank = blank self.eos = eos self.input_length = len(x) self.x = x def initial_state(self): """Obtain an initial CTC state :return: CTC state """ # initial CTC state is made of a frame x 2 tensor that corresponds to # r_t^n(<sos>) and r_t^b(<sos>), where 0 and 1 of axis=1 represent # superscripts n and b (non-blank and blank), respectively. r = self.xp.full((self.input_length, 2), self.logzero, dtype=np.float32) r[0, 1] = self.x[0, self.blank] for i in six.moves.range(1, self.input_length): r[i, 1] = r[i - 1, 1] + self.x[i, self.blank] return r def __call__(self, y, cs, r_prev): """Compute CTC prefix scores for next labels :param y : prefix label sequence :param cs : array of next labels :param r_prev: previous CTC state :return ctc_scores, ctc_states """ # initialize CTC states output_length = len(y) - 1 # ignore sos # new CTC states are prepared as a frame x (n or b) x n_labels tensor # that corresponds to r_t^n(h) and r_t^b(h). r = self.xp.ndarray((self.input_length, 2, len(cs)), dtype=np.float32) xs = self.x[:, cs] if output_length == 0: r[0, 0] = xs[0] r[0, 1] = self.logzero else: r[output_length - 1] = self.logzero # prepare forward probabilities for the last label r_sum = self.xp.logaddexp(r_prev[:, 0], r_prev[:, 1]) # log(r_t^n(g) + r_t^b(g)) last = y[-1] if output_length > 0 and last in cs: log_phi = self.xp.ndarray((self.input_length, len(cs)), dtype=np.float32) for i in six.moves.range(len(cs)): log_phi[:, i] = r_sum if cs[i] != last else r_prev[:, 1] else: log_phi = r_sum # compute forward probabilities log(r_t^n(h)), log(r_t^b(h)), # and log prefix probabilites log(psi) start = max(output_length, 1) log_psi = r[start - 1, 0] for t in six.moves.range(start, self.input_length): r[t, 0] = self.xp.logaddexp(r[t - 1, 0], log_phi[t - 1]) + xs[t] r[t, 1] = self.xp.logaddexp(r[t - 1, 0], r[t - 1, 1]) + self.x[t, self.blank] log_psi = self.xp.logaddexp(log_psi, log_phi[t - 1] + xs[t]) # get P(...eos|X) that ends with the prefix itself eos_pos = self.xp.where(cs ==
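
Both CTCPrefixScoreTH and CTCPrefixScore implement the same recursion (Algorithm 2 of Watanabe et al.), only with different tensor layouts. Written in linear probabilities for a prefix $h = g \cdot c$ extended by label $c$, it reads

$$ r_t^{(n)}(h) = \bigl(r_{t-1}^{(n)}(h) + \varphi_{t-1}\bigr)\, p_t(c), \qquad r_t^{(b)}(h) = \bigl(r_{t-1}^{(n)}(h) + r_{t-1}^{(b)}(h)\bigr)\, p_t(\mathrm{blank}), $$

$$ \psi_t(h) = \psi_{t-1}(h) + \varphi_{t-1}\, p_t(c), \qquad \varphi_t = \begin{cases} r_t^{(b)}(g) & \text{if } c = \mathrm{last}(g), \\ r_t^{(b)}(g) + r_t^{(n)}(g) & \text{otherwise,} \end{cases} $$

where the superscripts $n$ and $b$ mark prefixes ending in a non-blank and a blank, respectively. The code carries everything in the log domain via logaddexp/logsumexp (hence the logzero sentinel), and the batched class additionally restricts $t$ to the attention-derived window [start, end) and to the pre-selected scoring labels.
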
# -*- coding: utf-8 -*- from __future__ import print_function, division import numpy as np from .params import Params import re import copy import types from PyAstronomy.pyaC import pyaErrors as PE from .nameIdentBase import ModelNameIdentBase from PyAstronomy import pyaC from time import time as timestamp from .fufDS import FufDS from .extFitter import NelderMead import six import six.moves as smo from PyAstronomy.funcFit import _pymcImport, _scoImport, ic if _pymcImport: import pymc if _scoImport: import scipy.optimize as sco if ic.check["emcee"]: import emcee if ic.check["progressbar"]: import progressbar def addEval(self, x): return (self.leftCompo.evaluate(x) + self.rightCompo.evaluate(x)) def subEval(self, x): return (self.leftCompo.evaluate(x) - self.rightCompo.evaluate(x)) def divEval(self, x): return (self.leftCompo.evaluate(x) / self.rightCompo.evaluate(x)) def mulEval(self, x): return (self.leftCompo.evaluate(x) * self.rightCompo.evaluate(x)) def powEval(self, x): return (self.leftCompo.evaluate(x) ** self.rightCompo.evaluate(x)) class MiniFunc: """ This decorator can be applied to use self-defined objective functions. Applied to an objective function, it adds the functionality needed to evaluate the model given a certain parameter vector, so that the user does only have to take care about the quantity to be minimized. Parameters ---------- odf : fitting object The fitting object that is supposed to use the self-defined objective function. """ def __init__(self, odf): """ Parameter: - `odf` - An instance of a fitting objects such as for example *GaussFit1d*. """ # Save a REFERENCE to the fitting object self.odf = odf def __call__(self, f): """ Parameter: - `f` - The user-defined objective function. """ def miniFunc(P): # Update the parameter values in the 'Params' class instance. self.odf.pars.setFreeParams(P) # Obtain penalties pfac, pdict = self.odf.pars.getPenalty( penaltyFact=self.odf.penaltyFactor) # Assign penalty val = pfac # Apply conditional restrictions val += self.odf.pars.applyConditionalRestrictions() # Assign x, y, and yerr attributes. This is a not-so-beautiful # way of not breaking the API and having the funcFit data # storage object. self.odf.x = self.odf._fufDS.x self.odf.y = self.odf._fufDS.y self.odf.yerr = self.odf._fufDS.yerr try: # Update self.model to hold the evaluated function. self.odf.updateModel() except Exception as e: if val > 0.0: # Allow model evaluation to fail if parameters wandered outside the # valid range (returning immediately maybe an option, but this largely maintains # behavior, preventing annoying errors) return val # If no penalty is present, raise exception raise(PE.PyAAlgorithmFailure("Could not evaluate model for parameters: " + str(self.odf.parameters()), \ where="ondeDFit", \ tbfe=e, \ solution=["Try to define 'restrictions' via setRestriction if parameter values are invalid.", \ "Adjust implementation of model to prevent error."])) # Add value of actual objective function val += f(self.odf, P) return val return miniFunc class _PyMCSampler: """ This class encapsulates a number of methods helping to set up the PyMC sampler. """ def _dictComplete(self, l, d, whichDict, forget=[]): """ Checks whether the list `l` contains all keys present in dictionary `d`. Parameters: `l` - A list of strings `d` - A dictionary with string keys. `whichDict` - string, Which dictionary (e.g., start values) is under consideration? `forget` - A list of string specifying keys, which can be omitted in d. 
""" if len(l) != (len(d) + len(forget)): message = "Dictionary of " + whichDict + \ " has not the correct number of entries - " if len(l) < (len(d) + len(forget)): message += " there are too many.\n" else: message += " there are too few.\n" message += " Needed entries: " + ', '.join(l) + "\n" given = list(d) given.extend(forget) message += " Given entries: " + ', '.join(given) raise(PE.PyAValError(message, where="fitMCMC", solution="Adjust input dictionary.")) for elem in l: if elem in forget: continue if elem not in d: message = "Error in " + whichDict + " dictionary.\n" message += " Key " + elem + " is missing!" raise(PE.PyAValError(message, where="fitMCMC", solution="Adjust input dictionary.")) def _basicStatMCMCOutput(self, bs): print("Basic statistics of MCMC analysis: ") print("-----------------------------------------------------") for k in six.iterkeys(bs): print("Parameter: ", k) if bs[k] is None: print(" No output available!") continue for k2, v2 in six.iteritems(bs[k]): if k2 != "quantiles": print(" " + k2 + " : " + str(v2)) else: s = " quantiles : " for qk, qv in six.iteritems(bs[k]["quantiles"]): print(s + str(qk) + " : " + str(qv)) s = " " print("-----------------------------------------------------") def _checkDbArgs(self, dbArgs): """ Check whether database arguments are given. If not, use default parameters. """ # If no db is specified, use pickle if not "db" in dbArgs: dbArgs["db"] = "pickle" if dbArgs["db"] == "pickle": if not "dbname" in dbArgs: dbArgs["dbname"] = "tmp.pickle" return dbArgs elif dbArgs["db"] == "hdf5": if not "dbname" in dbArgs: dbArgs["dbname"] = "tmp.zlib" if not "dbmode" in dbArgs: dbArgs["dbmode"] = "w" if not "dbcomplevel" in dbArgs: dbArgs["dbcomplevel"] = 9 if not "dbcomplib" in dbArgs: dbArgs["dbcomplib"] = "zlib" return dbArgs else: raise(PE.PyAValError("Database: " + str(dbArgs["db"] + " currently not supported.", solution="Use another database (e.g., pickle or hdf5)."))) return dbArgs def MCMCautoParameters(self, ranges, picky=True, stepsize=1e-2, setRestrictionsFromPriors=False): """ Convenience function to generate parameters for MCMC fit. This function prepares the `X0`, `Lims`, and `Steps` dictionaries needed to call :py:func:`fitMCMC`. For `X0`, the current parameter values are used. `Lims` is constructed using the `ranges` parameter, and `Steps` is defined on the basis of the `stepsize` and `ranges` parameters. The `picky` parameter determines how this functions handles behaves if it encounters parameters in the `range` dictionary, which were either not thawed or have been thawed but have not been specified in `ranges`. If `picky` is True (the default), this function throws an error if `ranges` does not cover all and only the free parameters. If picky is False, the function will automatically thaw all parameters specified through `ranges` and freeze the rest. .. warning:: There is NO guarantee that the sampling parameters (start values, limits for the uniform priors, and initial step sizes for the sampler) are reasonable. You need to check the results. Parameters ---------- ranges : dictionary Holds the fit ranges for the individual parameters. If single values are given, the sampling range (uniform prior) will be arranged symmetrically around the parameter's current value. It is also possible to specify a range directly using, e.g., "A1":[0,100]. stepsize : float, optional Defines the step size as a fraction of the fit range given in `ranges`. 
picky : boolean, optional If True (default), the list of free parameters has to match exactly the list of parameters specified in `ranges`. If False, the list of free parameters will be adapted to those given in `ranges`. setRestrictionsFromPriors : boolean, optional Default: False. If True, parameter restrictions are applied according to the ranges of the uniform priors. Returns ------- X0 : dictionary Maps parameter name to start value. lims : dictionary Maps parameter name to [lower, upper] limit. steps : dictionary Maps parameter name to step size. Examples -------- :: from PyAstronomy import funcFit as fuf import numpy as np import matplotlib.pylab as plt x = np.linspace(0,30,1000) gauss = fuf.GaussFit1d() gauss["A"] = 1 gauss["mu"] = 23. gauss["sig"] = 0.5 yerr = np.random.normal(0., 0.05, len(x)) y = gauss.evaluate(x) + yerr # This step is not necessary if <picky>=False in MCMCautoParameters. gauss.thaw(["A","mu","sig"]) X0, lims, steps = gauss.MCMCautoParameters({"A":[0,10],"mu":3, "sig":[0.1,1.0]}) gauss.fitMCMC(x, y, X0, lims, steps, yerr=yerr, iter=1000) plt.plot(x, y, 'k+') plt.plot(x, gauss.evaluate(x), 'r--') plt.show() """ X0 = {} steps = {} lims = {} # parameters which are free but not given missingParams = [] # parameters which are not free but given extraParams = [] # Check if ranges are given for all free parameters for p in self.freeParameters(): if p not in ranges: if picky == False: self.freeze(p) missingParams.append(p) # Check if all given ranges pertain to free parameters for p in ranges: if p not in self.freeParameters(): if picky == False: self.thaw(p) extraParams.append(p) # Raise exception if missing or extra parameters exits unless picky=False if picky: if len(missingParams) > 0: raise(PE.PyAValError("Not enough parameters: " + str(missingParams), where="_PyMCSampler::MCMCautoParameters", why="Ranges for these free fit parameters are missing.")) if len(extraParams) > 0: raise(PE.PyAValError("Too many parameters: " + str(extraParams), where="_PyMCSampler::MCMCautoParameters", why="Ranges were given for non-free parameters.")) if stepsize >= 1.0: raise(PE.PyAValError("The step size is larger than one. Must be a fraction of the range, i.e., smaller than
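
For reference, here is a minimal usage sketch of the MiniFunc decorator defined above. It assumes data arrays x, y, yerr already exist, and the miniFunc keyword in the final fit call is an assumption inferred from the decorator's purpose; the exact keyword may differ in the installed funcFit version.

```python
import numpy as np
from PyAstronomy import funcFit as fuf

gauss = fuf.GaussFit1d()
gauss.thaw(["A", "mu", "sig"])

@fuf.MiniFunc(gauss)
def chisqr(odf, P):
    # Quantity to be minimized; penalties and conditional restrictions
    # are added automatically by the decorator before this value is used.
    return np.sum((odf.y - odf.model)**2 / odf.yerr**2)

# Hypothetical call: pass the decorated objective to the fitting routine.
gauss.fit(x, y, yerr=yerr, miniFunc=chisqr)
```
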
<reponame>MiCHiLU/google_appengine_sdk #!/usr/bin/env python # # Copyright 2007 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # """Stub implementation for Log Service that uses sqlite.""" import atexit import codecs import logging import time import sqlite3 from google.appengine.api import apiproxy_stub from google.appengine.api import appinfo from google.appengine.api.logservice import log_service_pb from google.appengine.runtime import apiproxy_errors _REQUEST_LOG_CREATE = """ CREATE TABLE IF NOT EXISTS RequestLogs ( id INTEGER NOT NULL PRIMARY KEY, user_request_id TEXT NOT NULL, app_id TEXT NOT NULL, version_id TEXT NOT NULL, module TEXT NOT NULL, ip TEXT NOT NULL, nickname TEXT NOT NULL, start_time INTEGER NOT NULL, end_time INTEGER DEFAULT 0 NOT NULL, method TEXT NOT NULL, resource TEXT NOT NULL, http_version TEXT NOT NULL, status INTEGER DEFAULT 0 NOT NULL, response_size INTEGER DEFAULT 0 NOT NULL, user_agent TEXT NOT NULL, url_map_entry TEXT DEFAULT '' NOT NULL, host TEXT NOT NULL, task_queue_name TEXT DEFAULT '' NOT NULL, task_name TEXT DEFAULT '' NOT NULL, latency INTEGER DEFAULT 0 NOT NULL, mcycles INTEGER DEFAULT 0 NOT NULL, finished INTEGER DEFAULT 0 NOT NULL ); """ _REQUEST_LOG_ADD_MODULE_COLUMN = """ ALTER TABLE RequestLogs ADD COLUMN module TEXT DEFAULT '%s' NOT NULL; """ % appinfo.DEFAULT_MODULE _APP_LOG_CREATE = """ CREATE TABLE IF NOT EXISTS AppLogs ( id INTEGER NOT NULL PRIMARY KEY, request_id INTEGER NOT NULL, timestamp INTEGER NOT NULL, level INTEGER NOT NULL, message TEXT NOT NULL, FOREIGN KEY(request_id) REFERENCES RequestLogs(id) ); """ class LogServiceStub(apiproxy_stub.APIProxyStub): """Python stub for Log Service service.""" THREADSAFE = True _ACCEPTS_REQUEST_ID = True _DEFAULT_READ_COUNT = 20 _MIN_COMMIT_INTERVAL = 5 def __init__(self, persist=False, logs_path=None, request_data=None): """Initializer. Args: persist: For backwards compatability. Has no effect. logs_path: A str containing the filename to use for logs storage. Defaults to in-memory if unset. request_data: A apiproxy_stub.RequestData instance used to look up state associated with the request that generated an API call. 
""" super(LogServiceStub, self).__init__('logservice', request_data=request_data) self._request_id_to_request_row_id = {} if logs_path is None: logs_path = ':memory:' self._conn = sqlite3.connect(logs_path, check_same_thread=False) self._conn.row_factory = sqlite3.Row self._conn.execute(_REQUEST_LOG_CREATE) self._conn.execute(_APP_LOG_CREATE) column_names = set(c['name'] for c in self._conn.execute('PRAGMA table_info(RequestLogs)')) if 'module' not in column_names: self._conn.execute(_REQUEST_LOG_ADD_MODULE_COLUMN) self._last_commit = time.time() atexit.register(self._conn.commit) @staticmethod def _get_time_usec(): return int(time.time() * 1e6) def _maybe_commit(self): now = time.time() if (now - self._last_commit) > self._MIN_COMMIT_INTERVAL: self._conn.commit() self._last_commit = now @apiproxy_stub.Synchronized def start_request(self, request_id, user_request_id, ip, app_id, version_id, nickname, user_agent, host, method, resource, http_version, start_time=None, module=None): """Starts logging for a request. Each start_request call must be followed by a corresponding end_request call to cleanup resources allocated in start_request. Args: request_id: A unique string identifying the request associated with the API call. user_request_id: A user-visible unique string for retrieving the request log at a later time. ip: The user's IP address. app_id: A string representing the application ID that this request corresponds to. version_id: A string representing the version ID that this request corresponds to. nickname: A string representing the user that has made this request (that is, the user's nickname, e.g., 'foobar' for a user logged in as '<EMAIL>'). user_agent: A string representing the agent used to make this request. host: A string representing the host that received this request. method: A string containing the HTTP method of this request. resource: A string containing the path and query string of this request. http_version: A string containing the HTTP version of this request. start_time: An int containing the start time in micro-seconds. If unset, the current time is used. module: The string name of the module handling this request. """ if module is None: module = appinfo.DEFAULT_MODULE if version_id is None: version_id = 'NO-VERSION' major_version_id = version_id.split('.', 1)[0] if start_time is None: start_time = self._get_time_usec() cursor = self._conn.execute( 'INSERT INTO RequestLogs (user_request_id, ip, app_id, version_id, ' 'nickname, user_agent, host, start_time, method, resource, ' 'http_version, module)' ' VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)', ( user_request_id, ip, app_id, major_version_id, nickname, user_agent, host, start_time, method, resource, http_version, module)) self._request_id_to_request_row_id[request_id] = cursor.lastrowid self._maybe_commit() @apiproxy_stub.Synchronized def end_request(self, request_id, status, response_size, end_time=None): """Ends logging for a request. Args: request_id: A unique string identifying the request associated with the API call. status: An int containing the HTTP status code for this request. response_size: An int containing the content length of the response. end_time: An int containing the end time in micro-seconds. If unset, the current time is used. 
""" row_id = self._request_id_to_request_row_id.pop(request_id, None) if not row_id: return if end_time is None: end_time = self._get_time_usec() self._conn.execute( 'UPDATE RequestLogs SET ' 'status = ?, response_size = ?, end_time = ?, finished = 1 ' 'WHERE id = ?', ( status, response_size, end_time, row_id)) self._maybe_commit() def _Dynamic_Flush(self, request, unused_response, request_id): """Writes application-level log messages for a request.""" group = log_service_pb.UserAppLogGroup(request.logs()) self._insert_app_logs(request_id, group.log_line_list()) @apiproxy_stub.Synchronized def _insert_app_logs(self, request_id, log_lines): row_id = self._request_id_to_request_row_id.get(request_id) if row_id is None: return new_app_logs = (self._tuple_from_log_line(row_id, log_line) for log_line in log_lines) self._conn.executemany( 'INSERT INTO AppLogs (request_id, timestamp, level, message) VALUES ' '(?, ?, ?, ?)', new_app_logs) self._maybe_commit() @staticmethod def _tuple_from_log_line(row_id, log_line): message = log_line.message() if isinstance(message, str): message = codecs.decode(message, 'utf-8', 'replace') return (row_id, log_line.timestamp_usec(), log_line.level(), message) @apiproxy_stub.Synchronized def _Dynamic_Read(self, request, response, request_id): if (request.module_version_size() < 1 and request.version_id_size() < 1 and request.request_id_size() < 1): raise apiproxy_errors.ApplicationError( log_service_pb.LogServiceError.INVALID_REQUEST) if request.module_version_size() > 0 and request.version_id_size() > 0: raise apiproxy_errors.ApplicationError( log_service_pb.LogServiceError.INVALID_REQUEST) if (request.request_id_size() and (request.has_start_time() or request.has_end_time() or request.has_offset())): raise apiproxy_errors.ApplicationError( log_service_pb.LogServiceError.INVALID_REQUEST) if request.request_id_size(): for request_id in request.request_id_list(): log_row = self._conn.execute( 'SELECT * FROM RequestLogs WHERE user_request_id = ?', (request_id,)).fetchone() if log_row: log = response.add_log() self._fill_request_log(log_row, log, request.include_app_logs()) return if request.has_count(): count = request.count() else: count = self._DEFAULT_READ_COUNT filters, values = self._extract_read_filters(request) filter_string = ' WHERE %s' % ' and '.join(filters) if request.has_minimum_log_level(): query = ('SELECT * FROM RequestLogs INNER JOIN AppLogs ON ' 'RequestLogs.id = AppLogs.request_id%s GROUP BY ' 'RequestLogs.id ORDER BY id DESC') else: query = 'SELECT * FROM RequestLogs%s ORDER BY id DESC' logs = self._conn.execute(query % filter_string, values).fetchmany(count + 1) if logging.getLogger(__name__).isEnabledFor(logging.DEBUG): self._debug_query(filter_string, values, len(logs)) for log_row in logs[:count]: log = response.add_log() self._fill_request_log(log_row, log, request.include_app_logs()) if len(logs) > count: response.mutable_offset().set_request_id(str(logs[-2]['id'])) def _debug_query(self, filter_string, values, result_count): logging.debug('\n\n') logging.debug(filter_string) logging.debug(values) logging.debug('%d results.', result_count) logging.debug('DB dump:') for l in self._conn.execute('SELECT * FROM RequestLogs'): logging.debug('%r %r %d %d %s', l['module'], l['version_id'], l['start_time'], l['end_time'], l['finished'] and 'COMPLETE' or 'INCOMPLETE') def _fill_request_log(self, log_row, log, include_app_logs): log.set_request_id(str(log_row['user_request_id'])) log.set_app_id(log_row['app_id']) 
log.set_version_id(log_row['version_id']) log.set_module_id(log_row['module']) log.set_ip(log_row['ip']) log.set_nickname(log_row['nickname']) log.set_start_time(log_row['start_time']) log.set_host(log_row['host']) log.set_end_time(log_row['end_time']) log.set_method(log_row['method']) log.set_resource(log_row['resource']) log.set_status(log_row['status']) log.set_response_size(log_row['response_size']) log.set_http_version(log_row['http_version']) log.set_user_agent(log_row['user_agent']) log.set_url_map_entry(log_row['url_map_entry']) log.set_latency(log_row['latency']) log.set_mcycles(log_row['mcycles']) log.set_finished(log_row['finished']) log.mutable_offset().set_request_id(str(log_row['id'])) time_seconds = (log_row['end_time'] or log_row['start_time']) / 10**6 date_string = time.strftime('%d/%b/%Y:%H:%M:%S %z', time.localtime(time_seconds)) log.set_combined('%s - %s [%s] "%s %s %s" %d %d - "%s"' % (log_row['ip'], log_row['nickname'], date_string, log_row['method'], log_row['resource'], log_row['http_version'], log_row['status'] or 0, log_row['response_size'] or 0, log_row['user_agent'])) if include_app_logs: log_messages = self._conn.execute( 'SELECT timestamp, level, message FROM AppLogs ' 'WHERE request_id = ?', (log_row['id'],)).fetchall() for message in log_messages: line = log.add_line() line.set_time(message['timestamp']) line.set_level(message['level']) line.set_log_message(message['message']) @staticmethod def _extract_read_filters(request): """Extracts SQL filters from the LogReadRequest. Args: request: the incoming LogReadRequest. Returns: a pair of (filters, values); filters is a list of SQL filter expressions, to be joined by AND operators; values is a list of values to be interpolated into the filter expressions by the db library. """ filters = [] values = [] module_filters = [] module_values = [] for module_version in request.module_version_list(): module_filters.append('(version_id = ? AND module = ?)') module_values.append(module_version.version_id()) module = appinfo.DEFAULT_MODULE if module_version.has_module_id(): module = module_version.module_id() module_values.append(module) if module_filters: filters.append('(' + ' or '.join(module_filters) + ')') values += module_values if request.has_offset(): try: filters.append('RequestLogs.id < ?') values.append(int(request.offset().request_id())) except ValueError: logging.error('Bad offset in log request: "%s"', request.offset()) raise apiproxy_errors.ApplicationError( log_service_pb.LogServiceError.INVALID_REQUEST) if request.has_minimum_log_level(): filters.append('AppLogs.level >= ?') values.append(request.minimum_log_level()) finished_filter = 'finished = 1 ' finished_filter_values = [] unfinished_filter = 'finished = 0' unfinished_filter_values = [] if request.has_start_time(): finished_filter += ' and
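
_extract_read_filters only produces SQL filter fragments and bind values; _Dynamic_Read glues them into the final query. As an illustration (the literal bind values are made up), a read request with one module_version entry, an offset, and a minimum log level ends up issuing roughly the following:

```python
# Sketch only: fragments as _extract_read_filters builds them for such a request.
filters = ['((version_id = ? AND module = ?))',   # from module_version_list
           'RequestLogs.id < ?',                  # from offset
           'AppLogs.level >= ?']                  # from minimum_log_level
values = ['1', 'default', 12345, 2]               # hypothetical bind values
filter_string = ' WHERE %s' % ' and '.join(filters)
# A minimum log level forces the INNER JOIN variant of the query:
query = ('SELECT * FROM RequestLogs INNER JOIN AppLogs ON '
         'RequestLogs.id = AppLogs.request_id%s GROUP BY '
         'RequestLogs.id ORDER BY id DESC') % filter_string
```
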
<gh_stars>0 #!/usr/bin/env python # coding: utf-8 # In[1]: import numpy as np import matplotlib.pyplot as plt import pandas as pd import panel as pn pn.extension("katex") # # Tutorial 3 - Darcy Law and Conductivity # # # _(The contents presented in this section were re-developed principally by Dr. <NAME>. The original contents are from Prof. <NAME>l)_ # # + **tutorial problems on Darcy's law and intrinsic permeability** # # # + **homework problems on Darcy's law and intrinsic permeability** # # # # ### Tutorial Problems on ### # # + Darcy's Law and # + Intrinsic Permeability # ### Tutorial Problem 7: Flow Direction and Hydraulic Gradient # # Indicate the direction of flow shown in the figure in next slides, and determine the hydraulic gradient for a Darcy column with length L = 50 cm! (Figures not to scale.) # # #### Tutorial Problem 7 – Solution #### # # **The relevant topic is covered in Lecture 04, slide 8** # # In[2]: png_pane = pn.pane.PNG("images/T03_TP7_a1.png", width=600) png_pane # In[3]: # Image (a) L = 0.5 # m, length of the column Ea_hl = 0.2 #, m, elevation head, left Pa_hl = 0.1 #, m pressure head, left Ea_hr = 0.2 #, m, elevation head, right Pa_hr = 0.3 #, m pressure head, right Ha_hl = Ea_hl + Pa_hl # m, hydraulic head, left Ha_hr = Ea_hr + Pa_hr # m, hydraulic head, right DH_a = Ha_hr - Ha_hl H_ga = DH_a/L#, no unit, hydraulic gradient print("Hydraulic head LEFT: {0:1.1f}".format(Ha_hl),"m"); print("Hydraulic head RIGHT:: {0:1.1f}".format(Ha_hr),"m") print("Hydraulic Head Difference: {0:1.1f}".format(DH_a),"m");print("Hydraulic gradient: {0:1.1f}".format(H_ga)) png_pane.object = "images/T03_TP7_a2.png" # In[4]: png_pane2 = pn.pane.PNG("images/T03_TP7_b1.png", width=500) png_pane2 # In[5]: # Image (b) L = 0.5 # m, length of the column Eb_hl = 0.2 #, m, elevation head, left Pb_hl = 0.1 #, m pressure head, left Eb_hr = 0.05 #, m, elevation head, right Pb_hr = 1.3 #, m pressure head, right Hb_hl = Eb_hl + Pb_hl # m, hydraulic head, left Hb_hr = Eb_hr + Pb_hr # m, hydraulic head, right DH_b = Hb_hr - Hb_hl H_gb = DH_b/L#, no unit, hydraulic gradient print("Hydraulic head LEFT: {0:1.1f}".format(Hb_hl),"m");print("Hydraulic head RIGHT:: {0:1.1f}".format(Hb_hr),"m") print("Hydraulic Head Difference: {0:1.1f}".format(DH_b),"m");print("Hydraulic gradient: {0:1.1f}".format(H_gb)) png_pane2.object = "images/T03_TP7_b2.png" # In[6]: png_pane3 = pn.pane.PNG("images/T03_TP7_c1.png", width=400) png_pane3 # In[7]: # Image (c) L = 0.5 # m, length of the column Ec_hl = 0.3 #, m, elevation head, left Pc_hl = 0.1 #, m pressure head, left Ec_hr = 0.2 #, m, elevation head, right Pc_hr = 0.2 #, m pressure head, right Hc_hl = Ec_hl + Pc_hl # m, hydraulic head, left Hc_hr = Ec_hr + Pc_hr # m, hydraulic head, right DH_c = Hc_hr - Hc_hl H_gc = DH_c/L#, no unit, hydraulic gradient print("Hydraulic head LEFT: {0:1.1f}".format(Hc_hl),"m");print("Hydraulic head RIGHT:: {0:1.1f}".format(Hc_hr),"m") print("Hydraulic Head Difference: {0:1.1f}".format(DH_c),"m");print("Hydraulic gradient: {0:1.1f}".format(H_gc)) png_pane3.object = "images/T03_TP7_c2.png" # In[8]: png_pane4 = pn.pane.PNG("images/T03_TP7_d1.png", width=400) png_pane4 # In[9]: # Image (d) L = 0.5 # m, length of the column Ed_hl = 0.3 #, m, elevation head, left Pd_hl = 0.1 #, m pressure head, left Ed_hr = 0.2 #, m, elevation head, right Pd_hr = 0.1 #, m pressure head, right Hd_hl = Ed_hl + Pd_hl # m, hydraulic head, left Hd_hr = Ed_hr + Pd_hr # m, hydraulic head, right DH_d = Hd_hr - Hd_hl H_gd = DH_d/L#, no unit, hydraulic gradient #output 
print("Hydraulic head LEFT: {0:1.1f}".format(Hd_hl),"m");print("Hydraulic head Right:: {0:1.1f}".format(Hd_hr),"m") print("Hydraulic Head Difference: {0:1.1f}".format(DH_d),"m");print("Hydraulic gradient: {0:1.1f}".format(H_gd)) png_pane4.object = "images/T03_TP7_d2.png" # ### Tutorial Problem 8 ### # The hydraulic conductivity of a fine sand sample was found to be equal to $1.36\times 10^{-5}$ m/s in a Darcy experiment using water at a temperature of $20^\circ$C. What is the intrinsic permeability of this sample? Give results in cm$^2$ and D. # (density of water at $20^\circ$C: 998.2 kg/m$^3$; dynamic viscosity of water at $20^\circ$C: $1.0087 \times 10^{-3}$ Pa$\cdot$s; 1 D = $0.987\times 10^{-12}$ m$^2$) # # #### Tutorial Problem 8 - Solution #### # # **Relevant topics are covered in Lecture 04 slides 18-20.** # # Relationship between hydraulic conductivity $K$ and intrinsic permeability $k$ from lecture notes: # $$ # K_{water} = k\cdot \frac{\rho_{water}\cdot g}{\eta_{water}} # $$ # # Solve for , $k$ # # $$ # k = \frac{\eta_{water}\cdot K_{water}}{\rho_{water}\cdot g}{} # $$ # In[10]: #Given Darcy = 0.987 * 10**-12 # m^2, 1D = 0.987*10^-12 m^2 nu_w = 1.00087*10**-3 # Pa-S dynamic viscosity of water K_w = 1.36*10**-5 # m/s Conductivity of water g = 9.81 # m/s^2 accln due to gravity rho_w = 998.2 # kg/m^3, density of water # Solution k = (nu_w*K_w)/(rho_w*g)#, m^2, permeability of water k_D = k/Darcy print("The permeability is {0:1.1E}".format(k),"m\u00b2") print("The permeability in Darcy unite is: {0:1.2f}".format(k_D),"D") # ### Tutorial Problem 9: Constant-Head Permeameter ### # In[11]: ## Tutorial Problem 9: Constant-Head Permeameter r1 = pn.pane.LaTeX(r""" (a). Derive the expression for $K$ given below. $$ K = \frac{QL}{A(h_{in}-h_{out})} $$ (b). The hydraulic conductivity of a medium sand sample (length 15 cm, cross-sectional area 25 cm$^2$) is to be determined. The hydraulic head difference is 5 cm and a water volume of 100 cm³ pas-sed the sample during an experimental period of 12 min. <br><br> (c). How long would 100 cm$^3$ diesel (density: 0.85 g/cm$^3$, dynamic viscosity: 3.5$\cdot 10^{-3}$ Pa$\cdot$s at 20$^\circ$C) need to pass the sample under a head difference of 5 cm (density and dynamic viscosity of water at 20$^\circ$C: 998.2 kg/m$^3$ and 1.0087$\cdot 10^{-3}$ Pa$\cdot$s, resp.)? """, width=450, style={'font-size': '12pt'}) spacer = pn.Spacer(width=100) r2 =pn.pane.PNG("images/T03_TP9.png", width=400) pn.Row(r1,spacer, r2) # In[12]: ### Tutorial Problem 9 – Solution ### r1 = pn.pane.LaTeX(r""" Solution Problem 9: <br> The relevant topic can be found in lecture 04, slides 15, 18-20 <br><br> Let the reference datum be at the bottom. Then from Darcy's Law: $$Q= -A\cdot K\frac{h_{out}-h_{in}}{L}$$ With, Q = discharge [L$^3$/T]<br> L =column length [L]<br> A = cross-sectional area of column [L$^2$]<br> K = hydraulic conductivity [L/T]<br> h$_{in}$ =hydraulic head at column inlet [L]<br> h$_{out}$ = hydraulic head at column outlet [L]<br> <br> Solve for $K$: $$ K= \frac{Q\cdot L}{A\cdot(h_{in}-h_{out})} $$ If the reference level $(z = 0)$ is located at the downgradient overflow, then set $h_{out} = 0$. 
""", width= 500, style={'font-size': '13pt'} ) spacer = pn.Spacer(width=100) r2 =pn.pane.PNG("images/T03_TP9a.png", width=300) pn.Row(r1,spacer, r2, width=1000) # In[13]: #Given (solution on 9b) L = 15 #cm, length of column A = 25 # cm^2, surface area of column h_diff = 5 # cm, h_in-h_out Q = 100/12 # cm^3/min discharge per min # Solution using derived equation in first part of the problem # K = QL/A(h_in- h_out) K = (Q*L)/(A*h_diff)# cm/min, required conductivity K_1 = K*10**-2/60 #, m/s, conductivity in m/s #output print("The conductivity in column is {0:1.2E}".format(K),"cm/min") print("The conductivity in column is {0:1.2E}".format(K_1),"m/s \n") if K_1 <= 1.67*10**-4: print("Fine to medium sand") else: print("to check further") # to be completed later. # Continue solution on 9c # # Discharge and Darcy's law: $Q_{water} = \frac{V}{t_{water}}=-A\cdot K_{water}\cdot\frac{\Delta h}{L}$ # # Solve for $t_{water}$: $t_{water} = \frac{V}{Q_{water}}=-\frac{V}{A\cdot K_{water}\cdot\Delta h/L} = -\frac{V\cdot L}{A\cdot K_{water}\cdot\Delta h}$ # # Same step for $t_{diesel}$: $t_{diesel} = -\frac{V\cdot L}{A\cdot K_{diesel}\cdot\Delta h}$ # # # # time ratio:$\frac{t_{diesel}}{t_{water}} = \frac{-\frac{V\cdot L}{A\cdot K_{diesel}\cdot\Delta h}}{-\frac{V\cdot L}{A\cdot K_{water}\cdot\Delta h}} = \frac{K_{water}}{K_{diesel}}$ # # Use relationship between conductivity $K$ and permeability $k$ from lecture notes (slides 18) # $$\frac{K_{water}}{K_{diesel}} = \frac{k\cdot \frac{\rho_{water}\cdot g}{\eta_{water}}}{k\cdot \frac{\rho_{diesel}\cdot g}{\eta_{diesel}}} = \frac{\rho_{water}\cdot\eta_{diesel}}{\rho_{diesel}\cdot\eta_{water}}$$ # # Solve for $t_{diesel}$ # # In[14]: # Given data rho_w = 920.2 # kg/m^3, density of water at 20°C eta_w = 1.0087*10**-3#, Pa-S dynamic viscosity of water rho_d = 0.85 # g/cm^3, density of diesel at 20°C eta_d = 3.5*10**-3#, Pa-S dynamic viscosity of diesel V_d = 100 # cm^3 volume of diesel t_w = 12 # min, time taken by water # Calculations t_d = (rho_w*eta_d)/(rho_d*1000*eta_w)*t_w # multiplied by 1000 to convert unit g/cm^3 to kg/m^3 print("The time required for diesel will be: {0:0.2f}".format(t_d), "min") # ### Tutorial Problem 10: Falling-Head Permeameter ### # In[15]: # Tutorial Problem 10 r10_1 = pn.pane.LaTeX(r""" $$ K = \frac{d_t^2 L}{d_c^2 L}\cdot \ln\frac{h_{in}(0)-h_{out}}{h_{in}(t)-h_{out}} $$ """, width=400, style={'font-size': '12pt'}) r10_2 = pn.pane.Markdown(""" 1. Derive the expression for K given above. <br><br> 2. The hydraulic conductivity of a fine sand sample (length 15 cm, diameter 10 cm) is to be determined. The hydraulic head difference at the beginning and at the end of the experiment after 528 min is 5 cm and 0.5 cm, resp. The inner tube diameter is 2 cm. """, width= 500, style={'font-size': '12pt'}) r10_2a = pn.pane.Markdown(""" ### Tutorial Problem 10: Solution ### <br> **Relevant information can be found in Lecture 04, Slides 14 and 16** """, width= 500, style={'font-size': '12pt'}) col1 = pn.Column(r10_1, r10_2, r10_2a) r10_3 =pn.pane.PNG("images/T03_TP10.png", width=350) spacer3 = pn.Spacer(width=50) pn.Row(col1, spacer3, r10_3) # In[16]: # Tutorial Problem 10: Solution r10_a1 = pn.pane.LaTeX(r""" Darcy's Law: $$ Q(t) =
"""Competitions for parameter tuning using Monte-carlo tree search.""" from __future__ import division import operator import random from heapq import nlargest from math import exp, log, sqrt from gomill import compact_tracebacks from gomill import game_jobs from gomill import competitions from gomill import competition_schedulers from gomill.competitions import ( Competition, NoGameAvailable, CompetitionError, ControlFileError, Player_config) from gomill.settings import * class Node(object): """A MCTS node. Public attributes: children -- list of Nodes, or None for unexpanded wins visits value -- wins / visits rsqrt_visits -- 1 / sqrt(visits) """ def count_tree_size(self): if self.children is None: return 1 return sum(child.count_tree_size() for child in self.children) + 1 def recalculate(self): """Update value and rsqrt_visits from changed wins and visits.""" self.value = self.wins / self.visits self.rsqrt_visits = sqrt(1 / self.visits) def __getstate__(self): return (self.children, self.wins, self.visits) def __setstate__(self, state): self.children, self.wins, self.visits = state self.recalculate() __slots__ = ( 'children', 'wins', 'visits', 'value', 'rsqrt_visits', ) def __repr__(self): return "<Node:%.2f{%s}>" % (self.value, repr(self.children)) class Tree(object): """A tree of MCTS nodes representing N-dimensional parameter space. Parameters (available as read-only attributes): splits -- subdivisions of each dimension (list of integers, one per dimension) max_depth -- number of generations below the root initial_visits -- visit count for newly-created nodes initial_wins -- win count for newly-created nodes exploration_coefficient -- constant for UCT formula (float) Public attributes: root -- Node dimensions -- number of dimensions in the parameter space All changing state is in the tree of Node objects started at 'root'. References to 'optimiser_parameters' below mean a sequence of length 'dimensions', whose values are floats in the range 0.0..1.0 representing a point in this space. Each node in the tree represents an N-cuboid of parameter space. Each expanded node has prod(splits) children, tiling its cuboid. (The splits are the same in each generation.) Instantiate with: all parameters listed above parameter_formatter -- function optimiser_parameters -> string """ def __init__(self, splits, max_depth, exploration_coefficient, initial_visits, initial_wins, parameter_formatter): self.splits = splits self.dimensions = len(splits) self.branching_factor = reduce(operator.mul, splits) self.max_depth = max_depth self.exploration_coefficient = exploration_coefficient self.initial_visits = initial_visits self.initial_wins = initial_wins self._initial_value = initial_wins / initial_visits self._initial_rsqrt_visits = 1 / sqrt(initial_visits) self.format_parameters = parameter_formatter # map child index -> coordinate vector # coordinate vector -- tuple length 'dimensions' with values in # range(splits[d]) # The first dimension changes most slowly. 
self._cube_coordinates = [] for child_index in xrange(self.branching_factor): v = [] i = child_index for split in reversed(splits): i, coord = divmod(i, split) v.append(coord) v.reverse() self._cube_coordinates.append(tuple(v)) def new_root(self): """Initialise the tree with an expanded root node.""" self.node_count = 1 # For description only self.root = Node() self.root.children = None self.root.wins = self.initial_wins self.root.visits = self.initial_visits self.root.value = self.initial_wins / self.initial_visits self.root.rsqrt_visits = self._initial_rsqrt_visits self.expand(self.root) def set_root(self, node): """Use the specified node as the tree's root. This is used when restoring serialised state. Raises ValueError if the node doesn't have the expected number of children. """ if not node.children or len(node.children) != self.branching_factor: raise ValueError self.root = node self.node_count = node.count_tree_size() def expand(self, node): """Add children to the specified node.""" assert node.children is None node.children = [] child_count = self.branching_factor for _ in xrange(child_count): child = Node() child.children = None child.wins = self.initial_wins child.visits = self.initial_visits child.value = self._initial_value child.rsqrt_visits = self._initial_rsqrt_visits node.children.append(child) self.node_count += child_count def is_ripe(self, node): """Say whether a node has been visted enough times to be expanded.""" return node.visits != self.initial_visits def parameters_for_path(self, choice_path): """Retrieve the point in parameter space given by a node. choice_path -- sequence of child indices Returns optimiser_parameters representing the centre of the region of parameter space represented by the node of interest. choice_path must represent a path from the root to the node of interest. """ lo = [0.0] * self.dimensions breadths = [1.0] * self.dimensions for child_index in choice_path: cube_pos = self._cube_coordinates[child_index] breadths = [f / split for (f, split) in zip(breadths, self.splits)] for d, coord in enumerate(cube_pos): lo[d] += breadths[d] * coord return [f + .5 * breadth for (f, breadth) in zip(lo, breadths)] def retrieve_best_parameters(self): """Find the parameters with the most promising simulation results. Returns optimiser_parameters This walks the tree from the root, at each point choosing the node with most wins, and returns the parameters corresponding to the leaf node. """ simulation = self.retrieve_best_parameter_simulation() return simulation.get_parameters() def retrieve_best_parameter_simulation(self): """Return the Greedy_simulation used for retrieve_best_parameters.""" simulation = Greedy_simulation(self) simulation.walk() return simulation def get_test_parameters(self): """Return a 'typical' optimiser_parameters.""" return self.parameters_for_path([0]) def describe_choice(self, choice): """Return a string describing a child's coordinates in its parent.""" return str(self._cube_coordinates[choice]).replace(" ", "") def describe(self): """Return a text description of the current state of the tree. This currently dumps the full tree to depth 2. 
""" def describe_node(node, choice_path): parameters = self.format_parameters( self.parameters_for_path(choice_path)) choice_s = self.describe_choice(choice_path[-1]) return "%s %s %.3f %3d" % ( choice_s, parameters, node.value, node.visits - self.initial_visits) root = self.root wins = root.wins - self.initial_wins visits = root.visits - self.initial_visits try: win_rate = "%.3f" % (wins / visits) except ZeroDivisionError: win_rate = "--" result = [ "%d nodes" % self.node_count, "Win rate %d/%d = %s" % (wins, visits, win_rate) ] for choice, node in enumerate(self.root.children): result.append(" " + describe_node(node, [choice])) if node.children is None: continue for choice2, node2 in enumerate(node.children): result.append(" " + describe_node(node2, [choice, choice2])) return "\n".join(result) def summarise(self, out, summary_spec): """Write a summary of the most-visited parts of the tree. out -- writeable file-like object summary_spec -- list of ints summary_spec says how many nodes to describe at each depth of the tree (so to show only direct children of the root, pass a list of length 1). """ def p(s): print >> out, s def describe_node(node, choice_path): parameters = self.format_parameters( self.parameters_for_path(choice_path)) choice_s = " ".join(map(self.describe_choice, choice_path)) return "%s %-40s %.3f %3d" % ( choice_s, parameters, node.value, node.visits - self.initial_visits) def most_visits((child_index, node)): return node.visits last_generation = [([], self.root)] for i, n in enumerate(summary_spec): depth = i + 1 p("most visited at depth %s" % (depth)) this_generation = [] for path, node in last_generation: if node.children is not None: this_generation += [ (path + [child_index], child) for (child_index, child) in enumerate(node.children)] for path, node in sorted( nlargest(n, this_generation, key=most_visits)): p(describe_node(node, path)) last_generation = this_generation p("") class Simulation(object): """A single monte-carlo simulation. Instantiate with the Tree the simulation will run in. Use the methods in the following order: run() get_parameters() update_stats(b) describe() """ def __init__(self, tree): self.tree = tree # list of Nodes self.node_path = [] # corresponding list of child indices self.choice_path = [] # bool self.candidate_won = None def _choose_action(self, node): """Choose the best action from the specified node. Returns a pair (child index, node) """ uct_numerator = (self.tree.exploration_coefficient * sqrt(log(node.visits))) def urgency((i, child)): return child.value + uct_numerator * child.rsqrt_visits start = random.randrange(len(node.children)) children = list(enumerate(node.children)) return max(children[start:] + children[:start], key=urgency) def walk(self): """Choose a node sequence, without expansion.""" node = self.tree.root while node.children is not None: choice, node = self._choose_action(node) self.node_path.append(node) self.choice_path.append(choice) def run(self): """Choose the node sequence for this simulation. This walks down from the root, using _choose_action() at each level, until it reaches a leaf; if the leaf has already been visited, this expands it and chooses one more action. """ self.walk() node = self.node_path[-1] if (len(self.node_path) < self.tree.max_depth and self.tree.is_ripe(node)): self.tree.expand(node) choice, child = self._choose_action(node) self.node_path.append(child) self.choice_path.append(choice) def get_parameters(self): """Retrieve the parameters corresponding to the simulation's leaf node. 
Returns optimiser_parameters """ return self.tree.parameters_for_path(self.choice_path) def update_stats(self, candidate_won): """Update the tree's node statistics with the simulation's results. This updates visits (and wins, if appropriate) for each node in the simulation's node sequence. """ self.candidate_won = candidate_won for node in self.node_path: node.visits += 1 if candidate_won: node.wins += 1 node.recalculate() self.tree.root.visits += 1 if candidate_won: self.tree.root.wins += 1 # For description only self.tree.root.recalculate() def describe_steps(self): """Return a text description of the simulation's node sequence.""" return " ".join(map(self.tree.describe_choice, self.choice_path)) def describe(self): """Return a one-line-ish text description of the simulation.""" result = "%s [%s]" % ( self.tree.format_parameters(self.get_parameters()), self.describe_steps()) if self.candidate_won is not None: result += (" lost", " won")[self.candidate_won] return result def describe_briefly(self): """Return a shorter description
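# Editor's note: the gomill module above is cut off mid-docstring. Below is a standalone
# sketch (not gomill code) of the UCT selection rule that Simulation._choose_action
# applies: each child is scored by its mean value plus an exploration bonus that grows
# with the parent's visit count and shrinks with the child's. The parent visit count is
# approximated here by the sum over the children, and the snippet targets Python 3,
# whereas the original module is written for Python 2 (xrange, print >>, tuple arguments).
from math import log, sqrt

def uct_choose(children, exploration_coefficient):
    """children: list of (wins, visits) pairs; returns the index of the most urgent child."""
    parent_visits = sum(visits for _, visits in children)
    numerator = exploration_coefficient * sqrt(log(parent_visits))
    def urgency(item):
        wins, visits = item[1]
        return wins / visits + numerator / sqrt(visits)
    return max(enumerate(children), key=urgency)[0]

# Example: uct_choose([(3, 10), (1, 2)], exploration_coefficient=0.45)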
import torch import torch.nn as nn from torch.autograd import Variable import torch.nn.functional as F def conv2d(in_planes, out_planes, kernel_size, stride, pad, dilation): return nn.Sequential( nn.Conv2d( in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=dilation if dilation > 1 else pad, dilation=dilation, bias=True, ) ) def image_warp(img, depth, padding_mode="zeros"): # img: the source image (where to sample pixels) -- [B, 3, H, W] # depth: depth map of the target image -- [B, 1, H, W] # Returns: Source image warped to the target image b, _, h, w = depth.size() i_range = torch.autograd.Variable( torch.linspace(0, h - 1, steps=h).view(1, h, 1).expand(1, h, w), requires_grad=False, ) # [1, H, W] copy 0-height for w times : y coord j_range = torch.autograd.Variable( torch.linspace(0, w - 1, steps=w).view(1, 1, w).expand(1, h, w), requires_grad=False, ) # [1, H, W] copy 0-width for h times : x coord pixel_coords = torch.stack((j_range, i_range), dim=1).float().cuda() # [1, 2, H, W] batch_pixel_coords = ( pixel_coords[:, :, :, :].expand(b, 2, h, w).contiguous().view(b, 2, -1) ) # [B, 2, H*W] X = batch_pixel_coords[:, 0, :] - depth.contiguous().view(b, -1) # [B, H*W] Y = batch_pixel_coords[:, 1, :] X_norm = X * 2 / (w - 1) - 1 Y_norm = Y * 2 / (h - 1) - 1 pixel_coords = torch.stack([X_norm, Y_norm], dim=2) # [B, H*W, 2] pixel_coords = pixel_coords.view(b, h, w, 2) # [B, H, W, 2] projected_img = torch.nn.functional.grid_sample( img, pixel_coords, padding_mode=padding_mode ) return projected_img class stem_block(nn.Module): def __init__(self): super(stem_block, self).__init__() self.conv1 = nn.Sequential(conv2d(3, 64, 7, 2, 3, 1), nn.ReLU(inplace=True)) self.up_1 = nn.Sequential( nn.ConvTranspose2d( 64, 32, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True ), nn.ReLU(inplace=True), ) self.conv2 = nn.Sequential(conv2d(64, 128, 5, 2, 2, 1), nn.ReLU(inplace=True)) self.up_2 = nn.Sequential( nn.ConvTranspose2d( 128, 32, kernel_size=8, padding=2, output_padding=0, stride=4, bias=True ), nn.ReLU(inplace=True), ) self.up_12 = nn.Sequential(conv2d(64, 32, 1, 1, 0, 1), nn.ReLU(inplace=True)) def forward(self, x): batch_size = x.size()[0] height = x.size()[2] width = x.size()[3] conv1 = self.conv1(x) up_1 = self.up_1(conv1) conv2 = self.conv2(conv1) up_2 = self.up_2(conv2) # up_1+up_2 concat = Variable( torch.FloatTensor(batch_size, 64, height, width).zero_() ).cuda() concat[:, :32, :, :] = up_1[:, :, :, :] concat[:, 32:, :, :] = up_2[:, :, :, :] concat = concat.contiguous() up_12 = self.up_12(concat) return conv1, conv2, up_12 class disparity_estimation(nn.Module): def __init__(self): super(disparity_estimation, self).__init__() self.conv_redir = nn.Sequential( conv2d(128, 64, 1, 1, 0, 1), nn.ReLU(inplace=True) ) self.conv3_1 = nn.Sequential( conv2d(145, 256, 3, 2, 1, 1), nn.ReLU(inplace=True), conv2d(256, 256, 3, 1, 1, 1), nn.ReLU(inplace=True), ) self.conv4_1 = nn.Sequential( conv2d(256, 512, 3, 2, 1, 1), nn.ReLU(inplace=True), conv2d(512, 512, 3, 1, 1, 1), nn.ReLU(inplace=True), ) self.conv5_1 = nn.Sequential( conv2d(512, 512, 3, 2, 1, 1), nn.ReLU(inplace=True), conv2d(512, 512, 3, 1, 1, 1), nn.ReLU(inplace=True), ) self.conv6_1 = nn.Sequential( conv2d(512, 1024, 3, 2, 1, 1), nn.ReLU(inplace=True), conv2d(1024, 1024, 3, 1, 1, 1), nn.ReLU(inplace=True), ) self.disp6 = nn.Sequential(conv2d(1024, 1, 3, 1, 1, 1)) self.udisp6 = nn.Sequential( nn.ConvTranspose2d( 1, 1, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True ) ) self.uconv5 = nn.Sequential( 
nn.ConvTranspose2d( 1024, 512, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True, ), nn.ReLU(inplace=True), ) self.iconv5 = nn.Sequential( conv2d(1025, 512, 3, 1, 1, 1), nn.ReLU(inplace=True) ) self.disp5 = nn.Sequential(conv2d(512, 1, 3, 1, 1, 1)) self.udisp5 = nn.Sequential( nn.ConvTranspose2d( 1, 1, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True ) ) self.uconv4 = nn.Sequential( nn.ConvTranspose2d( 512, 256, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True, ), nn.ReLU(inplace=True), ) self.iconv4 = nn.Sequential(conv2d(769, 256, 3, 1, 1, 1), nn.ReLU(inplace=True)) self.disp4 = nn.Sequential(conv2d(256, 1, 3, 1, 1, 1)) self.udisp4 = nn.Sequential( nn.ConvTranspose2d( 1, 1, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True ) ) self.uconv3 = nn.Sequential( nn.ConvTranspose2d( 256, 128, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True, ), nn.ReLU(inplace=True), ) self.iconv3 = nn.Sequential(conv2d(385, 128, 3, 1, 1, 1), nn.ReLU(inplace=True)) self.disp3 = nn.Sequential(conv2d(128, 1, 3, 1, 1, 1)) self.udisp3 = nn.Sequential( nn.ConvTranspose2d( 1, 1, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True ) ) self.uconv2 = nn.Sequential( nn.ConvTranspose2d( 128, 64, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True ), nn.ReLU(inplace=True), ) self.iconv2 = nn.Sequential(conv2d(193, 64, 3, 1, 1, 1), nn.ReLU(inplace=True)) self.disp2 = nn.Sequential(conv2d(64, 1, 3, 1, 1, 1)) self.udisp2 = nn.Sequential( nn.ConvTranspose2d( 1, 1, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True ) ) self.uconv1 = nn.Sequential( nn.ConvTranspose2d( 64, 32, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True ), nn.ReLU(inplace=True), ) self.iconv1 = nn.Sequential(conv2d(97, 32, 3, 1, 1, 1), nn.ReLU(inplace=True)) self.disp1 = nn.Sequential(conv2d(32, 1, 3, 1, 1, 1)) self.udisp1 = nn.Sequential( nn.ConvTranspose2d( 1, 1, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True ) ) self.uconv0 = nn.Sequential( nn.ConvTranspose2d( 32, 32, kernel_size=4, padding=1, output_padding=0, stride=2, bias=True ), nn.ReLU(inplace=True), ) self.iconv0 = nn.Sequential(conv2d(65, 32, 3, 1, 1, 1), nn.ReLU(inplace=True)) self.disp0 = nn.Sequential(conv2d(32, 1, 3, 1, 1, 1)) def forward(self, conv1a, up_1a2a, conv2a, corr1d): batch_size = conv2a.size()[0] height = conv2a.size()[2] width = conv2a.size()[3] conv_redir = self.conv_redir(conv2a) corr1d_conv_redir = Variable( torch.FloatTensor(batch_size, corr1d.size()[1] + 64, height, width).zero_() ).cuda() corr1d_conv_redir[:, : corr1d.size()[1], :, :] = corr1d[:, :, :, :] corr1d_conv_redir[:, corr1d.size()[1] :, :, :] = conv_redir[:, :, :, :] corr1d_conv_redir = corr1d_conv_redir.contiguous() conv3_1 = self.conv3_1(corr1d_conv_redir) conv4_1 = self.conv4_1(conv3_1) conv5_1 = self.conv5_1(conv4_1) conv6_1 = self.conv6_1(conv5_1) disp6 = self.disp6(conv6_1) udisp6 = self.udisp6(disp6) uconv5 = self.uconv5(conv6_1) uconv5_disp6_conv5_1 = Variable( torch.FloatTensor( batch_size, uconv5.size()[1] + 1 + conv5_1.size()[1], height // 8, width // 8, ).zero_() ).cuda() uconv5_disp6_conv5_1[:, : uconv5.size()[1], :, :] = uconv5[:, :, :, :] uconv5_disp6_conv5_1[:, uconv5.size()[1] : uconv5.size()[1] + 1, :, :] = udisp6[ :, :, :, : ] uconv5_disp6_conv5_1[:, uconv5.size()[1] + 1 :, :, :] = conv5_1[:, :, :, :] uconv5_disp6_conv5_1 = uconv5_disp6_conv5_1.contiguous() iconv5 = self.iconv5(uconv5_disp6_conv5_1) disp5 = self.disp5(iconv5) udisp5 = self.udisp5(disp5) uconv4 
= self.uconv4(iconv5) uconv4_disp5_conv4_1 = Variable( torch.FloatTensor( batch_size, uconv4.size()[1] + 1 + conv4_1.size()[1], height // 4, width // 4, ).zero_() ).cuda() uconv4_disp5_conv4_1[:, : uconv4.size()[1], :, :] = uconv4[:, :, :, :] uconv4_disp5_conv4_1[:, uconv4.size()[1] : uconv4.size()[1] + 1, :, :] = udisp5[ :, :, :, : ] uconv4_disp5_conv4_1[:, uconv4.size()[1] + 1 :, :, :] = conv4_1[:, :, :, :] uconv4_disp5_conv4_1 = uconv4_disp5_conv4_1.contiguous() iconv4 = self.iconv4(uconv4_disp5_conv4_1) disp4 = self.disp4(iconv4) udisp4 = self.udisp4(disp4) uconv3 = self.uconv3(iconv4) uconv3_disp4_conv3_1 = Variable( torch.FloatTensor( batch_size, uconv3.size()[1] + 1 + conv3_1.size()[1], height // 2, width // 2, ).zero_() ).cuda() uconv3_disp4_conv3_1[:, : uconv3.size()[1], :, :] = uconv3[:, :, :, :] uconv3_disp4_conv3_1[:, uconv3.size()[1] : uconv3.size()[1] + 1, :, :] = udisp4[ :, :, :, : ] uconv3_disp4_conv3_1[:, uconv3.size()[1] + 1 :, :, :] = conv3_1[:, :, :, :] uconv3_disp4_conv3_1 = uconv3_disp4_conv3_1.contiguous() iconv3 = self.iconv3(uconv3_disp4_conv3_1) disp3 = self.disp3(iconv3) udisp3 = self.udisp3(disp3) uconv2 = self.uconv2(iconv3) uconv2_disp3_conv2a = Variable( torch.FloatTensor( batch_size, uconv2.size()[1] + 1 + conv2a.size()[1], height, width ).zero_() ).cuda() uconv2_disp3_conv2a[:, : uconv2.size()[1], :, :] = uconv2[:, :, :, :] uconv2_disp3_conv2a[:, uconv2.size()[1] : uconv2.size()[1] + 1, :, :] = udisp3[ :, :, :, : ] uconv2_disp3_conv2a[:, uconv2.size()[1] + 1 :, :, :] = conv2a[:, :, :, :] uconv2_disp3_conv2a = uconv2_disp3_conv2a.contiguous() iconv2 = self.iconv2(uconv2_disp3_conv2a) disp2 = self.disp2(iconv2) udisp2 = self.udisp2(disp2) uconv1 = self.uconv1(iconv2) uconv1_disp2_conv1a = Variable( torch.FloatTensor( batch_size, uconv1.size()[1] + 1 + conv1a.size()[1], height * 2, width * 2, ).zero_() ).cuda() uconv1_disp2_conv1a[:, : uconv1.size()[1], :, :] = uconv1[:, :, :, :] uconv1_disp2_conv1a[:, uconv1.size()[1] : uconv1.size()[1] + 1, :, :] = udisp2[ :, :, :, : ] uconv1_disp2_conv1a[:, uconv1.size()[1] + 1 :, :, :] = conv1a[:, :, :, :] uconv1_disp2_conv1a = uconv1_disp2_conv1a.contiguous() iconv1 = self.iconv1(uconv1_disp2_conv1a) disp1 = self.disp1(iconv1) udisp1 = self.udisp1(disp1) uconv0 = self.uconv0(iconv1) uconv0_disp1_up_1a2a = Variable( torch.FloatTensor( batch_size, uconv0.size()[1] + 1 + up_1a2a.size()[1], height * 4, width * 4, ).zero_() ).cuda() uconv0_disp1_up_1a2a[:, : uconv0.size()[1], :, :] = uconv0[:, :, :, :] uconv0_disp1_up_1a2a[:, uconv0.size()[1] : uconv0.size()[1] + 1, :, :] = udisp1[ :, :, :, : ] uconv0_disp1_up_1a2a[:, uconv0.size()[1] + 1 :, :, :] = up_1a2a[:, :, :, :] uconv0_disp1_up_1a2a = uconv0_disp1_up_1a2a.contiguous() iconv0 = self.iconv0(uconv0_disp1_up_1a2a) disp0 = self.disp0(iconv0) return disp0, disp1, disp2, disp3, disp4, disp5, disp6 class disparity_refinement(nn.Module): def __init__(self): super(disparity_refinement, self).__init__() self.r_conv0 = nn.Sequential(conv2d(65, 32, 3, 1, 1, 1), nn.ReLU(inplace=True)) self.r_conv1 = nn.Sequential(conv2d(32, 64, 3, 2, 1, 1), nn.ReLU(inplace=True)) self.c_conv1 = nn.Sequential(conv2d(64, 16, 3, 1, 1, 1), nn.ReLU(inplace=True)) self.r_conv1_1 = nn.Sequential( conv2d(105, 64, 3, 1, 1, 1), nn.ReLU(inplace=True) ) self.r_conv2 = nn.Sequential(conv2d(64, 128, 3, 2, 1, 1), nn.ReLU(inplace=True)) self.r_conv2_1 = nn.Sequential( conv2d(128, 128, 3, 1, 1, 1), nn.ReLU(inplace=True) ) self.r_res2 = nn.Sequential(conv2d(128, 1, 3, 1, 1, 1)) self.r_res2_up = nn.Sequential( 
nn.ConvTranspose2d( 1, 1, kernel_size=4, padding=1, output_padding=0,
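# Editor's note: the network definition above is cut off. The snippet below is a usage
# sketch (not part of the original file) for the image_warp helper defined near the top of
# this file. It assumes a CUDA device is available, since image_warp moves its sampling
# grid to the GPU before calling grid_sample; the tensor shapes are arbitrary.
import torch

if torch.cuda.is_available():
    b, h, w = 2, 32, 64
    src_img = torch.rand(b, 3, h, w).cuda()        # source image to sample pixels from
    disparity = torch.rand(b, 1, h, w).cuda() * 4  # horizontal shift, in pixels
    warped = image_warp(src_img, disparity)        # warped to the target view
    print(warped.shape)  # torch.Size([2, 3, 32, 64])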
# Copyright 2019 ZTE corporation. All Rights Reserved. # SPDX-License-Identifier: Apache-2.0 # pylint: disable=C0103, R0914, W1203, E0401, E0611, W0601 """ Train""" import os import time import math import random import logging import argparse from test import test import yaml import torch from torch import nn import torch.nn.functional as F from torch import optim import torch.distributed as dist from torch.utils.tensorboard import SummaryWriter from tqdm import tqdm from utils.models import Model, load_darknet_weights from utils.torch_utils import intersect_dicts, select_device from utils.dataset import LoadImagesAndLabels from utils.utils import init_seeds, plot_results, adjust_learning_rate from utils.loss import compute_loss from utils.prune_utils import parse_module_index, gather_bn_weights, bn_l1_regularization, distillation_loss logger = logging.getLogger(__name__) mixed_precision = True try: from apex import amp except ImportWarning: print("Not install apex!") mixed_precision = False # Directories of the save weights weights_dir = 'weights' + os.sep last = weights_dir + 'last.pt' best = weights_dir + 'best.pt' results_file = 'results.txt' def parse(): """Parser for command-line options, arguments and sub-commands""" parser = argparse.ArgumentParser() parser.add_argument('--epochs', type=int, default=273, help='500200 batches at bs 16, 117263 images = 273 epochs') parser.add_argument('--batch_size', type=int, default=16, help='effective bs = batch_size * ccumulate = 16*4 = 64') parser.add_argument('--accumulate', type=int, default=4, help='batches to accumulate before optimizing') parser.add_argument('--cfg', type=str, default='cfg/yolov3.cfg', help='cfg file path') parser.add_argument('--teacher_cfg', type=str, default='', help='teacher model cfg file for knowledge distillation') parser.add_argument('--data', type=str, default='data/coco2017.yaml', help='*.data file path') parser.add_argument('--hyp', type=str, default='data/hyp.yaml', help='the file of hyp path') parser.add_argument('--multi-scale', action='store_true', help='adjust (67% - 150%) img_size every 10 batches') parser.add_argument('--img_size', type=int, default=608, help='inference size (pixels)') parser.add_argument('--rect', action='store_true', help='rectangular training') parser.add_argument('--resume', action='store_true', help='resume training from last.pt') parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') parser.add_argument('--notest', action='store_true', help='only test final epoch') parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters') parser.add_argument('--cache-images', action='store_true', help='cache images for faster training') parser.add_argument('--weights', type=str, default='', help='initial weights') parser.add_argument('--teacher_weights', type=str, default='', help='teacher model weights') parser.add_argument('--arc', type=str, default='defaultpw', help='yolo architecture, defaultpw, uCE, uBCE') parser.add_argument('--name', default='', help='renames results.txt to results_name.txt if supplied') parser.add_argument('--device', default='0,1,2,3,4,5,6,7', help='device id (i.e. 
0 or 0,1) or cpu') parser.add_argument('--adam', action='store_true', help='use adam optimizer') parser.add_argument('--sparsity_training', '-st', dest='st', action='store_true', help='train with channel sparsity regularization') parser.add_argument('--penalty_factor', '-pf', type=float, default=0.0001, help='scale sparse rate') parser.add_argument('--prune', type=int, default=0, help='0:nomal prune 1:other prune ') parser.add_argument("--local_rank", type=int, default=-1, help="Distributed training - Local rank") parser.add_argument("--seed", type=int, default=56, help="Random seed") args = parser.parse_args() return args def main(): """ Train and test :param opt: args :param writer: tensorboard :return: """ global opt opt = parse() arc = opt.arc cfg = opt.cfg teacher_cfg = opt.teacher_cfg img_size = opt.img_size epochs = opt.epochs batch_size = opt.batch_size accumulate = opt.accumulate # effective bs = batch_size * accumulate = 16 * 4 = 64 weights = opt.weights teacher_weights = opt.teacher_weights multi_scale = opt.multi_scale sparsity_training = opt.st opt.weights = last if opt.resume else opt.weights # Initial logging logging.basicConfig( format="%(message)s", level=logging.INFO if opt.local_rank in [-1, 0] else logging.WARN) # Train logger.info(opt) if opt.local_rank in [-1, 0]: logger.info('Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/') writer = SummaryWriter() # Hyperparameters with open(opt.hyp) as f_hyp: hyp = yaml.safe_load(f_hyp) # data dict with open(opt.data) as f_data: data = yaml.safe_load(f_data) # Distributed training initialize device = select_device(opt.device) if opt.local_rank != -1: dist.init_process_group(init_method="env://", backend='nccl') torch.cuda.set_device(opt.local_rank) device = torch.device(f"cuda:{opt.local_rank}") # world_size = torch.distributed.get_world_size() init_seeds() cuda = device.type != 'cpu' torch.backends.cudnn.benchmark = True if multi_scale: img_size_min = round(img_size / 32 / 1.5) + 1 img_size_max = round(img_size / 32 * 1.5) - 1 img_size = img_size_max * 32 # initiate with maximum multi_scale size logger.info(f'Using multi-scale {img_size_min * 32} - {img_size}') train_path = data['train'] num_classes = int(data['num_classes']) # number of classes # Load dataset dataset = LoadImagesAndLabels(train_path, img_size, batch_size, augment=True, hyp=hyp, rect=opt.rect) train_sampler = torch.utils.data.distributed.DistributedSampler(dataset) if opt.local_rank != -1 else None num_worker = os.cpu_count() // torch.cuda.device_count() dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, num_workers=min([num_worker, batch_size, 8]), shuffle=not (opt.rect or train_sampler), sampler=train_sampler, pin_memory=True, collate_fn=dataset.collate_fn) # Load model model = Model(cfg, img_size, arc=arc).to(device) # Load teacher model if teacher_cfg: teacher_model = Model(teacher_cfg, img_size, arc).to(device) # optimizer parameter groups param_group0, param_group1 = [], [] for key, value in model.named_parameters(): if 'Conv2d.weight' in key: param_group1.append(value) else: param_group0.append(value) if opt.adam: optimizer = optim.Adam(param_group0, lr=hyp['lr0']) else: optimizer = optim.SGD(param_group0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True) # add param_group1 with weight_decay optimizer.add_param_group({'params': param_group1, 'weight_decay': hyp['weight_decay']}) logger.info(f'Optimizer groups: {len(param_group1)} conv.weight, {len(param_group0)} other') del param_group0, 
param_group1 start_epoch = 0 best_fitness = 0. if weights.endswith('.pt'): checkpoint = torch.load(weights, map_location=device) state_dict = intersect_dicts(checkpoint['model'], model.state_dict()) model.load_state_dict(state_dict, strict=False) print('loaded weights from', weights, '\n') # load optimizer if checkpoint['optimizer'] is not None: optimizer.load_state_dict(checkpoint['optimizer']) best_fitness = checkpoint['best_fitness'] # load results if checkpoint.get('training_results') is not None: with open(results_file, 'w') as file: file.write(checkpoint['training_results']) # resume if opt.resume: start_epoch = checkpoint['epoch'] + 1 del checkpoint elif len(weights) > 0: # weights are 'yolov4.weights', 'darknet53.conv.74' etc. load_darknet_weights(model, weights) logger.info(f'loaded weights from {weights}\n') # Load teacher weights if teacher_cfg: if teacher_weights.endswith('.pt'): teacher_model.load_state_dict(torch.load(teacher_weights, map_location=device)['model']) elif teacher_weights.endswith('.weights'): load_darknet_weights(teacher_model, teacher_weights) else: raise Exception('pls provide proper teacher weights for knowledge distillation') if not mixed_precision: teacher_model.eval() logger.info('<......................using knowledge distillation....................>') logger.info(f'teacher model: {teacher_weights}\n') # Sparsity training if opt.prune == 0: _, _, prune_index = parse_module_index(model.module_dicts) if sparsity_training: logger.info('normal sparse training') if mixed_precision: if teacher_cfg: [model, teacher_model], optimizer = amp.initialize([model, teacher_model], optimizer, opt_level='O1', verbosity=1) else: model, optimizer = amp.initialize(model, optimizer, opt_level='O1', verbosity=1) # SyncBatchNorm and distributed training if cuda and opt.local_rank != -1: model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device) model = model.to(device) model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[opt.local_rank]) model.module_list = model.module.module_list model.yolo_layers = model.module.yolo_layers for index in prune_index: bn_weights = gather_bn_weights(model.module_list, [index]) if opt.local_rank == 0: writer.add_histogram('before_train_per_layer_bn_weights/hist', bn_weights.numpy(), index, bins='doane') # Start training model.num_classes = num_classes model.arc = opt.arc model.hyp = hyp num_batch_size = len(dataloader) # 'P', 'R', 'mAP', 'F1', 'val GIoU', 'val Objectness', 'val Classification' results = (0, 0, 0, 0, 0, 0, 0) start_train_time = time.time() logger.info('Image sizes %d \n Starting training for %d epochs...', img_size, epochs) for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------ model.train() mean_losses = torch.zeros(4).to(device) mean_soft_target = torch.zeros(1).to(device) pbar = enumerate(dataloader) logger.info(('\n %10s %10s %10s %10s %10s %10s %10s %10s'), 'Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'total', 'targets', 'img_size') if opt.local_rank in [-1, 0]: pbar = tqdm(pbar, total=num_batch_size) optimizer.zero_grad() for i, (imgs, targets, _, _) in pbar: # batch ------------------------------------------------------------- num_integrated_batches = i + num_batch_size * epoch # Adjust the learning rate learning_rate = adjust_learning_rate(optimizer, num_integrated_batches, num_batch_size, hyp, epoch, epochs) if i == 0 and opt.local_rank in [-1, 0]: logger.info(f'learning rate: {learning_rate}') imgs = imgs.to(device) / 255.0 targets = 
targets.to(device) # Multi-Scale training if multi_scale: if num_integrated_batches / accumulate % 10 == 0: img_size = random.randrange(img_size_min, img_size_max + 1) * 32 scale_factor = img_size / max(imgs.shape[2:]) if scale_factor != 1: new_shape = [math.ceil(x * scale_factor / 32.) * 32 for x in imgs.shape[2:]] imgs = F.interpolate(imgs, size=new_shape, mode='bilinear', align_corners=False) pred = model(imgs) # Compute loss loss, loss_items = compute_loss(pred, targets, model) # knowledge distillation soft_target = 0 if teacher_cfg: if mixed_precision: with torch.no_grad(): output_teacher = teacher_model(imgs) else: _, output_teacher = teacher_model(imgs) soft_target = distillation_loss(pred, output_teacher, model.num_classes, imgs.size(0)) loss += soft_target # Scale loss by nominal batch_size of 64 loss *= batch_size / 64 # Compute gradient if mixed_precision: with amp.scale_loss(loss, optimizer) as scaled_loss: scaled_loss.backward() else: loss.backward() # Sparse the BN layer that needs pruning if sparsity_training: # bn_l1_regularization(model.module_list, opt.penalty_factor, cba_index, epoch, epochs) bn_l1_regularization(model.module_list, opt.penalty_factor, prune_index, epoch, epochs) # Accumulate gradient for x batches before optimizing if num_integrated_batches % accumulate == 0: optimizer.step() optimizer.zero_grad() if opt.local_rank in [-1, 0]: mean_losses = (mean_losses * i + loss_items) / (i + 1) mean_soft_target = (mean_soft_target * i + soft_target) / (i + 1) memory = torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0 # (GB) description = ('%10s' * 2 + '%10.3g' * 6) % ( '%g/%g' % (epoch, epochs - 1), '%.3gG' % memory, *mean_losses, mean_soft_target, img_size) pbar.set_description(description) # end batch ------------------------------------------------------------------------------------------------ # Update scheduler # scheduler.step() if opt.local_rank in [-1, 0]: final_epoch = epoch + 1 == epochs # Calculate mAP if not (opt.notest or opt.nosave) or final_epoch: with torch.no_grad(): results, _ = test(cfg, data, batch_size=batch_size, img_size=opt.img_size, model=model, conf_thres=0.001 if final_epoch and epoch > 0 else 0.1, # 0.1 for speed save_json=final_epoch and epoch > 0) # Write epoch results with open(results_file, 'a') as file: # P, R, mAP, F1, test_losses=(GIoU, obj, cls) file.write(description + '%10.3g' * 7 % results + '\n') # Write Tensorboard results if writer: outputs = list(mean_losses) + list(results) titles = ['GIoU', 'Objectness', 'Classification', 'Train loss', 'Precision', 'Recall',
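# Editor's note: the training script above is cut off. Below is a minimal, standalone
# sketch (not part of the original script) of the gradient-accumulation pattern its inner
# loop uses: the loss is rescaled to a nominal batch size and optimizer.step() is only
# called every `accumulate` iterations, so the effective batch size is
# batch_size * accumulate. The model, dataloader and loss_fn names are placeholders.
def train_one_epoch(model, dataloader, optimizer, loss_fn, accumulate=4, nominal_bs=64):
    optimizer.zero_grad()
    for i, (imgs, targets) in enumerate(dataloader):
        loss = loss_fn(model(imgs), targets)
        loss = loss * imgs.size(0) / nominal_bs   # scale loss to the nominal batch size
        loss.backward()                           # gradients accumulate across iterations
        if (i + 1) % accumulate == 0:
            optimizer.step()
            optimizer.zero_grad()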
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import tensorflow as tf import numpy as np import cv2 from tensorpack import dataflow from tensorpack.dataflow.base import RNGDataFlow, ProxyDataFlow try: import ipdb as pdb except Exception: import pdb def decode_image(img_str, resize=None): """ Decode image from tfrecord data :param img_str: image encoded as a png in a string :param resize: tuple width two elements that defines the new size of the image. optional :return: image as a numpy array """ nparr = np.fromstring(img_str, np.uint8) img_str = cv2.imdecode(nparr, -1) if resize is not None: img_str = cv2.resize(img_str, resize) return img_str def raw_images_to_array(images): """ Decode and normalize multiple images from tfrecord data :param images: list of images encoded as a png in a string :return: a numpy array of size (N, 56, 56, channels), normalized for training """ image_list = [] for image_str in images: image = decode_image(image_str, (56, 56)) # size:(56,56) image = scale_observation(np.atleast_3d(image.astype(np.float32))) image_list.append(image) return np.stack(image_list, axis=0) def scale_observation(x): """ Normalizes observation input, either an rgb image or a depth image :param x: observation input as numpy array, either an rgb image or a depth image :return: numpy array, a normalized observation """ if x.ndim == 2 or x.shape[2] == 1: # depth return x * (2.0 / 100.0) - 1.0 else: # rgb return x * (2.0/255.0) - 1.0 # value is between [0, 2] def bounding_box(img): """ Bounding box of non-zeros in an array (inclusive). Used with 2D maps :param img: numpy array :return: inclusive bounding box indices: top_row, bottom_row, leftmost_column, rightmost_column """ # helper function to rows = np.any(img, axis=1) # Test whether any array element along a given axis evaluates to True. cols = np.any(img, axis=0) rmin, rmax = np.where(rows)[0][[0, -1]] # np.where: Return elements chosen from x or y depending on condition. cmin, cmax = np.where(cols)[0][[0, -1]] return rmin, rmax, cmin, cmax class BatchDataWithPad(dataflow.BatchData): """ Stacks datapoints into batches. Selected elements can be padded to the same size in each batch. """ def __init__(self, ds, batch_size, remainder=False, use_list=False, padded_indices=()): """ :param ds: input dataflow. Same as BatchData :param batch_size: mini batch size. Same as BatchData :param remainder: if data is not enough to form a full batch, it makes a smaller batch when true. Same as BatchData. :param use_list: if True, components will contain a list of datapoints instead of creating a new numpy array. Same as BatchData. :param padded_indices: list of filed indices for which all elements will be padded with zeros to mach the largest in the batch. Each batch may produce a different size datapoint. """ super(BatchDataWithPad, self).__init__(ds, batch_size, remainder, use_list) self.padded_indices = padded_indices def get_data(self): """ Yields: Batched data by stacking each component on an extra 0th dimension. 
""" holder = [] for data in self.ds.get_data(): holder.append(data) if len(holder) == self.batch_size: yield BatchDataWithPad._aggregate_batch(holder, self.use_list, self.padded_indices) del holder[:] if self.remainder and len(holder) > 0: yield BatchDataWithPad._aggregate_batch(holder, self.use_list, self.padded_indices) @staticmethod def _aggregate_batch(data_holder, use_list=False, padded_indices=()): """ Re-implement the parent function with the option to pad selected fields to the largest in the batch. """ assert not use_list # cannot match shape if they must be treated as lists size = len(data_holder[0]) result = [] for k in range(size): dt = data_holder[0][k] if type(dt) in [int, bool]: tp = 'int32' elif type(dt) == float: tp = 'float32' else: try: tp = dt.dtype except AttributeError: raise TypeError("Unsupported type to batch: {}".format(type(dt))) try: if k in padded_indices: # pad this field shapes = np.array([x[k].shape for x in data_holder], 'i') # assumes ndim are the same for all assert shapes.shape[1] == 3 # only supports 3D arrays for now, e.g. images (height, width, ch) matching_shape = shapes.max(axis=0).tolist() new_data = np.zeros([shapes.shape[0]] + matching_shape, dtype=tp) for i in range(len(data_holder)): shape = data_holder[i][k].shape new_data[i, :shape[0], :shape[1], :shape[2]] = data_holder[i][k] result.append(new_data) else: # no need to pad this field, simply create batch result.append(np.asarray([x[k] for x in data_holder], dtype=tp)) except Exception as e: # exception handling. same as in parent class pdb.set_trace() dataflow.logger.exception("Cannot batch data. Perhaps they are of inconsistent shape?") if isinstance(dt, np.ndarray): s = dataflow.pprint.pformat([x[k].shape for x in data_holder]) dataflow.logger.error("Shape of all arrays to be batched: " + s) try: # open an ipython shell if possible import IPython as IP; IP.embed() # noqa except ImportError: pass return result class BreakForBPTT(ProxyDataFlow): """ Breaks long trajectories into multiple smaller segments for training with BPTT. Adds an extra field for indicating the first segment of a trajectory. """ def __init__(self, ds, timed_indices, trajlen, bptt_steps): """ :param ds: input dataflow :param timed_indices: field indices for which the second dimension corresponds to timestep along the trajectory :param trajlen: full length of trajectories :param bptt_steps: segment length, number of backprop steps for BPTT. Must be an integer divisor of trajlen """ super(BreakForBPTT, self).__init__(ds) self.timed_indiced = timed_indices self.bptt_steps = bptt_steps assert trajlen % bptt_steps == 0 self.num_segments = trajlen // bptt_steps def size(self): return self.ds.size() * self.num_segments def get_data(self): """ Yields multiple datapoints per input datapoints corresponding segments of the trajectory. Adds an extra field for indicating the first segment of a trajectory. """ for data in self.ds.get_data(): for split_i in range(self.num_segments): new_data = [] for i in range(len(data)): if i in self.timed_indiced: new_data.append(data[i][:, split_i*self.bptt_steps:(split_i+1)*self.bptt_steps]) else: new_data.append(data[i]) new_data.append((split_i == 0)) yield new_data class House3DTrajData(RNGDataFlow): """ Process tfrecords data of House3D trajectories. 
Produces a dataflow with the following fields: true state, global map, initial particles, observations, odometries """ def __init__(self, files, mapmode, obsmode, trajlen, num_particles, init_particles_distr, init_particles_cov, seed=None): """ :param files: list of data file names. assumed to be tfrecords files :param mapmode: string, map type. Possible values: wall / wall-door / wall-roomtype / wall-door-roomtype :param obsmode: string, observation type. Possible values: rgb / depth / rgb-depth. Vrf is not yet supported :param trajlen: int, length of trajectories :param num_particles: int, number of particles :param init_particles_distr: string, type of initial particle distribution. Possible values: tracking / one-room. Does not support two-rooms and all-rooms yet. :param init_particles_cov: numpy array of shape (3,3), coveriance matrix for the initial particles. Ignored when init_particles_distr != 'tracking'. :param seed: int or None. Random seed will be fixed if not None. """ self.files = files self.mapmode = mapmode self.obsmode = obsmode self.trajlen = trajlen self.num_particles = num_particles self.init_particles_distr = init_particles_distr self.init_particles_cov = init_particles_cov self.seed = seed # count total number of entries count = 0 for f in self.files: if not os.path.isfile(f): raise ValueError('Failed to find file: ' + f) record_iterator = tf.python_io.tf_record_iterator(f) for _ in record_iterator: count += 1 self.count = count def size(self): return self.count def reset_state(self): """ Reset state. Fix numpy random seed if needed.""" super(House3DTrajData, self).reset_state() if self.seed is not None: np.random.seed(1) else: np.random.seed(self.rng.randint(0, 99999999)) def get_data(self): """ Yields datapoints, all numpy arrays, with the following fields. true states: (trajlen, 3). Second dimension corresponds to x, y, theta coordinates. global map: (n, m, ch). shape is different for each map. number of channels depend on the mapmode setting initial particles: (num_particles, 3) observations: (trajlen, 56, 56, ch) number of channels depend on the obsmode setting odometries: (trajlen, 3) relative motion in the robot coordinate frame """ for file in self.files: gen = tf.python_io.tf_record_iterator(file) for data_i, string_record in enumerate(gen): result = tf.train.Example.FromString(string_record) # decord message from binary file features = result.features.feature # process maps map_wall = self.process_wall_map(features['map_wall'].bytes_list.value[0]) global_map_list = [map_wall] if 'door' in self.mapmode: map_door = self.process_door_map(features['map_door'].bytes_list.value[0]) global_map_list.append(map_door) if 'roomtype' in self.mapmode: map_roomtype = self.process_roomtype_map(features['map_roomtype'].bytes_list.value[0]) global_map_list.append(map_roomtype) if self.init_particles_distr == 'tracking': map_roomid = None else: map_roomid = self.process_roomid_map(features['map_roomid'].bytes_list.value[0]) # input global map is a concatentation of semantic channels global_map = np.concatenate(global_map_list, axis=-1) # concatenate in the last axis # rescale to 0~2 range. this way zero padding will produce the equivalent of obstacles global_map = global_map.astype(np.float32) * (2.0 / 255.0) # process true states true_states = features['states'].bytes_list.value[0] true_states = np.frombuffer(true_states, np.float32).reshape((-1, 3)) #frombuffer:Interpret a buffer as a 1-dimensional array. 
# trajectory may be longer than what we use for training data_trajlen = true_states.shape[0] assert data_trajlen >=
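# Editor's note: the House3D dataflow above is cut off. The snippet below is a standalone
# sketch (not part of the original module) of the per-batch zero-padding idea implemented
# in BatchDataWithPad._aggregate_batch: 3-D arrays of different sizes are stacked by
# padding each one with zeros up to the largest shape in the batch.
import numpy as np

def pad_and_stack(arrays):
    """arrays: list of 3-D numpy arrays (H, W, C) with possibly different shapes."""
    shapes = np.array([a.shape for a in arrays])
    target = shapes.max(axis=0)
    batch = np.zeros([len(arrays)] + target.tolist(), dtype=arrays[0].dtype)
    for i, a in enumerate(arrays):
        batch[i, :a.shape[0], :a.shape[1], :a.shape[2]] = a
    return batch

# Example: pad_and_stack([np.ones((3, 4, 1)), np.ones((5, 2, 1))]).shape == (2, 5, 4, 1)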
return ff_type,ff_number,entry_data,entry_values,i def store_ffio_data(self, ff_type, ff_number, entry_data, entry_values): self.stored_ffio_data[ff_type] = dict() self.stored_ffio_data[ff_type]['ff_type'] = ff_type self.stored_ffio_data[ff_type]['ff_number'] = ff_number self.stored_ffio_data[ff_type]['entry_data'] = entry_data self.stored_ffio_data[ff_type]['entry_values'] = entry_values def retrive_ffio_data(self, ff_type): return [self.stored_ffio_data[ff_type]['ff_number'], self.stored_ffio_data[ff_type]['entry_data'], self.stored_ffio_data[ff_type]['entry_values'] ] def parse_vdwtypes(self, type, current_molecule_type): ff_number, entry_data, entry_values = self.retrive_ffio_data(type) # molecule name is at sites, but vdwtypes come # before sites. So we store info in vdwtypes and # edit it later at sites. Eventually, we should # probably move to a model where we store sections # we can't use yet, and then process them in the # order we want. logger.debug("Parsing [ vdwtypes ] ...") for j in range(ff_number): self.vdwtypes.append(entry_values[j].split()[3:]) #THIS IS ASSUMING ALL VDWTYPES ARE STORED AS LJ12_6_SIG_EPSILON self.vdwtypeskeys.append(entry_values[j].split()[1]) def parse_sites(self, type, molname, i, start): ff_number, entry_data, entry_values = self.retrive_ffio_data(type) #correlate with atomtypes and atoms in GROMACS logger.debug("Parsing [ sites ] ...") #set indices to avoid continually calling list functions. ivdwtype = entry_data.index('s_ffio_vdwtype')+1 icharge = entry_data.index('r_ffio_charge')+1 imass = entry_data.index('r_ffio_mass')+1 stemp = None etemp = None if 'i_ffio_resnr' in entry_data: iresnum = entry_data.index('i_ffio_resnr')+1 iresidue = entry_data.index('s_ffio_residue')+1 cgnr = 0 # create the atom type container for the datax current_molecule_type = MoleculeType(name=molname) current_molecule_type.nrexcl = 0 #PLACEHOLDER FOR NREXCL...WE NEED TO FIND OUT WHERE IT IS #MRS: basically, we have to figure out the furthest number of bonds out # to exclude OR explicitly set gromacs exclusions. Either should work. # for now, we'll go with the latter self.system.add_molecule_type(current_molecule_type) current_molecule = Molecule(name=molname) # should this be the same molname several as lines up? for j in range(ff_number): split = entry_values[j].split() if split[1] == "atom": if ('i_ffio_resnr' in entry_data): atom = Atom(int(split[0]), split[ivdwtype], int(split[iresnum]), split[iresidue]) else: # No residuenr, means we will have identical atoms sharing this. 
atom = Atom(int(split[0]), split[ivdwtype]) atom.atomtype = (0, split[ivdwtype]) atom.charge = (0, float(split[icharge])*units.elementary_charge) atom.mass = (0, float(split[imass]) * units.amu) stemp = float(self.vdwtypes[self.vdwtypeskeys.index(split[ivdwtype])][0]) * units.angstroms #was in angstroms etemp = float(self.vdwtypes[self.vdwtypeskeys.index(split[ivdwtype])][1]) * units.kilocalorie_per_mole #was in kilocal per mol atom.sigma = (0, stemp) atom.epsilon = (0, etemp) atom.cgnr = cgnr cgnr+=1 newAtomType = None current_molecule.add_atom(atom) if not self.system._atomtypes.get(AbstractAtomType(atom.atomtype.get(0))): #if atomtype not in self.system, add it if self.system.combination_rule == 'Multiply-C6C12': sigma = (etemp/stemp)**(1/6) epsilon = (stemp)/(4*sigma**6) newAtomType = AtomCType(split[ivdwtypes], #atomtype/name split[ivdwtype], #bondtype -1, #atomic_number float(split[imass]) * units.amu, #mass float(split[icharge]) * units.elementary_charge, #charge--NEED TO CONVERT TO ACTUAL UNIT 'A', #pcharge...saw this in top--NEED TO CONVERT TO ACTUAL UNITS sigma * units.kilocalorie_per_mole * angstroms**(6), epsilon * units.kilocalorie_per_mole * unit.angstro,s**(12)) elif (self.system.combination_rule == 'Lorentz-Berthelot') or (self.system.combination_rule == 'Multiply-Sigeps'): newAtomType = AtomSigepsType(split[ivdwtype], #atomtype/name split[ivdwtype], #bondtype -1, #atomic_number float(split[imass]) * units.amu, #mass--NEED TO CONVERT TO ACTUAL UNITS float(split[icharge]) * units.elementary_charge, #charge--NEED TO CONVERT TO ACTUAL UNIT 'A', #pcharge...saw this in top--NEED TO CONVERT TO ACTUAL UNITS stemp, etemp) self.system.add_atomtype(newAtomType) if len(self.atom_blockpos) > 1: #LOADING M_ATOMS if self.atom_blockpos[0] < start: # generate the new molecules for this block; the number of molecules depends on # The number of molecules depends on the number of entries in ffio_sites (ff_number) new_molecules = self.loadMAtoms(self.lines, self.atom_blockpos[0], i, current_molecule, ff_number) self.atom_blockpos.pop(0) index = 0 for molecule in new_molecules: self.system.add_molecule(molecule) # now construct an atomlist with all the atoms for atom in molecule.atoms: # does this need to be a deep copy? # tmpatom = copy.deepcopy(atom) # tmpatom.index = index self.atomlist.append(atom) index +=1 return self.system._molecule_types[molname] def parse_bonds(self, type, current_molecule_type, i, start): ff_number, entry_data, entry_values = self.retrive_ffio_data(type) if len(self.bond_blockpos) > 1: #LOADING M_BONDS if self.bond_blockpos[0] < start: for molecule in iter(current_molecule_type.molecules): npermol = len(molecule.atoms) break # of the parsers, this is the only one that uses 'lines'. Can we remove? current_molecule_type.bond_forces = self.loadMBonds(self.lines, self.bond_blockpos[0], i, npermol) self.bond_blockpos.pop(0) logger.debug("Parsing [ bonds ]...") for j in range(ff_number): entries = entry_values[j].split() key = entries[3].upper() atoms = [int(x) for x in entries[1:3]] bondingtypes = [self.atomlist[atom-1].name for atom in atoms] atoms.extend(bondingtypes) params = [float(x) for x in entries[4:6]] new_bond = self.create_forcetype(self.desmond_bonds[key], atoms, params) kwds = self.get_parameter_kwds_from_force(new_bond) new_bond = self.canonical_bond(new_bond, kwds, direction = 'into', name = key) # removing the placeholder from matoms (should be a better way to do this?) 
if new_bond: old_bond = current_molecule_type.match_bonds(new_bond) if old_bond: new_bond.order = old_bond.order current_molecule_type.bond_forces.remove(old_bond) current_molecule_type.bond_forces.add(new_bond) def parse_pairs(self, type, current_molecule_type): ff_number, entry_data, entry_values = self.retrive_ffio_data(type) logger.debug("Parsing [ pairs ] ...") for j in range(ff_number): ljcorr = False coulcorr = False new_pair = None split = entry_values[j].split() atoms = [int(x) for x in split[1:3]] bondingtypes = [self.atomlist[atom-1].name for atom in atoms] params = atoms + bondingtypes key = split[3].upper() if key == "LJ12_6_SIG_EPSILON": new_pair = self.create_forcetype(LjSigepsPair, params, [float(x) for x in split[4:6]]) elif key == "LJ" or key == "COULOMB": # I think we just need LjSigepsPair, not LjPair? new_pair = self.create_forcetype(LjDefaultPair, params, [0, 0]) if key == "LJ": ljcorr = float(split[4]) new_pair.scaleLJ = ljcorr elif key == "COULOMB": coulcorr = float(split[4]) new_pair.scaleQQ = coulcorr else: warn("ReadError: didn't recognize type %s in line %s", split[3], entry_values[j]) # now, we catch the matches and read them into a single potential pair_match = current_molecule_type.match_pairs(new_pair) if pair_match: # we found a pair with the same atoms; let's insert or delete information as needed. remove_old = False remove_new = False if isinstance(new_pair, LjSigepsPair) and isinstance(pair_match, LjDefaultPair) and pair_match.scaleQQ: #Need to add old scaleQQ to this new pair new_pair.scaleQQ = pair_match.scaleQQ remove_old = True elif isinstance(pair_match, LjSigepsPair) and isinstance(new_pair, LjDefaultPair) and new_pair.scaleQQ: #Need to add the scaleQQ to the old pair pair_match.scaleQQ = new_pair.scaleQQ remove_new = True elif isinstance(new_pair,LjDefaultPair) and isinstance(pair_match,LjDefaultPair): if pair_match.scaleQQ and not new_pair.scaleQQ: new_pair.scaleQQ = pair_match.scaleQQ remove_old = True elif not pair_match.scaleQQ and new_pair.scaleQQ: pair_match.scaleQQ = new_pair.scaleQQ remove_new = True if pair_match.scaleLJ and not new_pair.scaleLJ: new_pair.scaleLJ = pair_match.scaleLJ remove_new = True elif not pair_match.scaleLJ and new_pair.scaleLJ: pair_match.scaleLJ = new_pair.scaleLJ remove_old = True if remove_old: current_molecule_type.pair_forces.remove(pair_match) if remove_new: new_pair = None if coulcorr: self.system.coulomb_correction = coulcorr # need this for gromacs to have the global declared #If we have difference between global and local, catch in gromacs. if ljcorr: self.system.lj_correction = ljcorr # need this for gromacs to have the global declared #If we have difference between global and local, catch in gromacs. if new_pair: current_molecule_type.pair_forces.add(new_pair) # IMPORTANT: we are going to assume that all pairs are both LJ and COUL. # if COUL is not included, then it is because the charges are zero, and they will give the # same energy. This could eventually be improved by checking versus the sites. 
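The Multiply-C6C12 branch in parse_sites above switches between sigma/epsilon and C6/C12 style Lennard-Jones parameters. The standard identities behind that switch, independent of this parser's internals, are sketched below; the helper names are illustrative only.

def sigeps_to_c6c12(sigma, epsilon):
    # Standard Lennard-Jones identities: C6 = 4*eps*sigma**6, C12 = 4*eps*sigma**12
    return 4.0 * epsilon * sigma**6, 4.0 * epsilon * sigma**12

def c6c12_to_sigeps(c6, c12):
    # Inverse relations; assumes c6 and c12 are both nonzero
    sigma = (c12 / c6) ** (1.0 / 6.0)
    epsilon = c6 ** 2 / (4.0 * c12)
    return sigma, epsilon

Units are whatever sigma and epsilon carry on entry (angstroms and kcal/mol in the parser above), so any unit bookkeeping has to be done by the caller.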
def parse_angles(self, type, current_molecule_type): ff_number, entry_data, entry_values = self.retrive_ffio_data(type) logger.debug("Parsing [ angles ] ...") for j in range(ff_number): split = entry_values[j].split() key = split[4].upper() atoms = [int(x) for x in split[1:4]] bondingtypes = [self.atomlist[atom-1].name for atom in atoms] atoms.extend(bondingtypes) kwds = [float(x) for x in split[5:7]] new_angle = self.create_forcetype(self.desmond_angles[key], atoms, kwds) kwds = self.get_parameter_kwds_from_force(new_angle) new_angle = self.canonical_angle(new_angle, kwds, direction = 'into', name = key, molecule_type = current_molecule_type) if new_angle: current_molecule_type.angle_forces.add(new_angle) def parse_dihedrals(self, type, current_molecule_type): ff_number, entry_data, entry_values = self.retrive_ffio_data(type) logger.debug("Parsing [ dihedrals ] ...") for j in range(ff_number): split = entry_values[j].split() new_dihedral = None dihedral_type = None atoms = [int(x) for x in split[1:5]] bondingtypes = [self.atomlist[atom-1].name for atom in atoms] key = split[5].upper() atoms.extend(bondingtypes) # not sure how to put the following lines in canonical, since it expects keywords, # not strings of variable length. will have to fix later. if key == "IMPROPER_HARM": kwds = [float(split[6]), 2*float(split[7])] elif key == "PROPER_TRIG" or key == "IMPROPER_TRIG": kwds = [float(x) for x in split[6:14]] elif key == "OPLS_PROPER" or key == "OPLS_IMPROPER": # next 3 lines definitely not the right way to do it. opls_kwds = {key: value for key, value in zip("c1 c2 c3 c4".split(), [units.kilocalorie_per_mole * float(s) for s in split[7:11]])} opls_kwds = convert_dihedral_from_fourier_to_trig(opls_kwds) kwds = np.zeros(8) # will fill this in later. new_dihedral = self.create_forcetype(self.desmond_dihedrals[key], atoms, kwds) # really should be some way to get rid of this code below if key == "OPLS_PROPER" or key == "OPLS_IMPROPER": for key in opls_kwds.keys(): setattr(new_dihedral,key,opls_kwds[key]) # really should be some way to get rid of this code above kwds
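parse_dihedrals above hands OPLS coefficients to convert_dihedral_from_fourier_to_trig. As a reference point only, the underlying algebra is sketched below, assuming the common OPLS Fourier convention V(x) = 0.5*[c1*(1+cos x) + c2*(1-cos 2x) + c3*(1+cos 3x) + c4*(1-cos 4x)]; this is not a claim about the exact sign or phase convention that library function uses.

def fourier_to_cosine_series(c1, c2, c3, c4):
    # Expand the OPLS Fourier form into a plain cosine series
    #   V(x) = fc[0] + fc[1]*cos x + fc[2]*cos 2x + fc[3]*cos 3x + fc[4]*cos 4x
    fc0 = 0.5 * (c1 + c2 + c3 + c4)
    return [fc0, 0.5 * c1, -0.5 * c2, 0.5 * c3, -0.5 * c4]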
""" Created on Tuesday 9 July 2019 @author: s0899345 """ import matplotlib.pyplot as plt import iris import iris.coord_categorisation as iriscc import iris.plot as iplt import iris.analysis.cartography import numpy as np import calendar import cf_units from cf_units import Unit #this file is split into parts as follows: #PART 1: Load and Format all Past Models #PART 2: Load and Format all Future Models #PART 3: Load and Format Observed Data #PART 4: Format Data General #PART 5: Format Data to be Geographically Specific and Re-Baseline #PART 6: print data def main(): #promote iris.FUTURE to true to fix cube iris.FUTURE.netcdf_promote = True #------------------------------------------------------------------------- #PART 1: LOAD and FORMAT ALL PAST MODELS CCCmaCanRCM_past = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/Historical_daily/tasmax_AFR-44_CCCma-CanESM2_historical_r1i1p1_CCCma-CanRCM4_r2_day_19710101-20001231.nc' CCCmaSMHI_past = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/Historical_daily/tasmax_AFR-44_CCCma-CanESM2_historical_r1i1p1_SMHI-RCA4_v1_day_19710101-20001231.nc' CNRM_past = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/Historical_daily/tasmax_AFR-44_CNRM-CERFACS-CNRM-CM5_historical_r1i1p1_CLMcom-CCLM4-8-17_v1_day_19710101-20001231.nc' CNRMSMHI_past = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/Historical_daily/tasmax_AFR-44_CNRM-CERFACS-CNRM-CM5_historical_r1i1p1_SMHI-RCA4_v1_day_19710101-20001231.nc' CSIRO_past = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/Historical_daily/tasmax_AFR-44_CSIRO-QCCCE-CSIRO-Mk3-6-0_historical_r1i1p1_SMHI-RCA4_v1_day_19710101-20001231.nc' ICHECDMI_past = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/Historical_daily/tasmax_AFR-44_ICHEC-EC-EARTH_historical_r3i1p1_DMI-HIRHAM5_v2_day_19710101-20001231.nc' ICHECCCLM_past = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/Historical_daily/tasmax_AFR-44_ICHEC-EC-EARTH_historical_r12i1p1_CLMcom-CCLM4-8-17_v1_day_19710101-20001231.nc' ICHECKNMI_past = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/Historical_daily/tasmax_AFR-44_ICHEC-EC-EARTH_historical_r1i1p1_KNMI-RACMO22T_v1_day_19710101-20001231.nc' ICHECMPI_past = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/Historical_daily/tasmax_AFR-44_ICHEC-EC-EARTH_historical_r12i1p1_MPI-CSC-REMO2009_v1_day_19710101-20001231.nc' ICHECSMHI_past = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/Historical_daily/tasmax_AFR-44_ICHEC-EC-EARTH_historical_r12i1p1_SMHI-RCA4_v1_day_19710101-20001231.nc' #Load exactly one cube from given file CCCmaCanRCM_past = iris.load_cube(CCCmaCanRCM_past) CCCmaSMHI_past = iris.load_cube(CCCmaSMHI_past) CNRM_past = iris.load_cube(CNRM_past) CNRMSMHI_past = iris.load_cube(CNRMSMHI_past) CSIRO_past = iris.load_cube(CSIRO_past) ICHECDMI_past = iris.load_cube(ICHECDMI_past, 'air_temperature') ICHECCCLM_past = iris.load_cube(ICHECCCLM_past) ICHECKNMI_past = iris.load_cube(ICHECKNMI_past) ICHECMPI_past = iris.load_cube(ICHECMPI_past) ICHECSMHI_past = iris.load_cube(ICHECSMHI_past) #remove flat latitude and longitude and only use grid latitude and grid longitude to make consistent with the observed data, also make sure all of the longitudes are monotonic. lats = iris.coords.DimCoord(CCCmaCanRCM_past.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CCCmaCanRCM_past.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. 
lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CCCmaCanRCM_past.remove_coord('latitude') CCCmaCanRCM_past.remove_coord('longitude') CCCmaCanRCM_past.remove_coord('grid_latitude') CCCmaCanRCM_past.remove_coord('grid_longitude') CCCmaCanRCM_past.add_dim_coord(lats, 1) CCCmaCanRCM_past.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CCCmaSMHI_past.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CCCmaSMHI_past.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CCCmaSMHI_past.remove_coord('latitude') CCCmaSMHI_past.remove_coord('longitude') CCCmaSMHI_past.remove_coord('grid_latitude') CCCmaSMHI_past.remove_coord('grid_longitude') CCCmaSMHI_past.add_dim_coord(lats, 1) CCCmaSMHI_past.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CNRM_past.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CNRM_past.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CNRM_past.remove_coord('latitude') CNRM_past.remove_coord('longitude') CNRM_past.remove_coord('grid_latitude') CNRM_past.remove_coord('grid_longitude') CNRM_past.add_dim_coord(lats, 1) CNRM_past.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CNRMSMHI_past.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CNRMSMHI_past.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CNRMSMHI_past.remove_coord('latitude') CNRMSMHI_past.remove_coord('longitude') CNRMSMHI_past.remove_coord('grid_latitude') CNRMSMHI_past.remove_coord('grid_longitude') CNRMSMHI_past.add_dim_coord(lats, 1) CNRMSMHI_past.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CSIRO_past.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CSIRO_past.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CSIRO_past.remove_coord('latitude') CSIRO_past.remove_coord('longitude') CSIRO_past.remove_coord('grid_latitude') CSIRO_past.remove_coord('grid_longitude') CSIRO_past.add_dim_coord(lats, 1) CSIRO_past.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECDMI_past.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECDMI_past.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECDMI_past.remove_coord('latitude') ICHECDMI_past.remove_coord('longitude') ICHECDMI_past.remove_coord('grid_latitude') ICHECDMI_past.remove_coord('grid_longitude') ICHECDMI_past.add_dim_coord(lats, 1) ICHECDMI_past.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECCCLM_past.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECCCLM_past.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. 
lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECCCLM_past.remove_coord('latitude') ICHECCCLM_past.remove_coord('longitude') ICHECCCLM_past.remove_coord('grid_latitude') ICHECCCLM_past.remove_coord('grid_longitude') ICHECCCLM_past.add_dim_coord(lats, 1) ICHECCCLM_past.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECKNMI_past.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECKNMI_past.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECKNMI_past.remove_coord('latitude') ICHECKNMI_past.remove_coord('longitude') ICHECKNMI_past.remove_coord('grid_latitude') ICHECKNMI_past.remove_coord('grid_longitude') ICHECKNMI_past.add_dim_coord(lats, 1) ICHECKNMI_past.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECMPI_past.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECMPI_past.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECMPI_past.remove_coord('latitude') ICHECMPI_past.remove_coord('longitude') ICHECMPI_past.remove_coord('grid_latitude') ICHECMPI_past.remove_coord('grid_longitude') ICHECMPI_past.add_dim_coord(lats, 1) ICHECMPI_past.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECSMHI_past.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECSMHI_past.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECSMHI_past.remove_coord('latitude') ICHECSMHI_past.remove_coord('longitude') ICHECSMHI_past.remove_coord('grid_latitude') ICHECSMHI_past.remove_coord('grid_longitude') ICHECSMHI_past.add_dim_coord(lats, 1) ICHECSMHI_past.add_dim_coord(lons, 2) #guess bounds CCCmaCanRCM_past.coord('latitude').guess_bounds() CCCmaSMHI_past.coord('latitude').guess_bounds() CNRM_past.coord('latitude').guess_bounds() CNRMSMHI_past.coord('latitude').guess_bounds() CSIRO_past.coord('latitude').guess_bounds() ICHECDMI_past.coord('latitude').guess_bounds() ICHECCCLM_past.coord('latitude').guess_bounds() ICHECKNMI_past.coord('latitude').guess_bounds() ICHECMPI_past.coord('latitude').guess_bounds() ICHECSMHI_past.coord('latitude').guess_bounds() CCCmaCanRCM_past.coord('longitude').guess_bounds() CCCmaSMHI_past.coord('longitude').guess_bounds() CNRM_past.coord('longitude').guess_bounds() CNRMSMHI_past.coord('longitude').guess_bounds() CSIRO_past.coord('longitude').guess_bounds() ICHECDMI_past.coord('longitude').guess_bounds() ICHECCCLM_past.coord('longitude').guess_bounds() ICHECKNMI_past.coord('longitude').guess_bounds() ICHECMPI_past.coord('longitude').guess_bounds() ICHECSMHI_past.coord('longitude').guess_bounds() #------------------------------------------------------------------------- #PART 2: LOAD and FORMAT PROJECTED MODELS CCCmaCanRCM= '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/4.5/tasmax_AFR-44_CCCma-CanESM2_rcp45_r1i1p1_CCCma-CanRCM4_r2_day_20060101-20701231.nc' CCCmaSMHI= '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/4.5/tasmax_AFR-44_CCCma-CanESM2_rcp45_r1i1p1_SMHI-RCA4_v1_day_20060101-20701231.nc' CNRM= 
'/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/4.5/tasmax_AFR-44_CNRM-CERFACS-CNRM-CM5_rcp45_r1i1p1_CLMcom-CCLM4-8-17_v1_day_20060101-20701231.nc' CNRMSMHI= '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/4.5/tasmax_AFR-44_CNRM-CERFACS-CNRM-CM5_rcp45_r1i1p1_SMHI-RCA4_v1_day_20060101-20701231.nc' CSIRO = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/4.5/tasmax_AFR-44_CSIRO-QCCCE-CSIRO-Mk3-6-0_rcp45_r1i1p1_SMHI-RCA4_v1_day_20060101-20701231.nc' ICHECDMI = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/4.5/tasmax_AFR-44_ICHEC-EC-EARTH_rcp45_r3i1p1_DMI-HIRHAM5_v2_day_20060101-20701231.nc' ICHECCCLM = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/4.5/tasmax_AFR-44_ICHEC-EC-EARTH_rcp45_r12i1p1_CLMcom-CCLM4-8-17_v1_day_20060101-20701231.nc' ICHECKNMI = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/4.5/tasmax_AFR-44_ICHEC-EC-EARTH_rcp45_r1i1p1_KNMI-RACMO22T_v1_day_20060101-20701231.nc' ICHECMPI = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/4.5/tasmax_AFR-44_ICHEC-EC-EARTH_rcp45_r12i1p1_MPI-CSC-REMO2009_v1_day_20060101-20701231.nc' ICHECSMHI = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/4.5/tasmax_AFR-44_ICHEC-EC-EARTH_rcp45_r12i1p1_SMHI-RCA4_v1_day_20060101-20701231.nc' CCCmaCanRCM85 = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/8.5/tasmax_AFR-44_CCCma-CanESM2_rcp85_r1i1p1_CCCma-CanRCM4_r2_day_20060101-20701231.nc' CCCmaSMHI85 = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/8.5/tasmax_AFR-44_CCCma-CanESM2_rcp85_r1i1p1_SMHI-RCA4_v1_day_20060101-20701231.nc' CNRM85 = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/8.5/tasmax_AFR-44_CNRM-CERFACS-CNRM-CM5_rcp85_r1i1p1_CLMcom-CCLM4-8-17_v1_day_20060101-20701231.nc' CNRMSMHI85 = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/8.5/tasmax_AFR-44_CNRM-CERFACS-CNRM-CM5_rcp85_r1i1p1_SMHI-RCA4_v1_day_20060101-20701231.nc' CSIRO85 = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/8.5/tasmax_AFR-44_CSIRO-QCCCE-CSIRO-Mk3-6-0_rcp85_r1i1p1_SMHI-RCA4_v1_day_20060101-20701231.nc' ICHECDMI85 = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/8.5/tasmax_AFR-44_ICHEC-EC-EARTH_rcp85_r3i1p1_DMI-HIRHAM5_v2_day_20060101-20701231.nc' ICHECCCLM85 = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/8.5/tasmax_AFR-44_ICHEC-EC-EARTH_rcp85_r12i1p1_CLMcom-CCLM4-8-17_v1_day_20060101-20701231.nc' ICHECKNMI85 = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/8.5/tasmax_AFR-44_ICHEC-EC-EARTH_rcp85_r1i1p1_KNMI-RACMO22T_v1_day_20060101-20701231.nc' ICHECMPI85 = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/8.5/tasmax_AFR-44_ICHEC-EC-EARTH_rcp85_r12i1p1_MPI-CSC-REMO2009_v1_day_20060101-20701231.nc' ICHECSMHI85 = '/exports/csce/datastore/geos/users/s0899345/AFR_44_tasmax/8.5/tasmax_AFR-44_ICHEC-EC-EARTH_rcp85_r12i1p1_SMHI-RCA4_v1_day_20060101-20701231.nc' #Load exactly one cube from given file CCCmaCanRCM = iris.load_cube(CCCmaCanRCM) CCCmaSMHI = iris.load_cube(CCCmaSMHI) CNRM = iris.load_cube(CNRM) CNRMSMHI = iris.load_cube(CNRMSMHI) CSIRO = iris.load_cube(CSIRO) ICHECDMI = iris.load_cube(ICHECDMI, 'air_temperature') ICHECCCLM = iris.load_cube(ICHECCCLM) ICHECKNMI = iris.load_cube(ICHECKNMI) ICHECMPI = iris.load_cube(ICHECMPI) ICHECSMHI = iris.load_cube(ICHECSMHI) CCCmaCanRCM85 = iris.load_cube(CCCmaCanRCM85) CCCmaSMHI85 = iris.load_cube(CCCmaSMHI85) CNRM85 = iris.load_cube(CNRM85) CNRMSMHI85 = iris.load_cube(CNRMSMHI85) CSIRO85 = iris.load_cube(CSIRO85) ICHECDMI85 = 
iris.load_cube(ICHECDMI85, 'air_temperature') ICHECCCLM85 = iris.load_cube(ICHECCCLM85) ICHECKNMI85 = iris.load_cube(ICHECKNMI85) ICHECMPI85 = iris.load_cube(ICHECMPI85) ICHECSMHI85 = iris.load_cube(ICHECSMHI85) #remove flat latitude and longitude and only use grid latitude and grid longitude to make consistent with the observed data, also make sure all of the longitudes are monotonic. lats = iris.coords.DimCoord(CCCmaCanRCM.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CCCmaCanRCM.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CCCmaCanRCM.remove_coord('latitude') CCCmaCanRCM.remove_coord('longitude') CCCmaCanRCM.remove_coord('grid_latitude') CCCmaCanRCM.remove_coord('grid_longitude') CCCmaCanRCM.add_dim_coord(lats, 1) CCCmaCanRCM.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CCCmaSMHI.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CCCmaSMHI.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CCCmaSMHI.remove_coord('latitude') CCCmaSMHI.remove_coord('longitude') CCCmaSMHI.remove_coord('grid_latitude') CCCmaSMHI.remove_coord('grid_longitude') CCCmaSMHI.add_dim_coord(lats, 1) CCCmaSMHI.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CNRM.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CNRM.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CNRM.remove_coord('latitude') CNRM.remove_coord('longitude') CNRM.remove_coord('grid_latitude') CNRM.remove_coord('grid_longitude') CNRM.add_dim_coord(lats, 1) CNRM.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CNRMSMHI.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CNRMSMHI.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CNRMSMHI.remove_coord('latitude') CNRMSMHI.remove_coord('longitude') CNRMSMHI.remove_coord('grid_latitude') CNRMSMHI.remove_coord('grid_longitude') CNRMSMHI.add_dim_coord(lats, 1) CNRMSMHI.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CSIRO.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CSIRO.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CSIRO.remove_coord('latitude') CSIRO.remove_coord('longitude') CSIRO.remove_coord('grid_latitude') CSIRO.remove_coord('grid_longitude') CSIRO.add_dim_coord(lats, 1) CSIRO.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECDMI.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECDMI.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. 
lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECDMI.remove_coord('latitude') ICHECDMI.remove_coord('longitude') ICHECDMI.remove_coord('grid_latitude') ICHECDMI.remove_coord('grid_longitude') ICHECDMI.add_dim_coord(lats, 1) ICHECDMI.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECCCLM.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECCCLM.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECCCLM.remove_coord('latitude') ICHECCCLM.remove_coord('longitude') ICHECCCLM.remove_coord('grid_latitude') ICHECCCLM.remove_coord('grid_longitude') ICHECCCLM.add_dim_coord(lats, 1) ICHECCCLM.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECKNMI.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECKNMI.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECKNMI.remove_coord('latitude') ICHECKNMI.remove_coord('longitude') ICHECKNMI.remove_coord('grid_latitude') ICHECKNMI.remove_coord('grid_longitude') ICHECKNMI.add_dim_coord(lats, 1) ICHECKNMI.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECMPI.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECMPI.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECMPI.remove_coord('latitude') ICHECMPI.remove_coord('longitude') ICHECMPI.remove_coord('grid_latitude') ICHECMPI.remove_coord('grid_longitude') ICHECMPI.add_dim_coord(lats, 1) ICHECMPI.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECSMHI.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECSMHI.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECSMHI.remove_coord('latitude') ICHECSMHI.remove_coord('longitude') ICHECSMHI.remove_coord('grid_latitude') ICHECSMHI.remove_coord('grid_longitude') ICHECSMHI.add_dim_coord(lats, 1) ICHECSMHI.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CCCmaCanRCM85.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CCCmaCanRCM85.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CCCmaCanRCM85.remove_coord('latitude') CCCmaCanRCM85.remove_coord('longitude') CCCmaCanRCM85.remove_coord('grid_latitude') CCCmaCanRCM85.remove_coord('grid_longitude') CCCmaCanRCM85.add_dim_coord(lats, 1) CCCmaCanRCM85.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CCCmaSMHI85.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CCCmaSMHI85.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. 
lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CCCmaSMHI85.remove_coord('latitude') CCCmaSMHI85.remove_coord('longitude') CCCmaSMHI85.remove_coord('grid_latitude') CCCmaSMHI85.remove_coord('grid_longitude') CCCmaSMHI85.add_dim_coord(lats, 1) CCCmaSMHI85.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CNRM85.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CNRM85.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CNRM85.remove_coord('latitude') CNRM85.remove_coord('longitude') CNRM85.remove_coord('grid_latitude') CNRM85.remove_coord('grid_longitude') CNRM85.add_dim_coord(lats, 1) CNRM85.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CNRMSMHI85.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CNRMSMHI85.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CNRMSMHI85.remove_coord('latitude') CNRMSMHI85.remove_coord('longitude') CNRMSMHI85.remove_coord('grid_latitude') CNRMSMHI85.remove_coord('grid_longitude') CNRMSMHI85.add_dim_coord(lats, 1) CNRMSMHI85.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(CSIRO85.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = CSIRO85.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') CSIRO85.remove_coord('latitude') CSIRO85.remove_coord('longitude') CSIRO85.remove_coord('grid_latitude') CSIRO85.remove_coord('grid_longitude') CSIRO85.add_dim_coord(lats, 1) CSIRO85.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECDMI85.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECDMI85.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECDMI85.remove_coord('latitude') ICHECDMI85.remove_coord('longitude') ICHECDMI85.remove_coord('grid_latitude') ICHECDMI85.remove_coord('grid_longitude') ICHECDMI85.add_dim_coord(lats, 1) ICHECDMI85.add_dim_coord(lons, 2) lats = iris.coords.DimCoord(ICHECCCLM85.coord('latitude').points[:,0], standard_name='latitude', units='degrees') lons = ICHECCCLM85.coord('longitude').points[0] for i in range(len(lons)): if lons[i]>100.: lons[i] = lons[i]-360. lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees') ICHECCCLM85.remove_coord('latitude')
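The per-model coordinate clean-up above (drop the 2-D latitude/longitude, rebuild 1-D DimCoords, wrap longitudes greater than 100 degrees westward, guess bounds) is repeated verbatim for every cube. A refactoring sketch that captures the same steps in one helper is given below; it is a suggestion about how the script could be tightened, not part of the original.

import iris
import iris.coords

def regularize_coords(cube):
    # Replace the 2-D auxiliary latitude/longitude with 1-D dimension coordinates,
    # wrap longitudes > 100 deg into the western hemisphere, and guess cell bounds;
    # these are the same steps applied per model above.
    lats = iris.coords.DimCoord(cube.coord('latitude').points[:, 0],
                                standard_name='latitude', units='degrees')
    lons = cube.coord('longitude').points[0].copy()
    lons[lons > 100.] -= 360.
    lons = iris.coords.DimCoord(lons, standard_name='longitude', units='degrees')
    for name in ('latitude', 'longitude', 'grid_latitude', 'grid_longitude'):
        cube.remove_coord(name)
    cube.add_dim_coord(lats, 1)
    cube.add_dim_coord(lons, 2)
    cube.coord('latitude').guess_bounds()
    cube.coord('longitude').guess_bounds()
    return cube

Each cube could then be handled with a single call, e.g. regularize_coords(CCCmaCanRCM_past), instead of repeating the block per model and per scenario.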
:alt: alternate text :align: center Returns ------- res : fig or None Either a figure if file_path is not specified or nothing. """ assert not (df_devs is None and df_tcorr is None) title = "Triggercount with sliding window of " + t_window color = 'trigger count' cbarlabel = 'counts' if df_tcorr is None: df = device_tcorr(df_devs, lst_devs=lst_devs, t_window=t_window) else: df = df_tcorr # get the list of cross tabulations per t_window vals = df.astype(int).values.T devs = list(df.index) num_dev = len(devs) figsize = (_num_items_2_heatmap_square_figsize_ver2(num_dev) if figsize is None else figsize) cmap = (get_sequential_color() if cmap is None else cmap) fig, ax = plt.subplots(figsize=figsize) log = True if z_scale == 'log' else False valfmt = "{x:.0f}" im, cbar = heatmap_square(vals, devs, devs, ax=ax, cmap=cmap, cbarlabel=cbarlabel, log=log)#, cbar_kw=cbar_kw) # show numbers for small sizes if numbers is None: if num_dev < 20: texts = annotate_heatmap(im, textcolors=("white", "black"), log=log, valfmt=valfmt) elif numbers: texts = annotate_heatmap(im, textcolors=("white", "black"), log=log, valfmt=valfmt) ax.set_title(title) fig.tight_layout() if file_path is not None: savefig(fig, file_path) return else: return fig def heatmap_cross_correlation(df_devs=None, lst_devs=None, df_dur_corr=None, figsize=None, numbers=None, file_path=None): """ Plots the cross correlation between the device signals Parameters ---------- df_devs : pd.DataFrame, optional recorded devices from a dataset. Fore more information refer to the :ref:`user guide<device_dataframe>`. lst_devs : lst of str, optional A list of devices that are included in the statistic. The list can be a subset of the recorded activities or contain activities that are not recorded. df_tcorr : pd.DataFrame A precomputed correlation table. If the *df_tcorr* parameter is given, parameters *df_devs* and *lst_devs* are ignored. The transition table can be computed in :ref:`stats <stats_devs_tcorr>`. figsize : (float, float), default: None width, height in inches. If not provided, the figsize is inferred by automatically. numbers : bool, default: True Whether to display numbers inside the heatmaps fields or not. file_path : str, optional If set, saves the plot under the given file path and return *None* instead of returning the figure. Examples -------- >>> from pyadlml.plot import plot_dev_hm_similarity >>> plot_dev_hm_similarity(data.df_devs) .. image:: ../_static/images/plots/dev_hm_dur_cor.png :height: 400px :width: 500 px :scale: 90 % :alt: alternate text :align: center Returns ------- res : fig or None Either a figure if file_path is not specified or nothing. 
""" assert not (df_devs is None and df_dur_corr is None) title = 'Devices cross-correlation' cmap = 'RdBu' cbarlabel = 'similarity' if df_dur_corr is None: ct = duration_correlation(df_devs, lst_devs=lst_devs) else: ct = df_dur_corr ct = ct.replace(pd.NA, np.inf) vals = ct.values.T devs = list(ct.index) num_dev = len(devs) figsize = (_num_items_2_heatmap_square_figsize_ver2(num_dev) if figsize is None else figsize) fig, ax = plt.subplots(figsize=figsize) im, cbar = heatmap_square(vals, devs, devs, ax=ax, cmap=cmap, cbarlabel=cbarlabel, vmin=-1, vmax=1) if numbers is None: if num_dev < 15: valfmt = "{x:.2f}" texts = annotate_heatmap(im, textcolors=("black", "white"), threshold=0.5, valfmt=valfmt) elif num_dev < 30: valfmt = "{x:.1f}" texts = annotate_heatmap(im, textcolors=("black", "white"), threshold=0.5, valfmt=valfmt) if numbers: texts = annotate_heatmap(im, textcolors=("black", "white"), threshold=0.5, valfmt="{x:.2f}") ax.set_title(title) fig.tight_layout() if file_path is not None: savefig(fig, file_path) return else: return fig def hist_on_off(df_devs=None, lst_devs=None, df_on_off=None, figsize=None, color=None, color_sec=None, order='frac_on', file_path=None): """ Plot bars the on/off fraction of all devices Parameters ---------- df_devs : pd.DataFrame, optional recorded devices from a dataset. Fore more information refer to the :ref:`user guide<device_dataframe>`. lst_devs : lst of str, optional A list of devices that are included in the statistic. The list can be a subset of the recorded activities or contain activities that are not recorded. df_on_off : pd.DataFrame A precomputed correlation table. If the *df_tcorr* parameter is given, parameters *df_devs* and *lst_devs* are ignored. The transition table can be computed in :ref:`stats <stats_devs_tcorr>`. figsize : (float, float), default: None width, height in inches. If not provided, the figsize is inferred by automatically. color : str, optional sets the primary color of the plot. When not set, the primary theming color is used. Learn more about theming in the :ref:`user guide <theming>` color_sec : str, optional sets the secondary color of the plot. When not set, the secondary theming color is used. Learn more about theming in the :ref:`user guide <theming>` order : {'frac_on', 'alphabetically', 'area'}, default='frac_on' determines the order in which the devices are listed. file_path : str, optional If set, saves the plot under the given file path and return *None* instead of returning the figure. Examples -------- >>> from pyadlml.plot import plot_device_on_off >>> plot_device_on_off(data.df_devs) .. image:: ../_static/images/plots/dev_on_off.png :height: 300px :width: 500 px :scale: 100 % :alt: alternate text :align: center Returns ------- res : fig or None Either a figure if file_path is not specified or nothing. 
""" assert not (df_devs is None and df_on_off is None) assert order in ['frac_on', 'name', 'area'] title = 'Devices fraction on/off' xlabel ='Percentage in binary states' ylabel = 'Devices' on_label = 'on' off_label = 'off' color = (get_primary_color() if color is None else color) color2 = (get_secondary_color()if color_sec is None else color_sec) if df_on_off is None: df = devices_on_off_stats(df_devs, lst_devs=lst_devs) else: df = df_on_off num_dev = len(df) figsize = (_num_bars_2_figsize(num_dev) if figsize is None else figsize) if order == 'frac_on': df = df.sort_values(by='frac_on', axis=0) elif order == 'name': df = df.sort_values(by=DEVICE, axis=0) else: raise NotImplementedError('room order will be implemented in the future') dev_lst = list(df[DEVICE]) # Figure Size fig, ax = plt.subplots(figsize=figsize) if lst_devs is not None: df['tmp'] = 0 plt.barh(df[DEVICE], df['tmp'].values, alpha=0.0) plt.barh(df[DEVICE], df['frac_off'].values, label=off_label, color=color) plt.barh(df[DEVICE], df['frac_on'].values, left=df['frac_off'], label=on_label, color=color2) else: plt.barh(dev_lst, df['frac_off'].values, label=off_label, color=color) # careful: notice "bottom" parameter became "left" plt.barh(dev_lst, df['frac_on'].values, left=df['frac_off'], label=on_label, color=color2) # we also need to switch the labels plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) # set the text centers to the middle for the greater fraction widths = df['frac_off'].apply(lambda x: x if x >= 0.5 else 1-x) xcenters = df['frac_off'].apply(lambda x: x/2 if x >= 0.5 else (1-x)/2 + x) first_number_left = True for y, c, w in zip(range(len(xcenters)), xcenters, widths): if y == len(xcenters)-1 and c < 0.5: first_number_left = False if c > 0.5: text_color='black' else: text_color='white' ax.text(c, y, '{:.4f}'.format(w), ha='center', va='center', color=text_color) if first_number_left: ax.legend(ncol=2, bbox_to_anchor=(0, 1), loc='upper left', fontsize='small') else: ax.legend(ncol=2, bbox_to_anchor=(1,1), loc='upper right', fontsize='small') # Remove axes splines for s in ['top', 'right']: ax.spines[s].set_visible(False) if file_path is not None: savefig(fig, file_path) return else: return fig def hist_counts(df_devs=None, lst_devs=None, df_tc=None, figsize=None, y_scale='linear', color=None, order='count', file_path=None): """ bar chart displaying how often activities are occurring Parameters ---------- df_devs : pd.DataFrame, optional recorded devices from a dataset. Fore more information refer to the :ref:`user guide<device_dataframe>`. lst_devs : lst of str, optional A list of devices that are included in the statistic. The list can be a subset of the recorded activities or contain activities that are not recorded. df_tc : pd.DataFrame A precomputed correlation table. If the *df_tcorr* parameter is given, parameters *df_devs* and *lst_devs* are ignored. The transition table can be computed in :ref:`stats <stats_devs_tcorr>`. y_scale : {"log", "linear"}, default: None The axis scale type to apply. figsize : (float, float), default: None width, height in inches. If not provided, the figsize is inferred by automatically. color : str, optional sets the primary color of the plot. When not set, the primary theming color is used. Learn more about theming in the :ref:`user guide <theming>` order : {'count', 'alphabetically', 'area'}, default='count' determines the order in which the devices are listed. 
file_path : str, optional If set, saves the plot under the given file path and return *None* instead of returning the figure. Examples -------- >>> from pyadlml.plots import plot_device_bar_count >>> plot_device_bar_count(data.df_devs) .. image:: ../_static/images/plots/dev_bar_trigger.png :height: 300px :width: 500 px :scale: 90 % :alt: alternate text :align: center Returns ------- res : fig or None Either a figure if file_path is not specified or nothing. """ assert not (df_devs is None and df_tc is None) assert y_scale in ['log', 'linear'] assert order in ['alphabetic', 'count', 'room'] title = 'Device triggers'
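The plotting helpers in this module share one skeleton: compute the statistic unless a precomputed table is passed, infer a figure size, draw, then either save to file_path or return the figure. A minimal, self-contained version of that pattern for a trigger-count bar chart is sketched below; the function and column names are illustrative and are not pyadlml's API.

import matplotlib.pyplot as plt

def sketch_hist_counts(df_counts, y_scale='linear', figsize=None, file_path=None):
    # df_counts is assumed to have 'device' and 'count' columns (illustrative names).
    assert y_scale in ('linear', 'log')
    df = df_counts.sort_values(by='count')
    fig, ax = plt.subplots(figsize=figsize)
    ax.barh(df['device'], df['count'])
    if y_scale == 'log':
        ax.set_xscale('log')
    ax.set_title('Device triggers')
    fig.tight_layout()
    if file_path is not None:
        fig.savefig(file_path)   # save and return nothing, mirroring the helpers above
        return None
    return fig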
+ m.s1s412 + m.s1s413 == 0) m.c262 = Constraint(expr= - m.b208 + m.s1s414 + m.s1s415 + m.s1s416 + m.s1s417 + m.s1s418 + m.s1s419 + m.s1s420 == 0) m.c263 = Constraint(expr= - m.b209 + m.s1s421 + m.s1s422 + m.s1s423 + m.s1s424 + m.s1s425 + m.s1s426 + m.s1s427 == 0) m.c264 = Constraint(expr= - m.b210 + m.s1s428 + m.s1s429 + m.s1s430 + m.s1s431 + m.s1s432 + m.s1s433 + m.s1s434 == 0) m.c265 = Constraint(expr= - m.b211 + m.s1s435 + m.s1s436 + m.s1s437 + m.s1s438 + m.s1s439 + m.s1s440 + m.s1s441 == 0) m.c266 = Constraint(expr= - m.b212 + m.s1s442 + m.s1s443 + m.s1s444 + m.s1s445 + m.s1s446 + m.s1s447 + m.s1s448 == 0) m.c267 = Constraint(expr= - m.b213 + m.s1s449 + m.s1s450 + m.s1s451 + m.s1s452 + m.s1s453 + m.s1s454 + m.s1s455 == 0) m.c268 = Constraint(expr= - m.b214 + m.s1s456 + m.s1s457 + m.s1s458 + m.s1s459 + m.s1s460 + m.s1s461 + m.s1s462 == 0) m.c269 = Constraint(expr= - m.b215 + m.s1s463 + m.s1s464 + m.s1s465 + m.s1s466 + m.s1s467 + m.s1s468 + m.s1s469 == 0) m.c270 = Constraint(expr= - m.b216 + m.s1s470 + m.s1s471 + m.s1s472 + m.s1s473 + m.s1s474 + m.s1s475 + m.s1s476 == 0) m.c271 = Constraint(expr= - m.b217 + m.s1s477 + m.s1s478 + m.s1s479 + m.s1s480 + m.s1s481 + m.s1s482 + m.s1s483 == 0) m.c272 = Constraint(expr= - m.b218 + m.s1s484 + m.s1s485 + m.s1s486 + m.s1s487 + m.s1s488 + m.s1s489 + m.s1s490 == 0) m.c273 = Constraint(expr= - m.b219 + m.s1s491 + m.s1s492 + m.s1s493 + m.s1s494 + m.s1s495 + m.s1s496 + m.s1s497 == 0) m.c274 = Constraint(expr= - m.b220 + m.s1s498 + m.s1s499 + m.s1s500 + m.s1s501 + m.s1s502 + m.s1s503 + m.s1s504 == 0) m.c275 = Constraint(expr= - m.b221 + m.s1s505 + m.s1s506 + m.s1s507 + m.s1s508 + m.s1s509 + m.s1s510 + m.s1s511 == 0) m.c276 = Constraint(expr= - m.b222 + m.s1s512 + m.s1s513 + m.s1s514 + m.s1s515 + m.s1s516 + m.s1s517 + m.s1s518 == 0) m.c277 = Constraint(expr= - m.b223 + m.s1s519 + m.s1s520 + m.s1s521 + m.s1s522 + m.s1s523 + m.s1s524 + m.s1s525 == 0) m.c278 = Constraint(expr= - m.b224 + m.s1s526 + m.s1s527 + m.s1s528 + m.s1s529 + m.s1s530 + m.s1s531 + m.s1s532 == 0) m.c279 = Constraint(expr= - m.b225 + m.s1s533 + m.s1s534 + m.s1s535 + m.s1s536 + m.s1s537 + m.s1s538 + m.s1s539 == 0) m.c280 = Constraint(expr= - m.b226 + m.s1s540 + m.s1s541 + m.s1s542 + m.s1s543 + m.s1s544 + m.s1s545 + m.s1s546 == 0) m.c281 = Constraint(expr= - m.b227 + m.s1s547 + m.s1s548 + m.s1s549 + m.s1s550 + m.s1s551 + m.s1s552 + m.s1s553 == 0) m.c282 = Constraint(expr= - m.b228 + m.s1s554 + m.s1s555 + m.s1s556 + m.s1s557 + m.s1s558 + m.s1s559 + m.s1s560 == 0) m.c283 = Constraint(expr= - m.b229 + m.s1s561 + m.s1s562 + m.s1s563 + m.s1s564 + m.s1s565 + m.s1s566 + m.s1s567 == 0) m.c284 = Constraint(expr= - m.b230 + m.s1s568 + m.s1s569 + m.s1s570 + m.s1s571 + m.s1s572 + m.s1s573 + m.s1s574 == 0) m.c285 = Constraint(expr= - m.b231 + m.s1s575 + m.s1s576 + m.s1s577 + m.s1s578 + m.s1s579 + m.s1s580 + m.s1s581 == 0) m.c286 = Constraint(expr= - m.b232 + m.s1s582 + m.s1s583 + m.s1s584 + m.s1s585 + m.s1s586 + m.s1s587 + m.s1s588 == 0) m.c287 = Constraint(expr= - m.b233 + m.s1s589 + m.s1s590 + m.s1s591 + m.s1s592 + m.s1s593 + m.s1s594 + m.s1s595 == 0) m.c288 = Constraint(expr= - m.b234 + m.s1s596 + m.s1s597 + m.s1s598 + m.s1s599 + m.s1s600 + m.s1s601 + m.s1s602 == 0) m.c289 = Constraint(expr= - m.b235 + m.s1s603 + m.s1s604 + m.s1s605 + m.s1s606 + m.s1s607 + m.s1s608 + m.s1s609 == 0) m.c290 = Constraint(expr= - m.b236 + m.s1s610 + m.s1s611 + m.s1s612 + m.s1s613 + m.s1s614 + m.s1s615 + m.s1s616 == 0) m.c291 = Constraint(expr= - m.b237 + m.s1s617 + m.s1s618 + m.s1s619 + m.s1s620 + m.s1s621 + m.s1s622 + 
m.s1s623 == 0) m.c292 = Constraint(expr= - m.b238 + m.s1s624 + m.s1s625 + m.s1s626 + m.s1s627 + m.s1s628 + m.s1s629 + m.s1s630 == 0) m.c293 = Constraint(expr= m.x1 - 0.0122927400295263*m.s1s239 - 0.047959606911495*m.s1s240 - 0.167824502578647*m.s1s241 - 0.494468436052303*m.s1s242 - 1.92914775438588*m.s1s243 - 2*m.s1s244 - 2*m.s1s245 <= 0) m.c294 = Constraint(expr= m.x2 - 0.0176041976445841*m.s1s246 - 0.0686820348432157*m.s1s247 - 0.240338257044582*m.s1s248 - 0.708118780382974*m.s1s249 - 2*m.s1s250 - 2*m.s1s251 - 2*m.s1s252 <= 0) m.c295 = Constraint(expr= m.x3 - 0.0192122784105588*m.s1s253 - 0.0749558941482069*m.s1s254 - 0.262292300976835*m.s1s255 - 0.772802909347502*m.s1s256 - 2*m.s1s257 - 2*m.s1s258 - 2*m.s1s259 <= 0) m.c296 = Constraint(expr= m.x4 - 0.0139851216849881*m.s1s260 - 0.0545623625823394*m.s1s261 - 0.190929449792929*m.s1s262 - 0.56254352007505*m.s1s263 - 2*m.s1s264 - 2*m.s1s265 - 2*m.s1s266 <= 0) m.c297 = Constraint(expr= m.x5 - 0.0131870388642087*m.s1s267 - 0.0514486761076021*m.s1s268 - 0.180033762412234*m.s1s269 - 0.530441095124783*m.s1s270 - 2*m.s1s271 - 2*m.s1s272 - 2*m.s1s273 <= 0) m.c298 = Constraint(expr= m.x6 - 0.0110279676651009*m.s1s274 - 0.0430251508598196*m.s1s275 - 0.15055741709363*m.s1s276 - 0.443593691162434*m.s1s277 - 1.7306620822916*m.s1s278 - 2*m.s1s279 - 2*m.s1s280 <= 0) m.c299 = Constraint(expr= m.x7 - 0.0137502828767635*m.s1s281 - 0.0536461488738445*m.s1s282 - 0.187723353667753*m.s1s283 - 0.553097263345606*m.s1s284 - 2*m.s1s285 - 2*m.s1s286 - 2*m.s1s287 <= 0) m.c300 = Constraint(expr= m.x8 - 0.0122927400295263*m.s1s288 - 0.047959606911495*m.s1s289 - 0.167824502578647*m.s1s290 - 0.494468436052303*m.s1s291 - 1.92914775438588*m.s1s292 - 2*m.s1s293 - 2*m.s1s294 <= 0) m.c301 = Constraint(expr= m.x9 - 0.0153698320860398*m.s1s295 - 0.0599647518268192*m.s1s296 - 0.209833968534382*m.s1s297 - 0.618242703881818*m.s1s298 - 2*m.s1s299 - 2*m.s1s300 - 2*m.s1s301 <= 0) m.c302 = Constraint(expr= m.x10 - 0.0120532217270583*m.s1s302 - 0.0470251363535167*m.s1s303 - 0.1645545204694*m.s1s304 - 0.484833949343662*m.s1s305 - 1.89155921072266*m.s1s306 - 2*m.s1s307 - 2*m.s1s308 <= 0) m.c303 = Constraint(expr= m.x11 - 0.0120510911159401*m.s1s309 - 0.0470168238640743*m.s1s310 - 0.164525432670402*m.s1s311 - 0.484748246730172*m.s1s312 - 1.89122484558971*m.s1s313 - 2*m.s1s314 - 2*m.s1s315 <= 0) m.c304 = Constraint(expr= m.x12 - 0.0142414920290718*m.s1s316 - 0.0555625806701283*m.s1s317 - 0.194429501479406*m.s1s318 - 0.572855870518057*m.s1s319 - 2*m.s1s320 - 2*m.s1s321 - 2*m.s1s322 <= 0) m.c305 = Constraint(expr= m.x13 - 0.0190758342372385*m.s1s323 - 0.0744235629590588*m.s1s324 - 0.260429520550158*m.s1s325 - 0.767314520523847*m.s1s326 - 2*m.s1s327 - 2*m.s1s328 - 2*m.s1s329 <= 0) m.c306 = Constraint(expr= m.x14 - 0.0188299954674205*m.s1s330 - 0.0734644333642121*m.s1s331 - 0.257073249355929*m.s1s332 - 0.757425796631457*m.s1s333 - 2*m.s1s334 - 2*m.s1s335 - 2*m.s1s336 <= 0) m.c307 = Constraint(expr= m.x15 - 0.0176041976445841*m.s1s337 - 0.0686820348432157*m.s1s338 - 0.240338257044582*m.s1s339 - 0.708118780382974*m.s1s340 - 2*m.s1s341 - 2*m.s1s342 - 2*m.s1s343 <= 0) m.c308 = Constraint(expr= m.x16 - 0.0153698320860398*m.s1s344 - 0.0599647518268192*m.s1s345 - 0.209833968534382*m.s1s346 - 0.618242703881818*m.s1s347 - 2*m.s1s348 - 2*m.s1s349 - 2*m.s1s350 <= 0) m.c309 = Constraint(expr= m.x17 - 0.0194226083350049*m.s1s351 - 0.0757764874800376*m.s1s352 - 0.265163793814297*m.s1s353 - 0.781263310246409*m.s1s354 - 2*m.s1s355 - 2*m.s1s356 - 2*m.s1s357 <= 0) m.c310 = Constraint(expr= m.x18 - 
0.0174381887671401*m.s1s358 - 0.0680343582075014*m.s1s359 - 0.238071849619242*m.s1s360 - 0.701441168247406*m.s1s361 - 2*m.s1s362 - 2*m.s1s363 - 2*m.s1s364 <= 0) m.c311 = Constraint(expr= m.x19 - 0.0190758342372385*m.s1s365 - 0.0744235629590588*m.s1s366 - 0.260429520550158*m.s1s367 - 0.767314520523847*m.s1s368 - 2*m.s1s369 - 2*m.s1s370 - 2*m.s1s371 <= 0) m.c312 = Constraint(expr= m.x20 - 0.0139201415373155*m.s1s372 - 0.0543088452760314*m.s1s373 - 0.190042319589699*m.s1s374 - 0.559929730804558*m.s1s375 - 2*m.s1s376 - 2*m.s1s377 - 2*m.s1s378 <= 0) m.c313 = Constraint(expr= m.x21 - 0.0150776355652448*m.s1s379 - 0.0588247594211735*m.s1s380 - 0.205844806180028*m.s1s381 - 0.606489265973719*m.s1s382 - 2*m.s1s383 - 2*m.s1s384 - 2*m.s1s385 <= 0) m.c314 = Constraint(expr= m.x22 - 0.0192122784105588*m.s1s386 - 0.0749558941482069*m.s1s387 - 0.262292300976835*m.s1s388 - 0.772802909347502*m.s1s389 - 2*m.s1s390 - 2*m.s1s391 - 2*m.s1s392 <= 0) m.c315 = Constraint(expr= m.x23 - 0.0120532217270583*m.s1s393 - 0.0470251363535167*m.s1s394 - 0.1645545204694*m.s1s395 - 0.484833949343662*m.s1s396 - 1.89155921072266*m.s1s397 - 2*m.s1s398 - 2*m.s1s399 <= 0) m.c316 = Constraint(expr= m.x24 - 0.0194226083350049*m.s1s400 - 0.0757764874800376*m.s1s401 - 0.265163793814297*m.s1s402 - 0.781263310246409*m.s1s403 - 2*m.s1s404 - 2*m.s1s405 - 2*m.s1s406 <= 0) m.c317 = Constraint(expr= m.x25 - 0.0197779487583483*m.s1s407 - 0.0771628331590627*m.s1s408 - 0.270015017353593*m.s1s409 - 0.795556675515238*m.s1s410 - 2*m.s1s411 - 2*m.s1s412 - 2*m.s1s413 <= 0) m.c318 = Constraint(expr= m.x26 - 0.015876050278038*m.s1s414 - 0.0619397407586086*m.s1s415 - 0.216745024658915*m.s1s416 - 0.638605041090392*m.s1s417 - 2*m.s1s418 - 2*m.s1s419 - 2*m.s1s420 <= 0) m.c319 = Constraint(expr= m.x27 - 0.0114959997704606*m.s1s421 - 0.0448511583846756*m.s1s422 - 0.156947144289045*m.s1s423 - 0.462420014879005*m.s1s424 - 1.80411219047469*m.s1s425 - 2*m.s1s426 - 2*m.s1s427 <= 0) m.c320 = Constraint(expr= m.x28 - 0.0122577120780475*m.s1s428 - 0.0478229468357263*m.s1s429 - 0.167346289542402*m.s1s430 - 0.493059456740591*m.s1s431 - 1.92365068100974*m.s1s432 - 2*m.s1s433 - 2*m.s1s434 <= 0) m.c321 = Constraint(expr= m.x29 - 0.0139851216849881*m.s1s435 - 0.0545623625823394*m.s1s436 - 0.190929449792929*m.s1s437 - 0.56254352007505*m.s1s438 - 2*m.s1s439 - 2*m.s1s440 - 2*m.s1s441 <= 0) m.c322 = Constraint(expr= m.x30 - 0.0120510911159401*m.s1s442 - 0.0470168238640743*m.s1s443 - 0.164525432670402*m.s1s444 - 0.484748246730172*m.s1s445 - 1.89122484558971*m.s1s446 - 2*m.s1s447 - 2*m.s1s448 <= 0) m.c323 = Constraint(expr= m.x31 - 0.0174381887671401*m.s1s449 - 0.0680343582075014*m.s1s450 - 0.238071849619242*m.s1s451 - 0.701441168247406*m.s1s452 - 2*m.s1s453 - 2*m.s1s454 - 2*m.s1s455 <= 0) m.c324 = Constraint(expr= m.x32 - 0.0197779487583483*m.s1s456 - 0.0771628331590627*m.s1s457 - 0.270015017353593*m.s1s458 - 0.795556675515238*m.s1s459 - 2*m.s1s460 - 2*m.s1s461 - 2*m.s1s462 <= 0) m.c325 = Constraint(expr= m.x33 - 0.02056968839856*m.s1s463 - 0.0802517719822704*m.s1s464 - 0.280824105561038*m.s1s465 - 0.827403949655566*m.s1s466 - 2*m.s1s467 - 2*m.s1s468 - 2*m.s1s469 <= 0) m.c326 = Constraint(expr= m.x34 - 0.0123243005973977*m.s1s470 - 0.0480827391363186*m.s1s471 - 0.168255377761185*m.s1s472 - 0.495737941841801*m.s1s473 - 1.93410067769589*m.s1s474 - 2*m.s1s475 - 2*m.s1s476 <= 0) m.c327 = Constraint(expr= m.x35 - 0.0114258635818418*m.s1s477 - 0.0445775250020163*m.s1s478 -
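The generated constraints above follow one repeating pattern: each binary b* is linked to a block of seven s1s* variables that must sum to it (a selection / convex-combination coupling), and each x* is bounded by a weighted sum of those variables. A minimal Pyomo sketch of that pattern, with placeholder coefficients and an assumed nonnegative domain for the weights, is shown below; it illustrates the structure rather than reproducing the generated model.

from pyomo.environ import ConcreteModel, Var, Constraint, Binary, NonNegativeReals

m = ConcreteModel()
m.b = Var(domain=Binary)                      # selection binary, like m.b207
m.s = Var(range(7), domain=NonNegativeReals)  # segment weights, like m.s1s407..m.s1s413
m.x = Var()

# weights are active only when the binary is selected: -b + sum(s) == 0
m.link = Constraint(expr=-m.b + sum(m.s[j] for j in range(7)) == 0)

# piecewise bound on x as a weighted sum of the segment variables (placeholder coefficients)
coefs = [0.0123, 0.0480, 0.1678, 0.4945, 1.9291, 2.0, 2.0]
m.bound = Constraint(expr=m.x - sum(c * m.s[j] for j, c in enumerate(coefs)) <= 0)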
<filename>tests/spark/test_harness.py # Copyright 2019 Yelp # Copyright 2020 Affirm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Test the Spark Harness.""" import inspect import random import string import json from io import BytesIO from contextlib import contextmanager from os import listdir from os.path import abspath from os.path import dirname from os.path import join from tempfile import gettempdir from shutil import rmtree from unittest import skipIf # mrjob.spark.harness is imported within tests because it requires pyspark from mrjob.examples.mr_count_lines_by_file import MRCountLinesByFile from mrjob.examples.mr_nick_nack import MRNickNack from mrjob.examples.mr_nick_nack_input_format import \ MRNickNackWithHadoopInputFormat from mrjob.examples.mr_word_freq_count import MRWordFreqCount from mrjob.job import MRJob from mrjob.util import cmd_line from mrjob.util import to_lines # tests.mr_spark_harness is imported below because it requires pyspark from tests.mr_counting_job import MRCountingJob from tests.mr_doubler import MRDoubler from tests.mr_no_mapper import MRNoMapper from tests.mr_pass_thru_arg_test import MRPassThruArgTest from tests.mr_streaming_and_spark import MRStreamingAndSpark from tests.mr_sort_and_group import MRSortAndGroup from tests.mr_sort_and_group_reversed_text import MRSortAndGroupReversedText from tests.mr_two_step_job import MRTwoStepJob from tests.mr_word_freq_count_with_combiner_cmd import \ MRWordFreqCountWithCombinerCmd from tests.py2 import Mock from tests.sandbox import pyspark # None if not installed from tests.sandbox import SandboxedTestCase from tests.sandbox import SingleSparkContextTestCase class MRWordFreqCountCombinerYieldsTwo(MRWordFreqCount): def combiner(self, word, counts): yield (word, sum(counts)) yield (word, 1000) class MRWordFreqCountCombinerYieldsZero(MRWordFreqCount): def combiner(self, word, counts): return class MRWordFreqCountFailingCombiner(MRWordFreqCount): unique_exception_str = '619dc867a1a34793b2157c258c859e78' def combiner(self, word, counts): raise Exception(self.unique_exception_str) class MRSumValuesByWord(MRJob): # if combiner is run, keys with values that sum to 0 # will be eliminated def mapper(self, _, line): word, value_str = line.split('\t', 1) yield word, int(value_str) def combiner(self, word, values): values_sum = sum(values) if values_sum != 0: yield word, values_sum def reducer(self, word, values): yield word, sum(values) class MRSumValuesByWordWithCombinerPreFilter(MRSumValuesByWord): def combiner_pre_filter(self): return 'cat' class MRSumValuesByWordWithNoCombinerJobs(MRSumValuesByWord): def __init__(self, *args, **kwargs): super(MRSumValuesByWordWithNoCombinerJobs, self).__init__( *args, **kwargs) if self.options.run_combiner: raise NotImplementedError("Can't init combiner jobs") @skipIf(pyspark is None, 'no pyspark module') class SparkHarnessOutputComparisonBaseTestCase( SandboxedTestCase, SingleSparkContextTestCase): def _spark_harness_path(self): import mrjob.spark.harness path = mrjob.spark.harness.__file__ 
if path.endswith('.pyc'): path = path[:-1] return path def _reference_job(self, job_class, input_bytes=b'', input_paths=(), runner_alias='inline', job_args=[]): args = ['-r', runner_alias] + list(job_args) + list(input_paths) reference_job = job_class(args) reference_job.sandbox(stdin=BytesIO(input_bytes)) return reference_job def _harness_job(self, job_class, input_bytes=b'', input_paths=(), runner_alias='inline', compression_codec=None, job_args=None, spark_conf=None, first_step_num=None, last_step_num=None, counter_output_dir=None, num_reducers=None, max_output_files=None, emulate_map_input_file=False, skip_internal_protocol=False): from tests.mr_spark_harness import MRSparkHarness job_class_path = '%s.%s' % (job_class.__module__, job_class.__name__) harness_job_args = ['-r', runner_alias, '--job-class', job_class_path] if spark_conf: for key, value in spark_conf.items(): harness_job_args.append('--jobconf') harness_job_args.append('%s=%s' % (key, value)) if compression_codec: harness_job_args.append('--compression-codec') harness_job_args.append(compression_codec) if job_args: harness_job_args.extend(['--job-args', cmd_line(job_args)]) if first_step_num is not None: harness_job_args.extend(['--first-step-num', str(first_step_num)]) if last_step_num is not None: harness_job_args.extend(['--last-step-num', str(last_step_num)]) if counter_output_dir is not None: harness_job_args.extend( ['--counter-output-dir', counter_output_dir]) if num_reducers is not None: harness_job_args.extend( ['--num-reducers', str(num_reducers)]) if max_output_files is not None: harness_job_args.extend( ['--max-output-files', str(max_output_files)]) if emulate_map_input_file: harness_job_args.append('--emulate-map-input-file') if skip_internal_protocol: harness_job_args.append('--skip-internal-protocol') harness_job_args.extend(input_paths) harness_job = MRSparkHarness(harness_job_args) harness_job.sandbox(stdin=BytesIO(input_bytes)) return harness_job def _count_output_files(self, runner): return sum( 1 for f in listdir(runner.get_output_dir()) if f.startswith('part-') ) def _assert_output_matches( self, job_class, input_bytes=b'', input_paths=(), job_args=[], num_reducers=None, max_output_files=None, emulate_map_input_file=False, skip_internal_protocol=False): # run classes defined in this module in inline mode, classes # with their own script files in local mode. 
used by # test_skip_combiner_that_runs_cmd() if job_class.__module__ == __name__: ref_job_runner_alias = 'inline' else: ref_job_runner_alias = 'local' reference_job = self._reference_job( job_class, input_bytes=input_bytes, input_paths=input_paths, job_args=job_args, runner_alias=ref_job_runner_alias) with reference_job.make_runner() as runner: runner.run() reference_output = sorted(to_lines(runner.cat_output())) if emulate_map_input_file: # uses dataframes, which don't seem to work in inline mode: # # java.util.ArrayList cannot be cast to org.apache.spark.sql.Column harness_job_runner_alias = 'local' else: harness_job_runner_alias = 'inline' harness_job = self._harness_job( job_class, input_bytes=input_bytes, input_paths=input_paths, job_args=job_args, max_output_files=max_output_files, num_reducers=num_reducers, emulate_map_input_file=emulate_map_input_file, skip_internal_protocol=skip_internal_protocol, runner_alias=harness_job_runner_alias) with harness_job.make_runner() as runner: runner.run() harness_output = sorted(to_lines(runner.cat_output())) self.assertEqual(harness_output, reference_output) class SparkHarnessOutputComparisonTestCase( SparkHarnessOutputComparisonBaseTestCase): def test_basic_job(self): input_bytes = b'one fish\ntwo fish\nred fish\nblue fish\n' self._assert_output_matches(MRWordFreqCount, input_bytes=input_bytes) def test_two_step_job(self): input_bytes = b'foo\nbar\n' self._assert_output_matches(MRTwoStepJob, input_bytes=input_bytes) def test_mixed_job(self): # can we run just the streaming part of a job? input_bytes = b'foo\nbar\n' job = self._harness_job( MRStreamingAndSpark, input_bytes=input_bytes, first_step_num=0, last_step_num=0) with job.make_runner() as runner: runner.run() # the streaming part is just an identity mapper, but it converts # lines to pairs of JSON self.assertEqual(set(to_lines(runner.cat_output())), {b'null\t"foo"\n', b'null\t"bar"\n'}) def test_range_of_steps(self): # check for off-by-one errors, etc. 
input_bytes = b'"three"\t3\n"five"\t5' # sanity-check self._assert_output_matches(MRDoubler, input_bytes=input_bytes, job_args=['-n', '5']) # just run two of the five steps steps_2_and_3_job = self._harness_job( MRDoubler, input_bytes=input_bytes, job_args=['-n', '5'], first_step_num=2, last_step_num=3) with steps_2_and_3_job.make_runner() as runner: runner.run() # parse_output() works because internal and output protocols match self.assertEqual( dict(steps_2_and_3_job.parse_output(runner.cat_output())), dict(three=12, five=20), ) def test_compression(self): compression_codec = 'org.apache.hadoop.io.compress.GzipCodec' input_bytes = b'fa la la la la\nla la la la\n' job = self._harness_job( MRWordFreqCount, input_bytes=input_bytes, compression_codec=compression_codec) with job.make_runner() as runner: runner.run() self.assertTrue(runner.fs.exists( join(runner.get_output_dir(), 'part*.gz'))) self.assertEqual(dict(job.parse_output(runner.cat_output())), dict(fa=1, la=8)) def test_sort_values(self): input_bytes = ( b'alligator\nactuary\nbowling\nartichoke\nballoon\nbaby\n') self._assert_output_matches(MRSortAndGroup, input_bytes=input_bytes) def test_sort_values_sorts_encoded_values(self): input_bytes = ( b'alligator\nactuary\nbowling\nartichoke\nballoon\nbaby\n') job = self._harness_job(MRSortAndGroupReversedText, input_bytes=input_bytes) with job.make_runner() as runner: runner.run() self.assertEqual( dict(job.parse_output(runner.cat_output())), dict(a=['artichoke', 'alligator', 'actuary'], b=['bowling', 'balloon', 'baby'])) def test_passthru_args(self): input_bytes = b'\n'.join([ b'to be or', b'not to be', b'that is the question']) self._assert_output_matches( MRPassThruArgTest, input_bytes=input_bytes, job_args=['--chars', '--ignore', 'to']) def test_combiner_called(self): input_bytes = b'one two three one two three one two three' job = self._harness_job( MRWordFreqCountFailingCombiner, input_bytes=input_bytes) with job.make_runner() as runner, \ self.assertRaises(Exception) as assert_raises_context: runner.run() exception_text = assert_raises_context.exception.__str__() expected_str = MRWordFreqCountFailingCombiner.unique_exception_str assert expected_str in exception_text def test_combiner_that_yields_two_values(self): input_bytes = b'one two three one two three one two three' job = self._harness_job(MRWordFreqCountCombinerYieldsTwo, input_bytes=input_bytes) # Given that the combiner for this job yields the count and 1000 for # each word and the Spark harness' combiner helper stops running # jobs' combiners if they do not reduce the mapper's output to a single # value per key, we expect the combiner to run once per word, resulting # in an extra 1000 being added to each word's count. # # Note that if combiner_helper did not stop running the combiner in # this case, we would expect 2000 to be added to each word's count # since each word appears 3 times, resulting in 2 calls to # combine_pairs. 
with job.make_runner() as runner: runner.run() self.assertEqual( dict(job.parse_output(runner.cat_output())), dict(one=1003, two=1003, three=1003)) def test_combiner_that_yields_zero_values(self): input_bytes = b'a b c\na b c\na b c\na b c' job = self._harness_job(MRWordFreqCountCombinerYieldsZero, input_bytes=input_bytes) with job.make_runner() as runner: runner.run() self.assertEqual( dict(job.parse_output(runner.cat_output())), dict(), ) def test_combiner_that_sometimes_yields_zero_values(self): # another test that the combiner actually runs # # would like to test this against a reference job, but whether # the combiner can see and eliminate the "sad" values that sum # to zero depends on the reference runner's partitioning # note that "sad" values sum to zero input_bytes = b'happy\t5\nsad\t3\nhappy\t2\nsad\t-3\n' job = self._harness_job(MRSumValuesByWord, input_bytes=input_bytes) with job.make_runner() as runner: runner.run() self.assertEqual( dict(job.parse_output(runner.cat_output())), dict(happy=7), # combiner should eliminate sad=0 ) def test_skip_combiner_if_runs_subprocesses(self): # same as above, but we have to skip combiner because of its pre-filter input_bytes = b'happy\t5\nsad\t3\nhappy\t2\nsad\t-3\n' job = self._harness_job(MRSumValuesByWordWithCombinerPreFilter, input_bytes=input_bytes) with job.make_runner() as runner: runner.run() self.assertEqual( dict(job.parse_output(runner.cat_output())), dict(happy=7, sad=0), ) def test_skip_combiner_if_cant_init_job(self): input_bytes = b'happy\t5\nsad\t3\nhappy\t2\nsad\t-3\n' job = self._harness_job(MRSumValuesByWordWithNoCombinerJobs, input_bytes=input_bytes) with job.make_runner() as runner: runner.run() self.assertEqual( dict(job.parse_output(runner.cat_output())), dict(happy=7, sad=0), ) def test_skip_combiner_that_runs_cmd(self): input_bytes = b'one fish\ntwo fish\nred fish\nblue fish\n' self._assert_output_matches( MRWordFreqCountWithCombinerCmd, input_bytes=input_bytes) @contextmanager def create_temp_counter_dir(self): random_folder_name = 'test_' + ''.join(random.choice( string.ascii_uppercase + string.digits) for _ in range(10)) output_counter_dir = join(gettempdir(), random_folder_name) yield output_counter_dir rmtree(output_counter_dir) def test_increment_counter(self): input_bytes = b'one fish\ntwo fish\nred fish\nblue fish\n' reference_job = self._reference_job( MRCountingJob, input_bytes=input_bytes) with reference_job.make_runner() as runner: runner.run() reference_counters = runner.counters() with self.create_temp_counter_dir() as output_counter_dir: harness_job = self._harness_job( MRCountingJob, input_bytes=input_bytes, counter_output_dir='file://{}'.format(output_counter_dir) ) with harness_job.make_runner() as runner: runner.run() harness_counters = json.loads( self.spark_context.textFile( 'file://' + output_counter_dir ).collect()[0]) self.assertEqual(harness_counters, reference_counters) def test_job_with_no_mapper_in_second_step(self): # mini-regression test for not passing step_num to step.description() input_bytes = b'one fish\ntwo fish\nred fish\nblue fish\n' self._assert_output_matches(MRNoMapper, input_bytes=input_bytes) class SkipInternalProtocolTestCase( SparkHarnessOutputComparisonBaseTestCase): def test_basic_job(self): input_bytes = b'one fish\ntwo fish\nred fish\nblue fish\n' self._assert_output_matches(MRWordFreqCount, input_bytes=input_bytes, skip_internal_protocol=True) def test_two_step_job(self): input_bytes = b'foo\nbar\n' self._assert_output_matches(MRTwoStepJob, input_bytes=input_bytes, 
                                     skip_internal_protocol=True)

    def test_sort_values(self):
        input_bytes = (
            b'alligator\nactuary\nbowling\nartichoke\nballoon\nbaby\n')

        self._assert_output_matches(MRSortAndGroup, input_bytes=input_bytes,
                                    skip_internal_protocol=True)

    def test_sort_values_sorts_unencoded_values(self):
        # compare to test_sort_values_sorts_encoded_values(), above.
        # It won't matter that MRSortAndGroupReversedText uses
        # ReversedTextProtocol as its internal protocol, because we ignore that
        input_bytes = (
            b'alligator\nactuary\nbowling\nartichoke\nballoon\nbaby\n')

        job = self._harness_job(MRSortAndGroupReversedText,
                                input_bytes=input_bytes,
                                skip_internal_protocol=True)

        with job.make_runner() as runner:
            runner.run()

            self.assertEqual(
                dict(job.parse_output(runner.cat_output())),
                dict(a=['actuary', 'alligator', 'artichoke'],
                     b=['baby', 'balloon', 'bowling']))


class SparkConfigureReducerTestCase(SparkHarnessOutputComparisonBaseTestCase):

    def
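The tests above all follow the same comparison pattern: run a job under a trusted reference runner, run it again under the harness, and compare sorted output lines. Below is a minimal sketch of that idea using only public mrjob names (MRWordFreqCount, to_lines) and comparing the 'local' runner against the 'inline' reference; the Spark-harness-specific wiring done by _harness_job() is intentionally left out, so this is an illustration of the pattern, not the harness itself.

from io import BytesIO

from mrjob.examples.mr_word_freq_count import MRWordFreqCount
from mrjob.util import to_lines


def sorted_output(runner_alias, input_bytes):
    # run MRWordFreqCount with the given runner and return its sorted output lines
    job = MRWordFreqCount(['-r', runner_alias])
    job.sandbox(stdin=BytesIO(input_bytes))
    with job.make_runner() as runner:
        runner.run()
        return sorted(to_lines(runner.cat_output()))


if __name__ == '__main__':
    data = b'one fish\ntwo fish\nred fish\nblue fish\n'
    # the two runners should agree on the word counts, just as
    # _assert_output_matches() expects of the harness job
    assert sorted_output('local', data) == sorted_output('inline', data)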
<filename>src/jpg2pdf.py import datetime import json import os import sys import webbrowser from PIL import Image from PyQt5 import QtCore, QtGui from PyQt5.QtCore import QProcessEnvironment, QUrl, QSettings from PyQt5.QtGui import QPixmap, QGuiApplication, QIcon, QDesktopServices from PyQt5.QtWidgets import QApplication, QMainWindow, QTableWidgetItem, QHeaderView, QFileDialog, QGraphicsView, \ QGraphicsScene, QGraphicsPixmapItem, QMessageBox, QAbstractItemView, QStyle from helper import load_images_from_folder, check_default_location, humanbytes, get_download_path, \ check_for_already_file_exists, get_valid_images, get_initial_download_dir from convert_pdf_threads import ConvertToPdfThread from setting_module import AdvanceSettingPage, AppSettingPage, AboutPage, DonatePage from pixmap_loading_thread import PixMapLoadingThread from theme_set import set_theme, popup_theme from jpg2pdf_ui import Ui_MainWindow PRODUCT_NAME = 'JPG2PDF' THEME_PATH = '/snap/jpg2pdf/current/' class MainWindow(QMainWindow, Ui_MainWindow): def __init__(self): super(MainWindow, self).__init__() self.ui = Ui_MainWindow() self.advance_setting_ui = AdvanceSettingPage() self.general_setting_ui = AppSettingPage() self.about_ui = AboutPage() self.donate_ui = DonatePage() self.ui.setupUi(self) self.setWindowTitle("JPEG2PDF PRO") self.settings = QSettings("warlordsoft", "jpg2pdf") # jpg2pdf settings initials------------------------------------------------------------------------------------- self.image_hide = False self.ui.hide_image.setText("Hide icons") self.Default_loc = get_initial_download_dir() self.ui.output_path.setText(self.Default_loc + "/JPG2PDF") self.ui.checkBox_protect_pdf.setChecked(False) self.enable_pdf_password() self.show_full_path_flag = False self.ui.show_full_path.setText("Show full path") self.pop_up_stylesheet = 'background-color:rgb(48,48,48);color:white;' self.all_images_list = [] self.image_dimension = [] self.image_size = [] self.image_extension = [] self.all_pixmap_data = [] self.selected_list = [] self.ui.tableWidget.verticalHeader().setVisible(False) self.ui.tableWidget.verticalHeader().setDefaultSectionSize(30) self.progress_bar_disable() largeWidth = QGuiApplication.primaryScreen().size().width() self.ui.splitter_2.setSizes([largeWidth / 3, 120]) self.ui.image.setVisible(False) self.counter = 0 self.toggle = 0 self.default_selected = 0 # general settings initials ------------------------------------------------------------------------------------ self.theme = 'dark' self.general_setting_ui.ui.dark.setChecked(True) self.file_dialog = 'native' self.main_table_pointer = 70 self.ask_for_export = True self.overwrite_warning = True self.Default_loc_import = get_initial_download_dir() self.jpg = True self.jpeg = True self.png = True self.tif = True self.tiff = True self.bmp = True self.all_files = True # advance settings initials ------------------------------------------------------------------------------------ self.mm = True self.cm = False self.pt = False self.inch = False self.l_margin = 0.00 self.r_margin = 0.00 self.t_margin = 0.00 self.b_margin = 0.00 self.zoom = 0 self.layout = 0 self.h_value = 0.00 self.v_value = 0.00 self.auto_resolution = True self.dpi = 150 self.page_from = "" self.page_to = "" self.select_angle = 0 self.page_from_grayscale = "" self.page_to_grayscale = "" self.select_scale = 0 self.show_page_no = False self.page_starts = 1 self.font_position = 15 self.font_size = 8 self.font_align = 0 self.comboBox = 0 self.bold = False self.italic = False self.underline = False 
self.r = 0 self.g = 0 self.b = 0 self.keywords = "" self.producer = "" self.creator = "" self.created_on = "" self.load_settings() self.general_setting_defaults(default=False) self.advance_settings_defaults(default=False) self.ui.actionAdd_image.triggered.connect(self.load_images) self.ui.actionSettings.triggered.connect(self.show_general_setting) self.ui.actionAbout.triggered.connect(self.show_about_page) self.ui.actionDonate.triggered.connect(self.show_donate_page) self.ui.actionAdd_folder.triggered.connect(self.load_folder) self.ui.actionClear_all.triggered.connect(self.clear_all_table_data) self.ui.actionRemove_Selected.triggered.connect(self.remove_selected) self.ui.zoomin.clicked.connect(self.zoom_in_functionality) self.ui.zoomout.clicked.connect(self.zoom_out_functionality) self.ui.rotate.clicked.connect(self.rotate_functionality) self.ui.change.clicked.connect(self.open_download_path) self.ui.preview_2.clicked.connect(self.preview_image) self.ui.show_full_path.clicked.connect(self.toggle_full_half_path) self.ui.next.clicked.connect(self.next_button_clicked) self.ui.prev.clicked.connect(self.prev_button_clicked) self.ui.remove_2.clicked.connect(self.remove_item_from_table) self.ui.selectall.currentIndexChanged.connect(self.select_all_button_function) self.ui.checkBox_protect_pdf.stateChanged.connect(self.enable_pdf_password) self.ui.tableWidget.itemDoubleClicked.connect(self.select_item_on_double_clicked) self.ui.moveup.clicked.connect(self.move_up_item) self.ui.movedown.clicked.connect(self.move_down_item) self.ui.start_convert.clicked.connect(self.start_convert_process) self.ui.sort.currentIndexChanged.connect(self.sort_asc_desc) self.ui.more_setting_button.clicked.connect(self.show_advance_setting) self.ui.hide_image.clicked.connect(self.hide_image_thumbnail) self.ui.duplicate.clicked.connect(self.remove_duplicate) self.ui.select_item.clicked.connect(self.select_item_one) self.ui.stop.clicked.connect(self.kill_process) self.ui.info_main.clicked.connect(self.main_info) # advance settings self.advance_setting_ui.ui.okay.clicked.connect(self.ok_setting_clicked) self.advance_setting_ui.ui.page_from.textChanged.connect(self.check_from_to_page_validation) self.advance_setting_ui.ui.page_to.textChanged.connect(self.check_from_to_page_validation) self.advance_setting_ui.ui.page_to_grayscale.textChanged.connect(self.check_grayscale_validation) self.advance_setting_ui.ui.page_from_grayscale.textChanged.connect(self.check_grayscale_validation) self.advance_setting_ui.ui.select_angle.currentIndexChanged.connect(self.validation_for_angle) self.advance_setting_ui.ui.select_scale.currentIndexChanged.connect(self.validation_for_scale) self.advance_setting_ui.ui.mm.clicked.connect(self.change_default_unit) self.advance_setting_ui.ui.cm.clicked.connect(self.change_default_unit) self.advance_setting_ui.ui.pt.clicked.connect(self.change_default_unit) self.advance_setting_ui.ui.inch.clicked.connect(self.change_default_unit) self.advance_setting_ui.ui.show_page_no.clicked.connect(self.on_page_no) self.advance_setting_ui.ui.auto_resolution.clicked.connect(self.enable_auto_manual_resolution) self.advance_setting_ui.ui.info_dpi.clicked.connect(self.dpi_info) self.advance_setting_ui.ui.info_image_pos.clicked.connect(self.image_pos_info) self.advance_setting_ui.ui.info_margin.clicked.connect(self.margin_info_details) self.advance_setting_ui.ui.reset_defaults.clicked.connect(self.reset_advanced_settings) # general settings self.general_setting_ui.ui.dark.clicked.connect(self.set_theme) 
self.general_setting_ui.ui.light.clicked.connect(self.set_theme) self.general_setting_ui.ui.native_dialog.clicked.connect(self.set_file_dialog) self.general_setting_ui.ui.qt_dialog.clicked.connect(self.set_file_dialog) self.general_setting_ui.ui.overwrite_warning.clicked.connect(self.set_overwrite_warning) self.general_setting_ui.ui.auto_generate_pdf_name.clicked.connect(self.pdf_name_set) self.general_setting_ui.ui.jpg.clicked.connect(self.set_image_filter) self.general_setting_ui.ui.jpeg.clicked.connect(self.set_image_filter) self.general_setting_ui.ui.png.clicked.connect(self.set_image_filter) self.general_setting_ui.ui.tif.clicked.connect(self.set_image_filter) self.general_setting_ui.ui.tiff.clicked.connect(self.set_image_filter) self.general_setting_ui.ui.bmp.clicked.connect(self.set_image_filter) self.general_setting_ui.ui.all_files.clicked.connect(self.set_filter_disable) self.general_setting_ui.ui.close.clicked.connect(self.hide_general_settings) self.general_setting_ui.ui.icon_size.valueChanged.connect(self.adjust_thumbnail_size) self.general_setting_ui.ui.reset_default.clicked.connect(self.reset_app_settings) self.general_setting_ui.ui.change_import.clicked.connect(self.change_import_path) # about page self.about_ui.ui.warlordsoft_button.clicked.connect(self.redirect_to_warlordsoft) self.about_ui.ui.donate_button.clicked.connect(self.redirect_to_paypal_donation) self.about_ui.ui.rate_button.clicked.connect(self.redirect_to_rate_snapstore) self.about_ui.ui.feedback_button.clicked.connect(self.redirect_to_feedback_button) self.about_ui.ui.ge_more_apps.clicked.connect(self.ge_more_apps) self.donate_ui.ui.donate_button.clicked.connect(self.redirect_to_paypal_donation) # scroll zoom functionality:- self._zoom = 0 self._empty = False self._scene = QGraphicsScene(self) self._photo = QGraphicsPixmapItem() self._scene.addItem(self._photo) self.ui.graphicsView.setScene(self._scene) self.ui.graphicsView.scale(2, 2) self.factor = 1 self.setAcceptDrops(True) def dragEnterEvent(self, event): event.accept() def dropEvent(self, event): event.setDropAction(QtCore.Qt.CopyAction) self.load_images = [item.toLocalFile() for item in event.mimeData().urls()] try: is_running = self.convert_pdf_thread.isRunning() except Exception as e: is_running = False try: is_running_pixmap = self.pixmap_load_thread.isRunning() except Exception as e: is_running_pixmap = False if is_running or is_running_pixmap: self.popup_message(title="Task Already In Queue", message="Please wait for the Running task to finish!") else: self.ui.stop.setEnabled(False) if len(self.load_images) == 0: return False self.load_images, invalid_list = get_valid_images(self.load_images) if invalid_list: message = "\n\n".join( [f"{count}: {str(item).split('/')[-1]}" for count, item in enumerate(invalid_list, 1)]) self.popup_message(title="Invalid image file(s) in your Import!\nBelow file(s) Could not be imported!", message=message) self.all_images_list += self.load_images self.pixmap_load_thread = PixMapLoadingThread(self.load_images) self.pixmap_load_thread.finish.connect(self.setProgressVal_pixmap_finish) self.pixmap_load_thread.progress.connect(self.setProgressVal_pixmap) self.pixmap_load_thread.start() event.accept() def save_settings(self): # jpg2pdf settings save: -------------------------------------------------------------------------------------- self.settings.setValue("Default_loc", self.Default_loc) self.settings.setValue("status_protect_pdf", self.ui.checkBox_protect_pdf.isChecked()) # general settings 
save:---------------------------------------------------------------------------------------- self.settings.setValue("main_table_pointer", self.main_table_pointer) self.settings.setValue("theme", self.theme) self.settings.setValue("file_dialog", self.file_dialog) self.settings.setValue("Default_loc_import", self.Default_loc_import) self.settings.setValue("overwrite_warning", self.overwrite_warning) self.settings.setValue("ask_for_export", self.ask_for_export) self.settings.setValue("jpg", self.general_setting_ui.ui.jpg.isChecked()) self.settings.setValue("jpeg", self.general_setting_ui.ui.jpeg.isChecked()) self.settings.setValue("png", self.general_setting_ui.ui.png.isChecked()) self.settings.setValue("tif", self.general_setting_ui.ui.tif.isChecked()) self.settings.setValue("tiff", self.general_setting_ui.ui.tiff.isChecked()) self.settings.setValue("bmp", self.general_setting_ui.ui.bmp.isChecked()) self.settings.setValue("all_files", self.general_setting_ui.ui.all_files.isChecked()) # advance settings save:---------------------------------------------------------------------------------------- self.settings.setValue("mm", self.advance_setting_ui.ui.mm.isChecked()) self.settings.setValue("cm", self.advance_setting_ui.ui.cm.isChecked()) self.settings.setValue("pt", self.advance_setting_ui.ui.pt.isChecked()) self.settings.setValue("inch", self.advance_setting_ui.ui.inch.isChecked()) self.settings.setValue("l_margin", self.advance_setting_ui.ui.l_margin.value()) self.settings.setValue("r_margin", self.advance_setting_ui.ui.r_margin.value()) self.settings.setValue("t_margin", self.advance_setting_ui.ui.t_margin.value()) self.settings.setValue("b_margin", self.advance_setting_ui.ui.b_margin.value()) self.settings.setValue("zoom", self.advance_setting_ui.ui.zoom.currentIndex()) self.settings.setValue("layout", self.advance_setting_ui.ui.layout.currentIndex()) self.settings.setValue("h_value", self.advance_setting_ui.ui.h_value.value()) self.settings.setValue("v_value", self.advance_setting_ui.ui.v_value.value()) self.settings.setValue("auto_resolution", self.advance_setting_ui.ui.auto_resolution.isChecked()) self.settings.setValue("dpi", self.advance_setting_ui.ui.dpi.value()) self.settings.setValue("page_from", self.advance_setting_ui.ui.page_from.text()) self.settings.setValue("page_to", self.advance_setting_ui.ui.page_to.text()) self.settings.setValue("select_angle", self.advance_setting_ui.ui.select_angle.currentIndex()) self.settings.setValue("page_from_grayscale", self.advance_setting_ui.ui.page_from_grayscale.text()) self.settings.setValue("page_to_grayscale", self.advance_setting_ui.ui.page_to_grayscale.text()) self.settings.setValue("select_scale", self.advance_setting_ui.ui.select_scale.currentIndex()) self.settings.setValue("show_page_no", self.advance_setting_ui.ui.show_page_no.isChecked()) self.settings.setValue("page_starts", self.advance_setting_ui.ui.page_starts.value()) self.settings.setValue("font_position", self.advance_setting_ui.ui.font_position.value()) self.settings.setValue("font_size", self.advance_setting_ui.ui.font_size.value()) self.settings.setValue("font_align", self.advance_setting_ui.ui.font_align.currentIndex()) self.settings.setValue("comboBox", self.advance_setting_ui.ui.comboBox.currentIndex()) self.settings.setValue("bold", self.advance_setting_ui.ui.bold.isChecked()) self.settings.setValue("italic", self.advance_setting_ui.ui.italic.isChecked()) self.settings.setValue("underline", self.advance_setting_ui.ui.underline.isChecked()) self.settings.setValue("r", 
self.advance_setting_ui.ui.r.value()) self.settings.setValue("g", self.advance_setting_ui.ui.g.value()) self.settings.setValue("b", self.advance_setting_ui.ui.b.value()) self.settings.setValue("keywords", self.advance_setting_ui.ui.keywords.text()) self.settings.setValue("producer", self.advance_setting_ui.ui.producer.text()) self.settings.setValue("creator", self.advance_setting_ui.ui.creator.text()) self.settings.setValue("created_on", self.advance_setting_ui.ui.created_on.text()) # one time congratulate self.settings.setValue("one_time_congratulate", self.one_time_congratulate) # save window state self.settings.setValue("geometry", self.saveGeometry()) self.settings.setValue("windowState", self.saveState()) def load_settings(self): # jpg2pdf settings loads: -------------------------------------------------------------------------------------- if self.settings.contains("Default_loc"): self.Default_loc = self.settings.value("Default_loc") self.ui.output_path.setText(self.Default_loc + "/JPG2PDF") if self.settings.contains("status_protect_pdf"): self.ui.checkBox_protect_pdf.setChecked(json.loads(self.settings.value("status_protect_pdf"))) self.enable_pdf_password() # general settings loads:--------------------------------------------------------------------------------------- if self.settings.contains("main_table_pointer"): self.main_table_pointer = int(self.settings.value("main_table_pointer")) if self.settings.contains("theme"): self.theme = self.settings.value("theme") if self.settings.contains("file_dialog"): self.file_dialog = self.settings.value("file_dialog") if self.settings.contains("Default_loc_import"): self.Default_loc_import = self.settings.value("Default_loc_import") if self.settings.contains("overwrite_warning"): self.overwrite_warning = json.loads(self.settings.value("overwrite_warning")) if self.settings.contains("ask_for_export"): self.ask_for_export = json.loads(self.settings.value("ask_for_export")) if self.settings.contains("jpg"): self.jpg = json.loads(self.settings.value("jpg")) if self.settings.contains("jpeg"): self.jpeg = json.loads(self.settings.value("jpeg")) if self.settings.contains("png"): self.png = json.loads(self.settings.value("png")) if self.settings.contains("tif"): self.tif = json.loads(self.settings.value("tif")) if self.settings.contains("tiff"): self.tiff = json.loads(self.settings.value("tiff")) if self.settings.contains("bmp"): self.bmp = json.loads(self.settings.value("bmp")) if self.settings.contains("all_files"): self.all_files = json.loads(self.settings.value("all_files")) # advance setting load:----------------------------------------------------------------------------------------- if self.settings.contains("mm"): self.mm = json.loads(self.settings.value("mm")) if self.settings.contains("cm"): self.cm = json.loads(self.settings.value("cm")) if self.settings.contains("pt"): self.pt = json.loads(self.settings.value("pt")) if self.settings.contains("inch"): self.inch = json.loads(self.settings.value("inch")) if self.settings.contains("l_margin"): self.l_margin = float(self.settings.value("l_margin")) if self.settings.contains("r_margin"): self.r_margin = float(self.settings.value("r_margin")) if self.settings.contains("t_margin"): self.t_margin = float(self.settings.value("t_margin")) if self.settings.contains("b_margin"): self.b_margin = float(self.settings.value("b_margin")) if self.settings.contains("zoom"): self.zoom = int(self.settings.value("zoom")) if self.settings.contains("layout"): self.layout = int(self.settings.value("layout")) if 
self.settings.contains("h_value"): self.h_value = float(self.settings.value("h_value")) if self.settings.contains("v_value"): self.v_value = float(self.settings.value("v_value")) if self.settings.contains("auto_resolution"): self.auto_resolution = json.loads(self.settings.value("auto_resolution")) if self.settings.contains("dpi"): self.dpi = int(self.settings.value("dpi")) if self.settings.contains("page_from"): self.page_from = self.settings.value("page_from") if self.settings.contains("page_to"): self.page_to = self.settings.value("page_to") if self.settings.contains("select_angle"): self.select_angle = int(self.settings.value("select_angle")) if self.settings.contains("page_from_grayscale"): self.page_from_grayscale = self.settings.value("page_from_grayscale") if self.settings.contains("page_to_grayscale"): self.page_to_grayscale = self.settings.value("page_to_grayscale") if self.settings.contains("select_scale"): self.select_scale = int(self.settings.value("select_scale")) if self.settings.contains("show_page_no"): self.show_page_no = json.loads(self.settings.value("show_page_no")) if self.settings.contains("page_starts"): self.page_starts = int(self.settings.value("page_starts")) if self.settings.contains("font_position"): self.font_position = int(self.settings.value("font_position")) if self.settings.contains("font_size"): self.font_size = int(self.settings.value("font_size")) if self.settings.contains("font_align"): self.font_align = int(self.settings.value("font_align")) if self.settings.contains("comboBox"): self.comboBox = int(self.settings.value("comboBox")) if self.settings.contains("bold"): self.bold = json.loads(self.settings.value("bold")) if self.settings.contains("italic"): self.italic = json.loads(self.settings.value("italic")) if self.settings.contains("underline"): self.underline = json.loads(self.settings.value("underline")) if self.settings.contains("r"): self.r = int(self.settings.value("r")) if self.settings.contains("g"): self.g = int(self.settings.value("g")) if self.settings.contains("b"): self.b = int(self.settings.value("b")) if self.settings.contains("keywords"): self.keywords = self.settings.value("keywords") if self.settings.contains("producer"): self.producer = self.settings.value("producer") if self.settings.contains("creator"): self.creator = self.settings.value("creator") if self.settings.contains("created_on"): self.created_on = self.settings.value("created_on") # one time congratulate if self.settings.contains("one_time_congratulate"): self.one_time_congratulate = json.loads(self.settings.value("one_time_congratulate")) # load window state if self.settings.contains("geometry"): self.restoreGeometry(self.settings.value("geometry")) if self.settings.contains("windowState"): self.restoreState(self.settings.value("windowState", "")) def closeEvent(self, event): self.save_settings() self.advance_setting_ui.hide() self.general_setting_ui.hide() self.about_ui.hide() super().closeEvent(event) def main_info(self): title = "Image Orientation and page format detail info!" message = "Tip1: Orientation = 'Auto' | Page format = 'Auto'\nAutomatically adjust page orientation and page format (Best fit).\n\n" \ "Tip2: Orientation = 'Auto' | Page format = Fixed (for eg. A4)\nAutomatically adjust page orientation with specified page format.\n\n" \ "Tip3: Orientation = 'Auto' | Page format = Fixed (Fit view) (for eg. A4)\nAutomatically adjust page orientation with image streched in specified page format.\n\n" \ "Tip4: Orientation = Fixed (for eg. 
Portrait) | Page format = Fixed\nSpecified page orientation with specified page format.\n\n" \ "Tip5: For custom options, use Advanced settings." self.popup_message(title, message) def margin_info_details(self): title = "Page Margin detail info!" message = "A margin is the area between the main content of a page and the page edges.\n\n" \ "Note: Bottom margin is restricted in following pdf settings.\n\n" \ "1: Orientation = Auto | Page format = Fixed\n" \ "2: Orientation = Fixed | Page format = Fixed\n" \ "3: Orientation = Fixed | Page format = Auto\n" self.popup_message(title, message) def dpi_info(self): title = "Image resolution DPI info!" message = "DPI is a measure of the resolution of the given image.\n\n" \ "Note: DPI option will be force set to 'Auto image size' in following pdf settings.\n\n" \ "1: Orientation = Auto | Page format = Auto\n" \ "2: Orientation = Auto | Page format = Fixed (Fit view)\n" \ "3: Orientation = Fixed | Page format = Fixed (Fit view)\n\n" \ "Tip: Use 'Auto image size' option for best fit images in pdf." self.popup_message(title, message) def image_pos_info(self): title = "Image position on page info!" message = "Horizontal position and vertical position is the position of image on the page.\n" \ "Position will start from Top-left side of the page\n\n" \ "Note: If page margin were given, then position of image will be consider according to the margins." self.popup_message(title, message) def enable_auto_manual_resolution(self): if self.advance_setting_ui.ui.auto_resolution.isChecked(): self.advance_setting_ui.ui.dpi.setEnabled(False) else: self.advance_setting_ui.ui.dpi.setEnabled(True) def on_page_no(self): if self.advance_setting_ui.ui.show_page_no.isChecked():
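save_settings() and load_settings() above persist every widget value through QSettings and convert explicitly on the way back in (json.loads for booleans, int()/float() for numbers), since the INI backend hands stored values back as strings. A minimal, self-contained sketch of that round trip follows; the keys are illustrative, and the boolean is written as a JSON string here so the conversion works regardless of whether the backend returns a cached native value or its string form.

import json

from PyQt5.QtCore import QSettings

settings = QSettings("warlordsoft", "jpg2pdf")

# save: numbers can be stored directly, booleans as JSON text
settings.setValue("dpi", 150)
settings.setValue("overwrite_warning", json.dumps(True))

# load: convert explicitly, mirroring the pattern used in load_settings()
if settings.contains("dpi"):
    dpi = int(settings.value("dpi"))
if settings.contains("overwrite_warning"):
    overwrite_warning = json.loads(settings.value("overwrite_warning"))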
<filename>itests/test_server.py import json import os import shutil from base64 import urlsafe_b64encode from urllib.parse import unquote from uuid import uuid4 import pytest import yaml from .client_fixtures import get_static_path from .server_fixtures import * # NoQA from .utils import assert_files_equals, ensure_success def test_hello_world(session_1): response = session_1.get("/hello-world") ensure_success(response) assert response.text == "Hello, World!" def test_not_found(session_1): response = session_1.get("/not-found") assert response.status_code == 404 @pytest.mark.parametrize( "query,echoed", [ ("?foo=120&name=Foo&age=20", {"foo": ["120"], "name": ["Foo"], "age": ["20"]}), ("?foo=120&foo=66&foo=124", {"foo": ["120", "66", "124"]}), ( "?foo=120&foo=66&foo=124&x=Hello%20World!!%20%3F", {"foo": ["120", "66", "124"], "x": ["Hello World!! ?"]}, ), ], ) def test_query(session_1, query, echoed): response = session_1.get("/echo-query" + query) ensure_success(response) content = response.json() assert content == echoed @pytest.mark.parametrize( "fragment,echoed", [ ("Hello/World/777", {"one": "Hello", "two": "World", "three": "777"}), ( "Hello%20World!!%20%3F/items/archived", {"one": "Hello World!! ?", "two": "items", "three": "archived"}, ), ], ) def test_route(session_1, fragment, echoed): response = session_1.get("/echo-route/" + fragment) ensure_success(response) content = response.json() assert content == echoed @pytest.mark.parametrize( "fragment,echoed", [ ("Hello/World/777", {"one": "Hello", "two": "World", "three": "777"}), ( "Hello%20World!!%20%3F/items/archived", {"one": "Hello World!! ?", "two": "items", "three": "archived"}, ), ], ) def test_query_autobind(session_1, fragment, echoed): response = session_1.get("/echo-route-autobind/" + fragment) ensure_success(response) content = response.json() assert content == echoed @pytest.mark.parametrize( "headers", [{"x-foo": str(uuid4())}, {"x-a": "Hello", "x-b": "World", "x-c": "!!"}] ) def test_headers(session_1, headers): response = session_1.head("/echo-headers", headers=headers) ensure_success(response) for key, value in headers.items(): header = response.headers[key] assert value == header @pytest.mark.parametrize( "cookies", [{"x-foo": str(uuid4())}, {"x-a": "Hello", "x-b": "World", "x-c": "!!"}] ) def test_cookies(session_1, cookies): response = session_1.get("/echo-cookies", cookies=cookies) ensure_success(response) data = response.json() for key, value in cookies.items(): header = data[key] assert value == header @pytest.mark.parametrize( "name,value", [("Foo", "Foo"), ("Character-Name", "<NAME>")] ) def test_set_cookie(session_1, name, value): response = session_1.get("/set-cookie", params=dict(name=name, value=value)) ensure_success(response) assert value == unquote(response.cookies[name]) @pytest.mark.parametrize( "data", [ {"name": "<NAME>", "type": "Sword"}, {"id": str(uuid4()), "price": 15.15, "name": "Ravenclaw T-Shirt"}, ], ) def test_post_json(session_1, data): response = session_1.post("/echo-posted-json", json=data) ensure_success(response) assert response.json() == data @pytest.mark.parametrize( "data", [{"name": "<NAME>", "power": 9000}, {"name": "<NAME>", "power": 15.80}], ) def test_post_json_autobind(session_1, data): response = session_1.post("/echo-posted-json-autobind", json=data) ensure_success(response) assert response.json() == data @pytest.mark.parametrize( "data,echoed", [ ( {"name": "<NAME>", "type": "Sword"}, {"name": "<NAME>", "type": "Sword"}, ), ( {"id": 123, "price": 15.15, "name": "Ravenclaw 
T-Shirt"}, {"id": "123", "price": "15.15", "name": "Ravenclaw T-Shirt"}, ), ], ) def test_post_form_urlencoded(session_1, data, echoed): response = session_1.post("/echo-posted-form", data=data) ensure_success(response) content = response.json() assert content == echoed def test_post_multipart_form_with_files(session_1): if os.path.exists("out"): shutil.rmtree("out") response = session_1.post( "/upload-files", files=[ ( "images", ( "one.jpg", open(get_static_path("pexels-photo-126407.jpeg"), "rb"), "image/jpeg", ), ), ( "images", ( "two.jpg", open(get_static_path("pexels-photo-923360.jpeg"), "rb"), "image/jpeg", ), ), ], ) ensure_success(response) assert_files_equals(f"./out/one.jpg", get_static_path("pexels-photo-126407.jpeg")) assert_files_equals(f"./out/two.jpg", get_static_path("pexels-photo-923360.jpeg")) def test_exception_handling_with_details(session_1): response = session_1.get("/crash") assert response.status_code == 500 details = response.text assert "app.py" in details assert "itests.utils.CrashTest: Crash Test!" in details def test_exception_handling_without_details(session_2): # By default, the server must hide error details response = session_2.get("/crash") assert response.status_code == 500 assert response.text == "Internal server error." def test_exception_handling_with_response(session_2): # By default, the server must hide error details response = session_2.get("/handled-crash") assert response.status_code == 200 assert response.text == "Fake exception, to test handlers" @pytest.mark.parametrize( "url_path,file_name", [("/pexels-photo-923360.jpeg", "example.jpg"), ("/example.html", "example.html")], ) def test_get_file(session_1, url_path, file_name): response = session_1.get(url_path, stream=True) ensure_success(response) with open(file_name, "wb") as output_file: for chunk in response: output_file.write(chunk) assert_files_equals(get_static_path(url_path), file_name) def test_get_file_response_with_path(session_1): response = session_1.get("/file-response-with-path", stream=True) ensure_success(response) with open("nice-cat.jpg", "wb") as output_file: for chunk in response: output_file.write(chunk) assert_files_equals(get_static_path("pexels-photo-923360.jpeg"), "nice-cat.jpg") def test_get_file_response_with_generator(session_1): response = session_1.get("/file-response-with-generator", stream=True) ensure_success(response) body = bytearray() for chunk in response: body.extend(chunk) text = body.decode("utf8") assert ( text == """Black Knight: None shall pass. King Arthur: What? Black Knight: None shall pass. King Arthur: I have no quarrel with you, good Sir Knight, but I must cross this bridge. Black Knight: Then you shall die. King Arthur: I command you, as King of the Britons, to stand aside! Black Knight: I move for no man. King Arthur: So be it! [rounds of melee, with Arthur cutting off the left arm of the black knight.] King Arthur: Now stand aside, worthy adversary. Black Knight: Tis but a scratch. """ ) def test_get_file_with_bytes(session_1): response = session_1.get("/file-response-with-bytes") ensure_success(response) text = response.text assert ( text == """Black Knight: None shall pass. King Arthur: What? Black Knight: None shall pass. King Arthur: I have no quarrel with you, good Sir Knight, but I must cross this bridge. Black Knight: Then you shall die. King Arthur: I command you, as King of the Britons, to stand aside! Black Knight: I move for no man. King Arthur: So be it! [rounds of melee, with Arthur cutting off the left arm of the black knight.] 
King Arthur: Now stand aside, worthy adversary. Black Knight: Tis but a scratch. """ ) def test_get_file_with_bytesio(session_1): response = session_1.get("/file-response-with-bytesio") ensure_success(response) text = response.text assert text == """some initial binary data: """ def test_xml_files_are_not_served(session_1): response = session_1.get("/example.xml", stream=True) assert response.status_code == 404 @pytest.mark.parametrize( "claims,expected_status", [(None, 401), ({"id": "001", "name": "<NAME>"}, 204)], ) def test_requires_authenticated_user(session_2, claims, expected_status): headers = ( {"Authorization": urlsafe_b64encode(json.dumps(claims).encode("utf8")).decode()} if claims else {} ) response = session_2.get("/only-for-authenticated-users", headers=headers) assert response.status_code == expected_status @pytest.mark.parametrize( "claims,expected_status", [ (None, 401), ({"id": "001", "name": "<NAME>", "role": "user"}, 401), ( { "id": "002", "name": "Snoopy", "role": "admin", # according to rules coded in app_two.py }, 204, ), ], ) def test_requires_admin_user(session_2, claims, expected_status): headers = ( {"Authorization": urlsafe_b64encode(json.dumps(claims).encode("utf8")).decode()} if claims else {} ) response = session_2.get("/only-for-admins", headers=headers) assert response.status_code == expected_status def test_open_api_ui(session_2): response = session_2.get("/docs") assert response.status_code == 200 text = response.text assert ( text.strip() == """ <!DOCTYPE html> <html> <head> <title>Cats API</title> <link rel="icon" href="/favicon.png"/> <link type="text/css" rel="stylesheet" href="https://cdn.jsdelivr.net/npm/swagger-ui-dist@3.30.0/swagger-ui.css"> </head> <body> <div id="swagger-ui"></div> <script src="https://cdn.jsdelivr.net/npm/swagger-ui-dist@3.30.0/swagger-ui-bundle.js"></script> <script> const ui = SwaggerUIBundle({ url: '/openapi.json', oauth2RedirectUrl: window.location.origin + '/docs/oauth2-redirect', dom_id: '#swagger-ui', presets: [ SwaggerUIBundle.presets.apis, SwaggerUIBundle.SwaggerUIStandalonePreset ], layout: "BaseLayout", deepLinking: true, showExtensions: true, showCommonExtensions: true }) </script> </body> </html> """.strip() ) def test_open_api_redoc_ui(session_2): response = session_2.get("/redocs") assert response.status_code == 200 text = response.text assert ( text.strip() == """ <!DOCTYPE html> <html> <head> <title>Cats API</title> <meta charset="utf-8"/> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="icon" href="/favicon.png"/> <link href="https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700" rel="stylesheet"> <style> body { margin: 0; padding: 0; } </style> </head> <body> <redoc spec-url="/openapi.json"></redoc> <script src="https://cdn.jsdelivr.net/npm/redoc@next/bundles/redoc.standalone.js"> </script> </body> </html> """.strip() ) def test_open_api_json(session_2): response = session_2.get("/openapi.json") assert response.status_code == 200 text = response.text assert json.loads(text) is not None def test_open_api_yaml(session_2): response = session_2.get("/openapi.yaml") assert response.status_code == 200 text = response.text assert yaml.safe_load(text) is not None def test_open_api_json_parameters_docs(session_2): response = session_2.get("/openapi.json") assert response.status_code == 200 data = response.json() paths = data.get("paths") cats3 = paths.get("/api/cats/cats3") assert cats3 == { "get": { "responses": { "200": { "description": "Success response", 
"content": { "application/json": { "schema": {"$ref": "#/components/schemas/CatsList"} } }, } }, "tags": ["Cats"], "operationId": "get_cats_alt3", "summary": "Note: in this scenario, query parameters can be read from the request object", "description": "Note: in this scenario, query parameters can be read from the request object", "parameters": [ { "name": "page", "in": "query", "schema": {"type": "integer", "format": "int64", "nullable": False}, "description": "Optional page number (default 1)", "required": False, }, { "name": "page_size", "in": "query", "schema": {"type": "integer", "format": "int64", "nullable": False}, "description": "Optional page size (default 30)", "required": False, }, { "name": "search", "in": "query", "schema": {"type": "string", "nullable": False}, "description": "Optional search filter", "required": False, }, ], } } def test_open_api_json_parameters_docs_from_epytext_docstring(session_2): response = session_2.get("/openapi.json") assert response.status_code == 200 data = response.json() paths = data.get("paths") cats4 = paths.get("/api/cats/cats4") assert cats4 == { "get": { "responses": { "200": { "description": "Success response", "content": { "application/json": { "schema": {"$ref": "#/components/schemas/CatsList"} } }, } }, "tags": ["Cats"], "operationId": "get_cats_alt4", "summary": "Returns a paginated set of cats.", "description": "Returns a paginated set of cats.", "parameters": [ { "name": "page", "in": "query", "schema": {"type": "integer", "format": "int64", "nullable": False}, "description": "Optional page number (default 1).", "required": False, }, { "name": "page_size", "in": "query", "schema": {"type": "integer", "format": "int64", "nullable": False}, "description": "Optional page size (default 30).", "required": False, }, { "name": "search", "in": "query", "schema": {"type": "string", "nullable": False}, "description": "Optional search filter.", "required": False, }, ], } } def test_open_api_deprecated(session_2): response = session_2.get("/openapi.json") assert response.status_code == 200 data = response.json() paths = data.get("paths") deprecated_api = paths.get("/api/cats/deprecated") assert deprecated_api == { "get": { "responses": {}, "tags": ["Cats", "Deprecated"], "operationId": "deprecated_api", "summary": "Some deprecated API", "description": "This endpoint is deprecated.", "parameters": [], "deprecated": True, } } def test_open_api_request_body_description_from_docstring(session_2): response = session_2.get("/openapi.json") assert response.status_code == 200 data = response.json() paths = data.get("paths") update_foo = paths.get("/api/cats/foo") assert update_foo == { "post": { "responses": { "200": { "description": "Success response", "content": { "application/json": { "schema": {"$ref": "#/components/schemas/Foo"} } }, } }, "tags": ["Cats"], "operationId": "update_foo", "summary": "Updates a foo by id.", "description": "Updates a foo by id.", "parameters": [ { "name": "foo_id", "in": "query", "schema":
<reponame>bgoli/stochpy<gh_stars>10-100 #! /usr/bin/env python """ StochPyTools ============ Written by <NAME>, Amsterdam, The Netherlands E-mail: <EMAIL> Last Change: June 08, 2015 """ import re,sys,copy from stochpy import model_dir as stochpy_model_dir from ..modules.PyscesMiniModel import PySCeS_Connector try: import numpy as np np.seterr(divide = 'ignore') # catch the divide by zero error if species start at zero except ImportError: print("Make sure that the NumPy module is installed") print("This program does not work without NumPy") print("See http://numpy.scipy.org/ for more information about NumPy") sys.exit() class Species(): def __init__(self): """ Object that is created to store the species amounts """ pass __species__ = Species() class StochPySSA_Shared(): def Parse(self,model_file,model_dir,IsTauleaping=False,IsNRM=False,IsDelayed = False,IsSMM = False,IsQuiet=False): """ Parses the PySCeS MDL input file, where the model is desribed Input: - *model_file* filename.psc - *model_dir* /home/user/Stochpy/pscmodels/filename.psc """ try: self.parse = PySCeS_Connector(model_file,model_dir,IsTauleaping = IsTauleaping, IsNRM = IsNRM,IsDelayed = IsDelayed,IsSMM = IsSMM,IsQuiet=IsQuiet) # Parse model if self.parse._IsConverted: model_file += '.psc' model_dir = stochpy_model_dir self.N_matrix_transpose = copy.deepcopy(self.parse.N_matrix.transpose()) # June 5th 2012 self.X_matrixinit = copy.deepcopy(self.parse.X_matrix.transpose()[0]) self.rate_names = copy.deepcopy(self.parse.Mod.__reactions__) self.rate_pos = {r_id:j for j,r_id in enumerate(self.rate_names)} # Determine once for each rate it's position self.n_reactions = len(self.parse.Mod.__reactions__) self.n_species = len(self.parse.species) self.fixed_species = copy.deepcopy(self.parse.Mod.__fixed_species__) self.__aDict__ = copy.deepcopy(self.parse.Mod.__aDict__) # support of assignments self.__eDict__ = copy.deepcopy(self.parse.Mod.__eDict__) # support of events (with triggers) self.species_names = copy.deepcopy(self.parse.species) self.species_names += [species for species in list(self.__aDict__)] self.species_names += [species for species in self.fixed_species] self.species_pos = {s_id:i for i,s_id in enumerate(self.species_names)} # Determine once for each species (variable, assigned, fixed) it's position if IsDelayed or IsSMM: self.N_matrix_transpose_reactants = copy.copy(self.parse.N_matrix_reactants.transpose()) #24-10-2013 self.N_matrix_transpose_products = copy.copy(self.parse.N_matrix_products.transpose()) #24-10-2013 if IsSMM: # If depends_on and reactants don't correspond, like in a net catalyzed reaction, then force consumption and production of the catalyst. 
# Result: update of the catalyst in the tau_arrays without showing in the N_matrix self.products = copy.deepcopy(self.parse.product_indices) # 28-10-2013 #Indices for j in range(self.n_reactions): self.products[j].extend( list(set(self.parse.depends_on[j]) - set(self.parse.reactant_indices[j])) ) except Exception as er: print(er) print("Error: StochPy failed parsing input file '{0:s}' from directory '{1:s}'".format(model_file, model_dir) ) sys.exit() def SpeciesSelection(self): """ Prepare output indices (if specific species are selected) """ self._IsSpeciesSelection = False if self.settings.species_selection: self.sim_output_indices = [0] for s_id in self.settings.species_selection: self.sim_output_indices.append(self.species_pos[s_id] + 1) # (time on first index) self.sim_output_indices.append(-1) self._IsSpeciesSelection = True def RateSelection(self): """ Prepare output indices (if specific rates are selected) """ self._IsRateSelection = False if self.settings.rate_selection: self.rate_output_indices = [0] for r_id in self.settings.rate_selection: self.rate_output_indices.append(self.rate_pos[r_id] + 1) # (time on first index) self._IsRateSelection = True def SetEvents(self): """ Initialize events """ self.__events__ = copy.deepcopy(self.parse.Mod.__events__) # deepcopy, very important! Augustus 21, 2014 self._IsPerformEvent = False for ev in self.__events__: for s_id in sorted(self.species_names, reverse=True): # makes sure that the longest identifiers are replaced first if s_id not in self.fixed_species: ev.code_string = ev.code_string.replace('self.mod.{0:s}'.format(s_id),'X_matrix[{0:d}]'.format(self.species_pos[s_id]) ) ev.xcode = compile("self.state = {0:s}".format(ev.code_string),'event{0}'.format(ev),'exec') def Propensities(self,IsTauleaping=False): """ Determines the propensities to fire for each reaction at the current time point. At t=0, all the rate equations are compiled. Input: - *IsTauleaping* (boolean) [default = False] """ if self._IsInitial: code_str = self.volume_code + '\n' # 27-01-2014 self.sim_a_mu = np.zeros([self.n_reactions]) # Initialize a(mu) for i in range(self.n_reactions): code_str += "r_vec[{0:d}]={1}\n".format(i,self.parse.propensities[i]) self.req_eval_code = compile(code_str,"RateEqEvaluationCode","exec") [setattr(__species__,self.parse.species[s],self.X_matrix[s]) for s in range(self.n_species)] # Set species quantities [setattr(__species__,self.fixed_species[s],self.fixed_species_amount[s]) for s in range(len(self.fixed_species))] self._IsInitial = False #print(code_str) else: if not IsTauleaping: [setattr(__species__,self.parse.species[s],self.X_matrix[s]) for s in self.species_to_update] else: [setattr(__species__,self.parse.species[s],self.X_matrix[s]) for s in range(self.n_species)] # Set species quantities self.rateFunc(self.req_eval_code,self.sim_a_mu) # Calc. Propensities assert self.sim_a_mu.min() >= 0, "Error: Negative propensities are found. Make sure that your rate equations are defined correctly!" self.sim_a_mu = abs(self.sim_a_mu) self.sim_a_0 = self.sim_a_mu.sum() def BuildPropensityCodes(self, propensities = None): # 21-11-2013 """ Makes a list of compiled propensity codes for each reaction. If a reaction fires, its code is executed. Input: - *propensities*: optional argument for providing the propensities that should be pre-compiled. If none, *self.propensities* is used. """ #Note2: This assumes that own reaction index is already inserted in the dep_graph. 
if not propensities: #26-11-2013 propensities = self.parse.propensities self.propensity_codes = [] for n,dependencies in enumerate(self.parse.dep_graph): code_str = self.volume_code + '\n' code_str += '\n'.join(['r_vec[{0:d}]={1:s}'.format(i,propensities[i]) for i in dependencies]) self.propensity_codes.append(compile(code_str,"PropensityEvalCode_{0}".format(n+1),"exec")) code_str_all = self.volume_code + '\n' code_str_all += '\n'.join(['r_vec[{0:d}]={1:s}'.format(i,propensity) for i,propensity in enumerate(propensities)]) self.propensity_codes.append(compile(code_str_all,"PropensityEvalAllCode","exec")) def HandleEvents(self,IsTauleapingStep=False): """ Event handling We distuingish two types of events: 1. time events where we reset the simulation time to the trigger time 2. trigger events which can involve species copy numbers, ..., ..., and also time. """ self._IsPerformEvent = False for ev in self.__events__: IsTrigger = ev(self.sim_t,self.X_matrix) IsModify = False if IsTrigger: if '_TIME_' in ev.symbols and len(ev.symbols) == 1: # pure time event n = re.search("\d*\.\d+|\d+",ev.formula) ev.reset() self._IsTimeEvent = True # 10-04-2014 if not ev(10**-99,self.X_matrix): # _TIME_ > 3.0 self.sim_t = float(n.group(0)) self.__events__.remove(ev) IsModify = True if np.isnan(self.reaction_index): # reaction_index = nan, ignore nothing happend (probably the end time is reached) pass elif not IsTauleapingStep: self.X_matrix -= self.N_matrix_transpose[self.reaction_index] # reset reaction elif IsTauleapingStep: self.X_matrix -= np.dot(self.parse.N_matrix,self.K_vector).ravel() # reset reactions else: # _TIME < 3.0, these can fire as long as it's valid IsModify = True ev.reset() else: # Trigger event IsModify = True if IsModify: for s_id in list(self.__eDict__[ev.name]['assignments']): if s_id not in self.fixed_species: s_index = self.species_pos[s_id] try: self.X_matrix[s_index] = float(int(self.__eDict__[ev.name]['assignments'][s_id])) # convert to int except ValueError: raise ValueError("Invalid assignment '{0:s}' for identifier {1:s}".format(self.__eDict__[ev.name]['assignments'][s_id],s_id)) else: s_index = self.fixed_species.index(s_id) self.fixed_species_amount[s_index] = float(self.__eDict__[ev.name]['assignments'][s_id]) # march 11, 2015: do not convert to integer, because it could be a parameter which does not have to be an integer setattr(__species__,s_id, self.fixed_species_amount[s_index]) self._IsPerformEvent = True # SBML event self.reaction_index = np.nan def AssignmentRules(self): """ Builds the assignment rules # updated version 06/08/14 http://sbml.org/Software/libSBML/docs/java-api/org/sbml/libsbml/AssignmentRule.html """ code_string = """""" if self.sim_t == 0: self.assignment_labels = list(self.__aDict__) self.assignment_species = np.zeros(len(self.__aDict__)) self._assignment_rules = [] # indices of species matrix species used for assignments for s_id in self.parse.species: for assign_species in list(self.__aDict__): if s_id in self.__aDict__[assign_species]['formula']: # if 'normal' species in assignment relationship index = self.species_pos[s_id] if index not in self._assignment_rules: self._assignment_rules.append(index) for index in self._assignment_rules: species_value = self.X_matrix[index] code_string += "{0:s}={1}\n".format(self.parse.species[index],species_value) for i,species in enumerate(self.__aDict__): code_string += "self.assignment_species[{0:d}]={1}\n".format(i,self.__aDict__[species]['formula']) self.rateFunc(code_string,self.assignment_species) def 
rateFunc(self,rate_eval_code,r_vec): """ Calculate propensities from the compiled rate equations Input: - *rate_eval_code* compiled rate equations - *r_vec* output for the calculated propensities """ try: exec(rate_eval_code) except Exception as er: print(er) print("Error: Propensities cannot be determined. Please check if all variable species amounts are initialized") sys.exit() def Initial_Conditions(self,IsTauleaping = False): """ This function initiates the output format with the initial concentrations """ if self._IsTrackPropensities: output_init = self.sim_a_mu.tolist() output_init.insert(0,self.sim_t) if self._IsRateSelection: output_init = [output_init[j] for j in self.rate_output_indices] self.propensities_output.append(output_init) output_init = [self.sim_t] for init in self.X_matrix: # Output at t = 0 assert init >= 0, "Error: StochPy detected (initial) negative species amounts." output_init.append(int(init)) if self.__aDict__ != {}: self.AssignmentRules() output_init += [value for value in self.assignment_species] for amount in self.fixed_species_amount: output_init.append(amount) if not IsTauleaping: output_init.append(np.NAN) if self._IsSpeciesSelection: output_init = [output_init[i] for i in self.sim_output_indices] self.sim_output.append(output_init) self.V_output = [self._current_volume] # May 26, 2015 def GenerateOutput(self,IsTauleaping = False,completion_delayed = False): """ Add data of current state (species copy numbers, volume and propensities) to the output. Input: - *IsTauleaping* (boolean) [default = False] - *completion_delayed* (boolean) [default = False] Different output is generated for the tauleaping method and if there are completion delays """ if completion_delayed: r_index = - (self.reaction_index + 1) # Completion reaction = - reaction index if not completion_delayed and not IsTauleaping: r_index = self.reaction_index + 1 # Initiation reaction =
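The Propensities() machinery above computes a(mu) for every reaction and their sum a_0; those are exactly the two quantities a Gillespie-style stochastic simulation needs per step. Below is a minimal direct-method sketch, independent of StochPy, using a made-up two-reaction system (A -> B at rate k1*A, B -> A at rate k2*B) purely for illustration.

import numpy as np

rng = np.random.default_rng(1)

X = np.array([100, 0])                 # copy numbers of A and B
N = np.array([[-1, 1],                 # reaction 0: A -> B
              [1, -1]])                # reaction 1: B -> A
k = np.array([0.5, 0.1])               # rate constants

t, t_end = 0.0, 10.0
while t < t_end:
    a_mu = k * X                       # propensities a(mu), one per reaction
    a_0 = a_mu.sum()
    if a_0 == 0:                       # nothing can fire any more
        break
    t += rng.exponential(1.0 / a_0)    # waiting time to the next event
    j = rng.choice(len(a_mu), p=a_mu / a_0)  # which reaction fires
    X = X + N[j]                       # apply its stoichiometry

print(t, X)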
# # Copyright 2018-2021 Elyra Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from enum import IntEnum from glob import glob import json import os import re from typing import Dict from typing import List from typing import Optional import networkx as nx from traitlets.config import SingletonConfigurable from elyra.pipeline.component import Component from elyra.pipeline.component_registry import ComponentRegistry from elyra.pipeline.pipeline import Operation from elyra.pipeline.pipeline import PIPELINE_CURRENT_SCHEMA from elyra.pipeline.pipeline import PIPELINE_CURRENT_VERSION from elyra.pipeline.pipeline_definition import PipelineDefinition from elyra.pipeline.processor import PipelineProcessorManager from elyra.util.path import get_expanded_path class ValidationSeverity(IntEnum): Error = 1 Warning = 2 Information = 3 Hint = 4 class ValidationResponse(object): def __init__(self): self._response = {"title": "Elyra Pipeline Diagnostics", "description": "Issues discovered when parsing the pipeline", "issues": [] } self._has_fatal = False @property def response(self) -> Dict: """ :return: The dict of validation errors and warnings found in the pipeline """ return self._response @property def has_fatal(self): return self._has_fatal def add_message(self, message: str, message_type: Optional[str] = "", data: Optional[Dict] = "", severity: ValidationSeverity = ValidationSeverity.Warning): """ Helper function to add a diagnostic message to the response to be sent back :param message: A simple message describing the issue :param message_type: The type of message to send back e.g. invalidNodeType, invalidPipeline :param data: a Dict with granular details regarding the error e.g. the nodeID, pipelineID, linkID etc. 
:param severity: the severity level of the issue :return: """ valid_severity_levels = [ValidationSeverity.Error, ValidationSeverity.Warning, ValidationSeverity.Information, ValidationSeverity.Hint] if severity in valid_severity_levels: diagnostic = {"severity": severity.value, "source": "Elyra Pipeline Validation Service", "type": message_type, "message": message, "data": data } self._response['issues'].append(diagnostic) if severity is ValidationSeverity.Error: self._has_fatal = True def to_json(self): return self._response class PipelineValidationManager(SingletonConfigurable): def __init__(self, **kwargs): super().__init__(**kwargs) self.root_dir = get_expanded_path(kwargs.get('root_dir')) async def validate(self, pipeline: Dict) -> ValidationResponse: """ Validates the pipeline JSON payload :param pipeline: the pipeline definition to be validated :return: ValidationResponse containing any and all issues discovered during the validation """ response = ValidationResponse() pipeline_definition = PipelineDefinition(pipeline_definition=pipeline) issues = pipeline_definition.validate() for issue in issues: response.add_message(severity=ValidationSeverity.Error, message_type="invalidJSON", message=issue) try: pipeline_definition.primary_pipeline except ValueError: response.add_message(severity=ValidationSeverity.Error, message_type="invalidJSON", message="Invalid JSON detected, unable to continue.") return response pipeline_runtime = pipeline_definition.primary_pipeline.runtime # local, kfp, airflow if PipelineProcessorManager.instance().is_supported_runtime(pipeline_runtime) is False: response.add_message(severity=ValidationSeverity.Error, message_type="invalidRuntime", message="Unsupported pipeline runtime", data={"pipelineRuntime": pipeline_runtime}) pipeline_type = pipeline_definition.primary_pipeline.type # generic, kfp, airflow if pipeline_type == 'generic' and \ PipelineProcessorManager.instance().is_supported_runtime(pipeline_runtime) is False: response.add_message(severity=ValidationSeverity.Error, message_type="invalidRuntime", message="Unsupported pipeline type", data={"pipelineRuntime": pipeline_type}) self._validate_pipeline_structure(pipeline_definition=pipeline_definition, response=response) await self._validate_compatibility(pipeline_definition=pipeline_definition, pipeline_type=pipeline_type, pipeline_runtime=pipeline_runtime, response=response) self._validate_pipeline_graph(pipeline=pipeline, response=response) if response.has_fatal: return response await self._validate_node_properties(pipeline_definition=pipeline_definition, pipeline_type=pipeline_type, pipeline_runtime=pipeline_runtime, response=response) return response def _validate_pipeline_structure(self, pipeline_definition: PipelineDefinition, response: ValidationResponse) -> None: """ Validates the pipeline structure based on version of schema :param pipeline_definition: the pipeline definition to be validated :param response: ValidationResponse containing the issue list to be updated """ # Validate pipeline schema version if float(pipeline_definition.schema_version) != PIPELINE_CURRENT_SCHEMA: response.add_message(severity=ValidationSeverity.Error, message_type="invalidPipeline", message="Incompatible pipeline schema version detected.", data={"supported_schema_version": PIPELINE_CURRENT_SCHEMA, "detected_schema_version": float(pipeline_definition.schema_version)}) # validate pipeline version compatibility pipeline_version = int(pipeline_definition.primary_pipeline.version) if pipeline_version not in 
range(PIPELINE_CURRENT_VERSION + 1): response.add_message(severity=ValidationSeverity.Error, message_type="invalidPipeline", message="Primary pipeline version field has an invalid value.", data={"supported_version": PIPELINE_CURRENT_VERSION, "detected_version": pipeline_version}) elif pipeline_version < PIPELINE_CURRENT_VERSION: # Pipeline needs to be migrated response.add_message(severity=ValidationSeverity.Error, message_type="invalidPipeline", message=f'Pipeline version {pipeline_version} is out of date and needs to be migrated ' f'using the Elyra pipeline editor.') elif pipeline_version > PIPELINE_CURRENT_VERSION: # New version of Elyra is needed response.add_message(severity=ValidationSeverity.Error, message_type="invalidPipeline", message='Pipeline was last edited in a newer version of Elyra. ' 'Update Elyra to use this pipeline.', data={"supported_version": PIPELINE_CURRENT_VERSION, "detected_version": pipeline_version}) async def _validate_compatibility(self, pipeline_definition: PipelineDefinition, pipeline_type: str, pipeline_runtime: str, response: ValidationResponse) -> None: """ Checks that the pipeline payload is compatible with this version of elyra (ISSUE #938) as well as verifying all nodes in the pipeline are supported by the runtime :param pipeline_definition: the pipeline definition to be validated :param pipeline_type: name of the pipeline runtime being used e.g. kfp, airflow, generic :param pipeline_runtime: name of the pipeline runtime for execution e.g. kfp, airflow, local :param response: ValidationResponse containing the issue list to be updated """ primary_pipeline_id = pipeline_definition.primary_pipeline.id supported_ops = [] if pipeline_runtime: if pipeline_runtime != pipeline_type and pipeline_type != 'generic': response.add_message(severity=ValidationSeverity.Error, message_type="invalidRuntime", message="Pipeline runtime platform is not compatible " "with selected runtime configuration.", data={"pipelineID": primary_pipeline_id, "pipelineType": pipeline_type, "pipelineRuntime": pipeline_runtime}) elif PipelineProcessorManager.instance().is_supported_runtime(pipeline_runtime): component_list = await PipelineProcessorManager.instance().get_components(pipeline_runtime) for component in component_list: supported_ops.append(component.op) # Checks pipeline node types are compatible with the runtime selected for sub_pipeline in pipeline_definition.pipelines: for node in sub_pipeline.nodes: if node.type == "execution_node" and node.op not in supported_ops: response.add_message(severity=ValidationSeverity.Error, message_type="invalidNodeType", message="This component was not found in the registry. Please add it " "to your component registry or remove this node from the " "pipeline", data={"nodeID": node.id, "nodeOpName": node.op, "nodeName": node.label, "pipelineId": sub_pipeline.id}) else: response.add_message(severity=ValidationSeverity.Error, message_type="invalidRuntime", message="Unsupported pipeline runtime", data={"pipelineRuntime": pipeline_runtime, "pipelineType": pipeline_type, "pipelineId": primary_pipeline_id}) async def _validate_node_properties(self, pipeline_definition: PipelineDefinition, pipeline_type: str, pipeline_runtime: str, response: ValidationResponse) -> None: """ Validates each of the node's structure for required fields/properties as well as their values :param pipeline_definition: the pipeline definition to be validated :param pipeline_type: name of the pipeline runtime being used e.g. 
kfp, airflow, generic :param pipeline_runtime: name of the pipeline runtime for execution e.g. kfp, airflow, local :param response: ValidationResponse containing the issue list to be updated """ if pipeline_runtime: # don't check if incompatible pipeline type and runtime if pipeline_runtime != pipeline_type and pipeline_type != 'generic': return for pipeline in pipeline_definition.pipelines: component_list = await PipelineProcessorManager.instance().get_components(pipeline_runtime) components = ComponentRegistry.to_canvas_palette(component_list) for node in pipeline.nodes: if node.type == 'execution_node': node_label = node.label if Operation.is_generic_operation(node.op): image_name = node.get_component_parameter('runtime_image') filename = node.get_component_parameter("filename") dependencies = node.get_component_parameter("dependencies") env_vars = node.get_component_parameter("env_vars") self._validate_filepath(node_id=node.id, node_label=node_label, property_name='filename', filename=filename, response=response) # If not running locally, we check resource and image name if pipeline_runtime != 'local': self._validate_container_image_name(node.id, node_label, image_name, response=response) for resource_name in ['cpu', 'gpu', 'memory']: resource_value = node.get_component_parameter(resource_name) if resource_value: self._validate_resource_value(node.id, node_label, resource_name=resource_name, resource_value=resource_value, response=response) self._validate_label(node_id=node.id, node_label=node_label, response=response) if dependencies: notebook_root_relative_path = os.path.dirname(filename) for dependency in dependencies: self._validate_filepath(node_id=node.id, node_label=node_label, file_dir=os.path.join(self.root_dir, notebook_root_relative_path), property_name='dependencies', filename=dependency, response=response) if env_vars: for env_var in env_vars: self._validate_environmental_variables(node.id, node_label, env_var=env_var, response=response) # Validate runtime components against specific node properties in component registry else: # This is the full dict of properties for the operation e.g. 
current params, optionals etc property_dict = await self._get_component_properties(pipeline_type, components, node.op) cleaned_property_list = list(map(lambda x: str(x).replace('elyra_', ''), property_dict['current_parameters'].keys())) # Remove the non component_parameter jinja templated values we do not check against cleaned_property_list.remove('component_source') cleaned_property_list.remove('label') for node_property in cleaned_property_list: component_param = node.get_component_parameter(node_property) if not component_param: if self._is_required_property(property_dict, node_property): response.add_message(severity=ValidationSeverity.Error, message_type="invalidNodeProperty", message="Node is missing required property.", data={"nodeID": node.id, "nodeName": node_label, "propertyName": node_property}) elif self._get_component_type(property_dict, node_property) == 'inputpath': # Any component property with type `InputPath` will be a dictionary of two keys # "value": the node ID of the parent node containing the output # "option": the name of the key (which is an output) of the above referenced node if not isinstance(component_param, dict) or \ len(component_param) != 2 or \ set(component_param.keys()) != {'value', 'option'}: response.add_message(severity=ValidationSeverity.Error, message_type="invalidNodeProperty", message="Node has malformed `InputPath` parameter structure", data={"nodeID": node.id, "nodeName": node_label}) node_ids = list(x.get('node_id_ref', None) for x in node.component_links) parent_list = self._get_parent_id_list(pipeline_definition, node_ids, []) if node.get_component_parameter(node_property)['value'] not in parent_list: response.add_message(severity=ValidationSeverity.Error, message_type="invalidNodeProperty", message="Node contains an invalid inputpath reference. Please " "check your node-to-node connections", data={"nodeID": node.id, "nodeName": node_label}) def _validate_container_image_name(self, node_id: str, node_label: str, image_name: str, response: ValidationResponse) -> None: """ Validates the image name exists and is proper in syntax :param node_id: the unique ID of the node :param node_label: the given node name or user customized name/label of the node :param image_name: container image name to be evaluated :param response: ValidationResponse containing the issue list to be updated """ if not image_name: response.add_message(severity=ValidationSeverity.Error, message_type="invalidNodeProperty", message="Required property value is missing.", data={"nodeID": node_id, "nodeName": node_label, "propertyName": 'runtime_image'}) else: image_regex = re.compile(r"[^/ ]+/[^/ ]+$") matched = image_regex.search(image_name) if not matched: response.add_message(severity=ValidationSeverity.Error, message_type="invalidNodeProperty", message="Node
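# A minimal usage sketch of the ValidationResponse class defined above, assuming the
# ValidationResponse and ValidationSeverity names from this module are in scope; the
# runtime name and message values below are illustrative only, not taken from a real pipeline.
def _example_validation_response():
    response = ValidationResponse()
    response.add_message(message="Unsupported pipeline runtime",
                         message_type="invalidRuntime",
                         data={"pipelineRuntime": "kfp"},
                         severity=ValidationSeverity.Error)
    # An Error-severity message marks the whole response as fatal.
    assert response.has_fatal is True
    # to_json() returns the accumulated dict: title, description and the issues list.
    return response.to_json()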
- m.x6158 == 0) m.c3416 = Constraint(expr= - 3.5161*m.x4284 + m.x4684 - m.x6156 - m.x6157 - m.x6158 == 0) m.c3417 = Constraint(expr= - 3.1062*m.x4285 + m.x4685 - m.x6156 - m.x6157 - m.x6158 == 0) m.c3418 = Constraint(expr= - 2.611*m.x4286 + m.x4686 - m.x6156 - m.x6157 - m.x6158 == 0) m.c3419 = Constraint(expr= - 5.1882*m.x4287 + m.x4687 - m.x6159 - m.x6160 - m.x6161 == 0) m.c3420 = Constraint(expr= - 3.3895*m.x4288 + m.x4688 - m.x6159 - m.x6160 - m.x6161 == 0) m.c3421 = Constraint(expr= - 3.0344*m.x4289 + m.x4689 - m.x6159 - m.x6160 - m.x6161 == 0) m.c3422 = Constraint(expr= - 4.2397*m.x4290 + m.x4690 - m.x6159 - m.x6160 - m.x6161 == 0) m.c3423 = Constraint(expr= - 3.8963*m.x4291 + m.x4691 - m.x6159 - m.x6160 - m.x6161 == 0) m.c3424 = Constraint(expr= - 3.1446*m.x4292 + m.x4692 - m.x6159 - m.x6160 - m.x6161 == 0) m.c3425 = Constraint(expr= - 3.2128*m.x4293 + m.x4693 - m.x6159 - m.x6160 - m.x6161 == 0) m.c3426 = Constraint(expr= - 2.8954*m.x4294 + m.x4694 - m.x6159 - m.x6160 - m.x6161 == 0) m.c3427 = Constraint(expr= - 4.6836*m.x4295 + m.x4695 - m.x6159 - m.x6160 - m.x6161 == 0) m.c3428 = Constraint(expr= - 2.7903*m.x4296 + m.x4696 - m.x6159 - m.x6160 - m.x6161 == 0) m.c3429 = Constraint(expr= - 4.4621*m.x4297 + m.x4697 - m.x6162 - m.x6163 - m.x6164 == 0) m.c3430 = Constraint(expr= - 5.5156*m.x4298 + m.x4698 - m.x6162 - m.x6163 - m.x6164 == 0) m.c3431 = Constraint(expr= - 4.6402*m.x4299 + m.x4699 - m.x6162 - m.x6163 - m.x6164 == 0) m.c3432 = Constraint(expr= - 4.0001*m.x4300 + m.x4700 - m.x6162 - m.x6163 - m.x6164 == 0) m.c3433 = Constraint(expr= - 4.9224*m.x4301 + m.x4701 - m.x6162 - m.x6163 - m.x6164 == 0) m.c3434 = Constraint(expr= - 4.5283*m.x4302 + m.x4702 - m.x6162 - m.x6163 - m.x6164 == 0) m.c3435 = Constraint(expr= - 3.4167*m.x4303 + m.x4703 - m.x6162 - m.x6163 - m.x6164 == 0) m.c3436 = Constraint(expr= - 3.904*m.x4304 + m.x4704 - m.x6162 - m.x6163 - m.x6164 == 0) m.c3437 = Constraint(expr= - 5.3641*m.x4305 + m.x4705 - m.x6162 - m.x6163 - m.x6164 == 0) m.c3438 = Constraint(expr= - 4.4297*m.x4306 + m.x4706 - m.x6162 - m.x6163 - m.x6164 == 0) m.c3439 = Constraint(expr= - 6.6589*m.x4307 + m.x4707 - m.x6165 - m.x6166 - m.x6167 == 0) m.c3440 = Constraint(expr= - 5.2228*m.x4308 + m.x4708 - m.x6165 - m.x6166 - m.x6167 == 0) m.c3441 = Constraint(expr= - 6.1673*m.x4309 + m.x4709 - m.x6165 - m.x6166 - m.x6167 == 0) m.c3442 = Constraint(expr= - 5.8404*m.x4310 + m.x4710 - m.x6165 - m.x6166 - m.x6167 == 0) m.c3443 = Constraint(expr= - 4.6564*m.x4311 + m.x4711 - m.x6165 - m.x6166 - m.x6167 == 0) m.c3444 = Constraint(expr= - 4.5707*m.x4312 + m.x4712 - m.x6165 - m.x6166 - m.x6167 == 0) m.c3445 = Constraint(expr= - 6.4759*m.x4313 + m.x4713 - m.x6165 - m.x6166 - m.x6167 == 0) m.c3446 = Constraint(expr= - 4.6653*m.x4314 + m.x4714 - m.x6165 - m.x6166 - m.x6167 == 0) m.c3447 = Constraint(expr= - 6.714*m.x4315 + m.x4715 - m.x6165 - m.x6166 - m.x6167 == 0) m.c3448 = Constraint(expr= - 6.9471*m.x4316 + m.x4716 - m.x6165 - m.x6166 - m.x6167 == 0) m.c3449 = Constraint(expr= - 3.0648*m.x4317 + m.x4717 - m.x6168 - m.x6169 - m.x6170 == 0) m.c3450 = Constraint(expr= - 1.9137*m.x4318 + m.x4718 - m.x6168 - m.x6169 - m.x6170 == 0) m.c3451 = Constraint(expr= - 4.4266*m.x4319 + m.x4719 - m.x6168 - m.x6169 - m.x6170 == 0) m.c3452 = Constraint(expr= - 1.9016*m.x4320 + m.x4720 - m.x6168 - m.x6169 - m.x6170 == 0) m.c3453 = Constraint(expr= - 3.3901*m.x4321 + m.x4721 - m.x6168 - m.x6169 - m.x6170 == 0) m.c3454 = Constraint(expr= - 3.3369*m.x4322 + m.x4722 - m.x6168 - m.x6169 - m.x6170 == 0) m.c3455 = Constraint(expr= - 
1.6545*m.x4323 + m.x4723 - m.x6168 - m.x6169 - m.x6170 == 0) m.c3456 = Constraint(expr= - 1.8068*m.x4324 + m.x4724 - m.x6168 - m.x6169 - m.x6170 == 0) m.c3457 = Constraint(expr= - 3.648*m.x4325 + m.x4725 - m.x6168 - m.x6169 - m.x6170 == 0) m.c3458 = Constraint(expr= - 2.9341*m.x4326 + m.x4726 - m.x6168 - m.x6169 - m.x6170 == 0) m.c3459 = Constraint(expr= - 1.1023*m.x4327 + m.x4727 - m.x6171 - m.x6172 - m.x6173 == 0) m.c3460 = Constraint(expr= - 1.1302*m.x4328 + m.x4728 - m.x6171 - m.x6172 - m.x6173 == 0) m.c3461 = Constraint(expr= - 4.1291*m.x4329 + m.x4729 - m.x6171 - m.x6172 - m.x6173 == 0) m.c3462 = Constraint(expr= - 1.9107*m.x4330 + m.x4730 - m.x6171 - m.x6172 - m.x6173 == 0) m.c3463 = Constraint(expr= - 2.0708*m.x4331 + m.x4731 - m.x6171 - m.x6172 - m.x6173 == 0) m.c3464 = Constraint(expr= - 2.4981*m.x4332 + m.x4732 - m.x6171 - m.x6172 - m.x6173 == 0) m.c3465 = Constraint(expr= - 2.5915*m.x4333 + m.x4733 - m.x6171 - m.x6172 - m.x6173 == 0) m.c3466 = Constraint(expr= - 1.8244*m.x4334 + m.x4734 - m.x6171 - m.x6172 - m.x6173 == 0) m.c3467 = Constraint(expr= - 0.8562*m.x4335 + m.x4735 - m.x6171 - m.x6172 - m.x6173 == 0) m.c3468 = Constraint(expr= - 0.1896*m.x4336 + m.x4736 - m.x6171 - m.x6172 - m.x6173 == 0) m.c3469 = Constraint(expr= - 2.9428*m.x4337 + m.x4737 - m.x6174 - m.x6175 - m.x6176 == 0) m.c3470 = Constraint(expr= - 3.485*m.x4338 + m.x4738 - m.x6174 - m.x6175 - m.x6176 == 0) m.c3471 = Constraint(expr= - 2.8866*m.x4339 + m.x4739 - m.x6174 - m.x6175 - m.x6176 == 0) m.c3472 = Constraint(expr= - 3.5715*m.x4340 + m.x4740 - m.x6174 - m.x6175 - m.x6176 == 0) m.c3473 = Constraint(expr= - 2.979*m.x4341 + m.x4741 - m.x6174 - m.x6175 - m.x6176 == 0) m.c3474 = Constraint(expr= - 2.963*m.x4342 + m.x4742 - m.x6174 - m.x6175 - m.x6176 == 0) m.c3475 = Constraint(expr= - 2.8824*m.x4343 + m.x4743 - m.x6174 - m.x6175 - m.x6176 == 0) m.c3476 = Constraint(expr= - 2.4856*m.x4344 + m.x4744 - m.x6174 - m.x6175 - m.x6176 == 0) m.c3477 = Constraint(expr= - 3.8934*m.x4345 + m.x4745 - m.x6174 - m.x6175 - m.x6176 == 0) m.c3478 = Constraint(expr= - 1.7634*m.x4346 + m.x4746 - m.x6174 - m.x6175 - m.x6176 == 0) m.c3479 = Constraint(expr= - 3.09*m.x4347 + m.x4747 - m.x6177 - m.x6178 - m.x6179 == 0) m.c3480 = Constraint(expr= - 3.5441*m.x4348 + m.x4748 - m.x6177 - m.x6178 - m.x6179 == 0) m.c3481 = Constraint(expr= - 4.554*m.x4349 + m.x4749 - m.x6177 - m.x6178 - m.x6179 == 0) m.c3482 = Constraint(expr= - 4.9947*m.x4350 + m.x4750 - m.x6177 - m.x6178 - m.x6179 == 0) m.c3483 = Constraint(expr= - 3.7915*m.x4351 + m.x4751 - m.x6177 - m.x6178 - m.x6179 == 0) m.c3484 = Constraint(expr= - 1.3265*m.x4352 + m.x4752 - m.x6177 - m.x6178 - m.x6179 == 0) m.c3485 = Constraint(expr= - 2.4526*m.x4353 + m.x4753 - m.x6177 - m.x6178 - m.x6179 == 0) m.c3486 = Constraint(expr= - 1.8018*m.x4354 + m.x4754 - m.x6177 - m.x6178 - m.x6179 == 0) m.c3487 = Constraint(expr= - 2.2535*m.x4355 + m.x4755 - m.x6177 - m.x6178 - m.x6179 == 0) m.c3488 = Constraint(expr= - 3.0212*m.x4356 + m.x4756 - m.x6177 - m.x6178 - m.x6179 == 0) m.c3489 = Constraint(expr= - 4.9857*m.x4357 + m.x4757 - m.x6180 - m.x6181 - m.x6182 == 0) m.c3490 = Constraint(expr= - 2.8471*m.x4358 + m.x4758 - m.x6180 - m.x6181 - m.x6182 == 0) m.c3491 = Constraint(expr= - 4.0548*m.x4359 + m.x4759 - m.x6180 - m.x6181 - m.x6182 == 0) m.c3492 = Constraint(expr= - 2.1528*m.x4360 + m.x4760 - m.x6180 - m.x6181 - m.x6182 == 0) m.c3493 = Constraint(expr= - 2.5975*m.x4361 + m.x4761 - m.x6180 - m.x6181 - m.x6182 == 0) m.c3494 = Constraint(expr= - 4.8994*m.x4362 + m.x4762 - m.x6180 - m.x6181 - 
m.x6182 == 0) m.c3495 = Constraint(expr= - 2.0045*m.x4363 + m.x4763 - m.x6180 - m.x6181 - m.x6182 == 0) m.c3496 = Constraint(expr= - 3.9624*m.x4364 + m.x4764 - m.x6180 - m.x6181 - m.x6182 == 0) m.c3497 = Constraint(expr= - 2.6538*m.x4365 + m.x4765 - m.x6180 - m.x6181 - m.x6182 == 0) m.c3498 = Constraint(expr= - 5.3211*m.x4366 + m.x4766 - m.x6180 - m.x6181 - m.x6182 == 0) m.c3499 = Constraint(expr= - 3.5915*m.x4367 + m.x4767 - m.x6183 - m.x6184 - m.x6185 == 0) m.c3500 = Constraint(expr= - 4.5269*m.x4368 + m.x4768 - m.x6183 - m.x6184 - m.x6185 == 0) m.c3501 = Constraint(expr= - 2.1178*m.x4369 + m.x4769 - m.x6183 - m.x6184 - m.x6185 == 0) m.c3502 = Constraint(expr= - 4.2762*m.x4370 + m.x4770 - m.x6183 - m.x6184 - m.x6185 == 0) m.c3503 = Constraint(expr= - 1.4836*m.x4371 + m.x4771 - m.x6183 - m.x6184 - m.x6185 == 0) m.c3504 = Constraint(expr= - 3.0762*m.x4372 + m.x4772 - m.x6183 - m.x6184 - m.x6185 == 0) m.c3505 = Constraint(expr= - 3.2167*m.x4373 + m.x4773 - m.x6183 - m.x6184 - m.x6185 == 0) m.c3506 = Constraint(expr= - 5.747*m.x4374 + m.x4774 - m.x6183 - m.x6184 - m.x6185 == 0) m.c3507 = Constraint(expr=
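# The constraints above all appear to follow one linear template,
#     -c_i * m.x[i] + m.x[i + 400] - s1 - s2 - s3 == 0,
# where c_i is a per-variable coefficient and (s1, s2, s3) is a triple of shared
# variables reused across blocks of ten constraints.  A minimal Pyomo sketch of that
# pattern, with invented index names and coefficients (this is an illustration of the
# template, not a reconstruction of the original model):
from pyomo.environ import ConcreteModel, Var, Constraint, RangeSet

def _example_constraint_block():
    mdl = ConcreteModel()
    mdl.I = RangeSet(1, 3)
    mdl.x = Var(mdl.I)
    mdl.y = Var(mdl.I)            # stands in for the "+400" companion variables
    mdl.s = Var(range(1, 4))      # the shared triple subtracted in every row
    coeff = {1: 3.5161, 2: 3.1062, 3: 2.6110}   # illustrative values only

    def _rule(m, i):
        return -coeff[i] * m.x[i] + m.y[i] - m.s[1] - m.s[2] - m.s[3] == 0
    mdl.block = Constraint(mdl.I, rule=_rule)
    return mdl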
1 self.index = {} self.header = [] for k, v in values.items(): for col in v: if col not in self.index: self.index[col] = len(self.index) self.header.append(col) self.values = [[None for h in self.header] for k in range(mx)] for k, v in values.items(): for col, to in v.items(): self.values[k][self.index[col]] = to def __getitem__(self, irow): """ operator [], accepts slices :param irow: integer, tuple, slice or list :return: depends on irow - int --> a table with one row - slice --> a table with several rows - list --> a table with the selected rows - tuple --> a value """ if isinstance(irow, int): return self._private_getclass()( self.header, [self.values[irow]]) if isinstance(irow, slice): return self._private_getclass()( self.header, [self.values[ii] for ii in range(*irow.indices(len(self)))]) if isinstance(irow, list): return self._private_getclass()( self.header, [self.values[ii] for ii in irow]) if isinstance(irow, tuple): if isinstance(irow[1], str): row = self.values[irow[0]] v = self._interpret_row(row) return v[irow[1]] return self.values[irow[0]][irow[1]] raise TypeError("Invalid argument type: " + str(type(irow))) def __setitem__(self, irow, value): """ operator [], just accepts tuple(to change a value) :param irow: 2-uple :param value: new value """ if isinstance(irow, tuple): if isinstance(irow[1], str): row = self.values[irow[0]] v = self._interpret_row(row) v[irow[1]] = value else: self.values[irow[0]][irow[1]] = value else: raise TypeError( # pragma: no cover "Invalid argument type(only tuple accepted): " + str(type(irow))) def __len__(self): """ returns the number of rows """ return len(self.values) def __copy__(self): """ operator copy """ return self._private_getclass()(self.header, self.values) def __deepcopy__(self, memo): """ operator ``deepcopy`` """ return self._private_getclass()(copy.deepcopy(self.header, memo), copy.deepcopy(self.values, memo)) def copy(self): """ call ``copy.deepcopy(self)`` """ return copy.deepcopy(self) def delta(self, other): """ returns a list of differences between self and others :param other: TableFormula :return: list of differences(first one) """ if other is None: return False if not isinstance(other, TableFormula): raise TypeError("other is not a table: " + str(type(other))) if len(self.header) != len(other.header): return ["different number of columns"] for a, b in zip(self.header, other.header): if a != b: return ["different columns"] if len(self.values) != len(other.values): return ["different number of rows"] line = 0 for r, s in zip(self.values, other.values): if len(r) != len(s): return ["different number of values on row %d" % line] col = 0 for a, b in zip(r, s): if a != b: return ["different value on cell %d,%d: %s!=%s(type %s, %s)" % (line, col, a, b, str(type(a)), str(type(b)))] col += 1 line += 1 return [] def __eq__(self, other): """ check if two tables are equal by value :param other: other table :return: boolean """ if other is None: return False if not isinstance(other, TableFormula): return False if len(self.header) != len(other.header): return False for a, b in zip(self.header, other.header): if a != b: return False if len(self.values) != len(other.values): return False for r, s in zip(self.values, other.values): if len(r) != len(s): return False for a, b in zip(r, s): if a != b: return False return True def __str__(self): """ convert the table into a string :return: string """ rows = ["\t".join(self.header)] for row in self.values: s = "\t".join([str(_) for _ in row]) rows.append(s) return "\n".join(rows) def 
__html__(self, class_table=None, class_td=None, class_tr=None, class_th=None): """ Converts the table into a :epkg:`html` string. :param class_table: adds a class to the tag ``table`` (None for none) :param class_td: adds a class to the tag ``td`` (None for none) :param class_tr: adds a class to the tag ``tr`` (None for none) :param class_th: adds a class to the tag ``th`` (None for none) """ clta = ' class="%s"' % class_table if class_table is not None else "" cltr = ' class="%s"' % class_tr if class_tr is not None else "" cltd = ' class="%s"' % class_td if class_td is not None else "" clth = ' class="%s"' % class_th if class_th is not None else "" rows = ["<table%s>" % clta] rows.append("{0}{1}{2}".format(("<tr%s><th%s>" % (cltr, clth)), ("</th><th%s>" % clth).join(self.header), "</th></tr>")) septd = "</td><td%s>" % cltd strtd = "<tr%s><td%s>" % (cltr, cltd) for row in self.values: s = septd.join([str(_) for _ in row]) rows.append(strtd + s + "</td></tr>") rows.append("</table>") rows.append("") return "\n".join(rows) def __rst__(self, add_line=True): """ convert the table into rst format :: +------------------------+------------+----------+----------+ | Header row, column 1 | Header 2 | Header 3 | Header 4 | | (header rows optional) | | | | +========================+============+==========+==========+ | body row 1, column 1 | column 2 | column 3 | column 4 | +------------------------+------------+----------+----------+ | body row 2 | ... | ... | | +------------------------+------------+----------+----------+ :param add_line: add a line separator between each row """ tbl = self.values_to_str() length = [len(_) for _ in tbl.header] for row in tbl.values: for i, v in enumerate(row): length[i] = max(length[i], len(v)) length = [_ + 2 for _ in length] line = ["-" * le for le in length] lineb = ["=" * le for le in length] sline = "+%s+" % ("+".join(line)) slineb = "+%s+" % ("+".join(lineb)) res = [sline] def complete(cool): s, i = cool i -= 2 if len(s) < i: s += " " * (i - len(s)) return s res.append("| %s |" % " | ".join( map(complete, zip(tbl.header, length)))) res.append(slineb) res.extend(["| %s |" % " | ".join(map(complete, zip(row, length))) for row in tbl.values]) if add_line: t = len(res) for i in range(t - 1, 3, -1): res.insert(i, sline) res.append(sline) return "\n".join(res) + "\n" def strtype(self): """ displays the type of values(not the values) """ rows = ["\t".join(self.header)] for row in self.values: s = "\t".join([str(type(_)) for _ in row]) rows.append(s) return "\n".join(rows) def _read_file(self, file, numeric_column, sep, encoding, read_n_lines, sheet=0): """ private """ ext = os.path.splitext(file)[-1].lower() if ext in [".xls", ".xlsx"]: lines = list(open_workbook(file, sheet=sheet)) # removing empty column(assuming first row is the header) ind = [i for i, n in enumerate(lines[0]) if len(n) > 0] if len(ind) < len(lines[0]): lines = [[line[i] for i in ind] for line in lines] else: if sys.version_info.major >= 3 or encoding is None: if encoding is None: f = open(file, "r") else: f = open(file, "r", encoding=encoding) else: f = open(file, "r", encoding=encoding) if read_n_lines > 0: lines = [] for line in f: if len(lines) >= read_n_lines: break lines.append(line) else: lines = f.readlines() f.close() self._readlines(lines, numeric_column, sep) def change_header(self, new_header): """ change the column names :param new_header: a list or a function which modifies the header Example: :: tbl.change_header(lambda h: h if h != "column" else "new_name") .. 
warning:: Do not do that yourself, the class holds a dictionary up to date with the column index. """ if isinstance(new_header, list): self.header = new_header self.index = {} for i, h in enumerate(self.header): self.index[h] = i else: he = [new_header(h) for h in self.header] self.change_header(he) def rename_column(self, old_name, new_name): """ rename a column :param old_name: old name :param new_name: new name """ header = [{old_name: new_name}.get(_, _) for _ in self.header] self.change_header(header) def save(self, filename, sep="\t", encoding=None, newline="\n"): """ saves the tables in a text file, first row is the column names :param filename: filename :param sep: column separator :param encoding: encoding :param newline: line separator """ if sys.version_info.major >= 3 or encoding is None: if encoding is None: f = open(filename, "w", newline=newline) else: f = open(filename, "w", encoding=encoding, newline=newline) else: f = open(filename, "w", encoding=encoding) f.write(sep.join(self.header)) f.write("\n") for row in self.values: f.write(sep.join([str(_) for _ in row])) f.write("\n") f.close() def _readlines(self, lines, numeric_column, sep): """private""" if isinstance(lines[0], str): lines = [_.replace("\ufeff", "").replace("\xef\xbb\xbf", "") .strip("\n\r ").split(sep) for _ in lines if len(_) > 0] self.header = lines[0] self.values = lines[1:] self.index = {} for i, h in enumerate(self.header): self.index[h] = i elif isinstance(lines[0], list): self.header = lines[0] self.values = lines[1:] self.index = {} for i, h in enumerate(self.header): self.index[h] = i else: raise Exception("unexpected format: " + str(type(lines[0]))) self._auto_conversion(numeric_column) def _auto_conversion(self, others_columns): """ private set up the column type based on the column name """ def condition(k): if k.startswith("sum_") or k.startswith("pos_"): return True if k.startswith("avg_") or k.startswith("len_"): return True if k.startswith("nb_") or k.startswith("max_") or k.startswith("min_"): return True
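# A small sketch of the indexing semantics documented in __getitem__ / __setitem__
# above.  It assumes the table class can be constructed from a header list and a list
# of rows (the pattern used by _private_getclass() above); the column names and values
# are invented for the example, and the class is passed in rather than imported.
def _example_table_indexing(TableFormulaClass):
    tbl = TableFormulaClass(["name", "age"], [["alice", 30], ["bob", 25]])
    first_row   = tbl[0]          # int   -> a table with one row
    some_rows   = tbl[0:2]        # slice -> a table with several rows
    picked_rows = tbl[[1]]        # list  -> a table with the selected rows
    one_value   = tbl[1, "age"]   # tuple -> a single value
    tbl[1, "age"] = 26            # __setitem__ only accepts tuples
    return first_row, some_rows, picked_rows, one_value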
variable=keys['values'], index=(i + idx_offset)), v_copy, name_base=('%s_prop%d' % (name_base, i)), use_generic=use_generic) else: keys['nitems'] = 0 keys['keys'] = cls.function_param['null'] keys['values'] = cls.function_param['null'] elif datatype['type'] in ['ply', 'obj']: pass elif datatype['type'] == '1darray': for k in ['subtype', 'precision']: keys[k] = datatype[k] keys['precision'] = int(keys['precision']) keys['length'] = datatype.get('length', '0') keys['units'] = datatype.get('units', '') elif datatype['type'] == 'ndarray': for k in ['subtype', 'precision']: keys[k] = datatype[k] keys['precision'] = int(keys['precision']) if 'shape' in datatype: shape_var = '%s_shape' % name_base if cls.zero_based: idx_offset = 0 else: idx_offset = 1 for i, x in enumerate(datatype['shape']): out.append(cls.format_function_param( 'assign', value=x, name=cls.format_function_param( 'index', variable=shape_var, index=(i + idx_offset)))) keys['ndim'] = len(datatype['shape']) keys['shape'] = shape_var typename = 'ndarray_arr' else: keys['ndim'] = 0 keys['shape'] = cls.function_param['null'] keys['units'] = datatype.get('units', '') elif (typename == 'scalar') or (typename in constants.VALID_TYPES): keys['subtype'] = datatype.get('subtype', datatype['type']) keys['units'] = datatype.get('units', '') if keys['subtype'] in ['bytes', 'string', 'unicode']: keys['precision'] = int(datatype.get('precision', 0)) else: keys['precision'] = int(datatype['precision']) typename = 'scalar' elif datatype['type'] in ['boolean', 'null', 'number', 'integer', 'string']: keys['type'] = datatype['type'] typename = 'default' elif (typename in ['class', 'function']): keys['type'] = typename typename = 'pyobj' elif typename in ['instance', 'any']: keys['use_generic'] = cls.function_param['true'] typename = 'empty' elif typename in ['schema']: keys['use_generic'] = cls.function_param['true'] else: # pragma: debug raise ValueError("Cannot create %s version of type '%s'" % (cls.language, typename)) fmt = cls.format_function_param('init_type_%s' % typename, **keys) out.append(cls.format_function_param('assign', name=name, value=fmt)) return out @classmethod def write_channel_def(cls, key, datatype=None, **kwargs): r"""Write an channel definition. Args: key (str): Entry in cls.function_param that should be used. datatype (dict, optional): Data type associated with the channel. Defaults to None and is ignored. **kwargs: Additional keyword arguments are passed as parameters to format_function_param. Returns: list: Lines required to declare and define an output channel. 
""" out = [] if (datatype is not None) and ('{channel_type}' in cls.function_param[key]): kwargs['channel_type'] = '%s_type' % kwargs['channel'] out += cls.write_type_def( kwargs['channel_type'], datatype, use_generic=kwargs.get('use_generic', False)) dir_map = {'input': 'recv', 'output': 'send'} try_keys = [dir_map[key] + '_converter', 'transform'] try_vals = [] if all([bool(kwargs.get(k, False)) for k in try_keys]): # pragma: debug # TODO: Handling merger of the transforms in yaml or # remove the *_converter options entirely raise RuntimeError(("Transforms are specified in multiple " "locations for this input: %s") % str(try_keys)) for k in try_keys: if k in kwargs: v = kwargs[k] if not isinstance(v, list): v = [v] try_vals += v # This last transform is used because the others are assumed # to be applied by the connection driver if try_vals and isinstance(try_vals[-1], str): try_key = '%s_%s' % (try_vals[-1], key) if ((('python_interface' in cls.function_param) and (try_key in cls.python_interface))): kwargs['python_interface'] = cls.python_interface[try_key] if ((('format_str' in kwargs) and ('python_interface_format' in cls.function_param))): key = 'python_interface_format' kwargs['format_str'] = kwargs['format_str'].encode( "unicode_escape").decode('utf-8') else: key = 'python_interface' out += [cls.format_function_param(key, **kwargs)] return out @classmethod def write_model_function_call(cls, model_function, flag_var, inputs, outputs, outputs_in_inputs=None, on_failure=None, format_not_flag_cond=None, format_flag_cond=None, iter_function_idx=None): r"""Write lines necessary to call the model function. Args: model_function (str): Handle of the model function that should be called. flag_var (str): Name of variable that should be used as a flag. inputs (list): List of dictionaries describing inputs to the model. outputs (list): List of dictionaries describing outputs from the model. outputs_in_inputs (bool, optional): If True, the outputs are presented in the function definition as inputs. Defaults to the class attribute outputs_in_inputs. on_failure (list, optional): Lines to be executed if the model call fails. Defaults to an error message. This variable is only used if flag_var is not None and outputs_in_inputs is True. format_not_flag_cond (str, optional): Format string that produces a conditional expression that evaluates to False when the model flag indicates a failure. Defaults to None and the class's value for 'not_flag_cond' in function_param is used if it exists. If it does not exist, format_flag_cond is used. format_flag_cond (str, optional): Format string that produces a conditional expression that evaluates to True when the model flag indicates a success. Defaults to None and the defaults class's value for 'flag_cond' in function_param is used if it exists. If it does not exist, the flag is directly evaluated as if it were a boolean. iter_function_idx (dict, optional): Variable that serves as an index to iterate over variables. Defaults to None. Returns: list: Lines required to carry out a call to a model function in this language. 
""" if outputs_in_inputs is None: # pragma: debug outputs_in_inputs = cls.outputs_in_inputs func_inputs = cls.channels2vars(inputs) func_outputs = cls.channels2vars(outputs) if iter_function_idx: for src in [func_inputs, func_outputs]: for i, x in enumerate(src): if 'iter_datatype' in x: src[i] = dict( x, datatype=x['iter_datatype'], name=cls.format_function_param( 'index', variable=x['name'], index=iter_function_idx['name'], extra=x), length_var=False) if isinstance(flag_var, dict): flag_var = flag_var['name'] out = cls.write_function_call( model_function, inputs=func_inputs, outputs=func_outputs, flag_var=flag_var, outputs_in_inputs=outputs_in_inputs) if flag_var and outputs_in_inputs: if (not format_flag_cond) and ('not_flag_cond' in cls.function_param): flag_cond = cls.format_function_param( 'not_flag_cond', flag_var=flag_var, replacement=format_not_flag_cond) else: # pragma: debug # flag_cond = '%s (%s)' % ( # cls.function_param['not'], # cls.format_function_param( # 'flag_cond', default='{flag_var}', flag_var=flag_var, # replacement=format_flag_cond)) raise RuntimeError("Untested code below. Uncomment " "at your own risk if you find " "use case for it.") if on_failure is None: on_failure = [cls.format_function_param( 'error', error_msg="Model call failed.")] out += cls.write_if_block(flag_cond, on_failure) if iter_function_idx: out = cls.write_for_loop(iter_function_idx['name'], iter_function_idx['begin'], iter_function_idx['end'], out) return out @classmethod def write_model_recv(cls, channel, recv_var, flag_var='flag', iter_var=None, allow_failure=False, alt_recv_function=None): r"""Write a model receive call include checking the return flag. Args: channel (str): Name of variable that the channel being received from was stored in. recv_var (dict, list): Information of one or more variables that receieved information should be stored in. flag_var (str, optional): Name of flag variable that the flag should be stored in. Defaults to 'flag', iter_var (str, optional): Name of flag signifying when the model is in it's first iteration. If allow_failure is True and iter_var is provided, an error will be raised if iter_var is True. Defaults to None. allow_failure (bool, optional): If True, the returned lines will call a break if the flag is False. Otherwise, the returned lines will issue an error. Defaults to False. alt_recv_function (str, optional): Alternate receive function format string. Defaults to None and is ignored. Returns: list: Lines required to carry out a receive call in this language. 
""" if cls.function_param is None: raise NotImplementedError("function_param attribute not set for" "language '%s'" % cls.language) recv_var_str = recv_var if not isinstance(recv_var, str): recv_var_par = cls.channels2vars(recv_var) recv_var_str = cls.prepare_output_variables( recv_var_par, in_inputs=cls.outputs_in_inputs, for_yggdrasil=True) else: recv_var_par = cls.split_variables(recv_var_str) expanded_recv_var = None if (len(recv_var_par) > 1) and ('multiple_outputs' in cls.function_param): expanded_recv_var = recv_var_str recv_var_str = 'temp_%s' % recv_var_par[0]['name'] if isinstance(flag_var, dict): flag_var = flag_var['name'] if isinstance(iter_var, dict): iter_var = iter_var['name'] if cls.outputs_in_inputs: inputs = [recv_var_str] outputs = [flag_var] else: inputs = [] outputs = [flag_var, recv_var_str] if cls.include_channel_obj: inputs.insert(0, channel) lines = cls.write_function_call( cls.format_function_param('recv_function', channel=channel, replacement=alt_recv_function), inputs=inputs, outputs=outputs, include_arg_count=cls.include_arg_count) if 'not_flag_cond' in cls.function_param: flag_cond = cls.format_function_param('not_flag_cond', flag_var=flag_var) else: flag_cond = '%s (%s)' % ( cls.function_param['not'], cls.format_function_param('flag_cond', default='{flag_var}', flag_var=flag_var)) fail_message = cls.escape_quotes( "Could not receive %s." % recv_var_str) if allow_failure: fail_message = cls.escape_quotes( 'End of input from %s.' % recv_var_str) if_block = [cls.format_function_param('print', message=fail_message), cls.function_param.get('break', 'break')] if iter_var is not None: if_block = cls.write_if_block( iter_var, [cls.format_function_param( 'error', error_msg=cls.escape_quotes( 'No input from %s.' % recv_var_str))], if_block) else: if_block = [cls.format_function_param('error', error_msg=fail_message)] lines += cls.write_if_block(flag_cond, if_block) # Check if single element should be expanded if expanded_recv_var: # lines.append(cls.format_function_param( # 'print_generic', object=recv_var_str)) if 'expand_mult' in cls.function_param: # pragma: matlab lines.append(cls.format_function_param( 'expand_mult', name=expanded_recv_var, value=recv_var_str)) elif 'assign_mult' in cls.function_param: lines.append(cls.format_function_param( 'assign_mult', name=expanded_recv_var, value=recv_var_str)) else: lines.append(cls.format_function_param( 'assign', name=expanded_recv_var, value=recv_var_str)) elif len(recv_var_par) == 1: lines += cls.write_expand_single_element(recv_var_str) return lines @classmethod def write_model_send(cls, channel, send_var, flag_var='flag', allow_failure=False): r"""Write a model send call include checking the return flag. Args: channel (str): Name of variable that the channel being sent to was stored in. send_var (dict, list): Information on one or more variables containing information that will be sent. flag_var (str, optional): Name of flag variable that the flag should be stored in. Defaults to 'flag', allow_failure (bool, optional): If True, the returned lines will call a break if the
<gh_stars>1-10 import unicodedata tests = { "bidirs": { "BN": "0000", "S": "0009", "B": "000A", "WS": "000C", "ON": "0021", "ET": "0023", "ES": "002B", "CS": "002C", "EN": "0030", "L": "0041", "NSM": "0300", "R": "05BE", "AN": "0600", "AL": "0608", "LRE": "202A", "RLE": "202B", "PDF": "202C", "LRO": "202D", "RLO": "202E" }, "categories": { "Cc": "0000", "Zs": "0020", "Po": "0021", "Sc": "0024", "Ps": "0028", "Pe": "0029", "Sm": "002B", "Pd": "002D", "Nd": "0030", "Lu": "0041", "Sk": "005E", "Pc": "005F", "Ll": "0061", "So": "00A6", "Pi": "00AB", "Cf": "00AD", "No": "00B2", "Pf": "00BB", "Lo": "01BB", "Lt": "01C5", "Lm": "02B0", "Mn": "0300", "Me": "0488", "Mc": "0903", "Nl": "16EE", "Zl": "2028", "Zp": "2029", "Cs": "D800", "Co": "E000" }, "combinings": { "0": "0000", "230": "0300", "232": "0315", "220": "0316", "216": "031B", "202": "0321", "1": "0334", "240": "0345", "233": "035C", "234": "035D", "222": "059A", "228": "05AE", "10": "05B0", "11": "05B1", "12": "05B2", "13": "05B3", "14": "05B4", "15": "05B5", "16": "05B6", "17": "05B7", "18": "05B8", "19": "05B9", "20": "05BB", "21": "05BC", "22": "05BD", "23": "05BF", "24": "05C1", "25": "05C2", "30": "0618", "31": "0619", "32": "061A", "27": "064B", "28": "064C", "29": "064D", "33": "0651", "34": "0652", "35": "0670", "36": "0711", "7": "093C", "9": "094D", "84": "0C55", "91": "0C56", "103": "0E38", "107": "0E48", "118": "0EB8", "122": "0EC8", "129": "0F71", "130": "0F72", "132": "0F74", "214": "1DCE", "218": "302A", "224": "302E", "8": "3099", "26": "FB1E", "226": "1D16D" }, "decimals": { "": "0000", "0": "0030", "1": "0031", "2": "0032", "3": "0033", "4": "0034", "5": "0035", "6": "0036", "7": "0037", "8": "0038", "9": "0039" }, "decompositions": { "": "0000", "<noBreak> 0020": "00A0", "<compat> 0020 0308": "00A8", "<super> 0061": "00AA", "<compat> 0020 0304": "00AF", "<super> 0032": "00B2", "<super> 0033": "00B3", "<compat> 0020 0301": "00B4", "<compat> 03BC": "00B5", "<compat> 0020 0327": "00B8", "<super> 0031": "00B9", "<super> 006F": "00BA", "<fraction> 0031 2044 0034": "00BC", "<fraction> 0031 2044 0032": "00BD", "<fraction> 0033 2044 0034": "00BE", "0041 0300": "00C0", "0041 0301": "00C1", "0041 0302": "00C2", "0041 0303": "00C3", "0041 0308": "00C4", "0041 030A": "00C5", "0043 0327": "00C7", "0045 0300": "00C8", "0045 0301": "00C9", "0045 0302": "00CA", "0045 0308": "00CB", "0049 0300": "00CC", "0049 0301": "00CD", "0049 0302": "00CE", "0049 0308": "00CF", "004E 0303": "00D1", "004F 0300": "00D2", "004F 0301": "00D3", "004F 0302": "00D4", "004F 0303": "00D5", "004F 0308": "00D6", "0055 0300": "00D9", "0055 0301": "00DA", "0055 0302": "00DB", "0055 0308": "00DC", "0059 0301": "00DD", "0061 0300": "00E0", "0061 0301": "00E1", "0061 0302": "00E2", "0061 0303": "00E3", "0061 0308": "00E4", "0061 030A": "00E5", "0063 0327": "00E7", "0065 0300": "00E8", "0065 0301": "00E9", "0065 0302": "00EA", "0065 0308": "00EB", "0069 0300": "00EC", "0069 0301": "00ED", "0069 0302": "00EE", "0069 0308": "00EF", "006E 0303": "00F1", "006F 0300": "00F2", "006F 0301": "00F3", "006F 0302": "00F4", "006F 0303": "00F5", "006F 0308": "00F6", "0075 0300": "00F9", "0075 0301": "00FA", "0075 0302": "00FB", "0075 0308": "00FC", "0079 0301": "00FD", "0079 0308": "00FF", "0041 0304": "0100", "0061 0304": "0101", "0041 0306": "0102", "0061 0306": "0103", "0041 0328": "0104", "0061 0328": "0105", "0043 0301": "0106", "0063 0301": "0107", "0043 0302": "0108", "0063 0302": "0109", "0043 0307": "010A", "0063 0307": "010B", "0043 030C": "010C", "0063 030C": "010D", 
"0044 030C": "010E", "0064 030C": "010F", "0045 0304": "0112", "0065 0304": "0113", "0045 0306": "0114", "0065 0306": "0115", "0045 0307": "0116", "0065 0307": "0117", "0045 0328": "0118", "0065 0328": "0119", "0045 030C": "011A", "0065 030C": "011B", "0047 0302": "011C", "0067 0302": "011D", "0047 0306": "011E", "0067 0306": "011F", "0047 0307": "0120", "0067 0307": "0121" }, "digits": { "": "0000", "0": "0030", "1": "0031", "2": "0032", "3": "0033", "4": "0034", "5": "0035", "6": "0036", "7": "0037", "8": "0038", "9": "0039" }, "names": { "LEFT PARENTHESIS": "0028", "COMMERCIAL AT": "0040", "LATIN SMALL LETTER C": "0063", "<control>": "0082", "SUPERSCRIPT ONE": "00B9", "LATIN CAPITAL LETTER O WITH CIRCUMFLEX": "00D4", "LATIN CAPITAL LETTER O WITH STROKE": "00D8", "LATIN SMALL LETTER Y WITH DIAERESIS": "00FF", "LATIN SMALL LETTER E WITH DOT ABOVE": "0117", "LATIN SMALL LETTER I WITH MACRON": "012B", "LATIN CAPITAL LETTER J WITH CIRCUMFLEX": "0134", "LATIN CAPITAL LETTER N WITH CARON": "0147", "LATIN SMALL LETTER O WITH BREVE": "014F", "LATIN CAPITAL LETTER R WITH CEDILLA": "0156", "LATIN CAPITAL LETTER S WITH CEDILLA": "015E", "LATIN CAPITAL LETTER Z WITH ACUTE": "0179", "LATIN CAPITAL LETTER TONE SIX": "0184", "LATIN SMALL LETTER K WITH HOOK": "0199", "LATIN CAPITAL LETTER O WITH HORN": "01A0", "LATIN CAPITAL LETTER P WITH HOOK": "01A4", "LATIN SMALL LETTER I WITH CARON": "01D0", "LATIN SMALL LETTER O WITH CARON": "01D2", "LATIN CAPITAL LETTER AE WITH MACRON": "01E2", "LATIN CAPITAL LETTER AE WITH ACUTE": "01FC", "LATIN CAPITAL LETTER R WITH INVERTED BREVE": "0212", "LATIN CAPITAL LETTER Y WITH MACRON": "0232", "LATIN SMALL LETTER DOTLESS J WITH STROKE": "025F", "LATIN SMALL LETTER H WITH HOOK": "0266", "LATIN SMALL LETTER TURNED R WITH LONG LEG": "027A", "LATIN SMALL LETTER SQUAT REVERSED ESH": "0285", "LATIN LETTER SMALL CAPITAL L": "029F", "MODIFIER LETTER SMALL TURNED R": "02B4", "MODIFIER LETTER SMALL W": "02B7", "MODIFIER LETTER LOW MACRON": "02CD", "DOT ABOVE": "02D9", "COMBINING ACUTE ACCENT": "0301", "COMBINING RING ABOVE": "030A", "COMBINING TILDE OVERLAY": "0334", "COMBINING RIGHT ARROWHEAD ABOVE": "0350", "COMBINING LEFT ARROWHEAD BELOW": "0354", "COMBINING LATIN SMALL LETTER E": "0364", "COMBINING LATIN SMALL LETTER O": "0366", "COMBINING LATIN SMALL LETTER R": "036C", "GREEK CAPITAL LETTER ARCHAIC SAMPI": "0372", "GREEK SMALL REVERSED LUNATE SIGMA SYMBOL": "037B", "GREEK CAPITAL LETTER DELTA": "0394", "GREEK SMALL LETTER UPSILON": "03C5", "GREEK SMALL LETTER OMEGA": "03C9", "GREEK THETA SYMBOL": "03D1", "GREEK LETTER DIGAMMA": "03DC", "COPTIC CAPITAL LETTER DEI": "03EE", "CYRILLIC CAPITAL LETTER IO": "0401", "CYRILLIC CAPITAL LETTER TE": "0422", "CYRILLIC CAPITAL LETTER HARD SIGN": "042A", "CYRILLIC SMALL LETTER I": "0438", "CYRILLIC SMALL LETTER ES": "0441", "CYRILLIC SMALL LETTER SHA": "0448", "CYRILLIC SMALL LETTER IZHITSA": "0475", "CYRILLIC CAPITAL LETTER IZHITSA WITH DOUBLE GRAVE ACCENT": "0476", "CYRILLIC CAPITAL LETTER ROUND OMEGA": "047A", "COMBINING CYRILLIC HUNDRED THOUSANDS SIGN": "0488", "CYRILLIC CAPITAL LETTER GHE WITH STROKE": "0492", "CYRILLIC CAPITAL LETTER EN WITH DESCENDER": "04A2", "CYRILLIC CAPITAL LETTER SHHA": "04BA", "CYRILLIC CAPITAL LETTER ZHE WITH BREVE": "04C1", "CYRILLIC CAPITAL LETTER SCHWA WITH DIAERESIS": "04DA", "CYRILLIC CAPITAL LETTER ZE WITH DIAERESIS": "04DE", "CYRILLIC SMALL LETTER I WITH DIAERESIS": "04E5", "CYRILLIC SMALL LETTER U WITH DOUBLE ACUTE": "04F3", "CYRILLIC CAPITAL LETTER KOMI ZJE": "0504", "CYRILLIC SMALL LETTER KOMI 
NJE": "050B", "ARMENIAN CAPITAL LETTER HO": "0540", "ARMENIAN CAPITAL LETTER RA": "054C", "ARMENIAN SMALL LETTER CA": "056E", "HEBREW ACCENT TELISHA QETANA": "05A9", "HEBREW POINT TSERE": "05B5", "HEBREW PUNCTUATION SOF PASUQ": "05C3", "HEBREW MARK LOWER DOT": "05C5", "HEBREW LETTER KAF": "05DB", "HEBREW LIGATURE YIDDISH DOUBLE VAV": "05F0", "ARABIC-INDIC PER TEN THOUSAND SIGN": "060A", "ARABIC LETTER TEH MARBUTA": "0629", "ARABIC LETTER REH": "0631", "ARABIC LETTER KEHEH WITH THREE DOTS BELOW": "063C", "ARABIC LETTER HEH": "0647", "ARABIC LETTER YEH": "064A", "ARABIC HAMZA BELOW": "0655", "ARABIC ZWARAKAY": "0659", "ARABIC-INDIC DIGIT SIX": "0666", "ARABIC LETTER TEHEH": "067F", "ARABIC LETTER TCHEHEH": "0687", "ARABIC LETTER RREH": "0691", "ARABIC LETTER REH WITH DOT BELOW": "0694", "ARABIC LETTER GAF": "06AF", "ARABIC LETTER YEH WITH THREE DOTS BELOW": "06D1", "ARABIC SMALL HIGH MADDA": "06E4", "SYRIAC END OF PARAGRAPH": "0700", "SYRIAC LETTER ZAIN": "0719", "SYRIAC LETTER PERSIAN GHAMAL": "072E", "ARABIC LETTER AIN WITH THREE DOTS POINTING DOWNWARDS ABOVE": "075E", "ARABIC LETTER KEHEH WITH THREE DOTS POINTING UPWARDS BELOW": "0764", "ARABIC LETTER NOON WITH SMALL V": "0769", "ARABIC LETTER REH WITH TWO DOTS VERTICALLY ABOVE": "076B", "ARABIC LETTER SEEN WITH TWO DOTS VERTICALLY ABOVE": "076D", "ARABIC LETTER HAH WITH SMALL ARABIC LETTER TAH ABOVE": "0772", "ARABIC LETTER YEH BARREE WITH EXTENDED ARABIC-INDIC DIGIT THREE ABOVE": "077B", "THAANA LETTER GAAFU": "078E", "THAANA LETTER YAA": "0794", "THAANA LETTER ZAA": "079C", "THAANA LETTER SHEENU": "079D", "NKO DIGIT SEVEN": "07C7", "NKO LETTER A": "07CA", "NKO LETTER U": "07CE", "NKO LETTER GBA": "07DC", "NKO EXCLAMATION MARK": "07F9", "SAMARITAN LETTER DALAT": "0803", "SAMARITAN MARK EPENTHETIC YUT": "081B", "SAMARITAN VOWEL SIGN U": "0827", "DEVANAGARI LETTER U": "0909", "DEVANAGARI LETTER CANDRA O": "0911", "DEVANAGARI LETTER GHA": "0918", "DEVANAGARI LETTER TA": "0924", "DEVANAGARI VOWEL SIGN
current subscriber count. for key in [key for key in dictionary_milestones if key > current_subscribers]: del dictionary_milestones[key] # Form the Markdown table from our dictionary of data. We also add # a first entry for the founding of the sub. header = ( "### Milestones\n\n\n" "| Date Reached | Subscriber Milestone | Average Daily Change " "| Days From Previous Milestone |\n" "|--------------|----------------------|----------------------|" "------------------------------|\n" ) founding_date = timekeeping.convert_to_string(reddit.subreddit(subreddit_name).created_utc) founding_line = "\n| {} | Created | --- |\n\n".format(founding_date) for milestone, date in list(sorted(dictionary_milestones.items())): # If we have previous items in this list, we want to calculate # the daily growth between this milestone and the previous one. if len(formatted_lines) != 0: previous_date = formatted_lines[-1].split("|")[1].strip() previous_milestone = int(formatted_lines[-1].split("|")[2].strip().replace(",", "")) else: # If there is no previous entry, we start from the founding # of the subreddit. previous_date = founding_date previous_milestone = 1 # Calculate the number of days between the two milestones and # the changes in between. Start by obtaining the subscriber # change between the last milestone. milestone_delta = milestone - previous_milestone days_difference = timekeeping.num_days_between(previous_date, date) # Calculate the average daily change. If the difference in days # is zero, set the delta value to a generic string. if days_difference != 0: daily_delta = "{:+,.2f}".format(milestone_delta / days_difference) else: daily_delta = "---" # Create a new line for the table and add it to our list. new_line = "| {} | {:,} | {} | {} |".format(date, milestone, daily_delta, days_difference) formatted_lines.append(new_line) # Join everything together. We also need to delete the last # milestone, which is not real since it's in the future. # Also sort it by newest first, and replace any double linebreaks # so that the table is intact. formatted_lines.reverse() body = "{}{}".format(header, "\n".join(formatted_lines)) + founding_line body = body.replace("\n\n", "\n") return body def subreddit_subscribers_pushshift_historical_recorder(subreddit_name, fetch_today=False): """Pushshift's API stores subscriber data for subreddits from about 2018-03-15. This function will go back until then and get the subscribers for each day if it can, namely by grabbing data in chunks and analyzing them for subscriber count. :param subreddit_name: Name of a subreddit. :param fetch_today: Whether we should get just today's stats, or a list of stats from March 15, 2018 onwards. :return: """ subscribers_dictionary = {} chunk_size = SETTINGS.pushshift_subscriber_chunks logger.info( "Subscribers PS: Retrieving historical " "subscribers for r/{}...".format(subreddit_name) ) # If we just want to get today's stats just create a list with today # as the only component. Otherwise, fetch a list of days since March # 15, which is when subscriber information became available on # Pushshift's database. if not fetch_today: yesterday = int(time.time()) - 86400 yesterday_string = timekeeping.convert_to_string(yesterday) # Insert check for subreddit age. If the subreddit was created # after the default start date of March 15, 2018, use the # creation date as the starting point instead. 
subreddit_created = int(reddit.subreddit(subreddit_name).created_utc) subreddit_created_date = timekeeping.convert_to_string(subreddit_created) if SETTINGS.pushshift_subscriber_start > subreddit_created_date: start_date = SETTINGS.pushshift_subscriber_start else: start_date = subreddit_created_date logger.info("Subscribers PS: Retrieval will start from {}.".format(start_date)) list_of_days_to_get = timekeeping.get_series_of_days(start_date, yesterday_string) else: today_string = timekeeping.convert_to_string(time.time()) list_of_days_to_get = [today_string] api_search_query = ( "https://api.pushshift.io/reddit/search/submission/" "?subreddit={}&after={}&before={}&sort_type=created_utc" "&fields=subreddit_subscribers,created_utc&size=750" ) # Get the data from Pushshift as JSON. We try to get a submission # per day and record the subscribers. list_chunked = [ list_of_days_to_get[i : i + chunk_size] for i in range(0, len(list_of_days_to_get), chunk_size) ] # Iterate over our chunks of days. for chunk in list_chunked: processed_days = [] # Set time variables. first_day = chunk[0] start_time = timekeeping.convert_to_unix(first_day) last_day = chunk[-1] end_time = timekeeping.convert_to_unix(last_day) + 86399 # Access Pushshift for the data. retrieved_data = subreddit_pushshift_access( api_search_query.format(subreddit_name, start_time, end_time) ) if "data" not in retrieved_data: continue else: returned_data = retrieved_data["data"] # Process the days in our chunk and get the earliest matching # submission's subscriber count for each day. for day in chunk: for unit in returned_data: unit_day = timekeeping.convert_to_string(int(unit["created_utc"])) if unit_day not in processed_days and day == unit_day: subscribers = int(unit["subreddit_subscribers"]) subscribers_dictionary[unit_day] = subscribers processed_days.append(unit_day) logger.info( "Subscribers PS: Data for {}: " "{:,} subscribers.".format(unit_day, subscribers) ) # Check to see if all the days are accounted for in our chunk. # If there are missing days, we pull a manual check for those. # This check involves getting just a single submission from the # day, usually the earliest one. processed_days.sort() if processed_days != chunk: missing_days = [x for x in chunk if x not in processed_days] logger.info( "Subscribers PS: Still missing data for {}. " "Retrieving individually.".format(missing_days) ) # If there are multiple missing days, run a quick check # to see if there are *any* posts at all in the time # frame. If they aren't any, skip and do not fetch # individual data for each day. if len(missing_days) > 1: first_day = timekeeping.convert_to_unix(missing_days[0]) last_day = timekeeping.convert_to_unix(missing_days[-1]) + 86399 multiple_missing_query = ( "https://api.pushshift.io/reddit/search/submission/" "?subreddit={}&after={}&before={}&sort_type=created_utc" "&size=1".format(subreddit_name, first_day, last_day) ) multiple_data = subreddit_pushshift_access(multiple_missing_query).get( "data", None ) if not multiple_data: logger.info( "Subscribers PS: No posts from {} to {}. 
" "Skipping chunk...".format(missing_days[0], missing_days[-1]) ) continue for day in missing_days: day_start = timekeeping.convert_to_unix(day) day_end = day_start + 86399 day_query = ( "https://api.pushshift.io/reddit/search/submission/?subreddit={}" "&after={}&before={}&sort_type=created_utc&size=1" ) day_data = subreddit_pushshift_access( day_query.format(subreddit_name, day_start, day_end) ) if "data" not in day_data: continue else: returned_submission = day_data["data"] # We have data here, so let's add it to the dictionary. if len(returned_submission) > 0: if "subreddit_subscribers" in returned_submission[0]: subscribers = returned_submission[0]["subreddit_subscribers"] subscribers_dictionary[day] = int(subscribers) logger.info( "Subscribers PS: Individual data for {}: " "{:,} subscribers.".format(day, subscribers) ) # If we have data we can save it and insert it into the database. if len(subscribers_dictionary.keys()) != 0: database.subscribers_insert(subreddit_name, subscribers_dictionary) logger.info("Subscribers PS: Recorded subscribers for r/{}.".format(subreddit_name)) return subscribers_dictionary def subreddit_subscribers_redditmetrics_historical_recorder(subreddit_name, fetch_mode=False): """Retrieve data from the equivalent RedditMetrics page. This goes back to November 2012 until March 15, 2018. After March 15 we can rely on Pushshift data, which is more consistent, since RM can give false data for recent dates after then and *must* not be relied upon for that. :param subreddit_name: Name of a subreddit. :param fetch_mode: When activated this will return the traffic data as a dictionary instead of saving it. :return: `None`. """ final_dictionary = {} # Access the site and retrieve its serialized data. If we encounter # an error loading the site page, return. try: response = urlopen("https://frontpagemetrics.com/r/{}/".format(subreddit_name)) except (HTTPError, URLError): return # Check if the subreddit exists on the website. If the sub doesn't # exist on RedditMetrics, it possibly post-dates the site, and so # the function should exit. logger.info("Subscribers RM: Retrieving historical r/{} data...".format(subreddit_name)) html_source = response.read() new_part = str(html_source).split("data:") try: total_chunk = new_part[2][1:] total_chunk = total_chunk.split("pointSize", 1)[0].replace("],", "]").strip() total_chunk = total_chunk.replace("\\n", " ") total_chunk = total_chunk.replace("\\'", "'")[5:-5].strip() except IndexError: return # Now convert the raw data from the website into a list. list_of_days = total_chunk.split("},") list_of_days = [x.strip() for x in list_of_days] list_of_days = [x.replace("{", "") for x in list_of_days] list_of_days = [x.replace("}", "") for x in list_of_days] # Iterate over this list and form a proper dictionary out of it. # This dictionary is indexed by day with subscriber counts as # values. for entry in list_of_days: date_string = re.findall(r"'(\S+)'", entry)[0] # We have code here to reject any date that's after 2018-03-15, # since RM gives false data after then. # That date is defined as 1521072001 in Unix time. date_string_num = timekeeping.convert_to_unix(date_string) if date_string_num > 1521072001: continue subscribers = entry.split("a: ", 1)[1] subscribers = int(subscribers) if subscribers != 0: final_dictionary[date_string] = subscribers # If we're in fetch mode, just return the dictionary and do NOT save # it to the database. if fetch_mode: return final_dictionary # Insert the data into the new database. 
database.subscribers_insert(subreddit_name, final_dictionary) return def subreddit_pushshift_oldest_retriever(subreddit_name): """This function uses Pushshift to retrieve the oldest posts on a subreddit and formats it as a Markdown list. :param subreddit_name: The community we are looking for. :return: A Markdown text paragraph containing the oldest posts as links
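# The milestone table built above reports "Average Daily Change" as the subscriber
# delta between two milestones divided by the number of days separating them.  A small
# worked example with invented numbers: growing from 25,000 to 50,000 subscribers over
# 125 days yields "+200.00" per day.
def _example_daily_delta(previous_milestone=25_000, milestone=50_000,
                         days_difference=125):
    milestone_delta = milestone - previous_milestone
    if days_difference != 0:
        return "{:+,.2f}".format(milestone_delta / days_difference)  # '+200.00'
    return "---"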
<reponame>ksmit799/POTCO-PS # File: C (Python 2.4) from direct.gui.DirectGui import * from pandac.PandaModules import * from direct.interval.IntervalGlobal import * from direct.directnotify import DirectNotifyGlobal from pirates.economy import EconomyGlobals from pirates.economy.EconomyGlobals import * from pirates.piratesbase import PiratesGlobals from pirates.reputation import ReputationGlobals from pirates.piratesbase import PLocalizer from pirates.battle import WeaponGlobals from pirates.battle.EnemySkills import * from direct.task import Task from pirates.piratesgui import RadialMenu from pirates.piratesbase import TeamUtils from pirates.uberdog.UberDogGlobals import InventoryType from pirates.effects.WispSpiral import WispSpiral from pirates.battle import Wand from pirates.pvp import DistributedPVPInstance from pirates.uberdog.DistributedInventoryBase import DistributedInventoryBase from pirates.piratesbase import Freebooter from direct.distributed.ClockDelta import * from pirates.uberdog import TradableInventoryBase from pirates.inventory import InventoryUICombatTrayGrid from pirates.inventory import InventoryUICharmGrid from pirates.inventory.InventoryGlobals import Locations from pirates.minigame import PotionGlobals from pirates.inventory import ItemGlobals from pirates.battle import EnemyGlobals import time import math import PiratesGuiGlobals from SkillButton import SkillButton from GuiTray import GuiTray from GuiButton import GuiButton import copy from TonicsPanel import TonicsPanel STAFF_INTERVAL = 0.40000000000000002 AIM_ASSIST_DURATION = 2.0 class WeaponButton(GuiButton): notify = DirectNotifyGlobal.directNotify.newCategory('WeaponButton') def __init__(self, hotkeys = (), hotkeyLabel = None, helpText = None, parent = None, showQuant = 0, **kw): gui = loader.loadModel('models/gui/toplevel_gui') button_gui = loader.loadModel('models/textureCards/skillIcons') buttonImage = button_gui.find('**/box_base') buttonColors = ((1, 1, 1, 1), (0.80000000000000004, 0.80000000000000004, 0.80000000000000004, 1), (1.0, 1.0, 1.0, 1), (0.59999999999999998, 0.59999999999999998, 0.59999999999999998, 1)) optiondefs = (('relief', None, None), ('state', 'normal', None), ('frameSize', (0, 0.12, 0, 0.12), None), ('image', buttonImage, None), ('image_scale', 0.13, None), ('image_pos', (0.059999999999999998, 0, 0.059999999999999998), None), ('extraArgs', [ 0], None)) self.defineoptions(kw, optiondefs) GuiButton.__init__(self, hotkeys = hotkeys, hotkeyLabel = hotkeyLabel, helpText = helpText, parent = parent) self.initialiseoptions(WeaponButton) self.weaponId = 0 self.category = 0 self.card = None self.showQuant = showQuant self.quantLabel = None self.invReq = None self.weaponLeveledIval = None self.expGainIval = None self.lastLevel = None self.lastExp = None gui.removeNode() def setWeaponId(self, weaponId): self.weaponId = weaponId if not self.invReq: self.invReq = DistributedInventoryBase.getInventory(localAvatar.getInventoryId(), self.initInventory) def initInventory(self, inventory): self.invReq = None if not inventory: self.notify.warning('Could not get inventory for setWeaponid: %s' % self.weaponId) return None if not self.weaponId: return None if self.isEmpty(): return None self.category = WeaponGlobals.getRepId(self.weaponId) card = loader.loadModel('models/gui/gui_icons_weapon') gui = None if not Freebooter.getPaidStatus(base.localAvatar.getDoId()): if not Freebooter.allowedFreebooterWeapon(self.category): if EconomyGlobals.getItemCategory(self.weaponId) == ItemType.WEAPON: gui = 
loader.loadModel('models/gui/toplevel_gui') if gui: self['geom'] = gui.find('**/pir_t_gui_gen_key_subscriber') self['geom_scale'] = 0.20000000000000001 self['geom_pos'] = (0.059999999999999998, 0, 0.059999999999999998) gui.removeNode() else: asset = ItemGlobals.getIcon(self.weaponId) if asset: self['geom'] = card.find('**/%s' % asset) self['geom_scale'] = 0.10000000000000001 self['geom_pos'] = (0.059999999999999998, 0, 0.059999999999999998) self.quantLabel = DirectLabel(parent = self, relief = None, state = DGG.DISABLED) if EconomyGlobals.getItemCategory(self.weaponId) == ItemType.WEAPON: repValue = inventory.getReputation(self.category) self.updateRep(repValue) card.removeNode() def destroy(self): self.linkedCannon = 0 if self.invReq: DistributedInventoryBase.cancelGetInventory(self.invReq) self.invReq = None if self.weaponLeveledIval: self.weaponLeveledIval.finish() self.weaponLeveledIval = None if self.expGainIval: self.expGainIval.finish() self.expGainIval = None GuiButton.destroy(self) def updateRep(self, value): (level, leftoverValue) = ReputationGlobals.getLevelFromTotalReputation(self.category, value) max = ReputationGlobals.getReputationNeededToLevel(self.category, level) if self.lastLevel == None or self.lastExp == None: self.lastLevel = level self.lastExp = localAvatar.getInventory().getReputation(self.category) return None if self.lastExp: expChange = value - self.lastExp if expChange: localAvatar.guiMgr.gameGui.createExpAlert(expChange, 4.0, Vec3(0.625, 0.0, -0.80000000000000004), Vec3(0.0, 0.0, 0.25)) if self.weaponLeveledIval: self.weaponLeveledIval.finish() self.weaponLeveledIval = None if self.expGainIval: self.expGainIval.finish() self.expGainIval = None if self.lastLevel != level: glowColor = Vec4(0.20000000000000001, 0.20000000000000001, 0.75, 1.0) glowObj = self startColor = Vec4(1, 1, 1, 1) startPos = self.getPos() movePos = startPos + Point3(0, 0, 0.014999999999999999) self.weaponLeveledIval = Sequence(Parallel(LerpPosInterval(self, 0.20000000000000001, movePos, startPos), LerpColorScaleInterval(glowObj, 0.20000000000000001, glowColor, startColor, blendType = 'easeInOut')), Wait(0.40000000000000002), Parallel(LerpColorScaleInterval(glowObj, 3, startColor, blendType = 'easeInOut'), Sequence(LerpPosInterval(self, 0.085000000000000006, startPos, movePos), LerpPosInterval(self, 0.059999999999999998, movePos - Point3(0.0, 0.0, 0.0050000000000000001), startPos), LerpPosInterval(self, 0.035000000000000003, startPos, movePos - Point3(0.0, 0.0, 0.0050000000000000001))))) self.weaponLeveledIval.start() elif self.lastExp != value: glowColor = Vec4(0.14999999999999999, 0.14999999999999999, 0.59999999999999998, 1.0) glowObj = self startColor = Vec4(1, 1, 1, 1) startPos = self.getPos() movePos = startPos + Point3(0, 0, 0.01) self.expGainIval = Sequence(Parallel(LerpPosInterval(self, 0.10000000000000001, movePos, startPos), LerpColorScaleInterval(glowObj, 0.10000000000000001, glowColor, startColor, blendType = 'easeInOut')), Wait(0.10000000000000001), Parallel(LerpColorScaleInterval(glowObj, 1, startColor, glowColor, blendType = 'easeInOut'), Sequence(LerpPosInterval(self, 0.085000000000000006, startPos, movePos), LerpPosInterval(self, 0.059999999999999998, movePos - Point3(0.0, 0.0, 0.0050000000000000001), startPos), LerpPosInterval(self, 0.035000000000000003, startPos, movePos - Point3(0.0, 0.0, 0.0050000000000000001))))) self.expGainIval.start() self.lastExp = value self.lastLevel = level class TonicButton(SkillButton): notify = 
DirectNotifyGlobal.directNotify.newCategory('TonicButton') def __init__(self): assocAmmo = ItemGlobals.getAllHealthIds() SkillButton.__init__(self, InventoryType.Potion1, self.callback, 0, 0, showQuantity = True, showHelp = False, showRing = True, hotkey = 'T', assocAmmo = assocAmmo) self.skillButton['geom_scale'] = 0.10000000000000001 self.invReq = None self._buttonDisabled = False self._hotkeysEnabled = True self.enableHotkeys(self.skillId) def getAmmoCat(self): return InventoryType.ItemTypeConsumable def enableHotkeys(self, skillId = None): if skillId == None: skillId = self.skillId self._hotkeySkillId = skillId self._hotkeysEnabled = True if not self._buttonDisabled: self._enableHotkeys() def disableHotkeys(self): self._hotKeysEnabled = False if not self._buttonDisabled: self._disableHotkeys() def disableButton(self): self._buttonDisabled = True if self._hotkeysEnabled: self._disableHotkeys() def enableButton(self): self._buttonDisabled = False if self._hotkeysEnabled: self._enableHotkeys() def _enableHotkeys(self): self.accept('t', self.callback, [ self._hotkeySkillId]) self.accept('shift-t', self.callback, [ self._hotkeySkillId]) def _disableHotkeys(self): self.ignore('t') self.ignore('shift-t') def callback(self, skillId): if localAvatar.getGameState() in [ 'Injured', 'Dying', 'Getup']: return None realSkillId = WeaponGlobals.getSkillIdForAmmoSkillId(skillId) if not realSkillId: realSkillId = skillId localAvatar.guiMgr.combatTray.trySkill(InventoryType.UseItem, realSkillId, 0) def updateBestTonic(self): if not (self.invReq) and localAvatar.getInventoryId(): self.invReq = DistributedInventoryBase.getInventory(localAvatar.getInventoryId(), self.initInventory) def getBestTonic(self, allowNone = 0): defaultTonic = ItemGlobals.TONIC if allowNone: defaultTonic = None inv = localAvatar.getInventory() if inv and inv.isReady(): tonics = dict(map(lambda x: (x.getType(), x.getCount()), filter(lambda x: ItemGlobals.isAutoTonic(x.getType()), inv.getConsumables().values()))) if tonics.get(ItemGlobals.ROAST_PORK) > 0: return ItemGlobals.ROAST_PORK idealAmount = max(0, localAvatar.getMaxHp() * 0.80000000000000004 - localAvatar.getHp()) bestTonicId = defaultTonic for (tonicId, count) in sorted(tonics.iteritems()): if count: bestTonicId = tonicId skillId = WeaponGlobals.getSkillIdForAmmoSkillId(tonicId) if WeaponGlobals.getAttackSelfHP(skillId) > idealAmount: break WeaponGlobals.getAttackSelfHP(skillId) > idealAmount return bestTonicId return defaultTonic def initInventory(self, inventory): self.invReq = None if not inventory: self.notify.warning('Could not get inventory for setTonicid: %s' % self.skillId) return None oldTonic = self.skillId newTonic = self.getBestTonic() quantity = inventory.getItemQuantity(InventoryType.ItemTypeConsumable, newTonic) self.updateQuantity(quantity) realAmmoSkillId = WeaponGlobals.getSkillIdForAmmoSkillId(newTonic) if realAmmoSkillId == oldTonic: return None self.updateSkillId(realAmmoSkillId) if hasattr(self, 'skillRingIval'): if self.skillRingIval.isPlaying(): self.setGeomColor(0.5, 0.5, 0.5, 1.0) self.disableHotkeys() self.enableHotkeys(newTonic) self.skillButton['geom_scale'] = 0.10000000000000001 def destroy(self): self.enableButton() self.disableHotkeys() if self.invReq: DistributedInventoryBase.cancelGetInventory(self.invReq) self.invReq = None SkillButton.destroy(self) class ShipRepairButton(SkillButton): notify = DirectNotifyGlobal.directNotify.newCategory('ShipRepairButton') def __init__(self): self._skillIconName = 'sail_come_about' self._skillId = 
InventoryType.ShipRepairKit SkillButton.__init__(self, self._skillId, self.callback, 0, 0, showQuantity = True, showHelp = False, showRing = True, hotkey = 'T') self.skillButton['geom_scale'] = 0.10000000000000001 self.invReq = None self._buttonDisabled = False self._hotkeysEnabled = True self.enableHotkeys() self._ignoredAmountUpdates = 0 self.accept('inventoryQuantity-%s-%s' % (localAvatar.getInventoryId(), self._skillId), self._inventoryQuantityChanged) self._updateAmount() def destroy(self): self.enableButton() self.disableHotkeys() if self.invReq: DistributedInventoryBase.cancelGetInventory(self.invReq) self.invReq = None SkillButton.destroy(self) def enableHotkeys(self): self._hotkeysEnabled = True if not self._buttonDisabled: self._enableHotkeys() def disableHotkeys(self): self._hotKeysEnabled = False if not self._buttonDisabled: self._disableHotkeys() def disableButton(self): self._buttonDisabled = True if self._hotkeysEnabled: self._disableHotkeys() def enableButton(self): self._buttonDisabled = False if self._hotkeysEnabled: self._enableHotkeys() def _enableHotkeys(self): self.accept('t', self.callback, [ InventoryType.ShipRepairKit]) self.accept('shift-t', self.callback, [ InventoryType.ShipRepairKit]) def _disableHotkeys(self): self.ignore('t') self.ignore('shift-t') def _updateAmount(self): if not self.invReq: self.invReq = DistributedInventoryBase.getInventory(localAvatar.getInventoryId(), self._gotInventory) def _gotInventory(self, inventory): self.invReq = None if not inventory: self._updateAmount() return None self.updateQuantity(inventory.getStackQuantity(self._skillId)) def updateQuantity(self, quantity): SkillButton.updateQuantity(self, quantity) self.updateSkillId(self._skillId) if hasattr(self, 'skillRingIval') and self.skillRingIval: if self.skillRingIval.isPlaying(): self.setGeomColor(0.5, 0.5, 0.5, 1.0) def _inventoryQuantityChanged(self, amount): if self._ignoredAmountUpdates > 0: self._ignoredAmountUpdates -= 1 else: self.updateQuantity(amount) def callback(self, skillId): if localAvatar.guiMgr.combatTray.trySkill(InventoryType.UseItem, skillId, 0): self._ignoredAmountUpdates += 1 self.updateQuantity(self.quantity - 1) class CombatTray(GuiTray): notify = DirectNotifyGlobal.directNotify.newCategory('CombatTray') InstantCast = base.config.GetBool('instant-cast', 0) SkillButtonEvents = ('attack', 'mouse2', 'attack-up') COMBO_WINDOW_START = 0.29999999999999999 COMBO_WINDOW_END = 1.0 RECOVERY_TIME = 0.75 WINDOW_LENGTH = 0.40000000000000002 BUTTON_MASH_WINDOW = 1.3 BASIC_ATTACKS = (InventoryType.CutlassHack, InventoryType.PistolShoot, InventoryType.MusketShoot, InventoryType.MeleePunch, InventoryType.DaggerCut, InventoryType.GrenadeThrow, InventoryType.StaffBlast, InventoryType.DollAttune, InventoryType.DollPoke) IGNORES_INPUT_LOCK = (InventoryType.UseItem, InventoryType.UsePotion, EnemySkills.PISTOL_RELOAD, EnemySkills.GRENADE_RELOAD) NO_PRINT_RANGE = (InventoryType.UseItem, InventoryType.UsePotion, EnemySkills.PISTOL_RELOAD, EnemySkills.PISTOL_CHARGE, EnemySkills.GRENADE_RELOAD, EnemySkills.GRENADE_CHARGE, EnemySkills.STAFF_FIZZLE, EnemySkills.STAFF_WITHER_CHARGE, EnemySkills.STAFF_SOULFLAY_CHARGE, EnemySkills.STAFF_PESTILENCE_CHARGE, EnemySkills.STAFF_HELLFIRE_CHARGE, EnemySkills.STAFF_BANISH_CHARGE, EnemySkills.STAFF_DESOLATION_CHARGE, EnemySkills.LEFT_BROADSIDE, EnemySkills.RIGHT_BROADSIDE, EnemySkills.DOLL_UNATTUNE, InventoryType.CutlassHack, InventoryType.CutlassSlash, InventoryType.CutlassCleave, InventoryType.CutlassFlourish, InventoryType.CutlassStab, 
InventoryType.DaggerCut, InventoryType.DaggerSwipe, InventoryType.DaggerGouge, InventoryType.DaggerEviscerate) L1_COMBO_ATTACKS = (InventoryType.CutlassHack, InventoryType.DaggerCut) COMBO_ATTACKS = (InventoryType.CutlassHack, InventoryType.CutlassSlash, InventoryType.CutlassCleave, InventoryType.CutlassFlourish, InventoryType.CutlassStab, InventoryType.DaggerCut, InventoryType.DaggerSwipe, InventoryType.DaggerGouge, InventoryType.DaggerEviscerate) NO_VOLLEY_PROJECTILES = (EnemySkills.STAFF_WITHER_CHARGE, EnemySkills.STAFF_SOULFLAY_CHARGE, EnemySkills.STAFF_HELLFIRE_CHARGE, EnemySkills.STAFF_PESTILENCE_CHARGE, EnemySkills.STAFF_BANISH_CHARGE, EnemySkills.STAFF_DESOLATION_CHARGE, EnemySkills.PISTOL_CHARGE, EnemySkills.PISTOL_RELOAD, EnemySkills.GRENADE_CHARGE, EnemySkills.GRENADE_RELOAD, InventoryType.UseItem) def __init__(self, parent, **kw): optiondefs = (('relief', None, None),) self.defineoptions(kw, optiondefs) GuiTray.__init__(self, parent, 0.35999999999999999, 0.12) self.initialiseoptions(CombatTray) self.CHARGE_MODE_TIME_THRESHOLD = 0.29999999999999999 self.thresholdHit = 0 self.aimAssistTarget = None self.skillTray = RadialMenu.SkillTray() self.skillsHidden = 1 self.weaponId = 0 self.rep = 0 self.numberOfItems = -1 self.ammoSkillId = 0 self.lastAmmoSkillId = { } self.lastAttack = () self.onLastAttack = 0 self.chargeTime = 0 self.maxCharge = 0 self.tryShoot = 0 self.tryAim = 0 self.volley = 0 self.linkedCannon
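# A standalone, game-independent sketch of the "best tonic" rule implemented in
# TonicButton.getBestTonic above: heal back up to roughly 80% of max HP, preferring the
# weakest tonic in stock whose heal value covers the missing health, otherwise the
# strongest one available. The dictionaries below are illustrative; the real code reads
# tonic ids, counts, and heal amounts from the inventory, ItemGlobals and WeaponGlobals.
def best_tonic_sketch(tonics_in_stock, heal_amounts, current_hp, max_hp):
    """tonics_in_stock: {tonic_id: count}; heal_amounts: {tonic_id: hp restored}."""
    needed = max(0, max_hp * 0.8 - current_hp)
    best = None
    for tonic_id in sorted(tonics_in_stock):
        if tonics_in_stock[tonic_id] <= 0:
            continue
        best = tonic_id
        if heal_amounts.get(tonic_id, 0) > needed:
            break
    return best


# Missing 30 HP to the 80% mark: the weak tonic (heals 25) is passed over for the
# medium one (heals 50), and the strong one (heals 100) is never reached.
print(best_tonic_sketch({1: 3, 2: 1, 3: 2}, {1: 25, 2: 50, 3: 100}, current_hp=50, max_hp=100))  # -> 2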
import gc import math import multiprocessing import shutil from os.path import join from tempfile import mkdtemp from typing import Sequence, Optional import numpy from catboost import CatBoostRegressor, CatBoostError, Pool from aydin.regression.base import RegressorBase from aydin.regression.cb_utils.callbacks import CatBoostStopTrainingCallback from aydin.util.log.log import lsection, lprint class CBRegressor(RegressorBase): """ The CatBoost Regressor uses the gradient boosting library <a href="https://github.com/catboost">CatBoost</a> to perform regression from a set of feature vectors and target values. CatBoost main advantage is that it is very fast compared to other gradient boosting libraries -- in particular when GPU acceleration is available. Compared to other libraries (lightGBM, XGBoost) it is much easier to ship the GPU enabled version because it just works. It performs comparably and sometimes better than other libraries like LightGBM. """ model: CatBoostRegressor def __init__( self, num_leaves: int = None, max_num_estimators: Optional[int] = None, min_num_estimators: Optional[int] = None, max_bin: int = None, learning_rate: Optional[float] = None, loss: str = 'l1', patience: int = 32, compute_load: float = 0.95, gpu: bool = True, gpu_devices: Optional[Sequence[int]] = None, ): """Constructs a CatBoost regressor. Parameters ---------- num_leaves : int Number of leaves in the decision trees. We recommend values between 128 and 512. (advanced) max_num_estimators : Optional[int] Maximum number of estimators (trees). Typical values range from 1024 to 4096. Use larger values for more difficult datasets. If training stops exactly at these values that is a sign you need to increase this number. Quality of the results typically increases with the number of estimators, but so does computation time too. We do not recommend using a value of more than 10000. min_num_estimators : Optional[int] Minimum number of estimators. Training restarts with a lower learning rate if the number of estimators is too low as defined by this threshold. Regressor that have too few estimators typically lead to poor results. (advanced) max_bin : int Maximum number of allowed bins. The features are quantised into that many bins. Higher values achieve better quantisation of features but also leads to longer training and more memory consumption. We do not recommend changing this parameter. When using GPU training the number of bins must be equal or below 254. (advanced) learning_rate : Optional[float] Learning rate for the catboost model. The learning rate is determined automatically if the value None is given. We recommend values around 0.01. (advanced) loss : str Type of loss to be used. Van be 'l1' for L1 loss (MAE), and 'l2' for L2 loss (RMSE), 'Lq:q=1.5' with q>=1 real number as power coefficient (here q=1.5), 'Poisson' for Poisson loss, 'Huber:delta=0.1' for Huber loss with delta=0.1, 'Expectile:alpha=0.5' for expectile loss with alpha parameter set to 0.5, or 'expectile' as a shortcut for 'Expectile:alpha=0.5'. We recommend using: 'l1', 'l2', and 'Poisson'. (advanced) patience : int Number of rounds after which training stops if no improvement occurs. (advanced) compute_load : float Allowed load on computational resources in percentage, typically used for CPU training when deciding on how many available cores to use. (advanced) gpu : bool True enables GPU acceleration if available. Fallsback to CPU if it fails for any reason. 
(advanced) gpu_devices : Optional[Sequence[int]] List of GPU device indices to be used by CatBoost. For example, to use GPUs of index 0 and 1, set to '0:1'. For a range of devices set to '0-3' for example for all devices 0,1,2,3. It is recommended to only use together similar or ideally identical GPU devices. (advanced) """ super().__init__() self.force_verbose_eval = False self.stop_training_callback = CatBoostStopTrainingCallback() self.num_leaves = 512 if num_leaves is None else num_leaves self.max_num_estimators = max_num_estimators self.min_num_estimators = min_num_estimators if max_bin is None: self.max_bin = 254 if gpu else 512 else: self.max_bin = max_bin self.learning_rate = learning_rate self.metric = loss self.early_stopping_rounds = patience self.compute_load = compute_load self.gpu = gpu self.gpu_devices = gpu_devices with lsection("CB Regressor"): lprint(f"patience: {self.early_stopping_rounds}") lprint(f"gpu: {self.gpu}") def recommended_max_num_datapoints(self) -> int: """Recommended maximum number of datapoints Returns ------- int """ return int(40e6 if self.gpu else 1e6) def _get_params( self, num_samples, num_features, learning_rate, dtype, use_gpu, train_folder ): # Setting min data in leaf: min_data_in_leaf = 20 + int(0.01 * (num_samples / self.num_leaves)) lprint(f'min_data_in_leaf: {min_data_in_leaf}') # Normalise losses/metrics/objectives: objective: str = self.metric if objective.lower() == 'l1': objective = 'MAE' elif objective.lower() == 'l2': objective = 'RMSE' elif objective.lower() == 'poisson': objective = 'Poisson' elif objective.lower() == 'expectile': objective = 'Expectile:alpha=0.5' else: objective = 'l1' lprint(f'objective: {objective}') # We pick a max depth: max_depth = max(3, int(math.log2(self.num_leaves)) - 1) max_depth = min(max_depth, 8) if use_gpu else max_depth lprint(f'max_depth: {max_depth}') # If the dataset is really big we want to switch to pinned memeory: gpu_ram_type = 'CpuPinnedMemory' if num_samples > 10e6 else 'GpuRam' lprint(f'gpu_ram_type: {gpu_ram_type}') # Setting max number of iterations: if self.max_num_estimators is None: iterations = 4096 if use_gpu else 2048 else: iterations = self.max_num_estimators lprint(f'max_num_estimators: {iterations}') params = { "iterations": iterations, "task_type": "GPU" if use_gpu else "CPU", "devices": 'NULL' if self.gpu_devices is None else ':'.join(self.gpu_devices), # uses all available GPUs 'objective': objective, "loss_function": self.metric.upper(), "allow_writing_files": True, "train_dir": train_folder, "max_bin": self.max_bin, "rsm": None if use_gpu else 0.8, # same as GBM "thread_count": max( 1, int(self.compute_load * multiprocessing.cpu_count()) ), "gpu_cat_features_storage": gpu_ram_type, 'max_depth': max_depth, 'early_stopping_rounds': self.early_stopping_rounds, 'bagging_temperature': 1, 'min_data_in_leaf': min_data_in_leaf, 'l2_leaf_reg': 30, 'feature_border_type': 'UniformAndQuantiles', # 'verbose_eval' : 10, 'metric_period': 50 if use_gpu else 1, # "num_leaves": self.num_leaves, "learning_rate": learning_rate, } # Note: we could add optional automatic meta-parameter tunning by using cross val: # https://effectiveml.com/using-grid-search-to-optimise-catboost-parameters.html return params def stop_fit(self): self.stop_training_callback.continue_training = False def _fit( self, x_train, y_train, x_valid=None, y_valid=None, regressor_callback=None ): with lsection("CatBoost regressor fitting:"): nb_data_points = y_train.shape[0] self.num_features = x_train.shape[-1] has_valid_dataset 
= x_valid is not None and y_valid is not None lprint(f"Number of data points: {nb_data_points}") if has_valid_dataset: lprint(f"Number of validation data points: {y_valid.shape[0]}") lprint(f"Number of features per data point: {self.num_features}") # Train folder to store training info: train_folder = mkdtemp(prefix="catboost_training_") self.__epoch_counter = 0 model = None with lsection( f"CatBoost regressor fitting now using {f'GPU({self.gpu_devices})' if self.gpu else 'CPU'} " ): # CatBoost prefers float32 arrays: x_train = x_train.astype(numpy.float32, copy=False) y_train = y_train.astype(numpy.float32, copy=False) xy_train_pool = Pool(data=x_train, label=y_train) # Keep this for later: x_train_shape = x_train.shape y_train_shape = y_train.shape x_train_dtype = x_train.dtype # Give a chance to reclaim this memory if needed: x_train, y_train = None, None # CatBoost fails (best_iter == 0 or too small) sometimes to train if learning rate is too high, this loops # tries increasingly smaller learning rates until training succeeds (best_iter>min_n_estimators) learning_rate = self.learning_rate # Default min num of estimators: if self.min_num_estimators is None: min_num_estimators = 1024 if self.gpu else 512 else: min_num_estimators = self.min_num_estimators for i in range(10): if not self.stop_training_callback.continue_training: break lprint( f"Trying learning rate of '{learning_rate}' (None -> automatic)" ) # The purpose of this try block is to protect against failure to use GPU. try: params = self._get_params( num_samples=nb_data_points, num_features=self.num_features, learning_rate=learning_rate, dtype=x_train_dtype, use_gpu=self.gpu, train_folder=train_folder, ) lprint(f"Initialising CatBoost with {params}") model = CatBoostRegressor(**params) # Logging callback: class MetricsCheckerCallback: def after_iteration(self, info): iteration = info.iteration metrics = info.metrics lprint(f"Iteration: {iteration} metrics: {metrics}") return True # Callbacks: callbacks = None if self.gpu else [MetricsCheckerCallback()] # # When to be silent? when we actually can printout the logs. silent = not self.gpu lprint( f"Fitting CatBoost model for: X{x_train_shape} -> y{y_train_shape}" ) model.fit( X=xy_train_pool, eval_set=(x_valid, y_valid) if has_valid_dataset else None, early_stopping_rounds=self.early_stopping_rounds, use_best_model=has_valid_dataset, callbacks=callbacks, silent=silent, ) except CatBoostError as e: print(e) lprint("GPU training likely failed, switching to CPU.") self.gpu = False # next attempt next... continue # Training succeeds when the best iteration is not the zeroth's iteration. # best_iteration_ might be None if there is no validation data provided... if ( model.best_iteration_ is None or model.best_iteration_ > min_num_estimators ): self.learning_rate = learning_rate lprint( f"CatBoost fitting succeeded! new learning rate for regressor: {learning_rate}" ) break else: # Reduce learning rate: if learning_rate is None: # If None we were using an automatic value, we set the learning rate so we can start # with the (relatively high) default value of 0.1 learning_rate = 2 * 0.1
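# The CBRegressor above is a thin wrapper around catboost.CatBoostRegressor. This is a
# minimal, self-contained sketch of the underlying call it ends up configuring (MAE loss,
# float32 features, early stopping); the concrete values are illustrative, not aydin's
# computed defaults.
import numpy
from catboost import CatBoostRegressor

x = numpy.random.rand(1000, 8).astype(numpy.float32)
y = (x.sum(axis=1) + 0.1 * numpy.random.randn(1000)).astype(numpy.float32)

model = CatBoostRegressor(
    iterations=2048,        # maximum number of trees (estimators)
    loss_function='MAE',    # 'l1' in the wrapper's vocabulary
    learning_rate=0.01,
    max_depth=8,
    task_type='CPU',        # 'GPU' when acceleration is available
    silent=True,
)
model.fit(x[:800], y[:800], eval_set=(x[800:], y[800:]), early_stopping_rounds=32)
prediction = model.predict(x[800:])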
<reponame>vkotronis/artemis import datetime import json as classic_json import multiprocessing as mp import time from typing import Dict from typing import NoReturn import redis import requests import ujson as json from artemis_utils import get_hash from artemis_utils import get_logger from artemis_utils.constants import CONFIGURATION_HOST from artemis_utils.constants import NOTIFIER_HOST from artemis_utils.constants import PREFIXTREE_HOST from artemis_utils.db import DB from artemis_utils.envvars import BULK_TIMER from artemis_utils.envvars import DB_HOST from artemis_utils.envvars import DB_NAME from artemis_utils.envvars import DB_PASS from artemis_utils.envvars import DB_PORT from artemis_utils.envvars import DB_USER from artemis_utils.envvars import HISTORIC from artemis_utils.envvars import RABBITMQ_URI from artemis_utils.envvars import REDIS_HOST from artemis_utils.envvars import REDIS_PORT from artemis_utils.envvars import REST_PORT from artemis_utils.envvars import WITHDRAWN_HIJACK_THRESHOLD from artemis_utils.rabbitmq import create_exchange from artemis_utils.rabbitmq import create_queue from artemis_utils.redis import ping_redis from artemis_utils.redis import purge_redis_eph_pers_keys from artemis_utils.redis import redis_key from artemis_utils.service import wait_data_worker_dependencies from kombu import Connection from kombu import Producer from kombu import uuid from kombu.mixins import ConsumerProducerMixin from tornado.ioloop import IOLoop from tornado.web import Application from tornado.web import RequestHandler # logger log = get_logger() # shared memory object locks shared_memory_locks = { "data_worker": mp.Lock(), "monitored_prefixes": mp.Lock(), "configured_prefix_count": mp.Lock(), "config_timestamp": mp.Lock(), "insert_bgp_entries": mp.Lock(), "handle_bgp_withdrawals": mp.Lock(), "handled_bgp_entries": mp.Lock(), "outdate_hijacks": mp.Lock(), "insert_hijacks_entries": mp.Lock(), "monitors": mp.Lock(), "service_reconfiguring": mp.Lock(), } # global vars TABLES = ["bgp_updates", "hijacks", "configs"] VIEWS = ["view_configs", "view_bgpupdates", "view_hijacks"] SERVICE_NAME = "database" DATA_WORKER_DEPENDENCIES = [PREFIXTREE_HOST, NOTIFIER_HOST] def save_config(wo_db, config_hash, yaml_config, raw_config, comment, config_timestamp): try: query = ( "INSERT INTO configs (key, raw_config, time_modified, comment)" "VALUES (%s, %s, %s, %s);" ) wo_db.execute( query, ( config_hash, raw_config, datetime.datetime.fromtimestamp(config_timestamp), comment, ), ) except Exception: log.exception("failed to save config in db") def retrieve_most_recent_config_hash(ro_db): try: hash_ = ro_db.execute( "SELECT key from configs ORDER BY time_modified DESC LIMIT 1", fetch_one=True, ) if isinstance(hash_, tuple): return hash_[0] except Exception: log.exception("failed to retrieved most recent config hash in db") return None def retrieve_most_recent_raw_config(ro_db): return_msg = None try: entry = ro_db.execute( "SELECT key, raw_config, comment, time_modified from configs ORDER BY time_modified DESC LIMIT 1", fetch_one=True, ) if entry: return_msg = { "key": entry[0], "raw_config": entry[1], "comment": entry[2], "time_modified": entry[3].timestamp(), } except Exception: log.exception("failed to retrieved most recent config in db") return return_msg def store_monitored_prefixes_stat(wo_db, monitored_prefixes): try: wo_db.execute( "UPDATE stats SET monitored_prefixes=%s;", (len(monitored_prefixes),) ) except Exception: log.exception("exception") def store_configured_prefix_count_stat(wo_db, 
configured_prefix_count): try: wo_db.execute( "UPDATE stats SET configured_prefixes=%s;", (configured_prefix_count,) ) except Exception: log.exception("exception") def configure_database(msg, shared_memory_manager_dict): config = msg try: # DB variables ro_db = DB( application_name="database-rest-configuration-readonly", user=DB_USER, password=<PASSWORD>, host=DB_HOST, port=DB_PORT, database=DB_NAME, reconnect=True, autocommit=True, readonly=True, ) wo_db = DB( application_name="database-rest-configuration-write", user=DB_USER, password=<PASSWORD>, host=DB_HOST, port=DB_PORT, database=DB_NAME, ) # check newer config config_timestamp = shared_memory_manager_dict["config_timestamp"] if config["timestamp"] > config_timestamp: shared_memory_locks["service_reconfiguring"].acquire() shared_memory_manager_dict["service_reconfiguring"] = True shared_memory_locks["service_reconfiguring"].release() incoming_config_timestamp = config["timestamp"] if "timestamp" in config: del config["timestamp"] raw_config = "" if "raw_config" in config: raw_config = config["raw_config"] del config["raw_config"] comment = "" if "comment" in config: comment = config["comment"] del config["comment"] config_hash = get_hash(raw_config) latest_config_in_db_hash = retrieve_most_recent_config_hash(ro_db) if config_hash != latest_config_in_db_hash: save_config( wo_db, config_hash, config, raw_config, comment, incoming_config_timestamp, ) else: log.debug("database config is up-to-date") # extract monitors monitors = config.get("monitors", {}) shared_memory_locks["monitors"].acquire() shared_memory_manager_dict["monitors"] = monitors shared_memory_locks["monitors"].release() # now that the conf is changed, get and store additional stats from prefixtree r = requests.get( "http://{}:{}/monitoredPrefixes".format(PREFIXTREE_HOST, REST_PORT) ) shared_memory_locks["monitored_prefixes"].acquire() shared_memory_manager_dict["monitored_prefixes"] = r.json()[ "monitored_prefixes" ] store_monitored_prefixes_stat( wo_db, monitored_prefixes=shared_memory_manager_dict["monitored_prefixes"], ) shared_memory_locks["monitored_prefixes"].release() r = requests.get( "http://{}:{}/configuredPrefixCount".format(PREFIXTREE_HOST, REST_PORT) ) shared_memory_locks["configured_prefix_count"].acquire() shared_memory_manager_dict["configured_prefix_count"] = r.json()[ "configured_prefix_count" ] store_configured_prefix_count_stat( wo_db, configured_prefix_count=shared_memory_manager_dict[ "configured_prefix_count" ], ) shared_memory_locks["configured_prefix_count"].release() shared_memory_locks["config_timestamp"].acquire() shared_memory_manager_dict["config_timestamp"] = incoming_config_timestamp shared_memory_locks["config_timestamp"].release() shared_memory_locks["service_reconfiguring"].acquire() shared_memory_manager_dict["service_reconfiguring"] = False shared_memory_locks["service_reconfiguring"].release() return {"success": True, "message": "configured"} except Exception: log.exception("exception") shared_memory_locks["service_reconfiguring"].acquire() shared_memory_manager_dict["service_reconfiguring"] = False shared_memory_locks["service_reconfiguring"].release() return {"success": False, "message": "error during service configuration"} class MonitorHandler(RequestHandler): """ REST request handler for monitor information. 
""" def initialize(self, shared_memory_manager_dict): self.shared_memory_manager_dict = shared_memory_manager_dict def get(self): """ Simply provides the configured monitors (in the form of a JSON dict) to the requester """ self.write({"monitors": self.shared_memory_manager_dict["monitors"]}) class ConfigHandler(RequestHandler): """ REST request handler for configuration. """ def initialize(self, shared_memory_manager_dict): self.shared_memory_manager_dict = shared_memory_manager_dict self.ro_db = DB( application_name="database-rest-configuration-readonly", user=DB_USER, password=<PASSWORD>, host=DB_HOST, port=DB_PORT, database=DB_NAME, reconnect=True, autocommit=True, readonly=True, ) def get(self): """ Simply provides the raw configuration stored in the DB (with timestamp, hash and comment) to the requester. Format: { "key": <string>, "raw_config": <string>, "comment": <string>, "time_modified": <timestamp>, } """ most_recent_config = retrieve_most_recent_raw_config(self.ro_db) if most_recent_config: write_json = most_recent_config write_json["success"] = True else: write_json = {"success": False} self.write(write_json) def post(self): """ Configures database and responds with a success message. :return: {"success": True | False, "message": < message >} """ try: msg = json.loads(self.request.body) self.write(configure_database(msg, self.shared_memory_manager_dict)) except Exception: self.write( {"success": False, "message": "error during service configuration"} ) class HealthHandler(RequestHandler): """ REST request handler for health checks. """ def initialize(self, shared_memory_manager_dict): self.shared_memory_manager_dict = shared_memory_manager_dict def get(self): """ Extract the status of a service via a GET request. :return: {"status" : <unconfigured|running|stopped><,reconfiguring>} """ status = "stopped" shared_memory_locks["data_worker"].acquire() if self.shared_memory_manager_dict["data_worker_running"]: status = "running" shared_memory_locks["data_worker"].release() if self.shared_memory_manager_dict["service_reconfiguring"]: status += ",reconfiguring" self.write({"status": status}) class ControlHandler(RequestHandler): """ REST request handler for control commands. 
""" def initialize(self, shared_memory_manager_dict): self.shared_memory_manager_dict = shared_memory_manager_dict def start_data_worker(self): shared_memory_locks["data_worker"].acquire() if self.shared_memory_manager_dict["data_worker_running"]: log.info("data worker already running") shared_memory_locks["data_worker"].release() return "already running" shared_memory_locks["data_worker"].release() mp.Process(target=self.run_data_worker_process).start() return "instructed to start" def run_data_worker_process(self): try: with Connection(RABBITMQ_URI) as connection: shared_memory_locks["data_worker"].acquire() data_worker = DatabaseDataWorker( connection, self.shared_memory_manager_dict ) self.shared_memory_manager_dict["data_worker_running"] = True shared_memory_locks["data_worker"].release() log.info("data worker started") data_worker.run() except Exception: log.exception("exception") finally: shared_memory_locks["data_worker"].acquire() self.shared_memory_manager_dict["data_worker_running"] = False shared_memory_locks["data_worker"].release() log.info("data worker stopped") @staticmethod def stop_data_worker(): shared_memory_locks["data_worker"].acquire() try: with Connection(RABBITMQ_URI) as connection: with Producer(connection) as producer: command_exchange = create_exchange("command", connection) producer.publish( "", exchange=command_exchange, routing_key="stop-{}".format(SERVICE_NAME), serializer="ujson", ) except Exception: log.exception("exception") finally: shared_memory_locks["data_worker"].release() message = "instructed to stop" return message def post(self): """ Instruct a service to start or stop by posting a command. Sample request body { "command": <start|stop> } :return: {"success": True|False, "message": <message>} """ try: msg = json.loads(self.request.body) command = msg["command"] # start/stop data_worker if command == "start": message = self.start_data_worker() self.write({"success": True, "message": message}) elif command == "stop": message = self.stop_data_worker() self.write({"success": True, "message": message}) else: self.write({"success": False, "message": "unknown command"}) except Exception: log.exception("Exception") self.write({"success": False, "message": "error during control"}) class HijackCommentHandler(RequestHandler): """ REST request handler for hijack comments. """ def initialize(self, shared_memory_manager_dict): self.shared_memory_manager_dict = shared_memory_manager_dict self.wo_db = DB( application_name="database-rest-hijack-comment-write", user=DB_USER, password=<PASSWORD>, host=DB_HOST, port=DB_PORT, database=DB_NAME, ) def post(self): """ Receives a "hijack-comment" message and stores it in DB. :param message: { "key": <str>, "comment": <str> } :return: - """ raw = json.loads(self.request.body) log.debug("payload: {}".format(raw)) try: self.wo_db.execute( "UPDATE hijacks SET comment=%s WHERE key=%s;", (raw["comment"], raw["key"]), ) self.write({"success": True, "message": ""}) except Exception: self.write({"success": False, "message": "unknown error"}) log.exception("{}".format(raw)) class HijackMultiActionHandler(RequestHandler): """ REST request handler for multiple hijack actions. 
""" def initialize(self, shared_memory_manager_dict): self.shared_memory_manager_dict = shared_memory_manager_dict self.ro_db = DB( application_name="database-rest-hijack-multi-action-readonly", user=DB_USER, password=<PASSWORD>, host=DB_HOST, port=DB_PORT, database=DB_NAME, reconnect=True, autocommit=True, readonly=True, ) self.wo_db = DB( application_name="database-rest-hijack-multi-action-write", user=DB_USER, password=<PASSWORD>, host=DB_HOST, port=DB_PORT, database=DB_NAME, ) self.redis = redis.Redis(host=REDIS_HOST, port=REDIS_PORT) def post(self): """ Receives a "hijack-multi-action" message and applies the related actions in DB. :param message: { "keys": <list<str>>, "action": <str> } :return: - """ raw = json.loads(self.request.body) log.debug("payload: {}".format(raw)) seen_action = False ignore_action = False resolve_action = False delete_action = False try: if not raw["keys"]: query = None elif raw["action"] == "hijack_action_resolve": query = "UPDATE hijacks SET resolved=true, active=false, dormant=false, seen=true, time_ended=%s WHERE resolved=false AND ignored=false AND key=%s;" resolve_action = True elif raw["action"] == "hijack_action_ignore": query = "UPDATE hijacks SET ignored=true, active=false, dormant=false, seen=false WHERE ignored=false AND resolved=false AND key=%s;" ignore_action = True elif raw["action"] == "hijack_action_acknowledge": query = "UPDATE hijacks SET seen=true WHERE key=%s;" seen_action = True elif raw["action"] == "hijack_action_acknowledge_not": query = "UPDATE hijacks SET seen=false WHERE key=%s;" seen_action = True elif raw["action"] == "hijack_action_delete": query = "DELETE FROM hijacks WHERE key=%s;" delete_action = True else: raise BaseException("unreachable code reached") except Exception: log.exception("None action: {}".format(raw)) query = None if not query: self.write({"success": False, "message": "unknown error"}) return else: for hijack_key in raw["keys"]: try: entries = self.ro_db.execute( "SELECT prefix, hijack_as, type FROM hijacks WHERE key = %s;", (hijack_key,), ) if entries: entry = entries[0] redis_hijack_key = redis_key( entry[0], entry[1], entry[2] # prefix # hijack_as # type ) if seen_action: self.wo_db.execute(query, (hijack_key,)) elif ignore_action: # if ongoing, clear redis if self.redis.sismember("persistent-keys", hijack_key): purge_redis_eph_pers_keys( self.redis, redis_hijack_key, hijack_key ) self.wo_db.execute(query, (hijack_key,)) elif resolve_action: # if ongoing, clear redis if self.redis.sismember("persistent-keys", hijack_key): purge_redis_eph_pers_keys( self.redis, redis_hijack_key, hijack_key ) self.wo_db.execute( query, (datetime.datetime.now(), hijack_key) ) elif delete_action: redis_hijack = self.redis.get(redis_hijack_key) if self.redis.sismember("persistent-keys", hijack_key): purge_redis_eph_pers_keys( self.redis, redis_hijack_key, hijack_key ) log.debug( "redis-entry for {}: {}".format( redis_hijack_key, redis_hijack ) ) self.wo_db.execute(query, (hijack_key,)) if redis_hijack and classic_json.loads( redis_hijack.decode("utf-8") ).get("bgpupdate_keys", []): log.debug("deleting hijack using cache for bgp updates")
{'k1': '2065', 'k2': '739,19'}, {'k1': '2066', 'k2': '733,28'}, {'k1': '2067', 'k2': '727,41'}, {'k1': '2068', 'k2': '721,59'}, {'k1': '2069', 'k2': '715,82'}, {'k1': '2070', 'k2': '710,1'}, {'k1': '2071', 'k2': '704,41'}, {'k1': '2072', 'k2': '698,78'}, {'k1': '2073', 'k2': '693,19'}, {'k1': '2074', 'k2': '687,64'}, {'k1': '2075', 'k2': '682,14'}, {'k1': '2076', 'k2': '676,69'}, {'k1': '2077', 'k2': '671,27'}, {'k1': '2078', 'k2': '665,9'}, {'k1': '2079', 'k2': '660,57'}, {'k1': '2080', 'k2': '655,29'}, {'k1': '2081', 'k2': '650,05'}, {'k1': '2082', 'k2': '644,85'}, {'k1': '2083', 'k2': '639,69'}, {'k1': '2084', 'k2': '634,57'}, {'k1': '2085', 'k2': '629,49'}, {'k1': '2086', 'k2': '624,46'}, {'k1': '2087', 'k2': '619,46'}, {'k1': '2088', 'k2': '614,51'}, {'k1': '2089', 'k2': '609,59'}, {'k1': '2090', 'k2': '604,71'}, {'k1': '2091', 'k2': '599,88'}, {'k1': '2092', 'k2': '595,08'}, {'k1': '2093', 'k2': '590,32'}, {'k1': '2094', 'k2': '585,59'}, {'k1': '2095', 'k2': '580,91'}, {'k1': '2096', 'k2': '576,26'}, {'k1': '2097', 'k2': '571,65'}, {'k1': '2098', 'k2': '567,08'}, {'k1': '2099', 'k2': '562,54'}, {'k1': '2100', 'k2': '558,04'} ] return jsonify(data) # Get information about the forcecast of worldwide crude oil reserves # k1 = Year # k2 = Tons of crude oil @app.route("/forecast_of_worldwide_crude_oil_reserves") def forecast_of_worldwide_crude_oil_reserves(): data = [ {'k1': '2014', 'k2': '186610000000'}, {'k1': '2015', 'k2': '177279500000'}, {'k1': '2016', 'k2': '168415525000'}, {'k1': '2017', 'k2': '159994748750'}, {'k1': '2018', 'k2': '151995011313'}, {'k1': '2019', 'k2': '144395260747'}, {'k1': '2020', 'k2': '137175497710'}, {'k1': '2021', 'k2': '130316722824'}, {'k1': '2022', 'k2': '123800886683'}, {'k1': '2023', 'k2': '117610842349'}, {'k1': '2024', 'k2': '111730300231'}, {'k1': '2025', 'k2': '106143785220'}, {'k1': '2026', 'k2': '100836595959'}, {'k1': '2027', 'k2': '95794766161'}, {'k1': '2028', 'k2': '91005027853'}, {'k1': '2029', 'k2': '86454776460'}, {'k1': '2030', 'k2': '82132037637'}, {'k1': '2031', 'k2': '78025435755'}, {'k1': '2032', 'k2': '74124163967'}, {'k1': '2033', 'k2': '70417955769'}, {'k1': '2034', 'k2': '66897057981'}, {'k1': '2035', 'k2': '63552205082'}, {'k1': '2036', 'k2': '60374594828'}, {'k1': '2037', 'k2': '57355865086'}, {'k1': '2038', 'k2': '54488071832'}, {'k1': '2039', 'k2': '51763668240'}, {'k1': '2040', 'k2': '49175484828'}, {'k1': '2041', 'k2': '46716710587'}, {'k1': '2042', 'k2': '44380875057'}, {'k1': '2043', 'k2': '42161831305'}, {'k1': '2044', 'k2': '40053739739'}, {'k1': '2045', 'k2': '38051052752'}, {'k1': '2046', 'k2': '36148500115'}, {'k1': '2047', 'k2': '34341075109'}, {'k1': '2048', 'k2': '32624021354'}, {'k1': '2049', 'k2': '30992820286'}, {'k1': '2050', 'k2': '29443179272'}, {'k1': '2051', 'k2': '27971020308'}, {'k1': '2052', 'k2': '26572469293'}, {'k1': '2053', 'k2': '25243845828'}, {'k1': '2054', 'k2': '23981653537'}, {'k1': '2055', 'k2': '22782570860'}, {'k1': '2056', 'k2': '21643442317'}, {'k1': '2057', 'k2': '20561270201'}, {'k1': '2058', 'k2': '19533206691'}, {'k1': '2059', 'k2': '18556546356'}, {'k1': '2060', 'k2': '17628719039'}, {'k1': '2061', 'k2': '16747283087'}, {'k1': '2062', 'k2': '15909918932'}, {'k1': '2063', 'k2': '15114422986'}, {'k1': '2064', 'k2': '14358701836'}, {'k1': '2065', 'k2': '13640766745'}, {'k1': '2066', 'k2': '12958728407'}, {'k1': '2067', 'k2': '12310791987'}, {'k1': '2068', 'k2': '11695252388'}, {'k1': '2069', 'k2': '11110489768'}, {'k1': '2070', 'k2': '10554965280'}, {'k1': '2071', 'k2': '10027217016'}, 
{'k1': '2072', 'k2': '9525856165'}, {'k1': '2073', 'k2': '9049563357'}, {'k1': '2074', 'k2': '8597085189'}, {'k1': '2075', 'k2': '8167230930'}, {'k1': '2076', 'k2': '7758869383'}, {'k1': '2077', 'k2': '7370925914'}, {'k1': '2078', 'k2': '7002379618'}, {'k1': '2079', 'k2': '6652260637'}, {'k1': '2080', 'k2': '6319647605'}, {'k1': '2081', 'k2': '6003665225'}, {'k1': '2082', 'k2': '5703481964'}, {'k1': '2083', 'k2': '5418307866'}, {'k1': '2084', 'k2': '5147392472'}, {'k1': '2085', 'k2': '4890022849'}, {'k1': '2086', 'k2': '4645521706'}, {'k1': '2087', 'k2': '4413245621'}, {'k1': '2088', 'k2': '4192583340'}, {'k1': '2089', 'k2': '3982954173'}, {'k1': '2090', 'k2': '3783806464'}, {'k1': '2091', 'k2': '3594616141'}, {'k1': '2092', 'k2': '3414885334'}, {'k1': '2093', 'k2': '3244141067'}, {'k1': '2094', 'k2': '3081934014'}, {'k1': '2095', 'k2': '2927837313'}, {'k1': '2096', 'k2': '2781445448'}, {'k1': '2097', 'k2': '2642373175'}, {'k1': '2098', 'k2': '2510254516'}, {'k1': '2099', 'k2': '2384741791'}, {'k1': '2100', 'k2': '2265504701'} ] return jsonify(data) # Get information about the forecast of worldwide uranium reserves # k1 = Year # k2 = Tons of uranium @app.route("/forecast_of_worldwide_uranium_reserves") def forecast_of_worldwide_uranium_reserves(): data = [ {'k1': '2009', 'k2': '7813591'}, {'k1': '2010', 'k2': '7758896'}, {'k1': '2011', 'k2': '7704584'}, {'k1': '2012', 'k2': '7650652'}, {'k1': '2013', 'k2': '7597097'}, {'k1': '2014', 'k2': '7543917'}, {'k1': '2015', 'k2': '7491110'}, {'k1': '2016', 'k2': '7438672'}, {'k1': '2017', 'k2': '7386601'}, {'k1': '2018', 'k2': '7334895'}, {'k1': '2019', 'k2': '7283551'}, {'k1': '2020', 'k2': '7232566'}, {'k1': '2021', 'k2': '7181938'}, {'k1': '2022', 'k2': '7131665'}, {'k1': '2023', 'k2': '7081743'}, {'k1': '2024', 'k2': '7032171'}, {'k1': '2025', 'k2': '6982945'}, {'k1': '2026', 'k2': '6934065'}, {'k1': '2027', 'k2': '6885526'}, {'k1': '2028', 'k2': '6837328'}, {'k1': '2029', 'k2': '6789466'}, {'k1': '2030', 'k2': '6741940'}, {'k1': '2031', 'k2': '6694747'}, {'k1': '2032', 'k2': '6647883'}, {'k1': '2033', 'k2': '6601348'}, {'k1': '2034', 'k2': '6555139'}, {'k1': '2035', 'k2': '6509253'}, {'k1': '2036', 'k2': '6463688'}, {'k1': '2037', 'k2': '6418442'}, {'k1': '2038', 'k2': '6373513'}, {'k1': '2039', 'k2': '6328898'}, {'k1': '2040', 'k2': '6284596'}, {'k1': '2041', 'k2': '6240604'}, {'k1': '2042', 'k2': '6196920'}, {'k1': '2043', 'k2': '6153541'}, {'k1': '2044', 'k2': '6110467'}, {'k1': '2045', 'k2': '6067693'}, {'k1': '2046', 'k2': '6025219'}, {'k1': '2047', 'k2': '5983043'}, {'k1': '2048', 'k2': '5941162'}, {'k1': '2049', 'k2': '5899573'}, {'k1': '2050', 'k2': '5858276'}, {'k1': '2051', 'k2': '5817269'}, {'k1': '2052', 'k2': '5776548'}, {'k1': '2053', 'k2': '5736112'}, {'k1': '2054', 'k2': '5695959'}, {'k1': '2055', 'k2': '5656087'}, {'k1': '2056', 'k2': '5616495'}, {'k1': '2057', 'k2': '5577179'}, {'k1': '2058', 'k2': '5538139'}, {'k1': '2059', 'k2': '5499372'}, {'k1': '2060', 'k2': '5460876'}, {'k1': '2061', 'k2': '5422650'}, {'k1': '2062', 'k2': '5384692'}, {'k1': '2063', 'k2': '5346999'}, {'k1': '2064', 'k2': '5309570'}, {'k1': '2065', 'k2': '5272403'}, {'k1': '2066', 'k2': '5235496'}, {'k1': '2067', 'k2': '5198848'}, {'k1': '2068', 'k2': '5162456'}, {'k1': '2069', 'k2': '5126318'}, {'k1': '2070', 'k2': '5090434'}, {'k1': '2071', 'k2': '5054801'}, {'k1': '2072', 'k2': '5019418'}, {'k1': '2073', 'k2': '4984282'}, {'k1': '2074', 'k2': '4949392'}, {'k1': '2075', 'k2': '4914746'}, {'k1': '2076', 'k2': '4880343'}, {'k1': '2077', 'k2': '4846180'}, 
{'k1': '2078', 'k2': '4812257'}, {'k1': '2079', 'k2': '4778571'}, {'k1': '2080', 'k2': '4745121'}, {'k1': '2081', 'k2': '4711905'}, {'k1': '2082', 'k2': '4678922'}, {'k1': '2083', 'k2': '4646170'}, {'k1': '2084', 'k2': '4613646'}, {'k1': '2085', 'k2': '4581351'}, {'k1': '2086', 'k2': '4549281'}, {'k1': '2087', 'k2': '4517437'}, {'k1': '2088', 'k2': '4485814'}, {'k1': '2089', 'k2': '4454414'}, {'k1': '2090', 'k2': '4423233'}, {'k1': '2091', 'k2': '4392270'}, {'k1': '2092', 'k2': '4361524'}, {'k1': '2093', 'k2': '4330994'}, {'k1': '2094', 'k2': '4300677'}, {'k1': '2095', 'k2': '4270572'}, {'k1': '2096', 'k2': '4240678'}, {'k1': '2097', 'k2': '4210993'}, {'k1': '2098', 'k2': '4181516'}, {'k1': '2099', 'k2': '4152246'}, {'k1': '2100', 'k2': '4123180'} ] return jsonify(data) # Get information about the worldwide mercury producers # k1 = Country # k2 = Free energy devices possible to use ? # k3 = Ratification of Minamata Convention ? # k4 = Suitable for the nuclear transmutation of mercury ? @app.route("/worldwide_mercury_producers") def worldwide_mercury_producers(): data = [ {'k1': 'Finland', 'k2': 'Yes', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Russia', 'k2': 'Yes', 'k3': 'No', 'k4': 'Yes'}, {'k1': 'Canada', 'k2': 'Yes', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Chile', 'k2': 'Yes', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Argentina', 'k2': 'Yes', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'China', 'k2': 'Yes', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Mexico', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Spain', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Algeria', 'k2': 'No', 'k3': 'No', 'k4': 'No'}, {'k1': 'USA', 'k2': 'No', 'k3': 'No', 'k4': 'No'}, {'k1': 'Tchecoslovakia', 'k2': 'No', 'k3': 'No', 'k4': 'No'}, {'k1': 'Turkey', 'k2': 'No', 'k3': 'No', 'k4': 'No'}, {'k1': 'Yougoslavia', 'k2': 'No', 'k3': 'No', 'k4': 'No'}, {'k1': 'Germany', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Australia', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Colombia', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Irland', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Italy', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Japan', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Peru', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'The Philippines', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Tunisia', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Thailand', 'k2': 'No', 'k3': 'No', 'k4': 'No'}, {'k1': 'Ukrain', 'k2': 'No', 'k3': 'No', 'k4': 'No'}, {'k1': 'Tadjikistan', 'k2': 'No', 'k3': 'No', 'k4': 'No'}, {'k1': 'Kirghizistan', 'k2': 'No', 'k3': 'No', 'k4': 'No'}, {'k1': 'Morocco', 'k2': 'No', 'k3': 'No', 'k4': 'No'}, {'k1': 'Iran', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'}, {'k1': 'Norway', 'k2': 'No', 'k3': 'Yes', 'k4': 'No'} ] return jsonify(data) # Get information about countries who are ratified minamata convention # k1 = Country @app.route("/countries_ratified_minamata_convention") def countries_ratified_minamata_convention(): data = [ {'k1': 'Angola'}, {'k1': 'Argentina'}, {'k1': 'Armenia'}, {'k1': 'Australia'}, {'k1': 'Austria'}, {'k1': 'Bangladesh'}, {'k1': 'Belgium'}, {'k1': 'Benin'}, {'k1': 'Bolivia'}, {'k1': 'Brazil'}, {'k1': 'Bulgaria'}, {'k1': 'Burkina Faso'}, {'k1': 'Cambodia'}, {'k1': 'Canada'}, {'k1': 'Central African Republic'}, {'k1': 'Chile'}, {'k1': 'China'}, {'k1': 'Colombia'}, {'k1': 'Comoros'}, {'k1': 'Costa Rica'}, {'k1': 'Ivory Coast'}, {'k1': 'Czech Republic'}, {'k1': 'Denmark'}, {'k1': 'Djibouti'}, {'k1': 'Dominican Republic'}, {'k1': 'Ecuador'}, {'k1': 'Ethiopia'}, {'k1': 'European Union'}, {'k1': 'Finland'}, {'k1': 
'France'}, {'k1': 'Gambia'}, {'k1': 'Georgia'}, {'k1': 'Germany'}, {'k1':
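# The data endpoints above depend on a Flask `app` object and `jsonify` import defined
# earlier in that file (not visible in this excerpt). A minimal self-contained sketch of
# the same pattern, using a tiny illustrative subset of the uranium forecast data:
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/forecast_of_worldwide_uranium_reserves_demo")
def forecast_of_worldwide_uranium_reserves_demo():
    # k1 = Year, k2 = Tons of uranium
    data = [
        {'k1': '2009', 'k2': '7813591'},
        {'k1': '2010', 'k2': '7758896'},
    ]
    return jsonify(data)


if __name__ == "__main__":
    app.run(port=5000)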
generic wrapper. """ if enabled: return functor(*args, **kwargs) return NoOpContextManager() class ContextManagerStack(object): """Context manager that is designed to safely allow nesting and stacking. Python2.7 directly supports a with syntax generally removing the need for this, although this form avoids indentation hell if there is a lot of context managers. It also permits more programmatic control and allowing conditional usage. For Python2.6, see http://docs.python.org/library/contextlib.html; the short version is that there is a race in the available stdlib/language rules under 2.6 when dealing w/ multiple context managers, thus this safe version was added. For each context manager added to this instance, it will unwind them, invoking them as if it had been constructed as a set of manually nested with statements. """ def __init__(self): self._stack = [] def Add(self, functor, *args, **kwargs): """Add a context manager onto the stack. Usage of this is essentially the following: >>> stack.add(Timeout, 60) It must be done in this fashion, else there is a mild race that exists between context manager instantiation and initial __enter__. Invoking it in the form specified eliminates that race. Args: functor: A callable to instantiate a context manager. args and kwargs: positional and optional args to functor. Returns: The newly created (and __enter__'d) context manager. Note: This is not the same value as the "with" statement -- that returns the value from the __enter__ function while this is the manager itself. """ obj = None try: obj = functor(*args, **kwargs) return obj finally: if obj is not None: obj.__enter__() self._stack.append(obj) def __enter__(self): # Nothing to do in this case. The individual __enter__'s are done # when the context managers are added, which will likely be after # the __enter__ method of this stack is called. return self def __exit__(self, exc_type, exc, exc_tb): # Exit each context manager in stack in reverse order, tracking the results # to know whether or not to suppress the exception raised (or to switch that # exception to a new one triggered by an individual handler's __exit__). for handler in reversed(self._stack): # pylint: disable=bare-except try: if handler.__exit__(exc_type, exc, exc_tb): exc_type = exc = exc_tb = None except: exc_type, exc, exc_tb = sys.exc_info() self._stack = [] # Return True if any exception was handled. if all(x is None for x in (exc_type, exc, exc_tb)): return True # Raise any exception that is left over from exiting all context managers. # Normally a single context manager would return False to allow caller to # re-raise the exception itself, but here the exception might have been # raised during the exiting of one of the individual context managers. raise exc_type, exc, exc_tb class ApiMismatchError(Exception): """Raised by GetTargetChromiteApiVersion.""" class NoChromiteError(Exception): """Raised when an expected chromite installation was missing.""" def GetTargetChromiteApiVersion(buildroot, validate_version=True): """Get the re-exec API version of the target chromite. Args: buildroot: The directory containing the chromite to check. validate_version: If set to true, checks the target chromite for compatibility, and raises an ApiMismatchError when there is an incompatibility. Returns: The version number in (major, minor) tuple. Raises: May raise an ApiMismatchError if validate_version is set. 
""" try: api = RunCommand( [constants.PATH_TO_CBUILDBOT, '--reexec-api-version'], cwd=buildroot, error_code_ok=True, capture_output=True) except RunCommandError: # Although error_code_ok=True was used, this exception will still be raised # if the executible did not exist. full_cbuildbot_path = os.path.join(buildroot, constants.PATH_TO_CBUILDBOT) if not os.path.exists(full_cbuildbot_path): raise NoChromiteError('No cbuildbot found in buildroot %s, expected to ' 'find %s. ' % (buildroot, full_cbuildbot_path)) raise # If the command failed, then we're targeting a cbuildbot that lacks the # option; assume 0:0 (ie, initial state). major = minor = 0 if api.returncode == 0: major, minor = map(int, api.output.strip().split('.', 1)) if validate_version and major != constants.REEXEC_API_MAJOR: raise ApiMismatchError( 'The targeted version of chromite in buildroot %s requires ' 'api version %i, but we are api version %i. We cannot proceed.' % (buildroot, major, constants.REEXEC_API_MAJOR)) return major, minor def GetChrootVersion(chroot=None, buildroot=None): """Extract the version of the chroot. Args: chroot: Full path to the chroot to examine. buildroot: If |chroot| is not set, find it relative to |buildroot|. Returns: The version of the chroot dir. """ if chroot is None and buildroot is None: raise ValueError('need either |chroot| or |buildroot| to search') from chromite.lib import osutils if chroot is None: chroot = os.path.join(buildroot, constants.DEFAULT_CHROOT_DIR) ver_path = os.path.join(chroot, 'etc', 'cros_chroot_version') try: return osutils.ReadFile(ver_path).strip() except IOError: logging.warning('could not read %s', ver_path) return None def iflatten_instance(iterable, terminate_on_kls=(basestring,)): """Derivative of snakeoil.lists.iflatten_instance; flatten an object. Given an object, flatten it into a single depth iterable- stopping descent on objects that either aren't iterable, or match isinstance(obj, terminate_on_kls). Example: >>> print list(iflatten_instance([1, 2, "as", ["4", 5])) [1, 2, "as", "4", 5] """ def descend_into(item): if isinstance(item, terminate_on_kls): return False try: iter(item) except TypeError: return False # Note strings can be infinitely descended through- thus this # recursion limiter. return not isinstance(item, basestring) or len(item) > 1 if not descend_into(iterable): yield iterable return for item in iterable: if not descend_into(item): yield item else: for subitem in iflatten_instance(item, terminate_on_kls): yield subitem # TODO: Remove this once we move to snakeoil. def load_module(name): """load a module Args: name: python dotted namespace path of the module to import Returns: imported module Raises: FailedImport if importing fails """ m = __import__(name) # __import__('foo.bar') returns foo, so... for bit in name.split('.')[1:]: m = getattr(m, bit) return m def PredicateSplit(func, iterable): """Splits an iterable into two groups based on a predicate return value. Args: func: A functor that takes an item as its argument and returns a boolean value indicating which group the item belongs. iterable: The collection to split. Returns: A tuple containing two lists, the first containing items that func() returned True for, and the second containing items that func() returned False for. 
""" trues, falses = [], [] for x in iterable: (trues if func(x) else falses).append(x) return trues, falses @contextlib.contextmanager def Open(obj, mode='r'): """Convenience ctx that accepts a file path or an already open file object.""" if isinstance(obj, basestring): with open(obj, mode=mode) as f: yield f else: yield obj def LoadKeyValueFile(obj, ignore_missing=False, multiline=False): """Turn a key=value file into a dict Note: If you're designing a new data store, please use json rather than this format. This func is designed to work with legacy/external files where json isn't an option. Args: obj: The file to read. Can be a path or an open file object. ignore_missing: If the file does not exist, return an empty dict. multiline: Allow a value enclosed by quotes to span multiple lines. Returns: a dict of all the key=value pairs found in the file. """ d = {} try: with Open(obj) as f: key = None in_quotes = None for raw_line in f: line = raw_line.split('#')[0] if not line.strip(): continue # Continue processing a multiline value. if multiline and in_quotes and key: if line.rstrip()[-1] == in_quotes: # Wrap up the multiline value if the line ends with a quote. d[key] += line.rstrip()[:-1] in_quotes = None else: d[key] += line continue chunks = line.split('=', 1) if len(chunks) != 2: raise ValueError('Malformed key=value file %r; line %r' % (obj, raw_line)) key = chunks[0].strip() val = chunks[1].strip() if len(val) >= 2 and val[0] in "\"'" and val[0] == val[-1]: # Strip matching quotes on the same line. val = val[1:-1] elif val and multiline and val[0] in "\"'": # Unmatched quote here indicates a multiline value. Do not # strip the '\n' at the end of the line. in_quotes = val[0] val = chunks[1].lstrip()[1:] d[key] = val except EnvironmentError as e: if not (ignore_missing and e.errno == errno.ENOENT): raise return d def MemoizedSingleCall(functor): """Decorator for simple functor targets, caching the results The functor must accept no arguments beyond either a class or self (depending on if this is used in a classmethod/instancemethod context). Results of the wrapped method will be written to the class/instance namespace in a specially named cached value. All future invocations will just reuse that
# Copyright (c) 2015, Imperial College London # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # # 1. Redistributions of source code must retain the above copyright notice, # this list of conditions and the following disclaimer. # # 2. Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # 3. Neither the names of the copyright holders nor the names of their # contributors may be used to endorse or promote products derived from this # software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE # LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR # CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF # SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN # CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) # ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE # POSSIBILITY OF SUCH DAMAGE. # ----------------------------------------------------------------------------- # # This file is part of the libhpc-cf Coordination Forms library that has been # developed as part of the libhpc projects # (http://www.imperial.ac.uk/lesc/projects/libhpc). # # We gratefully acknowledge the Engineering and Physical Sciences Research # Council (EPSRC) for their support of the projects: # - libhpc: Intelligent Component-based Development of HPC Applications # (EP/I030239/1). # - libhpc Stage II: A Long-term Solution for the Usability, Maintainability # and Sustainability of HPC Software (EP/K038788/1). ''' File created on Aug 7, 2012 @author: jhc02 ''' import sys import importlib import threading import params from collections import Iterable class Component(): ''' A function and its associated metadata to be run with some coordination forms parameters. ''' # A unique string identifier and human-readable name for this function # While the ID content can be anything, a package style structure is recommended component_id = '' component_name = '' component_code = '' # Metadata for parameters and return types - list of instances of cfparams.Parameter parameters = [] # Metadata for function return types - list of cfparams.Parameter (function may return list or tuple) return_data = [] #dependencies = [] # Additional parameters - whether these should be added to the parameter list before or # after existing params. In some case this matters. additional_params = 'pre' def __init__(self, id, name, function_entry_point, input_params, return_params, static_pos = 'post'): ''' Constructor ''' print 'Creating a new coordination forms component...' 
self.component_id = id self.component_name = name self.component_code = function_entry_point self.parameters = input_params self.return_data = return_params self.static_params_pos = static_pos #self.dependencies = dependencies print 'Creating a new coordination forms component...\n\tID: ' + self.component_id + '\tType: ' + self.component_name + '\n' def get_id(self): return self.component_id def get_name(self): return self.component_name def get_params(self): return self.parameters def get_return_data(self): return self.return_data def get_code(self): return self.component_code def get_dependencies(self): return self.dependencies def static_params_pre(self): if self.static_params_pos == 'pre': return True else: return False def run(self, parameter_list, result_store=None): print 'Parameter list provided: <' + str(parameter_list) + '>\n\n' if not hasattr(parameter_list, '__iter__'): parameter_list = [parameter_list] print 'Component to run: ' + self.component_name + ' - ' + str(len(parameter_list)) + ' parameters.\n' # Handle lookup of function code that we're going to run. # Lookup uses the string providing the package and name of the function function_parts = self.component_code.rsplit('.',1) func = None if len(function_parts) == 2: #mod = __import__(function_parts[0]) mod = importlib.import_module(function_parts[0]) sys.stdout.write('Got module ' + str(mod) + '\n') else: sys.stdout.write('ValueError: The provided component_name <' + str(function_parts) + '> is invalid, a fully qualified name must be specified.\n') raise ValueError('The provided component_name <' + str(function_parts) + '> is invalid, a fully qualified name must be specified.') func = getattr(mod, function_parts[1], None) sys.stdout.write('Checking if function ' + str(func) + ' is callable...function parts: ' + str(function_parts) + '\n') if callable(func): sys.stdout.write('Function is callable\n\n') # Do any pre/post processing of function data here. sys.stdout.write(threading.current_thread().getName() + ': About to run function...\n\n') # TODO: Handle None inputs to parameter list? Received an earlier error about marshalling # values to unicode and thought this was because I was passing a None as a parameter. Check this. # Problem occurred when calling fastqplitter with None as a parameter. result = func(*parameter_list) # Turn off returning result as a tuple for now. #if type(result) != type(tuple()): # result = (result,) result_length = 1 if hasattr(result,'__iter__'): result_length = len(result) print 'Function <' + self.get_name() + '> has completed and returned <' + str(result_length) + '> values.\n' # We now prepare a tuple to return. This contains all the parameters # That are marked as outputs and not marked as being ignored. # A list will be built and converted to a tuple return_list = [] # First check the function return value. This may be a single value or # an iterable list. If its length is 1 we assume its just a single return value. # If its greater than 1 we assume its wrapped in a list or tuple. 
We build a corresponding # list that will end up as the first element of return_list count = 0 if len(self.return_data) == 0: # No value is returned so ignore return value pass elif len(self.return_data) == 1: if not self.return_data[0].ignore(): # We're returning a single value, it may be wrapped in # a tuple or list or may be an unwrapped value if hasattr(result,'__iter__'): #return_list.append(result[0]) return_list.append(result) else: return_list.append(result) count = count + 1 else: return_result_list = [] for value in self.return_data: if not value.ignore(): # TODO: Do we want to package up the main return value as a list # leading to the following line appending result[0][count]? return_result_list.append(result[count]) count = count + 1 return_list.append(return_result_list) # Now we go through the other parameters in turn, adding them to the # return list if they're marked as output and not marked to be ignored. print 'Count is: ' + str(count) + ', we\'re expecting the remaining return values in the result tuple.' for value in self.parameters: if (value.get_dir() == 'output') or (value.get_dir() == 'inout'): if not value.ignore(): return_list.append(result[count]) count = count + 1 # If we've been provided with a result store and an # associated index, we store the result here, otherwise # we return the result. result_store[0] is the list for results to # be stored to, result_store[1] is the index at which to store the result if result_store != None: print 'Storing result <' + str(return_list) + '> to result store index ' + str(result_store[1]) print 'Result array: ' + str(result_store[0]) if len(return_list) == 1: print 'Storing single value to result store for function <' + self.get_name() + '>.' result_store[0][result_store[1]] = return_list[0] elif len(return_list) > 1: print 'Storing <' + str(len(return_list)) + '> values from function <' + self.get_name() + '> to result store.' result_store[0][result_store[1]] = tuple(return_list) #else: # result_store[0][result_store[1]] = result if len(return_list) == 1: print 'Returning a single value from function <' + self.get_name() + '>.' return return_list[0] elif len(return_list) > 1: print 'Returning <' + str(len(return_list)) + '> values from function <' + self.get_name() + '>.' return tuple(return_list) else: print 'Nothing to return from function <' + self.get_name() + '>.' class ComponentList(): ''' A list of parameter objects to be passed to a coordination forms operator ''' component_list = [] def __init__(self, *component_list): ''' Constructor ''' print 'Creating
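A condensed sketch of the dynamic-dispatch pattern used in Component.run above: a fully qualified 'package.module.function' string is resolved with importlib and invoked with an unpacked parameter list. The helper name below is hypothetical, not part of the library.

import importlib

def resolve_and_call(qualified_name, parameter_list):
    # Split 'pkg.mod.func' into the module path and the bare function name.
    module_path, func_name = qualified_name.rsplit('.', 1)
    module = importlib.import_module(module_path)
    func = getattr(module, func_name, None)
    if not callable(func):
        raise ValueError('%s does not name a callable entry point' % qualified_name)
    return func(*parameter_list)

# e.g. resolve_and_call('os.path.join', ['/tmp', 'out.dat']) returns '/tmp/out.dat'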
= rightx_current - self.lanewidth_estimated_pixels if window > 0 : leftx_current = np.int((rightx_current - self.lanewidth_estimated_pixels) * alfa + leftx_current * ( 1 - alfa )) if left_window_minpix_ok == False or right_window_minpix_ok == False: keep_looking_ahead = False if((len(non_zero_rect_intersects_right)) > self.minpix) and keep_looking_ahead == True and ((len(non_zero_rect_intersects_right)) > self.minpix): both_lane_fits_count += 1 self.last_window_x[window, 0] = leftx_current self.last_window_x[window, 1] = rightx_current self.last_dwa_hits = any_lane_hits_count dwadebug = "%sTHIS ITERATION DETECTIONS = %d\n" % (dwadebug, self.last_dwa_hits) self.useDWA = True left_fitx, right_fitx, ploty, avg_radius_m, vehicle_offset_m_signed, update_lane_polygon, lane_polygon, lane_width, debugmsg = self.fit_poly(slice.shape[0], LHS_lane_x_coords, LHS_lane_y_coords, RHS_lane_x_coords, RHS_lane_y_coords) if both_lane_fits_count > 2 and any_lane_hits_count >= 1 and update_lane_polygon == False: dwadebug = "%sPOLYFIT FAILED: MAYBE CLEAR AHEAD FOR %d UNITS (%1.1fm) \n" % (dwadebug, both_lane_fits_count, both_lane_fits_count * window_height * self.ym_per_pix) ''' dwa suggested polygon ''' debugmsg = "%s%s\n" % (debugmsg, dwadebug) return out_img, avg_radius_m, vehicle_offset_m_signed, update_lane_polygon, lane_polygon, lane_width, debugmsg ''' *Polynomal proximity fit function [PROXI-FIT]* This function needs an existing fit to work with (need dwa to to run scuccesfullly at least once) @Idea: I guess an ideal Self deiving car would be able to somehow determine this for any givn frame, therefore pocess an contextual understandnig of it's surroundings ''' def search_near_poly(self, slice): self.useDWA = False # Grab activated pixels nonzero = slice.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) ''' search for activated x values within the margin constained by the polynomial function. ''' left_lane_y_inds = ((nonzerox >= self.left_fit_current[0] * nonzeroy**2 + self.left_fit_current[1] * nonzeroy + self.left_fit_current[2] - self.margin) & (nonzerox <= self.left_fit_current[0] * nonzeroy**2 + self.left_fit_current[1] * nonzeroy + self.left_fit_current[2] + self.margin)).nonzero()[0] right_lane_y_inds = ((nonzerox >= self.right_fit_current[0] * nonzeroy**2 + self.right_fit_current[1] * nonzeroy + self.right_fit_current[2] - self.margin) & (nonzerox <= self.right_fit_current[0] * nonzeroy**2 + self.right_fit_current[1] * nonzeroy + self.right_fit_current[2] + self.margin)).nonzero()[0] ''' extract the pixel coordinates ''' leftx = nonzerox[left_lane_y_inds] lefty = nonzeroy[left_lane_y_inds] rightx = nonzerox[right_lane_y_inds] righty = nonzeroy[right_lane_y_inds] ''' fit_new_polynomials, get lane curvature radius estimate, get vehicle offset estimate ''' left_fitx, right_fitx, ploty, avg_radius_m, vehicle_offset_m_signed, update_lane_polygon, lane_polygon, lane_width, debugmsg = self.fit_poly(slice.shape[0], leftx, lefty, rightx, righty) ''' calculate polygons 1. Lane polygons (boundary of the proximity search) - for the binary image 2. 
Lane area polygon - to be backprojected into the source image ''' if left_fitx is not None or right_fitx is not None: view_img = np.dstack((slice, slice, slice)) # multiply by 255 if thresholded to 1 poly_img = np.zeros_like(view_img) if left_fitx is not None: left_lane_window1 = np.array([np.transpose(np.vstack([left_fitx-self.margin, ploty]))]) left_lane_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+self.margin, ploty])))]) left_lane_pts = np.hstack((left_lane_window1, left_lane_window2)) cv2.fillPoly(poly_img, np.int_([left_lane_pts]) , (0, 255, 0)) if right_fitx is not None: right_lane_window1 = np.array([np.transpose(np.vstack([right_fitx-self.margin, ploty]))]) right_lane_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+self.margin, ploty])))]) right_lane_pts = np.hstack((right_lane_window1, right_lane_window2)) cv2.fillPoly(poly_img, np.int_([right_lane_pts]), (0, 255, 0)) if left_fitx is not None or right_fitx is not None: warped_result_annotated = (cv2.addWeighted(view_img, 0.7, poly_img, 0.3, 0 ))/255. return warped_result_annotated, avg_radius_m, vehicle_offset_m_signed, update_lane_polygon, lane_polygon, lane_width, debugmsg else: return np.dstack((slice, slice, slice)), avg_radius_m, vehicle_offset_m_signed, update_lane_polygon, lane_polygon, lane_width, debugmsg ''' state machine image prep function wraps around Lane finder's image prep function ''' def prep_image(self,image): roi_warped = self.prepare_image(image) return "select_lane_finder", roi_warped; ''' Choose which lane finder to use: ''' def select_lane_finder(self, image): if(self.use_previous_reference == True): return "find_lanes_from_prev_ref", image; else: return "find_lanes_initial", image; ''' Run this when no previous lane refences are not available. Because this is not a realtime app, find both (othewise only the lane it can't lock onto) ''' def find_lanes_initial(self,image): None ''' when thigs are going good, use learningsfrom previous frame, The state evolution of coefficents are assumed to be markovian, I think this may at least work in most cases but not all! ''' def find_lanes_from_previous_references(self): None # To get the Next image this function has to return an image to be replaced. def entry(self, image): # First we get a filtered and warped image. debugmsg = None warped, hist, undistorted_image = self.prepare_image(image) ''' at startup or in a fallback, do dwa for m frames! 
and if seperation is stable hand over to proxi-fit ''' if (self.entry_run_next == 0): # first iteration or a recovery request annotated, avg_radius_m, offset_m, update_lane_polygon, lane_polygon, lane_width_m, debugmsg = self.dwa(warped, hist) debugmsg = "%sMODE: DWA\n" % (debugmsg) if update_lane_polygon == False : self.entry_fatal_error_count = self.entry_fatal_error_count + 1 if self.entry_fatal_error_count < self.entry_fatal_error_count_limit else self.entry_fatal_error_count_limit if self.entry_fatal_error_count > 1 : self.entry_total_error_count += 1 # just logging elif update_lane_polygon == True and (self.entry_proxi_fit_misses_count > 0) : self.entry_fatal_error_count = 0 self.entry_proxi_fit_recovery_by_dwa_count += 1 # just logging elif update_lane_polygon == True and (self.entry_fatal_error_count > 0) : self.entry_fatal_error_count = 0 self.entry_dwa_next_frame_recovery += 1 if (self.entry_run_next == 1): annotated, avg_radius_m, offset_m, update_lane_polygon, lane_polygon, lane_width_m, debugmsg = self.search_near_poly(warped) debugmsg = "%sMODE: PROXI-FIT\n" % (debugmsg) if update_lane_polygon == False : self.entry_proxi_fit_misses_count = self.entry_proxi_fit_misses_count + 1 if self.entry_proxi_fit_misses_count < self.entry_proxi_fit_misses_count_limit else self.entry_proxi_fit_misses_count_limit elif update_lane_polygon == True : self.entry_proxi_fit_misses_count = 0 alfa = 0.6 lane_polygon_t = None if lane_polygon is not None and self.last_succesful_lane_polygon is not None and update_lane_polygon == True: try: lane_polygon_t = lane_polygon * alfa + self.last_succesful_lane_polygon * (1-alfa) except: None lane_polygon = lane_polygon_t if lane_polygon_t is not None else lane_polygon offset_msg = "" radius_msg = "" topsLeftCornerText = (10,50) topsLeftCornerText1 = (10,100) topsLeftCornerText2 = (10,150) LaneWidthMText = (10,200) ProxiFitRecoveryByDWANumText = (10,250) DWANextFrameRecoveryNumText = (10,300) self.imsrc.set_data(annotated) augmented_image = np.zeros_like(undistorted_image) radius_msg = 'Avg R = %.1f m' % avg_radius_m if avg_radius_m is not None else "Avg R = NOT COMPUTED" ; lane_msg = "NO LANE MSG" lane_width_msg = "Lane Width = NA" interop_tries = 0 if update_lane_polygon == True: self.entry_run_next = 1 self.lane_area_polygon.set_xy(lane_polygon) poly_img = np.zeros_like(undistorted_image) cv2.fillPoly(poly_img, np.int_([lane_polygon]) , (0, 255, 0)) augmented_image = cv2.addWeighted(undistorted_image, 1.0 , poly_img, 0.3, 0 ) self.last_succesful_lane_polygon = np.copy(lane_polygon) self.failures_since_last_polygon = 0 offset_msg = "Vehicle is " + "%.2f" % np.abs(offset_m ) + "m " + ("left" if (offset_m > 0) else "right") + " of centre"; lane_msg = "LANE SEPERATION: YES" if lane_width_m is not None: lane_width_msg = "Lane Width = %.2fm" % lane_width_m cv2.putText(augmented_image, offset_msg, topsLeftCornerText , self.font, self.fontScale, self.fontColor, self.lineType_) elif self.failures_since_last_polygon is not None: if self.failures_since_last_polygon < self.failures_since_last_polygon_max : self.lane_area_polygon.set_xy(self.last_succesful_lane_polygon) poly_img = np.zeros_like(undistorted_image) cv2.fillPoly(poly_img, np.int_([self.last_succesful_lane_polygon]) , (0, 255, 0)) augmented_image = cv2.addWeighted(undistorted_image, 1.0 , poly_img, 0.3, 0 ) lane_msg = "LANE SEPERATION: APPROXIMATED" self.entry_run_next = 1 self.failures_since_last_polygon = self.failures_since_last_polygon + 1 if self.failures_since_last_polygon < 
self.failures_since_last_polygon_max else self.failures_since_last_polygon_max debugmsg = "%sConsecutive failures since last polygon = %d/%d\n" % (debugmsg, self.failures_since_last_polygon, self.failures_since_last_polygon_max) else: lane_msg = "LANE SEPERATION: NON DETECTIONS OVER THRESHOLD [%d]" % (self.failures_since_last_polygon_max) self.entry_run_next = 0; self.left_lane.set_data(0,0) self.right_lane.set_data(0,0) debugmsg = "%sFATAL ERROR: COMPUTING ABORTED\n" % (debugmsg) augmented_image = undistorted_image radius_msg = "Avg R = NA" else: if self.failures_since_last_polygon is not None: self.failures_since_last_polygon = self.failures_since_last_polygon + 1 if self.failures_since_last_polygon < self.failures_since_last_polygon_max else self.failures_since_last_polygon_max self.entry_run_next = 0 augmented_image = undistorted_image lane_msg = "LANE SEPERATION: NO" proxi_dwa_recovery_num_text = "PROXI-DWA Recovery Count = %d" % self.entry_proxi_fit_recovery_by_dwa_count; dwa_dwa_recovery_num_text = "DWA-DWA Recovery Attempts Count = %d" % self.entry_total_error_count; if debugmsg is None: debugmsg = "" cv2.putText(augmented_image, lane_msg, topsLeftCornerText2 , self.font, self.fontScale, self.fontColor, self.lineType_) cv2.putText(augmented_image, radius_msg, topsLeftCornerText1 , self.font, self.fontScale, self.fontColor, self.lineType_) cv2.putText(augmented_image, proxi_dwa_recovery_num_text, ProxiFitRecoveryByDWANumText , self.font, self.fontScale, self.fontColor, self.lineType_) cv2.putText(augmented_image, dwa_dwa_recovery_num_text, DWANextFrameRecoveryNumText , self.font, self.fontScale, self.fontColor, self.lineType_) cv2.putText(augmented_image, lane_width_msg, LaneWidthMText , self.font, self.fontScale, self.fontColor, self.lineType_) debugmsgText = (10, 350) y0, dy = 350, 25 for i, line in enumerate(debugmsg.split('\n')): y = y0 + i*dy cv2.putText(augmented_image, line, (10, y) , self.font, self.debugFontScale, self.fontColor, self.lineType_) ''' if we have a a prior reference we use that, If not we try to find lanes from scratch using dynamic windows. @note: It might be a good idea to find LEFT and RIGHT lanes seperately because usually if we are in an end lane we may have a strong continous lane on one side thus saving compute. @todo: Implement seperate lane search ''' ex = (0, np.shape(warped)[1], 0, np.shape(warped)[0]) self.imsrc.set_extent(ex); self.fig.canvas.draw() image_ = np.frombuffer(self.fig.canvas.tostring_rgb(), dtype='uint8') # extract plot
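A minimal sketch of the proximity-fit idea that search_near_poly above implements for each lane: keep only the activated pixels lying within a fixed margin of the previous frame's second-order fit, then refit. The function and variable names are illustrative.

import numpy as np

def refit_near_previous(binary_warped, prev_fit, margin=100):
    # prev_fit holds [A, B, C] of the previous fit x = A*y**2 + B*y + C.
    nonzeroy, nonzerox = binary_warped.nonzero()
    center = prev_fit[0] * nonzeroy**2 + prev_fit[1] * nonzeroy + prev_fit[2]
    keep = (nonzerox >= center - margin) & (nonzerox <= center + margin)
    if keep.sum() < 3:
        return None  # too few pixels to support a new second-order fit
    # Fit x as a function of y on the retained pixels only.
    return np.polyfit(nonzeroy[keep], nonzerox[keep], 2)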
median def by_weighted_median(self, container, inputs): for n in range(0,9): #first lien weighted median block #print inputs[self.purchaser_first_lien_rates[n]], inputs[self.purchaser_first_lien_weight[n]] if len(inputs[self.purchaser_first_lien_rates[n]]) >0 and len(inputs[self.purchaser_first_lien_weight[n]]) >0: #check to see if the array is populated nd_first_rates = inputs[self.purchaser_first_lien_rates[n]] nd_first_values = inputs[self.purchaser_first_lien_weight[n]] nd_first_rates, nd_first_values = zip(*sorted(zip(nd_first_rates, nd_first_values))) step_size = round(float(sum(nd_first_values)) / len(nd_first_values),3) steps_needed = (len(nd_first_values) / float(2)) nd_steps = [round(x/step_size,3) for x in nd_first_values] #compute step size by weights (values) of loans count = 0 for i in range(0, len(nd_first_values)): step_taken = nd_first_values[i] / float(step_size) steps_needed -=step_taken count +=1 if steps_needed <= 0: #print nd_first_rates, "*"*10,nd_first_rates[count-1], count-1 container['points'][9]['purchasers'][n]['firstlienvalue'] = sorted(nd_first_rates)[count-1] #for weighted median break #junior lien weighted median block if len(inputs[self.purchaser_junior_lien_rates[n]]) > 0 and len(inputs[self.purchaser_junior_lien_weight[n]]) >0: #check to see if the array is populated nd_junior_rates = inputs[self.purchaser_junior_lien_rates[n]] nd_junior_values = inputs[self.purchaser_junior_lien_weight[n]] nd_junior_rates, nd_junior_values = zip(*sorted(zip(nd_junior_rates, nd_junior_values))) step_size = round(float(sum(nd_junior_values)) / len(nd_junior_values),3) steps_needed = (len(nd_junior_values) / float(2)) nd_steps = [round(x/step_size,3) for x in nd_junior_values] count = 0 for i in range(0, len(nd_junior_values)): step_taken = nd_junior_values[i] / float(step_size) steps_needed -=step_taken count +=1 if steps_needed <= 0: container['points'][9]['purchasers'][n]['juniorlienvalue'] = sorted(nd_junior_rates)[count-1] #for weighted median break def calc_weighted_median(self, rate_list, weight_list): if len(rate_list) > 0 and len(weight_list) > 0:#check for divide by 0 errors rate_list, weight_list = zip(*sorted(zip(rate_list, weight_list))) #sort both lists by rate- this converts the lists to tuples step_size = round(Decimal(sum(weight_list)) / len(weight_list),2) #get a managable decimal length steps_needed = Decimal(round(len(weight_list) / Decimal(2),1)) count = 0 #count is used to choose find the index of the rate for the median weight, can this be simplified? 
for i in range(0, len(weight_list)): step_taken = Decimal(weight_list[i] / Decimal(step_size)) steps_needed -= step_taken #print count, 'count', step_taken, 'step taken', steps_needed, 'needed' if round(steps_needed,2) <= 0: return rate_list[count] count +=1 def by_applicant_income_4x(self, container, inputs): #aggregate loans by applicant income index if inputs['income bracket'] > 5 or inputs['action taken'] == ' ' or inputs['action taken'] > 5: #filter out of bounds indexes before calling aggregations pass elif inputs['income bracket'] <6 and inputs['action taken'] < 6: container['incomes'][inputs['income bracket']]['dispositions'][0]['count'] += 1 #add to 'applications received' container['incomes'][inputs['income bracket']]['dispositions'][0]['value'] += int(inputs['loan value']) #add to 'applications received' container['incomes'][inputs['income bracket']]['dispositions'][inputs['action taken']]['count'] += 1 #loans by action taken code container['incomes'][inputs['income bracket']]['dispositions'][inputs['action taken']]['value'] += int(inputs['loan value']) else: print "error aggregating income for report 4-1" def by_4x_demographics(self, container, inputs, key, key_index): if inputs['action taken'] < 6: if key == 'minoritystatuses' and key_index > 1: pass else: self.aggregate_4x(container, inputs, key, key_index, 0, False) self.aggregate_4x(container, inputs, key, key_index, inputs['action taken'], False) if inputs['gender'] < 3: self.aggregate_4x(container, inputs, key, key_index, 0, True) self.aggregate_4x(container, inputs, key, key_index, inputs['action taken'], True) def aggregate_4x(self, container, inputs, key, key_index, action_index, gender_bool): if gender_bool == False: container[key][key_index]['dispositions'][action_index]['count'] +=1 container[key][key_index]['dispositions'][action_index]['value'] +=int(inputs['loan value']) elif gender_bool == True: container[key][key_index]['genders'][inputs['gender']]['dispositions'][action_index]['count'] +=1 container[key][key_index]['genders'][inputs['gender']]['dispositions'][action_index]['value'] +=int(inputs['loan value']) def totals_4x(self, container, inputs): if inputs['action taken'] < 6 and inputs['action taken'] != ' ': container['total'][0]['count'] += 1 container['total'][0]['value'] += int(inputs['loan value']) container['total'][inputs['action taken']]['count'] +=1 container['total'][inputs['action taken']]['value'] += int(inputs['loan value']) def by_5x_totals(self, container, inputs): if inputs['action taken'] > 5: pass else: container['total'][0]['count'] +=1 container['total'][0]['value'] += int(inputs['loan value']) container['total'][inputs['action taken']]['count'] +=1 container['total'][inputs['action taken']]['value'] += int(inputs['loan value']) def by_5x_demographics(self, container, inputs, index_num, index_name, index_code): #index_num: the index of the primary list in the dictionary #index_name: the key corresponding to the index number #index_code: the code from the inputs dictionary for the row being aggregated if inputs['income bracket'] < 5 and inputs['action taken'] < 6: container['applicantincomes'][inputs['income bracket']]['borrowercharacteristics'][index_num][index_name][index_code]['dispositions'][0]['count'] += 1 #increment count of applications received by minority status container['applicantincomes'][inputs['income bracket']]['borrowercharacteristics'][index_num][index_name][index_code]['dispositions'][0]['value'] += int(inputs['loan value']) container['applicantincomes'][inputs['income 
bracket']]['borrowercharacteristics'][index_num][index_name][index_code]['dispositions'][inputs['action taken']]['count'] += 1 #increment count by action taken and minority status container['applicantincomes'][inputs['income bracket']]['borrowercharacteristics'][index_num][index_name][index_code]['dispositions'][inputs['action taken']]['value'] += int(inputs['loan value']) def build_report5x(self, table5x, inputs): self.by_5x_demographics(table5x, inputs, 0, 'races', inputs['race']) self.by_5x_demographics(table5x, inputs, 1, 'ethnicities', inputs['ethnicity']) if inputs['minority status'] < 2: self.by_5x_demographics(table5x, inputs, 2, 'minoritystatus', inputs['minority status']) self.by_5x_totals(table5x, inputs) def by_tract_characteristics(self, container, inputs, json_index, key, key_index, action_index): if action_index < 6 and key_index <4: container['censuscharacteristics'][json_index][key][key_index]['dispositions'][0]['count'] +=1 container['censuscharacteristics'][json_index][key][key_index]['dispositions'][0]['value'] +=int(inputs['loan value']) container['censuscharacteristics'][json_index][key][key_index]['dispositions'][action_index]['count'] +=1 container['censuscharacteristics'][json_index][key][key_index]['dispositions'][action_index]['value'] +=int(inputs['loan value']) def by_income_ethnic_combo(self, container, inputs): if inputs['action taken'] > 5 or inputs['tract income index'] > 3: pass else: container['incomeRaces'][0]['incomes'][inputs['tract income index']]['compositions'][inputs['minority percent index']]['dispositions'][0]['count'] +=1 container['incomeRaces'][0]['incomes'][inputs['tract income index']]['compositions'][inputs['minority percent index']]['dispositions'][0]['value'] += int(inputs['loan value']) container['incomeRaces'][0]['incomes'][inputs['tract income index']]['compositions'][inputs['minority percent index']]['dispositions'][inputs['action taken']]['count'] +=1 container['incomeRaces'][0]['incomes'][inputs['tract income index']]['compositions'][inputs['minority percent index']]['dispositions'][inputs['action taken']]['value'] += int(inputs['loan value']) def get_small_county_flag(self, cur, MSA): #msa can be either an MSA or the last 5 of a geoid? 
SQL = '''SELECT small_county FROM tract_to_cbsa_2010 WHERE geoid_msa = %s;''' cur.execute(SQL, MSA) small_county_flag = cur.fetchall()[0] def by_geo_type(self, container, inputs, index_num, action_index): container['types'][index_num]['dispositions'][action_index]['count'] +=1 container['types'][index_num]['dispositions'][action_index]['value'] +=int(inputs['loan value']) def totals_7x(self, container, inputs): if inputs['action taken'] > 5: pass else: container['total'][0]['count'] += 1 container['total'][0]['value'] += int(inputs['loan value']) container['total'][inputs['action taken']]['count'] += 1 container['total'][inputs['action taken']]['value'] += int(inputs['loan value']) def by_denial_percent(self, container, inputs, index_num, key): for j in range(0, len(container['applicantcharacteristics'][index_num][key])): for i in range(0, len(container['applicantcharacteristics'][index_num][key][j]['denialreasons'])): if float(container['applicantcharacteristics'][index_num][key][j]['denialreasons'][9]['count']) >0: container['applicantcharacteristics'][index_num][key][j]['denialreasons'][i]['value'] = int(round((container['applicantcharacteristics'][index_num][key][j]['denialreasons'][i]['count'] / float(container['applicantcharacteristics'][index_num][key][j]['denialreasons'][9]['count'])) *100,0)) def by_denial_reason(self, container, inputs, index_num, key, key_singular): for reason in inputs['denial_list']: if reason is None: pass else: container['applicantcharacteristics'][index_num][key][inputs[key_singular]]['denialreasons'][9]['count'] +=1 #add to totals container['applicantcharacteristics'][index_num][key][inputs[key_singular]]['denialreasons'][reason]['count'] +=1 #adds to race/reason cell def build_report8x(self, table8x, inputs): self.by_denial_reason(table8x, inputs, 0, 'races', 'race') self.by_denial_reason(table8x, inputs, 1, 'ethnicities', 'ethnicity') if inputs['minority status'] <2: #pass on loans with no minority status information self.by_denial_reason(table8x, inputs, 2, 'minoritystatuses', 'minority status') self.by_denial_reason(table8x, inputs, 3, 'genders', 'gender') if inputs['income bracket'] <6: self.by_denial_reason(table8x, inputs, 4, 'incomes', 'income bracket') def build_report7x(self, table7x, inputs): self.by_tract_characteristics(table7x, inputs, 0, 'compositions', inputs['minority percent index'], inputs['action taken']) self.by_tract_characteristics(table7x, inputs, 1, 'incomes', inputs['tract income index'], inputs['action taken']) self.by_income_ethnic_combo(table7x, inputs) if inputs['small county flag'] == '1': self.by_geo_type(table7x, inputs, 0, 0) self.by_geo_type(table7x, inputs, 0, inputs['action taken']) if inputs['tract to MSA income'] == 4 and inputs['action taken'] < 6: self.by_geo_type(table7x, inputs, 1, 0) self.by_geo_type(table7x, inputs, 1, inputs['action taken']) self.totals_7x(table7x, inputs) def build_report4x(self, table4x, inputs): #call functions to fill JSON object for table 4-1 (FHA, FSA, RHS, and VA home purchase loans) self.by_4x_demographics(table4x, inputs, 'races', inputs['race']) self.by_4x_demographics(table4x, inputs, 'ethnicities', inputs['ethnicity']) self.by_4x_demographics(table4x, inputs, 'minoritystatuses', inputs['minority status']) self.by_applicant_income_4x(table4x, inputs) #aggregate loans by applicant income to MSA income ratio self.totals_4x(table4x, inputs) #totals of applications by application disposition def build_report_31(self, table31, inputs): #calls aggregation functions to fill JSON object for 
table 3-1 self.by_characteristics(table31, inputs, 'borrowercharacteristics', 0, 'races', inputs['race'], 'purchasers', inputs['purchaser'])#aggregate loan by race self.by_characteristics(table31, inputs, 'borrowercharacteristics', 1, 'ethnicities', inputs['ethnicity'], 'purchasers', inputs['purchaser'])#aggregate loan by ethnicity if inputs['minority status'] < 2: self.by_characteristics(table31, inputs, 'borrowercharacteristics', 2, 'minoritystatuses', inputs['minority status'], 'purchasers', inputs['purchaser'])#aggregate loan by minority status (binary determined by race and ethnicity) if inputs['income bracket'] < 6: #income index outside bounds of report 3-1 self.by_characteristics(table31, inputs, 'borrowercharacteristics', 3, 'applicantincomes', inputs['income bracket'], 'purchasers', inputs['purchaser'])#aggregates by ratio of appicant income to tract median income (census) if inputs['minority percent index'] < 5: #minority percent not available self.by_characteristics(table31, inputs, 'censuscharacteristics', 0, 'tractpctminorities', inputs['minority percent index'], 'purchasers', inputs['purchaser'])#aggregates loans by percent of minority residents (census) if inputs['tract income index'] < 4: #income ratio not available or outside report 3-1 bounds self.by_characteristics(table31, inputs, 'censuscharacteristics', 1, 'incomelevels', inputs['tract income index'], 'purchasers', inputs['purchaser']) #aggregates loans by census tract income rating - low/moderate/middle/upper self.totals(table31, inputs) #aggregate totals for each purchaser return table31 def build_report_32(self, table32, inputs): #calls aggregation functions to fill JSON object for table 3-2 self.by_pricing_status(table32, inputs) #aggregate count by lien status self.by_rate_spread(table32, inputs) #aggregate loans by percentage points above APOR as ##.##% self.by_hoepa_status(table32, inputs) #aggregates loans by presence of HOEPA flag self.fill_rate_lists(inputs) self.fill_weight_lists(inputs) #fills the median rate spread for each purchaser #mean and median functions are not called here #mean and median function must be called outside the control loop return table32 def fill_11_12_weights(self, inputs): if inputs['rate spread'] != 'NA ' and inputs['rate spread'] != ' ': self.race_weight_list[inputs['race']].append(Decimal(inputs['loan value'])) self.ethnicity_weight_list[inputs['ethnicity']].append(Decimal(inputs['loan value'])) if inputs['minority status'] < 2: self.minority_weight_list[inputs['minority status']].append(Decimal(inputs['loan value'])) if inputs['income bracket'] < 6: self.income_weight_list[inputs['income bracket']].append(Decimal(inputs['loan value'])) self.gender_weight_list[inputs['gender']].append(Decimal(inputs['loan value'])) self.composition_weight_list[inputs['minority percent index']].append(Decimal(inputs['loan value'])) if inputs['tract income index'] < 4: self.tract_income_weight_list[inputs['tract income index']].append(Decimal(inputs['loan value'])) def fill_11_12_rates(self, inputs): #race section if inputs['rate spread'] != 'NA ' and inputs['rate spread'] != ' ': self.race_rate_list[inputs['race']].append(Decimal(inputs['rate spread'])) self.ethnicity_rate_list[inputs['ethnicity']].append(Decimal(inputs['rate spread'])) if inputs['minority status'] < 2: self.minority_rate_list[inputs['minority status']].append(Decimal(inputs['rate spread'])) #print type(inputs['rate spread']) if inputs['income bracket'] < 6: self.income_rate_list[inputs['income
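For reference, a compact standalone version of the weighted-median logic that by_weighted_median and calc_weighted_median above implement: sort the rates, walk the cumulative weight, and return the rate where it first reaches half of the total. This is a simplified sketch, not the report code itself.

def weighted_median(rates, weights):
    # Rate at which the cumulative weight first reaches half of the total weight.
    if not rates or not weights:
        return None
    pairs = sorted(zip(rates, weights))
    half = sum(weights) / 2.0
    running = 0.0
    for rate, weight in pairs:
        running += weight
        if running >= half:
            return rate

# weighted_median([1.5, 2.0, 9.0], [10, 10, 1]) -> 2.0; the lightly weighted outlier barely matters.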
) gr.gridWdg ( label = 'Z Spacing', dataWdg = self.fpZCurrWdg, units = 'steps', cfgWdg = self.fpZUserWdg, cat = self.EtalonCat, ) self.model.fpZ.addROWdg(self.fpZCurrWdg) self.model.fpZ.addROWdg(self.fpZUserWdg, setDefault=True) # Detector widgets # detector image header; the label is a toggle button # for showing detector image info # grid that first as it is always displayed self.showDetectWdg = RO.Wdg.Checkbutton( master = self, text = "Detector", defValue = False, helpText = "Show window mode?", helpURL = self.HelpPrefix + "ShowDetector", ) gr.addShowHideControl(self.DetectCat, self.showDetectWdg) self._stateTracker.trackCheckbutton("showDetector", self.showDetectWdg) gr.gridWdg ( label = self.showDetectWdg, ) # grid detector labels; these show/hide along with all other detector data detectLabelDict = {} for setName in ("data", "cfg"): detectLabelDict[setName] = [ Tkinter.Label( master = self, text=axis, ) for axis in ("X", "Y") ] gr.gridWdg ( label = None, dataWdg = detectLabelDict["data"], cfgWdg = detectLabelDict["cfg"], sticky = "", cat = self.DetectCat, row = -1, ) # Detector window winDescr = ( "smallest x", "smallest y", "largest x", "largest y", ) self.detWindowCurrWdgSet = [ RO.Wdg.IntLabel( master = self, width = 4, helpText = "%s of current window (pix)" % winDescr[ii], helpURL = self.HelpPrefix + "Window", ) for ii in range(4) ] self.detWindowUserWdgSet = [ RO.Wdg.IntEntry( master = self, minValue = 1, maxValue = self.model.detSizeConst[(0, 1, 0, 1)[ii]], width = 4, helpText = "%s of requested window (pix)" % winDescr[ii], helpURL = self.HelpPrefix + "Window", clearMenu = None, defMenu = "Current", minMenu = ("Mininum", "Minimum", None, None)[ii], maxMenu = (None, None, "Maximum", "Maximum")[ii], callFunc = self._newUserWindow, autoIsCurrent = True, isCurrent = False, ) for ii in range(4) ] gr.gridWdg ( label = "Window", dataWdg = self.detWindowCurrWdgSet[0:2], cfgWdg = self.detWindowUserWdgSet[0:2], units = "LL pix", cat = self.DetectCat, ) gr.gridWdg ( label = None, dataWdg = self.detWindowCurrWdgSet[2:4], cfgWdg = self.detWindowUserWdgSet[2:4], units = "UR pix", cat = self.DetectCat, ) # Image size, in pixels self.imageSizeCurrWdgSet = [RO.Wdg.IntLabel( master = self, width = 4, helpText = "current %s size of image (pix)" % winDescr[ii], helpURL = self.HelpPrefix + "ImageSize", ) for ii in range(2) ] self.imageSizeUserWdgSet = [ RO.Wdg.IntLabel( master = self, width = 4, helpText = "requested %s size of image (pix)" % ("X", "Y")[ii], helpURL = self.HelpPrefix + "ImageSize", ) for ii in range(2) ] gr.gridWdg ( label = "Image Size", dataWdg = self.imageSizeCurrWdgSet, cfgWdg = self.imageSizeUserWdgSet, units = "pix", cat = self.DetectCat, ) self.fowlerSamplesCurrWdg = RO.Wdg.IntLabel( master = self, helpText = "current number of samples", helpURL = self.HelpPrefix + "FowlerSamples", ) self.fowlerSamplesUserWdg = RO.Wdg.IntEntry( master = self, helpText = "requested number of Fowler samples", helpURL = self.HelpPrefix + "FowlerSamples", defMenu = "Current", minMenu = "Minimum", maxMenu = "Maximum", autoIsCurrent = True, width = 2, isCurrent = False, ) gr.gridWdg ( label = 'Fowler Samples', dataWdg = self.fowlerSamplesCurrWdg, units = None, cfgWdg = self.fowlerSamplesUserWdg, colSpan = 2, cat = self.DetectCat, ) self.model.fowlerSamples.addIndexedCallback(self._updFowlerSamples) self.model.fowlerSamplesMax.addIndexedCallback(self._updFowlerSamplesMax) # Temperature warning and individual temperatures self.environShowHideWdg = RO.Wdg.Checkbutton( master = self, text = 
"Environment", helpText = "Show pressure and temps?", helpURL = self.HelpPrefix + "Environment", ) self._stateTracker.trackCheckbutton("showEnvironment", self.environShowHideWdg) self.environStatusWdg = RO.Wdg.StrLabel( master = self, anchor = "w", helpText = "Are pressure and temps OK?", helpURL = self.HelpPrefix + "Environment", ) gr.gridWdg ( label = self.environShowHideWdg, dataWdg = self.environStatusWdg, colSpan = 2, ) # hidable frame showing current pressure and temperatures self.envFrameWdg = Tkinter.Frame(master=self, borderwidth=1, relief="solid") # create header headStrSet = ( "Sensor", "Curr", "Min", "Max", ) for ind in range(len(headStrSet)): headLabel = RO.Wdg.Label( master = self.envFrameWdg, text = headStrSet[ind], anchor = "e", helpURL = self.HelpPrefix + "Environment", ) headLabel.grid(row=0, column=ind, sticky="e") # create pressure widgets pressHelpStrs = ( "pressure", "current pressure", None, "maximum safe pressure", ) rowInd = 1 colInd = 0 wdg = RO.Wdg.StrLabel( master = self.envFrameWdg, text = "Pressure", anchor = "e", helpText = pressHelpStrs[0], helpURL = self.HelpPrefix + "Environment", ) wdg.grid(row = rowInd, column = colInd, sticky="e") newWdgSet = [wdg] for colInd in range(1, 4): wdg = RO.Wdg.Label( master = self.envFrameWdg, formatFunc = fmtExp, width = _EnvWidth, anchor = "e", helpText = pressHelpStrs[colInd], helpURL = self.HelpPrefix + "Environment", ) wdg.grid(row = rowInd, column = colInd, sticky="ew") newWdgSet.append(wdg) colInd += 1 wdg = RO.Wdg.StrLabel( master = self.envFrameWdg, text = "torr", anchor = "w", ) wdg.grid(row = rowInd, column = colInd, sticky="w") newWdgSet.append(wdg) self.pressWdgSet = newWdgSet # temperatures self.tempHelpStrSet = ( "temperature sensor", "current temperature", "minimum safe temperature", "maximum safe temperature", ) # create blank widgets to display temperatures # this set is indexed by row (sensor) # and then by column (name, current temp, min temp, max temp) self.tempWdgSet = [] nextCol = gr.getNextCol() gr.gridWdg ( label = False, dataWdg = self.envFrameWdg, cfgWdg = False, colSpan = nextCol + 1, sticky = "w", numStatusCols = None, cat = self.EnvironCat, ) self.columnconfigure(nextCol, weight=1) # add callbacks that deal with multiple widgets self.model.filterNames.addCallback(self._updFilterNames) self.environShowHideWdg.addCallback(self._doShowHide, callNow = False) self.fpOPathUserWdg.addCallback(self._doShowHide, callNow = False) self.slitOPathUserWdg.addCallback(self._doShowHide, callNow = False) self.model.press.addCallback(self._updEnviron, callNow = False) self.model.pressMax.addCallback(self._updEnviron, callNow = False) self.model.temp.addCallback(self._updEnviron, callNow = False) self.model.detWindow.addCallback(self._newCurrWindow) self._updEnviron() self._doShowHide() eqFmtFunc = RO.InputCont.BasicFmt( nameSep="=", ) # set up the input container set self.inputCont = RO.InputCont.ContList ( conts = [ RO.InputCont.WdgCont ( name = 'filters set', wdgs = self.filterUserWdg, formatFunc = eqFmtFunc, ), RO.InputCont.WdgCont ( name = 'slit opath', wdgs = self.slitOPathUserWdg, formatFunc = eqFmtFunc, ), RO.InputCont.WdgCont ( name = 'slit focus', wdgs = self.slitFocusUserWdg, formatFunc = eqFmtFunc, ), RO.InputCont.WdgCont ( name = 'fp opath', wdgs = self.fpOPathUserWdg, formatFunc = eqFmtFunc, ), RO.InputCont.WdgCont ( name = 'fp setz', wdgs = self.fpZUserWdg, formatFunc = eqFmtFunc, ), RO.InputCont.WdgCont ( name = 'fp setx', wdgs = self.fpXUserWdg, formatFunc = eqFmtFunc, ), RO.InputCont.WdgCont ( 
name = 'fp sety', wdgs = self.fpYUserWdg, formatFunc = eqFmtFunc, ), RO.InputCont.WdgCont ( name = 'fowler nfs', wdgs = self.fowlerSamplesUserWdg, formatFunc = eqFmtFunc, ), RO.InputCont.WdgCont ( name = 'window', wdgs = self.detWindowUserWdgSet, formatFunc = RO.InputCont.BasicFmt( rejectBlanks = True, ), ), ], ) self.configWdg = RO.Wdg.InputContPresetsWdg( master = self, sysName = "%sConfig" % (self.InstName,), userPresetsDict = self.tuiModel.userPresetsDict, inputCont = self.inputCont, helpText = "use and manage named presets", helpURL = self.HelpPrefix + "Presets", ) self.gridder.gridWdg( "Presets", cfgWdg = self.configWdg, ) self.gridder.allGridded() def repaint(evt): self.restoreDefault() self.bind('<Map>', repaint) def _addTempWdgRow(self): """Add a row of temperature widgets""" rowInd = len(self.tempWdgSet) + 2 colInd = 0 wdg = RO.Wdg.StrLabel( master = self.envFrameWdg, anchor = "e", helpText = self.tempHelpStrSet[colInd], helpURL = self.HelpPrefix + "Environment", ) wdg.grid(row = rowInd, column = colInd, sticky="e") newWdgSet = [wdg] for colInd in range(1, 4): wdg = RO.Wdg.FloatLabel( master = self.envFrameWdg, precision = 1, anchor = "e", helpText = self.tempHelpStrSet[colInd], helpURL = self.HelpPrefix + "Environment", ) wdg.grid(row = rowInd, column = colInd, sticky="ew") newWdgSet.append(wdg) colInd += 1 wdg = RO.Wdg.StrLabel( master = self.envFrameWdg, text = "K", anchor = "w", ) wdg.grid(row = rowInd, column = colInd, sticky="w") newWdgSet.append(wdg) self.tempWdgSet.append(newWdgSet) def _doShowHide(self, wdg=None): showEtalon = self.fpOPathUserWdg.getBool() showTemps = self.environShowHideWdg.getBool() showSlitFocus = self.slitOPathUserWdg.getBool() argDict = {self.EtalonCat: showEtalon, self.EnvironCat: showTemps, self.SlitCat: showSlitFocus} self.gridder.showHideWdg (**argDict) def _showFilterTimer(self, doShow): """Show or hide the filter timer (and thus hide or show the current filter name). """ if doShow: self.filterTimerWdg.grid() self.filterCurrWdg.grid_remove() else: self.filterCurrWdg.grid() self.filterTimerWdg.grid_remove() def _showFPTimer(self, doShow): """Show or hide the etalon in/out timer (and thus hide or show the current in/out state). """ if doShow: self.fpTimerWdg.grid() self.fpOPathCurrWdg.grid_remove() else: self.fpOPathCurrWdg.grid() self.fpTimerWdg.grid_remove() def _showSlitTimer(self, doShow): """Show or hide the slit in/out timer """ #print "_showSlitTimer(%s)" % (doShow,) if doShow: self.slitTimerWdg.grid() self.slitOPathCurrWdg.grid_remove() else: self.slitOPathCurrWdg.grid() self.slitTimerWdg.grid_remove() def _updEnviron(self, *args, **kargs): # handle pressure isCurrent = True press, pressCurr = self.model.press.getInd(0) pressMax, pressMaxCurr = self.model.pressMax.getInd(0) isCurrent = isCurrent and pressCurr and pressMaxCurr pressSev = RO.Constants.sevNormal pressOK = True if press is not None and pressMax is not None and press > pressMax: pressSev = RO.Constants.sevError pressOK = False self.pressWdgSet[0].setSeverity(pressSev) self.pressWdgSet[1].set(press, isCurrent = pressCurr, severity = pressSev) self.pressWdgSet[3].set(pressMax, isCurrent = pressMaxCurr, severity = pressSev) # handle temperatures tempNames, namesCurr = self.model.tempNames.get() temps, tempsCurr = self.model.temp.get() tempMin, minCurr = self.model.tempMin.get() tempMax, maxCurr = self.model.tempMax.get() isCurrent = isCurrent and namesCurr and tempsCurr and minCurr and maxCurr if not (len(temps) == len(tempNames) ==
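The _showFilterTimer/_showFPTimer/_showSlitTimer helpers above rely on the Tk grid manager remembering a widget's options after grid_remove(), so a later grid() call with no arguments restores it in place. A minimal plain-Tkinter sketch of that toggle pattern (widget names are illustrative):

import Tkinter  # 'tkinter' on Python 3

root = Tkinter.Tk()
currWdg = Tkinter.Label(root, text="current value")
timerWdg = Tkinter.Label(root, text="12 s remaining")
currWdg.grid(row=0, column=0)
timerWdg.grid(row=0, column=0)
timerWdg.grid_remove()  # hidden, but its grid options are remembered

def showTimer(doShow):
    # grid() restores the remembered options; grid_remove() hides without forgetting them.
    if doShow:
        timerWdg.grid()
        currWdg.grid_remove()
    else:
        currWdg.grid()
        timerWdg.grid_remove()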
#!/usr/bin/env python # -*- coding: utf-8 -*- """diskover - Elasticsearch file system crawler diskover is a file system crawler that index's your file metadata into Elasticsearch. See README.md or https://github.com/shirosaidev/diskover for more information. Copyright (C) <NAME> 2017-2019 diskover is released under the Apache 2.0 license. See LICENSE for the full license text. """ from diskover import q_crawl, adaptive_batch, config, get_time from diskover_bot_module import scrape_tree_meta import socket import subprocess try: import queue as Queue except ImportError: import Queue import threading import uuid import json import time import sys import pickle import struct # dict to hold socket tasks #socket_tasks = {} # list of socket client #clientlist = [] def socket_thread_handler(threadnum, q, cliargs, logger): """This is the socket thread handler function. It runs the command msg sent from client. """ BUFF = 1024 while True: try: c = q.get() clientsock, addr = c logger.debug(clientsock) logger.debug(addr) data = clientsock.recv(BUFF) data = data.decode('utf-8') logger.debug('received data: %s' % data) if not data: q.task_done() # close connection to client clientsock.close() logger.info("[thread-%s]: %s closed connection" % (threadnum, str(addr))) continue # check if ping msg if data == 'ping': logger.info("[thread-%s]: Got ping from %s" % (threadnum, str(addr))) # send pong reply message = b'pong' clientsock.send(message) logger.debug('sending data: %s' % message) else: # strip away any headers sent by curl data = data.split('\r\n')[-1] logger.info("[thread-%s]: Got command from %s" % (threadnum, str(addr))) # load json and store in dict command_dict = json.loads(data) logger.debug(command_dict) # run command from json data run_command(threadnum, command_dict, clientsock, cliargs, logger) q.task_done() # close connection to client clientsock.close() logger.info("[thread-%s]: %s closed connection" % (threadnum, str(addr))) except (ValueError, TypeError) as e: q.task_done() logger.error("[thread-%s]: Invalid JSON from %s: (%s)" % (threadnum, str(addr), e)) message = b'{"msg": "error", "error": "Invalid JSON caused by %s"}\n' % str(e).encode('utf-8') clientsock.send(message) logger.debug(message) # close connection to client clientsock.close() logger.info("[thread-%s]: %s closed connection" % (threadnum, str(addr))) pass except socket.error as e: q.task_done() logger.error("[thread-%s]: Socket error (%s)" % (threadnum, e)) # close connection to client clientsock.close() logger.info("[thread-%s]: %s closed connection" % (threadnum, str(addr))) pass def recvall(sock, count): buf = b'' while count: newbuf = sock.recv(count) if not newbuf: return None buf += newbuf count -= len(newbuf) return buf def recv_one_message(sock): lengthbuf = recvall(sock, 4) if not lengthbuf: return None length, = struct.unpack('!I', lengthbuf) return recvall(sock, length) def socket_thread_handler_twc(threadnum, q, q_kill, lock, rootdir, num_sep, level, batchsize, cliargs, logger, reindex_dict): """This is the socket thread handler tree walk client function. Stream of directory listings (pickle) from diskover treewalk client connections are enqueued to redis rq queue. 
""" while True: try: c = q.get() clientsock, addr = c logger.debug(clientsock) logger.debug(addr) totalfiles = 0 while True: data = recv_one_message(clientsock) if not data: break if data == b'SIGKILL' or data == 'SIGKILL': q_kill.put(b'SIGKILL') break # unpickle data sent from client data_decoded = pickle.loads(data) logger.debug(data_decoded) # enqueue to redis batch = [] for root, dirs, files in data_decoded: files_len = len(files) totalfiles += files_len # check for empty dirs if len(dirs) == 0 and len(files) == 0 and not cliargs['indexemptydirs']: continue batch.append((root, dirs, files)) batch_len = len(batch) if batch_len >= batchsize or (cliargs['adaptivebatch'] and totalfiles >= config['adaptivebatch_maxfiles']): q_crawl.enqueue(scrape_tree_meta, args=(batch, cliargs, reindex_dict,), result_ttl=config['redis_ttl']) if cliargs['debug'] or cliargs['verbose']: logger.info("enqueued batchsize: %s (batchsize: %s)" % (batch_len, batchsize)) del batch[:] totalfiles = 0 if cliargs['adaptivebatch']: batchsize = adaptive_batch(q_crawl, cliargs, batchsize) if cliargs['debug'] or cliargs['verbose']: logger.info("batchsize set to: %s" % batchsize) if len(batch) > 0: # add any remaining in batch to queue q_crawl.enqueue(scrape_tree_meta, args=(batch, cliargs, reindex_dict,), result_ttl=config['redis_ttl']) del batch[:] # close connection to client clientsock.close() logger.info("[thread-%s]: %s closed connection" % (threadnum, str(addr))) q.task_done() except socket.error as e: logger.error("[thread-%s]: Socket error (%s)" % (threadnum, e)) def start_socket_server(cliargs, logger): """This is the start socket server function. It opens a socket and waits for remote commands. """ #global clientlist # set thread/connection limit max_connections = config['listener_maxconnections'] # Queue for socket threads q = Queue.Queue(maxsize=max_connections) try: # create TCP socket object serversock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) serversock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) host = config['listener_host'] # default is localhost port = config['listener_port'] # default is 9999 # bind to port serversock.bind((host, port)) # start listener serversock.listen(max_connections) # set up the threads and start them for i in range(max_connections): # create thread t = threading.Thread(target=socket_thread_handler, args=(i, q, cliargs, logger,)) t.daemon = True t.start() while True: logger.info("Waiting for connection, listening on %s port %s TCP (ctrl-c to shutdown)" % (str(host), str(port))) # establish connection clientsock, addr = serversock.accept() logger.debug(clientsock) logger.debug(addr) logger.info("Got a connection from %s" % str(addr)) # add client to list client = (clientsock, addr) #clientlist.append(client) # add task to Queue q.put(client) except socket.error as e: serversock.close() logger.error("Error opening socket (%s)" % e) sys.exit(1) except KeyboardInterrupt: print('\nCtrl-c keyboard interrupt received, shutting down...') q.join() serversock.close() sys.exit(0) def start_socket_server_twc(rootdir_path, num_sep, level, batchsize, cliargs, logger, reindex_dict): """This is the start socket server tree walk function. It opens a socket and waits for diskover tree walk client connections. 
""" #global clientlist # set thread/connection limit max_connections = config['listener_maxconnections'] # Queue for socket threads q = Queue.Queue(maxsize=max_connections) q_kill = Queue.Queue() lock = threading.Lock() try: # create TCP socket object serversock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) serversock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) host = config['listener_host'] # default is localhost if cliargs['twcport']: port = cliargs['twcport'] else: port = config['listener_twcport'] # default is 9998 # bind to port serversock.bind((host, port)) # start listener serversock.listen(max_connections) # set up the threads and start them for i in range(max_connections): t = threading.Thread( target=socket_thread_handler_twc, args=(i, q, q_kill, lock, rootdir_path, num_sep, level, batchsize, cliargs, logger, reindex_dict,)) t.daemon = True t.start() starttime = time.time() while True: if q_kill.qsize() > 0: logger.info("Received signal to shutdown socket server") q.join() serversock.close() return starttime logger.info("Waiting for connection, listening on %s port %s TCP (ctrl-c to shutdown)" % (str(host), str(port))) # establish connection clientsock, addr = serversock.accept() logger.debug(clientsock) logger.debug(addr) logger.info("Got a connection from %s" % str(addr)) # add client to list client = (clientsock, addr) #clientlist.append(client) # set start time to first connection #if len(clientlist) == 1: # starttime = time.time() # put client into Queue q.put(client) except socket.error as e: serversock.close() logger.error("Error opening socket (%s)" % e) sys.exit(1) except KeyboardInterrupt: print('\nCtrl-c keyboard interrupt received, shutting down...') serversock.close() sys.exit(0) def run_command(threadnum, command_dict, clientsock, cliargs, logger): """This is the run command function. It runs commands from the listener socket using values in command_dict. 
""" #global socket_tasks #global clientlist # try to get index name from command or use from diskover config file try: index = str(command_dict['index']) except KeyError: index = str(config['index']) pass # try to get min days mtime from command or use default try: mtime = str(command_dict['mtime']) except KeyError: mtime = str(cliargs['mtime']) pass # try to get min size from command or use default try: minsize = str(command_dict['minsize']) except KeyError: minsize = str(cliargs['minsize']) pass # try to get worker batch size from command or use default try: batchsize = str(command_dict['batchsize']) except KeyError: batchsize = str(cliargs['batchsize']) pass # try to get adaptive batch option from command or use default try: adaptivebatch = str(command_dict['adaptivebatch']) except KeyError: adaptivebatch = str(cliargs['adaptivebatch']) pass # try to get optimize index option from command or use default try: optimizeindex = str(command_dict['optimizeindex']) except KeyError: optimizeindex = str(cliargs['optimizeindex']) pass # try to get auto tag option from command or use default try: autotag = str(command_dict['autotag']) except KeyError: autotag = str(cliargs['autotag']) pass # try to get empty dirs option from command or use default try: indexemptydirs = str(command_dict['indexemptydirs']) except KeyError: indexemptydirs = str(cliargs['indexemptydirs']) pass try: action = command_dict['action'] pythonpath = config['python_path'] diskoverpath = config['diskover_path'] # set up command for different action if action == 'crawl': path = command_dict['path'] cmd = [pythonpath, diskoverpath, '-b', batchsize, '-i', index, '-d', path, '-m', mtime, '-s', minsize, '-q', '-F'] elif action == 'finddupes': cmd = [pythonpath, diskoverpath, '-b', batchsize, '-i', index, '--finddupes', '-q', '-F'] elif action == 'hotdirs': index2 = str(command_dict['index2']) cmd = [pythonpath, diskoverpath, '-b', batchsize, '-i', index, '--hotdirs', index2, '-q', '-F'] elif action == 'reindex': try: recursive = command_dict['recursive'] except KeyError: recursive = 'false' pass path = command_dict['path'] if recursive == 'true': cmd = [pythonpath, diskoverpath, '-b', batchsize, '-i', index, '-d', path, '-R', '-q', '-F'] else: cmd = [pythonpath, diskoverpath, '-b', batchsize,
<gh_stars>0 import json from datetime import datetime, timedelta from textwrap import dedent from typing import Type, Any, Union, cast, List import pytest from deepdiff import DeepDiff from networkx import DiGraph from core.cli.model import CLIContext from core.console_renderer import ConsoleRenderer, ConsoleColorSystem from core.model.model import ( StringKind, Kind, NumberKind, BooleanKind, DateKind, DateTimeKind, ArrayKind, Property, ComplexKind, Model, DictionaryKind, predefined_kinds, PropertyPath, TransformKind, DurationKind, SyntheticProperty, ) from core.model.typed_model import to_json, from_js from core.util import from_utc, utc, utc_str def test_json_marshalling() -> None: roundtrip(StringKind("string"), Kind) roundtrip(StringKind("string", 5, 13, "foo.*bla"), Kind) roundtrip(StringKind("string", enum={"foo", "bla"}), Kind) roundtrip(NumberKind("num", "int32", minimum=2, maximum=34), Kind) roundtrip(NumberKind("num", "int64", enum={1, 2}), Kind) roundtrip(BooleanKind("b"), Kind) roundtrip(DateKind("d"), Kind) roundtrip(DateTimeKind("d"), Kind) roundtrip(DurationKind("duration"), Kind) roundtrip(TransformKind("synth", "duration", "datetime", "duration_to_datetime", True), Kind) roundtrip(ArrayKind(StringKind("string")), Kind) roundtrip(Property("foo", "foo"), Property) roundtrip(Property("age", "trafo.duration_to_datetime", False, SyntheticProperty(["ctime"])), Property) roundtrip( ComplexKind( "Test", ["Base"], [ Property("array", "string[]"), Property("s", "float"), Property("i", "int32"), Property("other", "SomeComposite"), ], ), Kind, ) def test_string() -> None: a = StringKind("string", 5, 13, "foo.*bla") assert expect_error(a, "foo") == ">foo< does not conform to regex: foo.*bla" assert expect_error(a, "fooooo") == ">fooooo< does not conform to regex: foo.*bla" assert a.check_valid("fooooobla") is None assert expect_error(a, "fooooooooooobla") == ">fooooooooooobla< is too long! Allowed: 13" b = StringKind("string", enum={"foo", "bla", "bar"}) assert b.check_valid("foo") is None assert expect_error(b, "baz").startswith(">baz< should be one of") def test_number() -> None: a = NumberKind("cores", "int32", 1, 8) assert a.check_valid(1) is None assert a.check_valid(8) is None assert expect_error(a, 0) == ">0< should be greater or equals than: 1" assert expect_error(a, 9) == ">9< should be smaller or equals than: 8" b = NumberKind("bin", "int32", enum={1, 2, 4}) assert b.check_valid(1) is None assert expect_error(b, 3) == ">3< should be one of: {1, 2, 4}" def test_boolean() -> None: a = BooleanKind("question") assert a.check_valid(True) is None assert a.check_valid(False) is None assert expect_error(a, "test").startswith("Expected type boolean but got") def test_duration() -> None: a = DurationKind("dt") assert a.check_valid("3d5h6min3s") is None assert expect_error(a, True) == "Expected type duration but got bool" assert ( expect_error(a, "23df") == "Wrong format for duration: 23df. Examples: 1yr, 3mo, 3d4h3min1s, 3days and 2hours" ) assert a.coerce("12d") == "1036800s" with pytest.raises(AttributeError) as no_date: a.coerce("simply no duration") assert str(no_date.value) == f"Expected duration but got: >simply no duration<" def test_transform() -> None: age = TransformKind("dt", "duration", "datetime", "duration_to_datetime", True) age.resolve({"duration": DurationKind("duration"), "datetime": DateTimeKind("datetime")}) with pytest.raises(AttributeError): age.check_valid("3s") # check valid is not allowed on synthetic values (they do not get imported!) 
# age transforms a duration into a timestamp before now one_day_old = from_utc(age.coerce("1d")) # difference between 1d and computed utc-24h should be less than 2 seconds (depending on test env less) assert (one_day_old - (utc() - timedelta(hours=24))).total_seconds() <= 2 # transform back from underlying timestamp to timedelta assert age.transform(utc_str(utc() - timedelta(seconds=123))) == "2min3s" assert age.transform(utc_str(utc() - timedelta(seconds=123456))) == "1d10h" assert age.transform(utc_str(utc() - timedelta(seconds=1234567))) == "14d6h" assert age.transform(utc_str(utc() - timedelta(seconds=123456789))) == "3yr10mo" def test_datetime() -> None: a = DateTimeKind("dt") assert a.check_valid("2021-06-08T08:56:15Z") is None assert a.check_valid("2021-06-08T08:56:15+00:00") == "2021-06-08T08:56:15Z" assert expect_error(a, True) == "Expected type datetime but got bool" assert a.coerce("2021-06-08T08:56:15Z") == "2021-06-08T08:56:15Z" assert a.coerce("2021-06-08T08:56:15.0000+00:00") == "2021-06-08T08:56:15Z" assert a.coerce("2021-06-08T08:56:15.0000+02:00") == "2021-06-08T06:56:15Z" assert a.coerce("2021-06-08T08:56:15.0000-02:00") == "2021-06-08T10:56:15Z" assert a.coerce("2021-06-08T08:56:15.0000+0000") == "2021-06-08T08:56:15Z" assert a.coerce("2021-06-08 08:56:15").startswith("2021-06-08T") # type: ignore assert a.coerce("2021-06-08 08:56:15").endswith(":56:15Z") # type: ignore # ignore the hours, time zone dependant today = datetime.today().replace(hour=6, minute=56, second=15).strftime(DateTimeKind.Format) assert a.coerce("08:56:15").startswith(today[0:11]) # type: ignore assert a.coerce("08:56:15").endswith(":56:15Z") # type: ignore# ignore the hours, time zone dependant assert a.coerce("-12d").startswith("20") # type: ignore assert a.coerce("12mo").startswith("20") # type: ignore with pytest.raises(AttributeError) as no_date: a.coerce("simply no date") assert str(no_date.value) == f"Expected datetime but got: >simply no date<" def test_date() -> None: a = DateKind("d") assert a.check_valid("2021-06-08") is None assert expect_error(a, True) == "Expected type date but got bool" assert a.coerce("2021-06-08") == "2021-06-08" assert a.coerce("2021 06 08") == "2021-06-08" assert a.coerce("-12d").startswith("20") # type: ignore assert a.coerce("12mo").startswith("20") # type: ignore with pytest.raises(AttributeError) as no_date: a.coerce("simply no date") assert str(no_date.value) == f"Expected date but got: >simply no date<" def test_dictionary() -> None: model = {k.fqn: k for k in predefined_kinds} result = Property.parse_kind("dictionary[string, string]", model) assert isinstance(result, DictionaryKind) assert result.key_kind is model["string"] assert result.value_kind is model["string"] result = Property.parse_kind("dictionary[string, dictionary[string, float]]", model) assert isinstance(result, DictionaryKind) assert result.key_kind is model["string"] assert result.value_kind == DictionaryKind(model["string"], model["float"]) address = ComplexKind( "Foo", [], [Property("tags", "dictionary[string, string]"), Property("anything", "dictionary[string, any]")] ) address_model = Model.from_kinds([address]) assert address_model.check_valid({"kind": "Foo", "tags": {"a": "b", "b": "c"}}) is None expected = 'Kind:Foo Property:tags is not valid: value of dictionary[string, string] is not valid: Expected type string but got int: {"kind": "Foo", "tags": {"a": 1, "b": "c"}}' assert expect_error(address_model, {"kind": "Foo", "tags": {"a": 1, "b": "c"}}) == expected assert 
address_model.check_valid({"kind": "Foo", "anything": {"a": 1, "b": "c", "c": True}}) is None expected = 'Kind:Foo Property:anything is not valid: dictionary requires a json object, but got this: 1: {"kind": "Foo", "anything": 1}' assert expect_error(address_model, {"kind": "Foo", "anything": 1}) == expected def test_any() -> None: model = Model.from_kinds(predefined_kinds) assert model.check_valid({"kind": "any", "a": True, "b": 12, "c": [], "d": {"a": "b"}}) is None def test_array() -> None: foo = ComplexKind("Foo", [], [Property("tags", "dictionary[string, string]"), Property("kind", "string")]) complex_kind = ComplexKind( "TestArray", [], [ Property("kind", "string"), Property("los", "string[]"), Property("lod", "dictionary[string, string][]"), Property("foos", "Foo[]"), Property("los_los", "string[][]"), Property("los_los_los", "string[][][]"), ], ) model = Model.from_kinds([foo, complex_kind]) assert ( model.check_valid( { "kind": "TestArray", "los": ["a", "b", "c"], "lod": [{"a": "b"}, {"b": "c"}], "foos": [{"kind": "Foo", "tags": {"a": "b"}}, {"kind": "Foo", "tags": {"b": "c"}}], "los_los": [["a", "b"], ["c"], ["d", "e"]], "los_los_los": [[["a", "b"], ["c"]], [["d", "e"], ["f"]]], } ) is None ) def test_model_checking(person_model: Model) -> None: assert person_model.check_valid({"kind": "Base", "id": "32"}) is None assert person_model.check_valid({"kind": "Base", "id": "32", "list": ["one", "two"]}) is None expected = 'Kind:Base Property:list is not valid: Expected type string but got int: {"kind": "Base", "id": "32", "list": [1, 2]}' assert expect_error(person_model, {"kind": "Base", "id": "32", "list": [1, 2]}) == expected expected = 'Kind:Base Property:list is not valid: Expected property is not an array!: {"kind": "Base", "id": "32", "list": "not iterable"}' assert expect_error(person_model, {"kind": "Base", "id": "32", "list": "not iterable"}) == expected expected = 'Kind:Base Property:id is not valid: Expected type string but got int: {"kind": "Base", "id": 32}' assert expect_error(person_model, {"kind": "Base", "id": 32}) == expected expected = 'Kind:Base Property:id is required and missing in {"kind": "Base"}' assert expect_error(person_model, {"kind": "Base"}) == expected expected = "Kind:Base Property:unknown is not defined in model!" 
assert expect_error(person_model, {"kind": "Base", "id": "bla", "unknown": 1}) == expected expected = ( 'Kind:Address Property:id is required and missing in {"kind": "Address", "zip": "12345", "city": "gotham"}' ) assert expect_error(person_model, {"kind": "Address", "zip": "12345", "city": "gotham"}) == expected nested = { "id": "batman", "kind": "Person", "name": "batman", "address": {"kind": "Address", "id": "foo", "city": "gotham"}, } assert person_model.check_valid(nested) is None nested = {"id": "batman", "kind": "Person", "name": "batman", "address": {"kind": "Address", "city": "gotham"}} expected = 'Kind:Person Property:address is not valid: Kind:Address Property:id is required and missing in {"kind": "Address", "city": "gotham"}: {"id": "batman", "kind": "Person", "name": "batman", "address": {"kind": "Address", "city": "gotham"}}' assert expect_error(person_model, nested) == expected assert person_model.check_valid({"kind": "Base", "id": "32", "mtime": "2008-09-03T20:56:35+20:00"})["mtime"] == "2008-09-03T00:56:35Z" # type: ignore anything = {"kind": "any", "some": [1, 2, 3], "not": "defined", "props": True} assert person_model.check_valid(anything) is None any_foo = {"kind": "any_foo", "id": "foo", "foo": {"a": [1, 2, 3]}, "test": "hallo"} assert person_model.check_valid(any_foo) is None def test_property_path() -> None: p1 = PropertyPath(["a", None, "c", None]) p2 = PropertyPath(["a", "b", "c", "d"]) p3 = PropertyPath(["a", "b"]) p4 = p3.child("c").child("d") assert p1.same_as(p2) assert p2.same_as(p1) assert not p1.same_as(p3) assert p2.same_as(p4) def test_property_path_on_model(person_model: Model) -> None: # complex based property path person: ComplexKind = cast(ComplexKind, person_model["Person"]) person_path = {p.path: p for p in person.resolved_properties} assert len(person_path) == 11 assert person_path[PropertyPath(["name"])].kind == person_model["string"] assert person_path[PropertyPath(["name"])].prop.name == "name" assert person_path[PropertyPath(["list[]"])].kind == person_model["string"] assert person_path[PropertyPath(["list[]"])].prop.name == "list" assert person_path[PropertyPath(["tags", None])].kind == person_model["string"] assert person_path[PropertyPath(["address", "zip"])].kind == person_model["zip"] assert person_path[PropertyPath(["address", "zip"])].prop.name == "zip" with pytest.raises(KeyError): _ = person_path[PropertyPath(["anything"])] # model based property path assert person_model.kind_by_path("name") == person_model["string"] assert person_model.kind_by_path("list[]") == person_model["string"] assert person_model.kind_by_path("tags.foo") == person_model["string"] assert person_model.kind_by_path("tags.bla") == person_model["string"] assert person_model.kind_by_path("other_addresses.bla.zip") == person_model["zip"] assert person_model.kind_by_path("address.zip") == person_model["zip"] def test_update(person_model: Model) -> None: with pytest.raises(AttributeError) as not_allowed: # update city with different type person_model.update_kinds( [ ComplexKind( "Address", ["Base"], [ Property("city", "int32", required=True), ], ) ]
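# A minimal sketch of defining and validating a kind with the same core.model
# API exercised by the tests above (ComplexKind, Property, Model.from_kinds,
# check_valid). The "Server" kind and its properties are made up for
# illustration; only the valid-document case is asserted here.
from core.model.model import ComplexKind, Property, Model

server = ComplexKind(
    "Server",
    [],  # no base kinds
    [
        Property("id", "string", required=True),
        Property("cores", "int32"),
        Property("tags", "dictionary[string, string]"),
    ],
)
server_model = Model.from_kinds([server])

# a conforming document validates to None, as in test_model_checking above
assert server_model.check_valid(
    {"kind": "Server", "id": "i-1", "cores": 4, "tags": {"env": "dev"}}
) is None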
import sys from typing import List, Tuple import numpy as np import pandas as pd def get_valid_gene_info( genes: List[str], release=102, species='homo sapiens' ) -> Tuple[List[str], List[int], List[int], List[int]]: """Returns gene locations for all genes in ensembl release 93 --S Markson 3 June 2020 Parameters ---------- genes : A list of genes genes : List[str] : genes : List[str] : genes : List[str] : genes : List[str] : genes : List[str] : genes : List[str] : genes : List[str] : genes: List[str] : Returns ------- """ from pyensembl import EnsemblRelease assembly = EnsemblRelease(release, species=species) gene_names = [] gene_contigs = [] gene_starts = [] gene_ends = [] for gene in np.intersect1d(genes, [ gene.gene_name for gene in assembly.genes() if gene.contig.isnumeric() or gene.contig == 'X' ]): # Toss genes not in hg38 release 93 gene_info = assembly.genes_by_name(gene) gene_info = gene_info[0] gene_names.append(gene) gene_contigs.append(gene_info.contig) gene_starts.append(gene_info.start) gene_ends.append(gene_info.end) return gene_names, gene_contigs, gene_starts, gene_ends def seurat_to_loom(seuratrds, patient_id_column, celltype_column, complexity_column, loomfile): """ Parameters ---------- seuratrds : patient_id_column : celltype_column : complexity_column : loomfile : Returns ------- """ import rpy2.robjects as robjects from scipy import sparse from rpy2.robjects import pandas2ri import loompy robjects.r(''' library(Seurat) seurat2rawandmeta <- function(seuratrds) { seuratobj <- readRDS(seuratrds) return(list(genes=rownames(seuratobj@data), metadata=<EMAIL>, data=as.data.frame(summary(seuratobj@data)))) } ''') seurat_grab = robjects.r['seurat2rawandmeta'](seuratrds) genes = pd.DataFrame(np.array(seurat_grab.rx2('genes'))) genes.columns = ['gene'] metadata = pandas2ri.rpy2py_dataframe(seurat_grab.rx2('metadata')) if patient_id_column != 'patient_ID': metadata['patient_ID'] = metadata[patient_id_column] metadata.drop(patient_id_column, inplace=True) if celltype_column != 'cell_type': metadata['cell_type'] = metadata[celltype_column] metadata.drop(celltype_column, inplace=True) if complexity_column != 'complexity': metadata['complexity'] = metadata[complexity_column] metadata.drop(complexity_column, inplace=True) data_df = pandas2ri.rpy2py_dataframe(seurat_grab.rx2('data')) sparsedata = sparse.coo_matrix( (data_df['x'], (data_df['i'] - 1, data_df['j'] - 1))).tocsc() sparsedata.resize((genes.shape[0], metadata.shape[0])) loompy.create(loomfile, sparsedata, genes.to_dict("list"), metadata.to_dict("list")) def intify(df_init): """ Parameters ---------- df_init : Returns ------- """ import binascii df = df_init.copy() for col in df.columns: if col.endswith('_ad'): raise Exception( "Don't append you column names with _ad! 
-- Samuel") df[col] = df[col].apply( lambda x: int(binascii.hexlify(x.encode()), 16)) while np.sum(df.max() > sys.maxsize) > 0: for col in df.columns: if df[col].max() > sys.maxsize: df[col + '_ad'] = df[col] // sys.maxsize df[col] = df[col] % sys.maxsize return df.astype(np.int64) def deintify(df_init): """ Parameters ---------- df_init : Returns ------- """ import binascii df = df_init.copy() while np.sum([x.endswith('_ad') for x in df.columns]) > 0: for col in df.columns: if col.endswith('_ad') and col + '_ad' not in df.columns: df[col[0:-3]] = df[col[0:-3]].astype(object) df[col] = df[col].astype(object) df[col[0:-3]] = df[col[0:-3]] + sys.maxsize * df[col] df.drop(col, axis=1, inplace=True) for col in df.columns: try: df[col] = df[col].apply( lambda x: binascii.unhexlify(hex(x)[2::].encode()).decode()) except: print(df[col].apply( lambda x: binascii.unhexlify(hex(x)[2::].encode()).decode())) raise Exception("whoops") return df def recover_meta(db, do_deint=False): """ Parameters ---------- db : do_deint : (Default value = False) Returns ------- """ colmeta = None for key in db.ca.keys(): if colmeta is None: colmeta = pd.DataFrame(db.ca[key]) colmeta.columns = [key] else: colmeta[key] = db.ca[key] if do_deint: colmeta = deintify(colmeta.astype(np.int64)) rowmeta = None for key in db.ra.keys(): if rowmeta is None: rowmeta = pd.DataFrame(db.ra[key]) rowmeta.columns = [key] else: rowmeta[key] = db.ra[key] if do_deint: rowmeta = deintify(rowmeta.astype(np.int64)) return rowmeta, colmeta def we_can_pickle_it(thing, thingname: str): """ Parameters ---------- thing : thingname : str : thingname : str : thingname : str : thingname : str : thingname: str : Returns ------- """ import pickle with open(thingname, 'wb') as f: pickle.dump(thing, f, pickle.HIGHEST_PROTOCOL) def we_can_unpickle_it(thingname: str): """ Parameters ---------- thingname : str : thingname : str : thingname : str : thingname : str : thingname: str : Returns ------- """ import pickle with open(thingname, 'rb') as f: thing = pickle.load(f) return thing def get_alpha_concave_hull_polygon(xcoords, ycoords, alpha=0.1, buffer=1): """Much credit to https://thehumangeo.wordpress.com/2014/05/12/drawing-boundaries-in-python/ Parameters ---------- xcoords : ycoords : alpha : (Default value = 0.1) buffer : (Default value = 1) Returns ------- """ from shapely.ops import cascaded_union, polygonize import shapely.geometry as geometry from scipy.spatial import Delaunay import numpy as np import math def alpha_shape(points, alpha): """Compute the alpha shape (concave hull) of a set of points. Parameters ---------- points : Iterable container of points. alpha : alpha value to influence the gooeyness of the border. Smaller numbers don't fall inward as much as larger numbers. Too large, and you lose everything! Returns ------- """ if len(points) < 4: # When you have a triangle, there is no sense # in computing an alpha shape. 
return geometry.MultiPoint(list(points)).convex_hull def add_edge(edges, edge_points, coords, i, j): """Add a line between the i-th and j-th points, if not in the list already Parameters ---------- edges : edge_points : coords : i : j : Returns ------- """ if (i, j) in edges or (j, i) in edges: # already added return edges.add((i, j)) edge_points.append(coords[[i, j]]) coords = np.array([point.coords[0] for point in points]) tri = Delaunay(coords) edges = set() edge_points = [] # loop over triangles: # ia, ib, ic = indices of corner points of the # triangle for ia, ib, ic in tri.vertices: pa = coords[ia] pb = coords[ib] pc = coords[ic] # Lengths of sides of triangle a = math.sqrt((pa[0] - pb[0])**2 + (pa[1] - pb[1])**2) b = math.sqrt((pb[0] - pc[0])**2 + (pb[1] - pc[1])**2) c = math.sqrt((pc[0] - pa[0])**2 + (pc[1] - pa[1])**2) # Semiperimeter of triangle s = (a + b + c) / 2.0 # Area of triangle by Heron's formula area = math.sqrt(s * (s - a) * (s - b) * (s - c)) circum_r = a * b * c / (4.0 * area) # Here's the radius filter. #print circum_r if circum_r < 1.0 / alpha: add_edge(edges, edge_points, coords, ia, ib) add_edge(edges, edge_points, coords, ib, ic) add_edge(edges, edge_points, coords, ic, ia) m = geometry.MultiLineString(edge_points) triangles = list(polygonize(m)) return cascaded_union(triangles), edge_points points = [] for x, y in zip(xcoords, ycoords): points.append(geometry.shape({'type': 'Point', 'coordinates': [x, y]})) concave_hull, edge_points = alpha_shape(points, alpha=alpha) return concave_hull.buffer(buffer) def get_outlier_removal_mask(xcoords, ycoords, nth_neighbor=10, quantile=.9): """ Parameters ---------- xcoords : ycoords : nth_neighbor : (Default value = 10) quantile : (Default value = .9) Returns ------- """ from scipy.spatial.distance import pdist, squareform D = squareform(pdist(np.vstack((xcoords, ycoords)).T)) distances = D[np.argsort(D, axis=0)[nth_neighbor - 1, :], 0] return distances <= np.quantile(distances, quantile) def cohensd(g1, g2): """ Returns Cohen's D for the effect size of group 1 values (g1) over group 2 values (g2). Parameters ---------- g1 : group 1 values (list or numpy vector) g2 : group 2 values (list or numpy vector) Returns ------- (mean(g1) - mean(g2) )/s, where s is the pooled standard deviation of the two groups with Bessel's correction """ n1 = len(g1) n2 = len(g2) s1 = np.std(g1, ddof=1) s2 = np.std(g2, ddof=1) s = np.sqrt(((n1 - 1) * s1 * s1 + (n2 - 1) * s2 * s2) / (n1 + n2 - 2)) return (np.mean(g1) - np.mean(g2)) / s def phi_coefficient(contingency_table): """ Returns the phi-coefficient for a contingency table. 
Paramenters ----------- contingency_table : contingency table, identical in format to scipy.stats.fisher_exact Returns ------- phi coefficient """ table1 = contingency_table[0] table2 = contingency_table[1] table = np.vstack([table1, table2]) phitop = (table1[0] * table2[1] - table1[1] * table2[0]) phibottom = np.sqrt((table2[1]+table2[0])*\ (table1[1]+table1[0])*\ (table1[0]+table2[0])*\ (table2[1]+table1[1])) phi = phitop / phibottom return phi def get_igraph_from_adjacency(adjacency, directed=None): """This is taken from scanpy._utils.__init__.py as of 12 August 2021 Get igraph graph from adjacency matrix.""" import igraph as ig sources, targets = adjacency.nonzero() weights = adjacency[sources, targets] if isinstance(weights, np.matrix): weights = weights.A1 g = ig.Graph(directed=directed) g.add_vertices(adjacency.shape[0]) # this adds adjacency.shape[0] vertices g.add_edges(list(zip(sources, targets))) try: g.es['weight'] = weights except KeyError: pass if g.vcount() != adjacency.shape[0]: logg.warning(f'The constructed graph has only {g.vcount()} nodes. ' 'Your adjacency matrix contained redundant nodes.') return g def convert_10x_h5(path_10x_h5, output_file, labelkey=None, label='', genes_as_ca=[], gene_whitelist=None, output_type='loom'): import cellranger.matrix as cr_matrix import loompy output_type = output_file.split('.')[-1] if output_type not in ['loom', 'pkl']: raise Exception( "output_file must be have suffix loom or pkl, denoting an output type of loom of pickle respectively" ) filtered_feature_bc_matrix = cr_matrix.CountMatrix.load_h5_file( path_10x_h5) id2feature = { val: key for key,
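# Quick usage sketch for the effect-size helpers defined in this file
# (cohensd and phi_coefficient above); the sample values are made up.
import numpy as np

g1 = np.random.normal(loc=1.0, scale=1.0, size=50)
g2 = np.random.normal(loc=0.0, scale=1.0, size=50)
print("Cohen's d:", cohensd(g1, g2))   # pooled-SD standardized mean difference

# phi_coefficient expects a 2x2 contingency table in the same layout as
# scipy.stats.fisher_exact, i.e. [[a, b], [c, d]]
print("phi:", phi_coefficient([[20, 5], [10, 15]]))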
#!/usr/bin/env python3 """ Phonon Only harmonic Phonon subroutines implemented All numerically heavy computations are in f90 (f_phonon.f90) """ #import os #from itertools import islice, product from ..util.tool import my_flatten #from ..util.string_utils import fn_available from ..symm_kpts import HighSymmKpath from ..interface_vasp import Poscar #from ..structure import SupercellStructure import numpy as np from math import sqrt, pi try: # import matplotlib # matplotlib.use('AGG') import matplotlib.pyplot as plt from matplotlib.backends.backend_pdf import PdfPages except ImportError: print("WARNING: cannot import pyplot; plotting disabled") plt = None pass from f_phonon import f_phonon #import logging #logger = logging.getLogger(__name__) #debug_level = 10 class NA_correction(): """ long range non-analytic dipole-dipole correction for force-constants based phonon calculation """ def __init__(self, method, rho, Ncell=-1.0): """ method: -1 (disabled), 1 (Parlinski) or 2 (mixed-space) or 0 (dipole, FC MUST exclude dipole force) rho: mixing parameter in Parlinski Ncell: number of cells in the mixed-space approach """ self.id= method self.rho = rho self.Ncell = float(Ncell) self.uniq_ijk_idx = [] @classmethod def from_dict(cls, d): rho = d.getfloat('NAC_rho', 0.05) Ncell = d.getfloat('NAC_Ncell', -1.0) return cls(d.getint('nac',-1), rho, Ncell) class Phonon(): """ All phonon related stuff Note: all q-points (and hence real space coordinates) should be Cartesian unless explicitly specified """ def __init__(self, prim, LD, sol, pdfout, NAC=None, etafac=8.0): """ :param prim: :param LD: :param sol: solution vector :param NAC: long range dipole-dipole non-analytic correction :return: """ self.units = {'THz': (1, 'Freq. (THz)'), 'meV': (4.1356668, 'En (meV)'), 'eV': (0.0041356668, 'En (eV)'), 'cm': (33.3564227583, ' Freq. 
(cm^{-1})')} self.prim = prim self.LD = LD #self.dim = prim.num_sites*3 if prim.intensive_properties['epsilon_inf'] is None: NAC=None if NAC is not None: if (NAC.id != 0) and LD.dipole_force: raise ValueError('Must set nac=0 when dipole forces are considered explicitly') self.NAC = NAC self.pdfout = pdfout # process pair interactions self.pairinfo = LD.get_pair_info(LD.get_full_fct(sol)) self.setup_cell(self.prim) def setup_cell(self, cell): from ..util.mathtool import Union self.cell=cell self.dim=self.cell.num_sites*3 #print("debug tralslate pair to supercell, size=", cell.num_sites) pairinfo = self.pairinfo if cell is self.prim else self.LD.translate_pairinfo_to_supercell(cell, *self.pairinfo) f_phonon.init(cell.lattice._matrix, cell.atomic_masses, cell.frac_coords, *pairinfo) if (self.NAC is not None) and (self.NAC.id in [0, 1, 2]): self.NAC.uniq_ijk_idx = Union(self.pairinfo[0], True) if self.NAC.Ncell < 0: self.NAC.Ncell = float(len(self.NAC.uniq_ijk_idx)) else: if abs(self.NAC.Ncell - len(self.NAC.uniq_ijk_idx)) > 1E-6: print("WARNING: input Ncell not equal to the given number of ijk", len(self.NAC.uniq_ijk_idx)) f_phonon.init_nac(self.NAC.id, cell.intensive_properties['epsilon_inf'], cell.site_properties['born_charge'], self.NAC.rho, cell.lattice.volume, self.NAC.uniq_ijk_idx, 1.0) if (self.NAC.id==0) and (self.LD._dpcor is not None): print(" Loading dipole correction FCM for phonon") dpcor= self.LD._dpcor if cell is self.prim else self.LD.translate_pairinfo_to_supercell(cell, *self.LD._dpcor) f_phonon.init_dpcor(*dpcor) # short hand to get cartesian wave vector def to_c(self, kpts, cart): return np.array(kpts) if cart else self.prim.reciprocal_lattice.get_cartesian_coords(kpts) def get_dm(self, k_in, cart=True): """ :param k_in: list of cartesian K-points :return: """ # call fortran program return f_phonon.get_dm(self.to_c(k_in, cart), self.dim) def get_dmnomass(self, k_in, cart=True): """ Get the fourier transformed force constant matrix, without sqrt(M1,M2) """ mass= np.sqrt(self.cell.atomic_masses).repeat(3) return self.get_dm(k_in, cart)* (np.outer(mass, mass)[:,:,None]) @staticmethod def _maketicks(dist, lbl, plt): """ private utility method to add ticks to a band structure """ def _get_greek(ls): return ["$" + l + "$" if l.startswith("\\") or l.find("_") != -1 else l for l in ls] uniq_d = [] uniq_l = [] temp_ticks = list(map(list,zip(dist, _get_greek(lbl)))) for i in range(len(temp_ticks)): # if (i < len(temp_ticks)-1) and (temp_ticks[i][1] != temp_ticks[i+1][1]) and abs(temp_ticks[i][0]-temp_ticks[i+1][0])<1E-7: if (i < len(temp_ticks)-1) and abs(temp_ticks[i][0]-temp_ticks[i+1][0])<1E-7: temp_ticks[i][1] +="|" + temp_ticks[i+1][1] if (temp_ticks[i][1] != temp_ticks[i+1][1]) else '' temp_ticks[i+1][1] = "" if i == 0: uniq_d.append(temp_ticks[i][0]) uniq_l.append(temp_ticks[i][1]) # logger.debug("Adding label {l} at {d}".format( # l=temp_ticks[i][0], d=temp_ticks[i][1])) else: if temp_ticks[i][1] == temp_ticks[i - 1][1]: # logger.debug("Skipping label {i}".format( # i=temp_ticks[i][1])) continue else: # logger.debug("Adding label {l} at {d}".format( # l=temp_ticks[i][0], d=temp_ticks[i][1])) uniq_d.append(temp_ticks[i][0]) uniq_l.append(temp_ticks[i][1]) # logger.debug("Unique labels are %s" % list(zip(uniq_d, uniq_l))) plt.gca().set_xticks(uniq_d) plt.gca().set_xticklabels(uniq_l) for i in range(len(lbl)): if lbl[i] is not None and lbl[i]: # don't print the same label twice if i != 0: if lbl[i] == lbl[i - 1]: # logger.debug("already print label... 
" # "skipping label {i}".format( # i=ticks['label'][i])) continue else: # logger.debug("Adding a line at {d}" # " for label {l}".format( # d=ticks['distance'][i], l=ticks['label'][i])) plt.axvline(dist[i], color='gray', linestyle='--') else: # logger.debug("Adding a line at {d} for label {l}".format( # d=ticks['distance'][i], l=ticks['label'][i])) plt.axvline(dist[i], color='gray', linestyle='--') plt.axhline(0, color='black') with open('wavevec_label.txt', 'w') as f: f.write(''.join([x[1]+'\n' for x in temp_ticks])) return plt #@staticmethod def str2kpt(self, s, cart): """ :param s: example "[[10, [0,0,0],'\\Gamma', [0.5,0.5,0.5], 'X', [0.5,0.5,0], 'K']]" or simply "[[10, 'X', 'K']]" or "Auto N_pt" for automatic K-path with N_pt points on each segment or "Auto" for default density (20) :param cart: boolean for cartesian coordinates :return: """ from csld.util.mathtool import vec_linspace s_list = s.split() symkp= HighSymmKpath(self.prim) if s_list[0] == 'Auto': kpts, lbls = symkp.get_kpoints(20 if len(s_list)<2 else int(s_list[1])) kpts = np.array(kpts) return kpts, lbls d = eval(s) lbls= [] kpts= [] kpt_dat= symkp.kpath['kpoints'] lbl_only = isinstance(d[0][1], str) npt= d[0][0] for x in d: if lbl_only: print(" found only labels in kpts specs.") lb_line = x[1:] kp_line = [kpt_dat[k] for k in lb_line] else: lb_line = x[2::2] kp_line = x[1::2] for i in range(len(lb_line)-1): lbls.extend([lb_line[i]] + (['']*(npt-2)) + [lb_line[i+1]]) kpts.extend(vec_linspace(kp_line, npt)) # lbls.append(x[2]) # for y in x[4::2]: # lbls.extend(['']*(x[0]-2)) # lbls.append(y) # kpts = np.vstack([vec_linspace(x[1::2], x[0]) for x in d]) kpts = np.array(kpts) if not lbl_only: kpts = self.to_c(kpts, cart) return kpts, lbls def replace_gamma(self, kpts, no_gamma): if no_gamma and self.NAC is not None: for i in range(len(kpts)): if not np.any(kpts[i]): if len(kpts)<2: kpts[i]=np.full((3),1e-6) elif i==0: kpts[i]=kpts[i+1]*1e-3 elif i==len(kpts)-1: kpts[i]=kpts[i-1]*1e-3 elif np.any(kpts[i+1]): kpts[i]=kpts[i+1]*1e-3 else: kpts[i]=kpts[i-1]*1e-3 def get_dispersion(self, k_in_s, unit='THz', cart=True, no_gamma=False): """ :param k_in_s: string "Auto" or K-points definition :return: """ #rec = self.prim.lattice.reciprocal_lattice kpts, labels = self.str2kpt(k_in_s, cart) self.replace_gamma(kpts, no_gamma) np.savetxt('wavevec_frac_cart.txt', np.hstack([self.prim.reciprocal_lattice.get_fractional_coords(kpts), kpts])) xval = np.zeros(len(kpts)) for i in range(1, len(kpts)): xval[i] = xval[i-1] + (0 if labels[i-1] and labels[i] else np.linalg.norm(kpts[i:i+1] - kpts[i-1:i])) eigE = f_phonon.get_dispersion(kpts, 1, self.dim)* self.units[unit][0] print(np.min(eigE),np.max(eigE)) np.savetxt('phonon-dispersion.out', np.hstack((np.array([xval]).T, eigE.T)), header= 'col1: wavevector distance; col 2 ...: band 1 ...: '+self.units[unit][1]) if False: ref_eigE= np.loadtxt('ref-dispersion.txt')[:,1:].T print(eigE.shape, ref_eigE.shape) else: ref_eigE= None if plt is not None and self.pdfout: plt.figure() if ref_eigE is not None: plt.plot(*my_flatten([[xval, eigE[i,:], 'b-'] for i in range(self.dim)]+ [[xval, ref_eigE[i,:], 'r-'] for i in range(self.dim)])) else: plt.plot(*my_flatten([[xval, eigE[i,:], 'b-'] for i in range(self.dim)])) plt.xlabel("Wavevector") plt.ylabel(self.units[unit][1]) #print(xval, labels) Phonon._maketicks(xval, labels, plt) plt.savefig(self.pdfout, format='pdf') plt.close() return eigE, kpts def get_eig_e_vec(self, k_in,unit='THz', cart=True): """ :param k_in: list of K-points :return: """ from scipy.io 
import mmwrite eigE, eigV = f_phonon.get_eig_e_vec(self.to_c(k_in, cart), self.dim) # dm = f_phonon.get_dm(self.to_c(k_in, cart), self.dim) # np.savetxt("eigE.txt", eigE) # for i in range(len(k_in)): # mmwrite("eigV_%d.mtx"%(i), eigV[:,:,i]) # mmwrite("dm_%d.mtx"%(i), dm[:,:,i]) # np.savetxt("hessian.txt", self.get_FCM()) return (eigE, eigV) def plot_dos(self, plt, x, y, title, unit): if plt is not None: plt.figure() plt.plot(x, y, 'b-') plt.xlabel(unit) plt.ylabel("Phonon DOS") plt.title(title) plt.savefig(self.pdfout, format='pdf') plt.close() def get_dos(self, mesh, nEdos, ismear, temp, epsilon, unit='THz', pdos=False, no_gamma=False): """ :param mesh: [nx ny nz] :param ngrid_en: number of energy points :param ismear: -1 for tetrahedron, 0 for Lorentzian smearing, 1 for Gaussian smearing :param epsilon: width of smearing :param unit: :return: numpy array [[t1, dos1], [t2, dos2], ...] """ kgrid = np.mgrid[0:1:1./mesh[0], 0:1:1./mesh[1], 0:1:1./mesh[2]].transpose((1,2,3,0)).reshape(-1,3) self.replace_gamma(kgrid, no_gamma) kred, wt = zip(*self.prim.syminfo.get_ir_reciprocal_mesh(mesh)) dos, pd = f_phonon.get_dos_new(pdos, mesh, self.to_c(kgrid, False), nEdos, ismear, epsilon, self.prim.num_sites) en = dos[:,0]* self.units[unit][0] self.plot_dos(plt, en, dos[:,1]/self.units[unit][0], "Total", self.units[unit][1]) # returned dos always in eV per primitive cell (3 N_atom modes) dos[:,0] *= self.units['eV'][0] dos[:,1] /= self.units['eV'][0] np.savetxt('phonon-total-dos'+str(temp)+'.out', dos, header='col1: energy in eV; col 2: DOS') if pdos: for i in range(self.prim.num_sites): self.plot_dos(plt, en, pd[i]/self.units[unit][0], "Partial for atom %d"%(i+1), self.units[unit][1]) pd /= self.units['eV'][0] np.savetxt('phonon-partial-dos'+str(temp)+'.out', np.hstack((dos[:,0:1], pd.T)) , header='col1: energy in eV; col 2: atom 1 DOE, etc') return dos @staticmethod def calc_thermal_QHA(dos, Tlist, outf): """ Thermodynamic properties in the quasi-harmonic approximation :param dos: :param Tlist: list of temperature :param outf: :return: """ dat = f_phonon.calc_thermal(dos, Tlist) np.nan_to_num(dat,copy=False,nan=0.0,posinf=0,neginf=0) np.savetxt(outf, dat, header='QHA Per primitive cell: T (K); E (eV); A=E-TS (eV); S (kB); Cv (kB)') return dat def debye_velocity(self, grid, x1rad, x0rad, average=True): """ returns Debye velocity in m/s :param grid: [num_dcostheta, num_dphi] :param x1rad: fractional WRT the Weigner-Sietz cell of reciprocal space """ from ..util.mathtool import mkgrid_surfaceintegral_spherical scale= self.prim.reciprocal_lattice.WS_radius samples= mkgrid_surfaceintegral_spherical(grid, True) qpt= np.array([[np.sin(p[0])*np.cos(p[1]), np.sin(p[0])*np.sin(p[1]), np.cos(p[0])] for p in samples])*scale q1= qpt*x1rad q0= qpt*x0rad eigE1 = f_phonon.get_dispersion(q1, 1, self.dim)[:3,:].T.reshape((-1)) eigE0 = np.zeros_like(eigE1) if np.abs(x0rad)<1E-20 else f_phonon.get_dispersion(q0, 1, self.dim)[:3,:].T.reshape((-1)) # note energy in THz unit (frequency, NOT angular) # return unit meter/second velocity = 100*2*np.pi*(eigE1-eigE0)*self.units['THz'][0]/((x1rad-x0rad)*scale) if
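# A standalone sketch of the unit handling implied by the `units` table in
# Phonon.__init__ above: dispersion eigenvalues come out of f_phonon in THz
# and are scaled by a fixed factor per unit. The conversion factors are
# copied from the source; the example frequency is made up.
units = {
    'THz': 1.0,
    'meV': 4.1356668,
    'eV': 0.0041356668,
    'cm': 33.3564227583,   # cm^-1
}

freq_thz = 5.0                           # one phonon branch at 5 THz, say
print(freq_thz * units['meV'], 'meV')    # ~20.68 meV
print(freq_thz * units['cm'], 'cm^-1')   # ~166.78 cm^-1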
""" Set of programs to read and interact with output from Bifrost """ import numpy as np import os from glob import glob from . import cstagger class BifrostData(object): """ Reads data from Bifrost simulations in native format. """ def __init__(self, file_root, snap=None, meshfile=None, fdir='.', verbose=True, dtype='f4', big_endian=False, ghost_analyse=False): """ Loads metadata and initialises variables. Parameters ---------- file_root - string Basename for all file names. Snapshot number will be added afterwards, and directory will be added before. snap - integer, optional Snapshot number. If None, will read first snapshot in sequence. meshfile - string, optional File name (including full path) for file with mesh. If set to None (default), a uniform mesh will be created. fdir - string, optional Directory where simulation files are. Must be a real path. verbose - bool, optional If True, will print out more diagnostic messages dtype - string, optional Data type for reading variables. Default is 32 bit float. big_endian - string, optional If True, will read variables in big endian. Default is False (reading in little endian). ghost_analyse - bool, optional If True, will read data from ghost zones when this is saved to files. Default is never to read ghost zones. Examples -------- This reads snapshot 383 from simulation "cb24bih", whose file root is "cb24bih_", and is found at directory /data/cb24bih: >>> a = Bifrost.Data("cb24bih_", snap=383, fdir=""/data/cb24bih") Scalar variables do not need de-staggering and are available as memory map (only loaded to memory when needed), e.g.: >>> a.r.shape (504, 504, 496) Composite variables need to be obtained by get_var(): >>> vx = a.get_var("ux") """ self.fdir = fdir self.verbose = verbose self.file_root = os.path.join(self.fdir, file_root) self.meshfile = meshfile self.ghost_analyse = ghost_analyse # endianness and data type if big_endian: self.dtype = '>' + dtype else: self.dtype = '<' + dtype self.set_snap(snap) def _set_snapvars(self): """ Sets list of avaible variables """ self.snapvars = ['r', 'px', 'py', 'pz', 'e'] self.auxvars = self.params['aux'].split() if (self.do_mhd): self.snapvars += ['bx', 'by', 'bz'] self.hionvars = [] if 'do_hion' in self.params: if self.params['do_hion'] > 0: self.hionvars = ['hionne', 'hiontg', 'n1', 'n2', 'n3', 'n4', 'n5', 'n6', 'fion', 'nh2'] self.compvars = ['ux', 'uy', 'uz', 's', 'ee'] self.simple_vars = self.snapvars + self.auxvars + self.hionvars self.auxxyvars = [] # special case for the ixy1 variable, lives in a separate file if 'ixy1' in self.auxvars: self.auxvars.remove('ixy1') self.auxxyvars.append('ixy1') self.vars2d = [] # special case for 2D variables, stored in a separate file for var in self.auxvars: if any(i in var for i in ('xy', 'yz', 'xz')): self.auxvars.remove(var) self.vars2d.append(var) def set_snap(self, snap): """ Reads metadata and sets variable memmap links for a given snapshot number. Parameters ---------- snap - integer Number of simulation snapshot to load. 
""" if snap is None: try: tmp = sorted(glob("%s*idl" % self.file_root))[0] snap = int(tmp.split(self.file_root + '_')[1].split(".idl")[0]) except IndexError: raise ValueError(("(EEE) set_snap: snapshot not defined and no" " .idl files found")) self.snap = snap self.snap_str = '_%03i' % snap self._read_params() # Read mesh for all snaps because meshfiles could differ self.__read_mesh(self.meshfile) # variables: lists and initialisation self._set_snapvars() self._init_vars() def _read_params(self): """ Reads parameter file (.idl) """ if (self.snap < 0): filename = self.file_root + '.idl.scr' elif (self.snap == 0): filename = self.file_root + '.idl' else: filename = self.file_root + self.snap_str + '.idl' self.params = read_idl_ascii(filename) # assign some parameters as attributes for p in ['x', 'y', 'z', 'b']: try: setattr(self, 'n' + p, self.params['m' + p]) except KeyError: raise KeyError(('read_params: could not find ' 'm%s in idl file!' % p)) for p in ['dx', 'dy', 'dz', 'do_mhd']: try: setattr(self, p, self.params[p]) except KeyError: raise KeyError(('read_params: could not find ' '%s in idl file!' % p)) try: if self.params['boundarychk'] == 1: self.nzb = self.nz + 2 * self.nb else: self.nzb = self.nz except KeyError: self.nzb = self.nz # check if units are there, if not use defaults and print warning unit_def = {'u_l': 1.e8, 'u_t': 1.e2, 'u_r': 1.e-7, 'u_b': 1.121e3, 'u_ee': 1.e12} for unit in unit_def: if unit not in self.params: print(("(WWW) read_params:"" %s not found, using " "default of %.3e" % (unit, unit_def[unit]))) self.params[unit] = unit_def[unit] def __read_mesh(self, meshfile): """ Reads mesh file """ if meshfile is None: meshfile = os.path.join(self.fdir, self.params['meshfile'].strip()) if os.path.isfile(meshfile): f = open(meshfile, 'r') for p in ['x', 'y', 'z']: dim = int(f.readline().strip('\n').strip()) assert dim == getattr(self, 'n' + p) # quantity setattr(self, p, np.array( [float(v) for v in f.readline().strip('\n').split()])) # quantity "down" setattr(self, p + 'dn', np.array( [float(v) for v in f.readline().strip('\n').split()])) # up derivative of quantity setattr(self, 'd%sid%sup' % (p, p), np.array( [float(v) for v in f.readline().strip('\n').split()])) # down derivative of quantity setattr(self, 'd%sid%sdn' % (p, p), np.array( [float(v) for v in f.readline().strip('\n').split()])) f.close() if self.ghost_analyse: # extend mesh to cover ghost zones self.z = np.concatenate(( self.z[0] - np.linspace(self.dz*self.nb, self.dz, self.nb), self.z, self.z[-1] + np.linspace(self.dz, self.dz*self.nb, self.nb))) self.zdn = np.concatenate(( self.zdn[0] - np.linspace(self.dz*self.nb, self.dz, self.nb), self.zdn, (self.zdn[-1] + np.linspace(self.dz, self.dz*self.nb, self.nb)))) self.dzidzup = np.concatenate(( np.repeat(self.dzidzup[0], self.nb), self.dzidzup, np.repeat(self.dzidzup[-1], self.nb))) self.dzidzdn = np.concatenate(( np.repeat(self.dzidzdn[0], self.nb), self.dzidzdn, np.repeat(self.dzidzdn[-1], self.nb))) self.nz = self.nzb else: # no mesh file print('(WWW) Mesh file %s does not exist.' % meshfile) if self.dx == 0.0: self.dx = 1.0 if self.dy == 0.0: self.dy = 1.0 if self.dz == 0.0: self.dz = 1.0 print(('(WWW) Creating uniform grid with [dx,dy,dz] = ' '[%f,%f,%f]') % (self.dx, self.dy, self.dz)) # x self.x = np.arange(self.nx) * self.dx self.xdn = self.x - 0.5 * self.dx self.dxidxup = np.zeros(self.nx) + 1. / self.dx self.dxidxdn = np.zeros(self.nx) + 1. 
/ self.dx # y self.y = np.arange(self.ny) * self.dy self.ydn = self.y - 0.5 * self.dy self.dyidyup = np.zeros(self.ny) + 1. / self.dy self.dyidydn = np.zeros(self.ny) + 1. / self.dy # z if self.ghost_analyse: self.nz = self.nzb self.z = np.arange(self.nz) * self.dz self.zdn = self.z - 0.5 * self.dz self.dzidzup = np.zeros(self.nz) + 1. / self.dz self.dzidzdn = np.zeros(self.nz) + 1. / self.dz def _init_vars(self, *args, **kwargs): """ Memmaps "simple" variables, and maps them to methods. Also, sets file name[s] from which to read a data """ self.variables = {} for var in self.simple_vars: try: self.variables[var] = self._get_simple_var( var, *args, **kwargs) setattr(self, var, self.variables[var]) except Exception: if self.verbose: print(('(WWW) init_vars: could not read ' 'variable %s' % var)) for var in self.auxxyvars: try: self.variables[var] = self._get_simple_var_xy(var, *args, **kwargs) setattr(self, var, self.variables[var]) except Exception: if self.verbose: print(('(WWW) init_vars: could not read ' 'variable %s' % var)) rdt = self.r.dtype cstagger.init_stagger(self.nz, self.dx, self.dy, self.z.astype(rdt), self.zdn.astype(rdt), self.dzidzup.astype(rdt), self.dzidzdn.astype(rdt)) def get_var(self, var, snap=None, *args, **kwargs): """ Reads a given variable from the relevant files. Parameters ---------- var - string Name of the variable to read. Must be Bifrost internal names. snap - integer, optional Snapshot number to read. By default reads the loaded snapshot; if a different number is requested, will load that snapshot by running self.set_snap(snap). """ if (snap is not None) and (snap != self.snap): self.set_snap(snap) if var in self.simple_vars: # is variable already loaded? return self._get_simple_var(var, *args, **kwargs) elif var in self.auxxyvars: return self._get_simple_var_xy(var, *args, **kwargs) elif var in self.compvars: # add to variable list self.variables[var] = self._get_composite_var(var, *args, **kwargs) setattr(self, var, self.variables[var]) return self.variables[var] else: raise ValueError( ("get_var: could not read variable %s. Must be " "one of %s" % (var, (self.simple_vars + self.compvars + self.auxxyvars)))) def _get_simple_var(self, var, order='F', mode='r', *args, **kwargs): """ Gets "simple" variable (ie, only memmap, not load into memory). Parameters ---------- var - string Name of the variable to read. Must be Bifrost internal names. order - string, optional Must be either 'C' (C order) or 'F' (Fortran order, default). mode - string, optional numpy.memmap read mode. By default is read only ('r'), but you can use 'r+' to read and write. DO NOT USE 'w+'. Returns ------- result - numpy.memmap array Requested variable. """ if self.snap < 0: filename = self.file_root fsuffix_b = '.scr' elif self.snap == 0: filename = self.file_root fsuffix_b = '' else: filename = self.file_root + self.snap_str fsuffix_b = '' if
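# Usage sketch based on the BifrostData docstring above; the file root,
# snapshot number and directory are the docstring's own examples, and the
# import path is an assumption that depends on how the package is installed.
from bifrost import BifrostData

a = BifrostData("cb24bih_", snap=383, fdir="/data/cb24bih")

# scalar snapshot variables are exposed as memory maps
print(a.r.shape)          # e.g. (504, 504, 496)

# composite variables go through get_var(), which handles de-staggering
vx = a.get_var("ux")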
<gh_stars>1-10 import numpy as np import pandas as pd import os, errno import datetime import uuid import itertools import yaml import subprocess import scipy.sparse as sp from scipy.spatial.distance import squareform from sklearn.decomposition.nmf import non_negative_factorization from sklearn.cluster import KMeans from sklearn.metrics import silhouette_score from sklearn.utils import sparsefuncs from fastcluster import linkage from scipy.cluster.hierarchy import leaves_list import matplotlib.pyplot as plt import scanpy as sc from cnmf import * class cNMF_nonorm(cNMF): def get_norm_counts(self, counts, tpm, high_variance_genes_filter = None, num_highvar_genes = None ): """ Parameters ---------- counts : anndata.AnnData Scanpy AnnData object (cells x genes) containing raw counts. Filtered such that no genes or cells with 0 counts tpm : anndata.AnnData Scanpy AnnData object (cells x genes) containing tpm normalized data matching counts high_variance_genes_filter : np.array, optional (default=None) A pre-specified list of genes considered to be high-variance. Only these genes will be used during factorization of the counts matrix. Must match the .var index of counts and tpm. If set to None, high-variance genes will be automatically computed, using the parameters below. num_highvar_genes : int, optional (default=None) Instead of providing an array of high-variance genes, identify this many most overdispersed genes for filtering Returns ------- normcounts : anndata.AnnData, shape (cells, num_highvar_genes) A counts matrix containing only the high variance genes. No normalization. """ if high_variance_genes_filter is None: ## Get list of high-var genes if one wasn't provided if sp.issparse(tpm.X): (gene_counts_stats, gene_fano_params) = get_highvar_genes_sparse(tpm.X, numgenes=num_highvar_genes) else: (gene_counts_stats, gene_fano_params) = get_highvar_genes(np.array(tpm.X), numgenes=num_highvar_genes) high_variance_genes_filter = list(tpm.var.index[gene_counts_stats.high_var.values]) ## Subset out high-variance genes norm_counts = counts[:, high_variance_genes_filter] ## Save a \n-delimited list of the high-variance genes used for factorization open(self.paths['nmf_genes_list'], 'w').write('\n'.join(high_variance_genes_filter)) ## Check for any cells that have 0 counts of the overdispersed genes zerocells = norm_counts.X.sum(axis=1)==0 if zerocells.sum()>0: examples = norm_counts.obs.index[zerocells] print('Warning: %d cells have zero counts of overdispersed genes. E.g. %s' % (zerocells.sum(), examples[0])) print('Consensus step may not run when this is the case') return(norm_counts) def consensus(self, k, density_threshold_str='0.5', local_neighborhood_size = 0.30,show_clustering = False, skip_density_and_return_after_stats = False, close_clustergram_fig=True): merged_spectra = load_df_from_npz(self.paths['merged_spectra']%k) norm_counts = sc.read(self.paths['normalized_counts']) if skip_density_and_return_after_stats: density_threshold_str = '2' density_threshold_repl = density_threshold_str.replace('.', '_') density_threshold = float(density_threshold_str) n_neighbors = int(local_neighborhood_size * merged_spectra.shape[0]/k) # Rescale topics such to length of 1. 
l2_spectra = (merged_spectra.T/np.sqrt((merged_spectra**2).sum(axis=1))).T if not skip_density_and_return_after_stats: # Compute the local density matrix (if not previously cached) topics_dist = None if os.path.isfile(self.paths['local_density_cache'] % k): local_density = load_df_from_npz(self.paths['local_density_cache'] % k) else: # first find the full distance matrix topics_dist = squareform(fast_euclidean(l2_spectra.values)) # partition based on the first n neighbors partitioning_order = np.argpartition(topics_dist, n_neighbors+1)[:, :n_neighbors+1] # find the mean over those n_neighbors (excluding self, which has a distance of 0) distance_to_nearest_neighbors = topics_dist[np.arange(topics_dist.shape[0])[:, None], partitioning_order] local_density = pd.DataFrame(distance_to_nearest_neighbors.sum(1)/(n_neighbors), columns=['local_density'], index=l2_spectra.index) save_df_to_npz(local_density, self.paths['local_density_cache'] % k) del(partitioning_order) del(distance_to_nearest_neighbors) density_filter = local_density.iloc[:, 0] < density_threshold l2_spectra = l2_spectra.loc[density_filter, :] kmeans_model = KMeans(n_clusters=k, n_init=10, random_state=1) kmeans_model.fit(l2_spectra) kmeans_cluster_labels = pd.Series(kmeans_model.labels_+1, index=l2_spectra.index) # Find median usage for each gene across cluster median_spectra = l2_spectra.groupby(kmeans_cluster_labels).median() # Normalize median spectra to probability distributions. median_spectra = (median_spectra.T/median_spectra.sum(1)).T # Compute the silhouette score stability = silhouette_score(l2_spectra.values, kmeans_cluster_labels, metric='euclidean') # Obtain the reconstructed count matrix by re-fitting the usage matrix and computing the dot product: usage.dot(spectra) refit_nmf_kwargs = yaml.load(open(self.paths['nmf_run_parameters']), Loader=yaml.FullLoader) refit_nmf_kwargs.update(dict( n_components = k, H = median_spectra.values, update_H = False )) _, rf_usages = self._nmf(norm_counts.X, nmf_kwargs=refit_nmf_kwargs) rf_usages = pd.DataFrame(rf_usages, index=norm_counts.obs.index, columns=median_spectra.index) rf_pred_norm_counts = rf_usages.dot(median_spectra) # Compute prediction error as a frobenius norm if sp.issparse(norm_counts.X): prediction_error = ((norm_counts.X.todense() - rf_pred_norm_counts)**2).sum().sum() else: prediction_error = ((norm_counts.X - rf_pred_norm_counts)**2).sum().sum() consensus_stats = pd.DataFrame([k, density_threshold, stability, prediction_error], index = ['k', 'local_density_threshold', 'stability', 'prediction_error'], columns = ['stats']) if skip_density_and_return_after_stats: return consensus_stats save_df_to_npz(median_spectra, self.paths['consensus_spectra']%(k, density_threshold_repl)) save_df_to_npz(rf_usages, self.paths['consensus_usages']%(k, density_threshold_repl)) save_df_to_npz(consensus_stats, self.paths['consensus_stats']%(k, density_threshold_repl)) save_df_to_text(median_spectra, self.paths['consensus_spectra__txt']%(k, density_threshold_repl)) save_df_to_text(rf_usages, self.paths['consensus_usages__txt']%(k, density_threshold_repl)) # Compute gene-scores for each GEP by regressing usage on Z-scores of TPM tpm = sc.read(self.paths['tpm']) tpm_stats = load_df_from_npz(self.paths['tpm_stats']) if sp.issparse(tpm.X): norm_tpm = (np.array(tpm.X.todense()) - tpm_stats['__mean'].values) / tpm_stats['__std'].values else: norm_tpm = (tpm.X - tpm_stats['__mean'].values) / tpm_stats['__std'].values usage_coef = fast_ols_all_cols(rf_usages.values, norm_tpm) usage_coef = 
pd.DataFrame(usage_coef, index=rf_usages.columns, columns=tpm.var.index) save_df_to_npz(usage_coef, self.paths['gene_spectra_score']%(k, density_threshold_repl)) save_df_to_text(usage_coef, self.paths['gene_spectra_score__txt']%(k, density_threshold_repl)) # Convert spectra to TPM units, and obtain results for all genes by running last step of NMF # with usages fixed and TPM as the input matrix norm_usages = rf_usages.div(rf_usages.sum(axis=1), axis=0) refit_nmf_kwargs.update(dict( H = norm_usages.T.values, )) _, spectra_tpm = self._nmf(tpm.X.T, nmf_kwargs=refit_nmf_kwargs) spectra_tpm = pd.DataFrame(spectra_tpm.T, index=rf_usages.columns, columns=tpm.var.index) save_df_to_npz(spectra_tpm, self.paths['gene_spectra_tpm']%(k, density_threshold_repl)) save_df_to_text(spectra_tpm, self.paths['gene_spectra_tpm__txt']%(k, density_threshold_repl)) if show_clustering: if topics_dist is None: topics_dist = squareform(fast_euclidean(l2_spectra.values)) # (l2_spectra was already filtered using the density filter) else: # (but the previously computed topics_dist was not!) topics_dist = topics_dist[density_filter.values, :][:, density_filter.values] spectra_order = [] for cl in sorted(set(kmeans_cluster_labels)): cl_filter = kmeans_cluster_labels==cl if cl_filter.sum() > 1: cl_dist = squareform(topics_dist[cl_filter, :][:, cl_filter]) cl_dist[cl_dist < 0] = 0 #Rarely get floating point arithmetic issues cl_link = linkage(cl_dist, 'average') cl_leaves_order = leaves_list(cl_link) spectra_order += list(np.where(cl_filter)[0][cl_leaves_order]) else: ## Corner case where a component only has one element spectra_order += list(np.where(cl_filter)[0]) from matplotlib import gridspec import matplotlib.pyplot as plt width_ratios = [0.5, 9, 0.5, 4, 1] height_ratios = [0.5, 9] fig = plt.figure(figsize=(sum(width_ratios), sum(height_ratios))) gs = gridspec.GridSpec(len(height_ratios), len(width_ratios), fig, 0.01, 0.01, 0.98, 0.98, height_ratios=height_ratios, width_ratios=width_ratios, wspace=0, hspace=0) dist_ax = fig.add_subplot(gs[1,1], xscale='linear', yscale='linear', xticks=[], yticks=[],xlabel='', ylabel='', frameon=True) D = topics_dist[spectra_order, :][:, spectra_order] dist_im = dist_ax.imshow(D, interpolation='none', cmap='viridis', aspect='auto', rasterized=True) left_ax = fig.add_subplot(gs[1,0], xscale='linear', yscale='linear', xticks=[], yticks=[], xlabel='', ylabel='', frameon=True) left_ax.imshow(kmeans_cluster_labels.values[spectra_order].reshape(-1, 1), interpolation='none', cmap='Spectral', aspect='auto', rasterized=True) top_ax = fig.add_subplot(gs[0,1], xscale='linear', yscale='linear', xticks=[], yticks=[], xlabel='', ylabel='', frameon=True) top_ax.imshow(kmeans_cluster_labels.values[spectra_order].reshape(1, -1), interpolation='none', cmap='Spectral', aspect='auto', rasterized=True) hist_gs = gridspec.GridSpecFromSubplotSpec(3, 1, subplot_spec=gs[1, 3], wspace=0, hspace=0) hist_ax = fig.add_subplot(hist_gs[0,0], xscale='linear', yscale='linear', xlabel='', ylabel='', frameon=True, title='Local density histogram') hist_ax.hist(local_density.values, bins=np.linspace(0, 1, 50)) hist_ax.yaxis.tick_right() xlim = hist_ax.get_xlim() ylim = hist_ax.get_ylim() if density_threshold < xlim[1]: hist_ax.axvline(density_threshold, linestyle='--', color='k') hist_ax.text(density_threshold + 0.02, ylim[1] * 0.95, 'filtering\nthreshold\n\n', va='top') hist_ax.set_xlim(xlim) hist_ax.set_xlabel('Mean distance to k nearest neighbors\n\n%d/%d (%.0f%%) spectra above threshold\nwere removed prior to 
clustering'%(sum(~density_filter), len(density_filter), 100*(~density_filter).mean())) fig.savefig(self.paths['clustering_plot']%(k, density_threshold_repl), dpi=250) if close_clustergram_fig: plt.close(fig) if __name__=="__main__": """ Example commands for now: output_dir="/Users/averes/Projects/Melton/Notebooks/2018/07-2018/cnmf_test/" python cnmf.py prepare --output-dir $output_dir \ --name test --counts /Users/averes/Projects/Melton/Notebooks/2018/07-2018/cnmf_test/test_data.df.npz \ -k 6 7 8 9 --n-iter 5 python cnmf.py factorize --name test --output-dir $output_dir THis can be parallelized as such: python cnmf.py factorize --name test --output-dir $output_dir --total-workers 2 --worker-index WORKER_INDEX (where worker_index starts with 0) python cnmf.py combine --name test --output-dir $output_dir python cnmf.py consensus --name test --output-dir $output_dir """ import sys, argparse parser = argparse.ArgumentParser() parser.add_argument('command', type=str, choices=['prepare', 'factorize', 'combine', 'consensus', 'k_selection_plot']) parser.add_argument('--name', type=str, help='[all] Name for analysis. All output will be placed in [output-dir]/[name]/...', nargs='?', default='cNMF') parser.add_argument('--output-dir', type=str, help='[all] Output directory. All output will be placed in [output-dir]/[name]/...', nargs='?', default='.') parser.add_argument('-c', '--counts', type=str, help='[prepare] Input (cell x gene) counts matrix as df.npz or tab delimited text file') parser.add_argument('-k', '--components', type=int, help='[prepare] Numper of components (k) for matrix factorization. Several can be specified with "-k 8 9 10"', nargs='+') parser.add_argument('-n', '--n-iter', type=int, help='[prepare] Numper of factorization replicates', default=100) parser.add_argument('--total-workers', type=int, help='[all] Total number of workers to distribute jobs to', default=1) parser.add_argument('--seed', type=int, help='[prepare] Seed for pseudorandom number generation', default=None) parser.add_argument('--genes-file', type=str, help='[prepare] File containing a list of genes to include, one gene per line. Must match column labels of counts matrix.', default=None) parser.add_argument('--numgenes', type=int, help='[prepare] Number of high variance genes to use for matrix factorization.', default=2000) parser.add_argument('--tpm', type=str, help='[prepare] Pre-computed (cell x gene) TPM values as df.npz or tab separated txt file. If not provided TPM will be calculated automatically', default=None) parser.add_argument('--beta-loss', type=str, choices=['frobenius', 'kullback-leibler', 'itakura-saito'], help='[prepare] Loss function for NMF.', default='frobenius') parser.add_argument('--densify', dest='densify', help='[prepare] Treat the input data as non-sparse', action='store_true', default=False) parser.add_argument('--worker-index', type=int, help='[factorize] Index of current worker (the first worker should have index 0)', default=0) parser.add_argument('--local-density-threshold', type=str, help='[consensus] Threshold for the local density filtering. 
This string must convert to a float >0 and <=2', default='0.5') parser.add_argument('--local-neighborhood-size', type=float, help='[consensus] Fraction of the number of replicates to use as nearest neighbors for local density filtering', default=0.30) parser.add_argument('--show-clustering', dest='show_clustering', help='[consensus] Produce a clustergram figure summarizing the spectra clustering', action='store_true') args = parser.parse_args() cnmf_obj = cNMF(output_dir=args.output_dir, name=args.name) cnmf_obj._initialize_dirs() if args.command == 'prepare': if args.counts.endswith('.h5ad'): input_counts =
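# Hedged illustrative sketch (not part of the cnmf.py code above) of the local-density
# filtering applied in the consensus step: for each merged spectrum, take the mean
# Euclidean distance to its n nearest neighbor spectra and drop spectra above the
# threshold. This mirrors the --local-neighborhood-size and --local-density-threshold
# options; the synthetic spectra matrix, the assumed 20 replicates, and the data-driven
# threshold below are illustrative choices only (the CLI default threshold is 0.5).
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
spectra = rng.random((60, 200))                             # placeholder merged spectra (replicates*k x genes)
spectra /= np.linalg.norm(spectra, axis=1, keepdims=True)   # L2-normalize rows, as in the consensus step

topics_dist = squareform(pdist(spectra, metric='euclidean'))
n_neighbors = int(0.30 * 20)                                # neighborhood size = fraction of replicates (assumed 20)
# mean distance to the n nearest neighbors, skipping self at distance 0
local_density = np.sort(topics_dist, axis=1)[:, 1:n_neighbors + 1].mean(axis=1)

density_threshold = np.percentile(local_density, 90)        # data-driven here; the CLI default is 0.5
density_filter = local_density < density_threshold
filtered_spectra = spectra[density_filter]
print(f"kept {density_filter.sum()}/{len(density_filter)} spectra")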
Util.getdims(expected) == 1 else [t[ind] for t in expected if len(t) > ind] if len(actual_temp) == 0: continue p, _, base = self.get_assert_greater_bound(actual_temp, expected_temp, threshold) bounds.append(p) base_bounds.append(base) return np.min(bounds), np.min(expected), np.min(base_bounds) # max probability of failing if self.dist_type is None: try: name, dist = self.compute_fit_score(actual) bound, base_bound = self.get_dist_limits(dist, max_bound=False, min_bound=True, threshold=threshold) except: self.logger.logo(tb.format_exc()) return -np.inf, np.min(expected), np.inf # elif self.dist_type == norm: # mean, var = norm.fit(actual) # if var <= 0.0: # var = 1e-20 # bound = norm.cdf(expected[0], loc=mean, scale=var) # else: # ecdf = ECDF(actual) # bound = ecdf([expected[0]])[0] return np.min(bound), np.min(expected), base_bound def get_assert_all_close_tolerance_bound(self, actual: list, expected: list, tol_thresh, threshold): if self.enable_per_dim_comparison and Util.getdims(actual) >= 2: # for each index of actual, figure out the probability bounds = [] base_bounds = [] for ind in range(len(actual[0])): actual_temp = [x[ind] for x in actual if ind < len(x)] expected_temp = expected if Util.getdims(expected) == 1 else [ t[ind] for t in expected if ind < len(t)] if len(actual_temp) == 0: continue p, e, base = self.get_assert_all_close_tolerance_bound(actual_temp, expected_temp, tol_thresh, threshold) bounds.append(p) base_bounds.append(base) return np.max(bounds), tol_thresh['rtol'], np.max(base_bounds) if self.dist_type is None: try: name, dist = self.compute_fit_score(np.subtract(np.abs(np.subtract(actual, expected)), tol_thresh['atol'])) bound, base_bound = self.get_dist_limits(dist, True, False, threshold) if expected != 0: bound = bound / np.abs(expected) #prob = dist.cdf(tol_thresh['rtol'] * np.abs(expected)) except: import traceback as tb self.logger.logo(tb.format_exc()) return np.inf, np.inf, np.inf # elif self.dist_type == norm: # mean, var = norm.fit(np.abs(np.subtract(actual, expected))) # if var <= 0.0: # var = 1e-20 # prob = norm.cdf(tol_thresh, loc=mean, scale=var) # else: # ecdf = ECDF(np.abs(np.subtract(actual, expected))/np.abs(expected)) # prob = ecdf([tol_thresh])[0] return np.max(np.abs(bound)), tol_thresh['rtol'], np.max(np.abs(base_bound)) def get_assert_tolerance_bound(self, actual: list, expected: list, tol_thresh, threshold): if isinstance(tol_thresh, dict): if 'rtol' in tol_thresh: tol_thresh = tol_thresh['rtol'] elif 'decimal' in tol_thresh: tol_thresh = tol_thresh['decimal'] elif 'significant' in tol_thresh: tol_thresh = tol_thresh['significant'] elif 'prec' in tol_thresh: tol_thresh = tol_thresh['prec'] if self.enable_per_dim_comparison and Util.getdims(actual) >= 2: # for each index of actual, figure out the probability bounds = [] base_bounds = [] for ind in range(len(actual[0])): p, e, base = self.get_assert_tolerance_bound([x[ind] for x in actual if ind < len(x)], expected if Util.getdims(expected) == 1 else [ t[ind] for t in expected if ind < len(t)], tol_thresh, threshold) bounds.append(p) base_bounds.append(base) return np.max(bounds), tol_thresh, np.max(base_bounds) if self.dist_type is None: try: name, dist = self.compute_fit_score( np.abs(np.subtract(actual, expected))) bound, base_bound = self.get_dist_limits(dist, True, False, threshold) #prob = dist.cdf(tol_thresh) except: import traceback as tb self.logger.logo(tb.format_exc()) return np.inf, np.inf, np.inf # elif self.dist_type == norm: # mean, var = norm.fit(np.abs(np.subtract(actual, 
expected))) # if var <= 0.0: # var = 1e-20 # prob = norm.cdf(tol_thresh, loc=mean, scale=var) # else: # ecdf = ECDF(np.abs(np.subtract(actual, expected))) # prob = ecdf([tol_thresh])[0] return np.max(np.abs(bound)), tol_thresh, np.max(np.abs(base_bound)) def check_sample_size(self, samples): N = len(samples) K = 2*int(np.sqrt(N)) # size of each subset of bootstrap subsamples M = K # number of p values to be computed J = N # chooses first J samples alpha = 0.1 confidence_interval = 1 - alpha print("M: {0}, N: {1}, alpha: {2}, conf interval: {3}".format(M, N, alpha, confidence_interval)) critical_value = dist.t.ppf(confidence_interval, M - 1) # print(samples) print("Sample size %d" % N) bootstrap_samples = [np.random.choice(samples, size=N, replace=True) for _ in range(M - 1)] + [samples] x_bootstrap_samples = [k[:J] for k in bootstrap_samples] k_bootstrap_samples = [[np.random.choice(t, size=len(t), replace=True) for _ in range(K - 1)] + [t] for t in x_bootstrap_samples] print("Fitting...") gev_models = [[self.compute_fit_score(t, models=[dist.genextreme]) for t in x] for x in k_bootstrap_samples] return_levels = [[t[1].ppf(0.99) for t in k] for k in gev_models] # print(return_levels) # sw_stats = [AD_pval(st.anderson(t, 'norm')[0], len(t)) for t in return_levels] # considering only p value sw_stats = [st.shapiro(t)[1] for t in return_levels] # considering only p value print(sw_stats) avg_pvalue = np.mean(sw_stats) var_pvalue = np.sum([np.square(p - avg_pvalue) for p in sw_stats]) / (M - 1) sd_pvalue = np.sqrt(var_pvalue) print("Avg: {0}, Var: {1}, SD: {2}".format(avg_pvalue, var_pvalue, sd_pvalue)) lower_bound = avg_pvalue - critical_value * sd_pvalue print("Lower bound: %f" % lower_bound) if lower_bound > alpha: print("Accept") return True else: print("Reject") return False def check_bootstrap_conf(self, samples, max_percentile, min_tail_values): alpha = 0.05 N = len(samples) K = N M = K printf = lambda x: self.logger.logo(x) printf("Sample size %d" % N) printf("K: {0}, N: {1}".format(K, N)) bootstrap_samples = [np.random.choice(samples, size=N, replace=True) for _ in range(M - 1)] + [samples] printf("Fitting...") gpd_return_levels = [self.run_mbpta_cv(t, min_tail_values, max_percentile, printf=None) for t in bootstrap_samples] if any(np.isinf(gpd_return_levels)): printf("Reject, infs") return False, -np.inf, -np.inf avg_return = np.mean(gpd_return_levels) sd_return = np.std(gpd_return_levels) w, p = st.shapiro(gpd_return_levels) printf("Shapiro stats: w: {0}, p: {1}".format(w, p)) printf("Avg Return: {0}, Sd Return: {1}".format(avg_return, sd_return)) if p > alpha: printf("Accept") return True, avg_return, sd_return else: printf("Reject") return False, -np.inf, -np.inf def gpd_test(self, samples): samples = np.array(samples) gpdtest = importr("gPdtest") res = gpdtest.gpd_test(FloatVector(samples)) print("GPD: H+:{0}, H-:{1}".format(res[1][0], res[1][1])) if res[1][0] >= 0.05 or res[1][1] >= 0.05: return True else: return False def gpd_test2(self, samples): samples = np.array(samples) eva = importr("eva") try: res = eva.gpdAd(FloatVector(samples), bootstrap=True, bootnum=100, allowParallel=True, numCores=5) pval = res[1][0] scale = res[2][0] shape = res[2][1] except: self.logger.logo("GPD exception") return 0, -np.inf, -np.inf return pval, scale, shape def llrtest(self, samples, alpha=0.05): d1_param = st.genpareto.fit(samples) # Good if already light or exp if d1_param[-3] < 0: return TailType.LIGHT, d1_param, 0, 1 elif d1_param[-3] == 0: return TailType.EXP, d1_param, 0, 1 d2_param = 
st.genpareto.fit(samples, fc=0) d1 = st.genpareto(d1_param[-3], loc=d1_param[-2], scale=d1_param[-1]) d2 = st.genpareto(d2_param[-3], loc=d2_param[-2], scale=d2_param[-1]) ll1 = np.sum([np.log(d1.pdf(x)) for x in samples]) ll2 = np.sum([np.log(d2.pdf(x)) for x in samples]) llr = -2 * (ll2 - ll1) #print("llr", llr) chi = st.chi2(1).ppf(1 - alpha) pval = 1 - st.chi2(1).cdf(llr) #print("p val", 1 - st.chi2(1).cdf(llr)) #print("ppf-gen: {0}, ppf-exp: {1}".format(d1.ppf(0.9999), d2.ppf(0.9999))) if llr <= chi: return TailType.EXP, d2_param, llr, pval else: return TailType.HEAVY, d1_param, llr, pval def run_mbpta_cv(self, samples, min_tail_values, max_percentile, printf=None): if printf is None: printf = lambda x: self.logger.logo(x) samples = np.array(samples) N = len(samples) # select samples on the right of mode hist, edges = np.histogram(samples, bins='sturges') edges = np.sort(np.unique(samples)) #i = np.argmax(hist) #samples = samples[samples > edges[i]] #printf("Selected samples: {0}, mode: {1}, total samples: {2}".format(len(samples), edges[i], N)) if len(samples) < min_tail_values: printf("not enough tail values") return -np.inf best_th = -np.inf best_th_bc = -np.inf thresholds = list(np.sort([np.min(samples)-0.1] + list(edges))) thresholds = thresholds[-1::-1] # reversed #thresholds = np.random.choice(thresholds, min(20, len(thresholds))) printf("Thresholds : {0}".format(thresholds)) th_dict = dict() exp_param_dict = dict() for th in thresholds: s = samples[samples > th] if len(s) < min_tail_values: continue # test for exponentiality exc = [k - th for k in s] # cv_th = np.std(exc) / np.mean(exc) # gpd_param = st.genpareto.fit(exc) pval, scale, shape = self.gpd_test2(exc) printf("--Threshold: {0}, Size: {1}".format(th, len(s))) printf("GPD pval: {0}, shape: {1}, scale: {2}, Accepted: {3}".format(pval, shape, scale, pval > 0.05)) if pval > 0.05: #printf("Threshold accepted! 
GPD Test passed") th_dict[th] = (pval, shape, scale) # if shape <= 0: # is_exp_tail = True # exp_params = st.genpareto.fit(exc) tail_type, exp_params, llrscore, pval = self.llrtest(exc) printf("LLR score: {0}, TailType: {1}, PVal: {2}, Exp_params: {3}".format(llrscore, tail_type, pval, exp_params)) if tail_type == TailType.LIGHT or tail_type == TailType.EXP: best_th = th exp_param_dict[th] = exp_params for p in [0.90, 0.95, 0.99, 0.999, 0.9999]: estimate = best_th + st.genpareto(*exp_param_dict[best_th]).ppf(p) printf(">>Estimates {0} :: {1}".format(p, estimate)) #printf("LLR test passed") #printf("Exp Tail : {0}".format(is_exp_tail)) printf("Best th: {0}".format(best_th)) # choose the lowest threshold for computation if np.isfinite(best_th): printf("Computing exp dist") printf("Exp params: {0}".format(exp_param_dict[best_th])) #estimate = self.fit_expon_to_tail(best_th, max_percentile, printf, samples) for p in [0.90, 0.95, 0.99, 0.999, 0.9999]: estimate = best_th + st.genpareto(*exp_param_dict[best_th]).ppf(p) printf("Estimate {0} :: {1}".format(p, estimate)) estimate = best_th + st.genpareto(*exp_param_dict[best_th]).ppf(max_percentile) printf("Estimate {0} : {1}".format(max_percentile, estimate)) return estimate return -np.inf # get best ppf estimate when distribution does not converge (under approximates) # adjust_ppf: choose lower percentile when cv < 0.7 def get_best_fit_exp_tail_estimate(self, samples, min_tail_values, max_percentile, printf=None, adjust_ppf=False): if printf is None: printf = lambda x: self.logger.logo(x) samples = np.array(samples) # select samples on the right of mode hist, edges = np.histogram(samples, bins='sturges') i = np.argmax(hist) samples = samples[samples > edges[i]] if len(samples) < min_tail_values:
# This code is part of Qiskit. # # (C) Copyright IBM 2019, 2021. # # This code is licensed under the Apache License, Version 2.0. You may # obtain a copy of this license in the LICENSE.txt file in the root directory # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. # # Any modifications or derivative works of this code must retain this # copyright notice, and modified files need to carry a notice indicating # that they have been altered from the originals. """Tests for the IBMCompositeJob.""" import re import copy import time import random import uuid from datetime import datetime, timedelta, timezone from dateutil import tz from qiskit import transpile from qiskit.exceptions import QiskitError from qiskit.circuit.random import random_circuit from qiskit.providers.models import BackendProperties from qiskit.providers.jobstatus import JobStatus, JOB_FINAL_STATES from qiskit.test.reference_circuits import ReferenceCircuits from qiskit_ibm.job.exceptions import (IBMJobFailureError, IBMJobInvalidStateError, IBMJobNotFoundError, IBMJobTimeoutError) from qiskit_ibm.job import IBMCompositeJob from qiskit_ibm.job.constants import IBM_COMPOSITE_JOB_TAG_PREFIX, IBM_COMPOSITE_JOB_ID_PREFIX from qiskit_ibm.apiconstants import ApiJobStatus from ..ibm_test_case import IBMTestCase from ..decorators import requires_provider from ..fake_account_client import (BaseFakeAccountClient, CancelableFakeJob, JobSubmitFailClient, BaseFakeJob, FailedFakeJob, JobTimeoutClient, FixedStatusFakeJob, MissingFieldFakeJob) class TestIBMCompositeJob(IBMTestCase): """Tests for IBMCompositeJob.""" @classmethod @requires_provider def setUpClass(cls, provider): """Initial class level setup.""" # pylint: disable=arguments-differ super().setUpClass() cls.provider = provider cls.sim_backend = provider.get_backend('ibmq_qasm_simulator') cls.last_week = datetime.now() - timedelta(days=7) def setUp(self): """Initial test setup.""" super().setUp() self._qc = ReferenceCircuits.bell() # TODO: We can remove all the deepcopy and do a disable_account # instead once issue 84 is resolved. 
self.fake_backend = copy.deepcopy(self.sim_backend) self.fake_provider = copy.deepcopy(self.provider) self._set_fake_client(BaseFakeAccountClient()) self.fake_backend._provider = self.fake_provider self.fake_provider.backend._provider = self.fake_provider self.fake_backend._configuration.max_experiments = 5 def tearDown(self): """Tear down.""" super().tearDown() self.fake_backend._api_client.tear_down() def _set_fake_client(self, fake_client): self.fake_backend._api_client = fake_client self.fake_provider._api_client = fake_client def test_split_circuits(self): """Test having circuits split into multiple jobs.""" max_circs = self.fake_backend.configuration().max_experiments circs = [] for _ in range(max_circs+2): circs.append(self._qc) job_set = self.fake_backend.run(circs) result = job_set.result() self.assertEqual(len(job_set.sub_jobs()), 2) self.assertEqual(len(result.results), max_circs+2) self.assertTrue(job_set.job_id().startswith(IBM_COMPOSITE_JOB_ID_PREFIX)) def test_custom_split_circuits(self): """Test having circuits split with custom slices.""" job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) self.assertEqual(len(job_set.sub_jobs()), 2) def test_job_report(self): """Test job report.""" job_classes = [BaseFakeJob, FailedFakeJob, CancelableFakeJob, CancelableFakeJob, FixedStatusFakeJob] job_count = len(job_classes) self._set_fake_client(BaseFakeAccountClient( job_class=job_classes, job_kwargs={'fixed_status': ApiJobStatus.VALIDATING})) job_set = self.fake_backend.run([self._qc] * len(job_classes), max_circuits_per_job=1) job_set.sub_jobs()[2].cancel() job_set.sub_jobs()[0].wait_for_final_state() for detailed in [True, False]: with self.subTest(detailed=detailed): report = job_set.report(detailed=detailed) self.assertIn(job_set.job_id(), report) self.assertIn(f"Total jobs: {job_count}", report) for stat in ['Successful', 'Failed', 'Cancelled', 'Running', 'Pending']: self.assertIn(f"{stat} jobs: 1", report) if detailed: for sub_job in job_set.sub_jobs(): self.assertIn(sub_job.job_id(), report) for i in range(job_count): self.assertIn(f"Circuits {i}-{i}:", report) self.assertIn(f"Job index: {i}", report) for stat in [JobStatus.DONE, JobStatus.ERROR, JobStatus.CANCELLED, JobStatus.RUNNING, JobStatus.VALIDATING]: self.assertIn(f"Status: {stat}", report) else: for sub_job in job_set.sub_jobs(): self.assertNotIn(sub_job.job_id(), report) def test_job_pending_status(self): """Test pending and running status.""" sub_tests = [(ApiJobStatus.VALIDATING, JobStatus.VALIDATING, 'Pending'), (ApiJobStatus.RUNNING, JobStatus.RUNNING, 'Running'), (ApiJobStatus.QUEUED, JobStatus.QUEUED, 'Pending')] for api_status, job_status, report_text in sub_tests: with self.subTest(status=job_status): self._set_fake_client(BaseFakeAccountClient( job_class=[BaseFakeJob, FixedStatusFakeJob], job_kwargs={'fixed_status': api_status})) job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) stat_job = job_set.sub_jobs()[1] while stat_job.status() != job_status: time.sleep(1) time.sleep(2) # Let the other job advance. 
self.assertEqual(job_set.status(), job_status) self.assertNotEqual(job_set.sub_jobs()[0].status(), job_status) self.assertEqual(stat_job.status(), job_status) report = job_set.report() self.assertIn(f"{report_text} jobs: 1", report) self.assertIsNotNone( re.search(rf"Job ID: {stat_job.job_id()}\s*Status: {job_status}", report), report) def test_status_done(self): """Test job status of completed.""" job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) job_set.wait_for_final_state() self.assertEqual(job_set.status(), JobStatus.DONE) for sub_job in job_set.sub_jobs(): self.assertEqual(sub_job.status(), JobStatus.DONE) self.assertIn("Successful jobs: 2", job_set.report()) def test_job_circuits(self): """Test job circuits.""" circs = [] for _ in range(3): circs.append(random_circuit(num_qubits=2, depth=3, measure=True)) circs_copied = circs.copy() job_set = self.fake_backend.run(circs, max_circuits_per_job=1) job_circuits = job_set.circuits() self.assertEqual(job_circuits, circs_copied) for i, sub_job in enumerate(job_set.sub_jobs()): self.assertEqual(sub_job.circuits()[0], circs_copied[i]) def test_job_backend_options(self): """Test getting backend options.""" custom_options = {'shots': 100, 'memory': True} job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1, **custom_options) self.assertLessEqual(custom_options.items(), job_set.backend_options().items()) job_set.block_for_submit() rjob_set = self.fake_provider.backend.job(job_set.job_id()) self.assertLessEqual(custom_options.items(), rjob_set.backend_options().items()) def test_job_header(self): """Test getting job header.""" custom_header = {'test': 'test_job_header'} job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1, header=custom_header) self.assertLessEqual(custom_header.items(), job_set.header().items()) job_set.block_for_submit() rjob_set = self.fake_provider.backend.job(job_set.job_id()) self.assertLessEqual(custom_header.items(), rjob_set.header().items()) def test_job_backend(self): """Test getting job backend.""" job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) self.assertEqual(job_set.backend().name(), self.fake_backend.name()) job_set.block_for_submit() rjob_set = self.fake_provider.backend.job(job_set.job_id()) self.assertEqual(rjob_set.backend().name(), self.fake_backend.name()) def test_job_name(self): """Test job name.""" custom_name = 'batman' job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1, job_name=custom_name) self.assertEqual(job_set.name(), custom_name) job_set.block_for_submit() rjob_set = self.fake_provider.backend.job(job_set.job_id()) self.assertEqual(rjob_set.name(), custom_name) def test_job_name_update(self): """Test changing the name associated with a job.""" new_name = 'robin' job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1, job_name='batman') job_set.update_name(new_name) self.assertEqual(job_set.name(), new_name) job_set.block_for_submit() rjob_set = self.fake_provider.backend.job(job_set.job_id()) self.assertEqual(rjob_set.name(), new_name) def test_job_properties(self): """Test job properties.""" job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) self.assertIsInstance(job_set.properties(), BackendProperties) def test_multiple_job_properties(self): """Test multiple job properties.""" self._set_fake_client(BaseFakeAccountClient(props_count=2)) job_set = self.fake_backend.run([self._qc] * 3, max_circuits_per_job=1) props = job_set.properties() 
self.assertIsInstance(props, list) self.assertEqual(len(props), 2) self.assertTrue(all(isinstance(prop, BackendProperties) for prop in props)) def test_error_message_one(self): """Test error message when one job failed.""" failure_types = ['validation', 'partial', 'result'] for fail_type in failure_types: with self.subTest(fail_type=fail_type): self._set_fake_client( BaseFakeAccountClient(job_class=[BaseFakeJob, FailedFakeJob], job_kwargs={'failure_type': fail_type})) job_set = self.fake_backend.run([self._qc] * 4, max_circuits_per_job=2) error_msg = job_set.error_message() self.assertIsNotNone(error_msg) self.assertEqual(job_set.status(), JobStatus.ERROR) self.assertNotEqual(job_set.sub_jobs()[0].status, JobStatus.ERROR) bad_job = job_set.sub_jobs()[1] self.assertIsNotNone( re.search(f"Circuits 2-3: Job {bad_job.job_id()} failed: ", error_msg), f"Error msg: {error_msg}") if fail_type == 'partial': self.assertIn('Experiment 1:', error_msg) else: self.assertIsNotNone(re.search(r"Error code: \d{4}", error_msg), f"Error msg: {error_msg}") def test_error_message_all(self): """Test error message report when all jobs failed.""" self._set_fake_client(BaseFakeAccountClient(job_class=FailedFakeJob)) job_set = self.fake_backend.run([self._qc] * 4, max_circuits_per_job=2) error_msg = job_set.error_message() self.assertIsNotNone(error_msg) for idx, job in enumerate(job_set.sub_jobs()): self.assertIsNotNone( re.search(f"Circuits {idx*2}-{idx*2+1}: Job {job.job_id()} failed: " + r".+ Error code: \d{4}", error_msg), f"Error msg: {error_msg}") def test_async_submit_exception(self): """Test asynchronous job submit failed.""" self.fake_backend._api_client = JobSubmitFailClient(failed_indexes=0) job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) job_set.wait_for_final_state() self.assertEqual(job_set.status(), JobStatus.ERROR) with self.assertRaises(IBMJobFailureError): job_set.result() self.assertIn("Circuits 0-0: Job submit failed", job_set.error_message()) report = job_set.report() self.assertIn("Failed jobs: 1", report) self.assertIn("Successful jobs: 1", report) self.assertIn("Error submitting job", report) self.assertIn("Status: JobStatus.DONE", report) result = job_set.result(partial=True) self.assertFalse(result.success) self.assertFalse(result.results[0].success) self.assertFalse(result.results[0].data.to_dict()) def test_job_limit(self): """Test reaching job limit.""" job_limit = 3 self._set_fake_client(BaseFakeAccountClient( job_limit=job_limit, job_class=CancelableFakeJob)) job_set = None try: job_set = self.fake_backend.run( [self._qc]*(job_limit+2), max_circuits_per_job=1) # Wait for first 3 jobs to be submitted. 
max_loop = 5 while len(job_set.sub_jobs(block_for_submit=False)) < job_limit and max_loop: time.sleep(0.5) max_loop -= 1 self.assertGreater(max_loop, 0) self.assertEqual(job_set.status(), JobStatus.INITIALIZING) report = job_set.report() self.assertIsNotNone( re.search(r"index: 3\s+Status: Job not yet submitted.*" r"index: 4\s+Status: Job not yet submitted", report, re.DOTALL), report) for job in job_set.sub_jobs(block_for_submit=False): job.cancel() time.sleep(1) self.assertNotIn('Job not yet submitted', job_set.report()) finally: job_set.cancel() def test_job_limit_timeout(self): """Test timing out while waiting for old job to finish.""" job_limit = 3 self._set_fake_client(JobTimeoutClient(job_limit=job_limit, max_fail_count=1)) job_set = None try: job_set = self.fake_backend.run( [self._qc]*(job_limit+2), max_circuits_per_job=1) self.assertEqual(job_set.status(), JobStatus.INITIALIZING) job_set.wait_for_final_state(timeout=60) finally: job_set.cancel() def test_job_tags_replace(self): """Test updating job tags by replacing existing tags.""" initial_job_tags = [uuid.uuid4().hex] job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1, job_tags=initial_job_tags) job_set.block_for_submit() tag_prefix = uuid.uuid4().hex replacement_tags = ['{}_new_tag_{}'.format(tag_prefix, i) for i in range(2)] job_set.update_tags(new_tags=replacement_tags) for job in job_set.sub_jobs(): job.refresh() job_set_tags = \ {tag for tag in job.tags() if tag.startswith(IBM_COMPOSITE_JOB_TAG_PREFIX)} self.assertEqual(set(job.tags())-job_set_tags, set(replacement_tags), job.tags()) self.assertIn(job_set.job_id(), job_set_tags, job.tags()) self.assertEqual(len(job_set_tags), 2, job.tags()) def test_sub_job_tags_replace(self): """Test updating subjob tags.""" job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) job = job_set.sub_jobs()[0] job.update_tags(new_tags=[]) self.assertIn(job_set.job_id(), job.tags()) def test_skipped_result(self): """Test one of the jobs has no result.""" sub_tests = [CancelableFakeJob, FailedFakeJob] for job_class in sub_tests: with self.subTest(job_class=job_class): self.fake_backend._api_client = BaseFakeAccountClient( job_class=[BaseFakeJob, job_class]) job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) job_set.block_for_submit() if job_class == CancelableFakeJob: job_set.sub_jobs()[1].cancel() result = job_set.result(partial=True) self.assertEqual(len(result.results), 2) self.assertFalse(result.success) self.assertTrue(result.results[0].success) self.assertFalse(result.results[1].success) self.assertTrue(result.get_counts(0)) with self.assertRaises(QiskitError): result.get_counts(1) def test_partial_result(self): """Test one of the circuits has no result.""" self.fake_backend._api_client = BaseFakeAccountClient( job_class=[BaseFakeJob, FailedFakeJob], job_kwargs={'failure_type': 'partial'}) job_set = self.fake_backend.run([self._qc] * 4, max_circuits_per_job=2) job_set.block_for_submit() result = job_set.result(partial=True) self.assertEqual(len(result.results), 4) self.assertFalse(result.success) self.assertTrue(all(res.success for res in result.results[:3])) self.assertFalse(result.results[3].success) with self.assertRaises(QiskitError): result.get_counts(3) def test_job_result(self): """Test job result.""" max_per_job = 3 job_set = self.fake_backend.run([self._qc] * max_per_job * 2, max_circuits_per_job=max_per_job) result = job_set.result() self.assertTrue(result.success) for i in range(max_per_job*2): self.assertEqual( 
result.get_counts(i), job_set.sub_jobs()[int(i/max_per_job)].result().get_counts(i % max_per_job)) self.assertTrue(result.results[i].success) def test_cancel(self): """Test job cancellation.""" self.fake_backend._api_client = BaseFakeAccountClient( job_class=[BaseFakeJob, CancelableFakeJob]) job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) job_set.block_for_submit() job_set.cancel() self.assertEqual(job_set.status(), JobStatus.CANCELLED) with self.assertRaises(IBMJobInvalidStateError): job_set.result(partial=False) def test_creation_date(self): """Test retrieving creation date.""" job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) job_set.block_for_submit() creation_date = job_set.creation_date() self.assertTrue(creation_date) self.assertIsNotNone(creation_date.tzinfo) self.assertEqual(creation_date, job_set.sub_jobs()[0].creation_date()) def test_time_per_step_done(self): """Test retrieving time per step when job is done.""" job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) job_set.wait_for_final_state() time_per_step = job_set.time_per_step() self.assertTrue(time_per_step) self.assertIn('COMPLETED', time_per_step) self.assertEqual(time_per_step['CREATING'], job_set.creation_date()) status_samples = ['CREATING', 'QUEUED', 'RUNNING', 'COMPLETED'] for i in range(0, len(status_samples)-1): self.assertLessEqual(time_per_step[status_samples[i]], time_per_step[status_samples[i+1]]) def test_time_per_step_running(self): """Test retrieving time per step when job is running.""" self._set_fake_client( BaseFakeAccountClient(job_class=[BaseFakeJob, FixedStatusFakeJob], job_kwargs={'fixed_status': ApiJobStatus.RUNNING})) job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) while job_set.status() != JobStatus.RUNNING: time.sleep(1) job_set.sub_jobs()[0].wait_for_final_state() time_per_step = job_set.time_per_step() self.assertTrue(time_per_step) self.assertIn('RUNNING', time_per_step) self.assertNotIn('COMPLETED', time_per_step) self.assertEqual(time_per_step['CREATING'], job_set.creation_date()) status_samples = ['CREATING', 'QUEUED', 'RUNNING'] for i in range(0, len(status_samples)-1): self.assertLessEqual(time_per_step[status_samples[i]], time_per_step[status_samples[i+1]]) def test_time_per_step_error(self): """Test retrieving time per step when job failed.""" self._set_fake_client(BaseFakeAccountClient(job_class=[BaseFakeJob, FailedFakeJob])) job_set = self.fake_backend.run([self._qc] * 2, max_circuits_per_job=1) job_set.wait_for_final_state()
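# Hedged illustration of the circuit-splitting rule exercised by test_split_circuits and
# test_custom_split_circuits above: max_circs + 2 circuits on a backend whose
# configuration().max_experiments is 5 yield two sub-jobs. The helper below is not part of
# qiskit_ibm; it only mirrors the chunking arithmetic.
from typing import List, Sequence, TypeVar

T = TypeVar("T")

def split_circuits(circuits: Sequence[T], max_circuits_per_job: int) -> List[List[T]]:
    """Split a circuit list into consecutive chunks of at most max_circuits_per_job."""
    return [list(circuits[i:i + max_circuits_per_job])
            for i in range(0, len(circuits), max_circuits_per_job)]

# 7 circuits with a per-job limit of 5 -> two sub-jobs holding 5 and 2 circuits.
chunks = split_circuits(list(range(7)), max_circuits_per_job=5)
assert len(chunks) == 2 and [len(c) for c in chunks] == [5, 2]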
import torch import numpy as np import pickle #from torchsummary import summary from collections import OrderedDict from torch.utils.tensorboard import SummaryWriter import datetime import time import copy import torchvision.datasets as datasets import sys sys.path.append('../Utils') from DE.DNN import DNN class JADE_MLP(): def __init__(self, outdim=1,maxdepth=70,mindepth=5,minneuron=4,maxneuron=10,bsize=10,epoch=100,initSize=20,maxiter=10,stopcount=3,\ trainingset=None,validationset=None,trainingTarget=None,validateTarget=None,crossover=1,tb=None): self.best=[] self.mean=[] self.outdim=outdim self.maxdepth=maxdepth self.mindepth=mindepth self.minneuron = minneuron self.maxneuron = maxneuron self.bsize = bsize self.epoch = epoch self.stopcount = stopcount self.pplSize = initSize self.maxiter = maxiter self.MLPlayerlist = [] self.depthlist=np.random.choice(range(self.mindepth,self.maxdepth),self.pplSize,replace=True) self.crossover=crossover self.adap_conf = (0.1,0.1,0.9) self.tb = tb #if torch.cuda.is_available(): # self.device = torch.device('cuda') #else: self.device = torch.device('cpu') self.training = torch.tensor(trainingset.reshape((trainingset.shape[0],-1))).float().to(self.device) self.validationSet = torch.tensor(validationset.reshape((validationset.shape[0],-1))).float().to(self.device) self.target=torch.tensor(trainingTarget).float().to(self.device) self.validationTarget = torch.tensor(validateTarget).float().to(self.device) # Generate initial population for i in range(self.pplSize): depth = self.depthlist[i] tmp = [] # the number of neurons for the first layer is the dimension of the element in training data (in our case the size of the image) tmp.append(self.training.shape[1]) for j in range(depth): # generate the number of neurons for each layer tmp.append(np.random.choice(range(self.minneuron,self.maxneuron),1,replace=False)[0]) tmp.append(self.outdim) # last layer consist of 1 neuron by default tmp=np.array(tmp) self.MLPlayerlist.append(tmp) # define fit function - it calculates the fitness of one individual (one NN) def fit(self,config,id_,p=None): dnn = DNN(config) # define DNN based on configurations (layers and neurons) dnn.layers.to(self.device) best = float('inf') bestaccuracy =0 stop=0 opt = torch.optim.Adam(dnn.layers.parameters(), lr=0.001) loss = torch.nn.BCEWithLogitsLoss() batch = self.training.shape[0]//self.bsize vbatch = self.validationSet.shape[0]//self.bsize idxs = [x for x in range(self.training.shape[0])] vidxs = [x for x in range(self.validationSet.shape[0])] for e in range(self.epoch): start=time.time() # training np.random.shuffle(idxs) dnn.layers.train() batchloss=0 for i in range(batch): idx=idxs[i*self.bsize:i*self.bsize+self.bsize] opt.zero_grad() data = self.training[idx] y = self.target[idx] yhat = dnn(data) l = loss(yhat,y) batchloss+=l.item() l.backward() opt.step() # validating dnn.layers.eval() np.random.shuffle(vidxs) vloss=0 accuracy = 0 for i in range(vbatch): vidx=vidxs[i*self.bsize:i*self.bsize+self.bsize] vdata = self.validationSet[vidx] vy = self.validationTarget[vidx] vyhat = dnn(vdata) vl = loss(vyhat,vy) vloss += vl.item() vyhat=vyhat.detach().numpy() vy=vy.detach().numpy() predict = np.argmax(vyhat,axis=1) vy = np.argmax(vy,axis=1) accuracy += np.where(predict==vy)[0].shape[0] vloss = vloss/vbatch accuracy = accuracy/(self.bsize*vbatch) # updating best loss if(vloss<best): best=vloss bestaccuracy = accuracy # updating stopping condition else: stop+=1 end=time.time() print(f'JADE ConfigID: {id_:3d}, Epoch: {e:3d}, Training Loss: 
{(batchloss/batch):10.8f}, Validation Loss: {(vloss):10.8f},Best: {best:10.8f}, Accuracy: {accuracy}, StopCount/Limit: {stop:3d}/{self.stopcount:3d}, Time:{(end-start):10.8f}') # stopping condition and stopping if(stop>=self.stopcount): return best,bestaccuracy,config,id_ def jde_params(self,beta,cr): tau1,tau2,beta1,betau = self.adap_conf r1,r2,r3,r4 = np.random.uniform(0,1,4) if(r2 < tau1): beta = round(beta1 + r1 * betau,3) # else, keep the beta same if(r4 < tau2): cr = r3 return beta,cr def find_pbest(self,scores,p): pi = int(p*len(scores)+1) # it is taken p% out of length and then ceiling fits = scores.copy() # keep scores and its indexes fits = sorted(fits)[::-1] # sort fits desc idx = np.random.choice(range(pi),1,replace=False)[0] # random index for sorted fits return list(scores).index(fits[idx]) # return the index from sorted def self_adaptive_beta(self,beta): tau,beta1,betau = self.adap_conf r1,r2 = np.random.uniform(0,1,2) if(r2 < tau): beta = round(beta1 + r1 * betau,3) # else, keep the beta same return beta def mutation_pbest_1_z(self,x,x1,xs,beta,debug=False): indim = x[0] # forget first and last layers x,x1,xs[0],xs[1] = x[1:-1],x1[1:-1],xs[0][1:-1],xs[1][1:-1] if(debug): print(f'M1 : x len {x.shape[0]} x1 len {x1.shape[0]} xs0 len {xs[0].shape[0]} xs1 len {xs[1].shape[0]}') print(f'M1 : x {x} \nM1 : x1 {x1} \nM1 : xs0 {xs[0]} \nM1 : xs1 len {xs[1]}') # # A. Mutating the # of layers minlen = np.min([x.shape[0],x1.shape[0],xs[0].shape[0],xs[1].shape[0]]) if(debug): print(f'M1 : minlen {minlen}') newminlen = minlen targetlen=int(np.floor( (x.shape[0]) + beta * (x1.shape[0] - x.shape[0]) + beta * (xs[0].shape[0] - xs[1].shape[0]) )) # check the sign of targetlen: if the new length == 0 , set it back to target len , if <0 , take abs if(targetlen==0): targetlen=x1.shape[0] elif(targetlen<0): targetlen=abs(targetlen) # check if new length is between mindepth and maxdepth if(targetlen < self.mindepth): targetlen = self.mindepth elif(targetlen > self.maxdepth): targetlen = self.maxdepth # new minimum length is min of minlen and targetlen if(targetlen < minlen): newminlen=targetlen if(debug): print(f'M1 : New Min Len :{newminlen}, Length Mutation :{targetlen}') # # B. 
Mutating the # of neurons # As lengths of x, x1, xs[0], xs[1] and new length can possibly be different, # 1) do the mutation for # of neurons for new minlen, # 2) apply the same rule to remaining if needed # xa = np.zeros((targetlen),dtype=int) # Mutating the number of neurons up to min len layers xa = x[:newminlen] + beta * (x1[:newminlen] - x[:newminlen]) + beta * (xs[0][:newminlen] - xs[1][:newminlen]) # mutate on node with minlen # Mutating the number of neurons for the rest layers if(targetlen>minlen): xaa = np.zeros((targetlen-minlen)) a,b,c,d=None,None,None,None for i in range(targetlen-newminlen): # if number of neurons missing in vector, generate random from range (min) if(x.shape[0]<=newminlen+i): a=np.random.choice(range(self.minneuron,self.maxneuron),1,replace=False)[0] elif(x.shape[0]>newminlen+i): a=x[newminlen+i] if(x1.shape[0]<=newminlen+i): b=np.random.choice(range(self.minneuron,self.maxneuron),1,replace=False)[0] elif(x1.shape[0]>newminlen+i): b=x1[newminlen+i] if(xs[0].shape[0]<=newminlen+i): c=np.random.choice(range(self.minneuron,self.maxneuron),1,replace=False)[0] elif(xs[0].shape[0]>newminlen+i): c=xs[0][newminlen+i] if(xs[1].shape[0]<=newminlen+i): d=np.random.choice(range(self.minneuron,self.maxneuron),1,replace=False)[0] elif(xs[1].shape[0]>newminlen+i): d=xs[1][newminlen+i] xaa[i] = a + beta * (b - a) + beta * (c - d) xa = np.concatenate((xa, xaa), axis=None) # check if numbers of neurons are in allowed range for i in range(xa.shape[0]): if(xa[i]>self.maxneuron): xa[i]=self.maxneuron elif(xa[i]<self.minneuron): xa[i]=self.minneuron xa[i] = np.floor(xa[i]) xa = np.concatenate((np.array(indim,dtype=int),np.array(xa,dtype=int),np.array(self.outdim,dtype=int)), axis=None,dtype=int) return xa def mutation_rand_1_z(self,x1,xs,beta,debug=False): indim = x1[0] x1 = x1[1:-1] # remove in/out dim xs[0] = xs[0][1:-1] xs[1] = xs[1][1:-1] if(debug): print(f'M1 : x1 len {x1.shape[0]} xs0 len {xs[0].shape[0]} xs1 len {xs[1].shape[0]}') print(f'M1 : x1 {x1} \nM1 : xs0 {xs[0]} \nM1 : xs1 len {xs[1]}') # # A. Mutating the # of layers minlen = np.min([x1.shape[0],xs[0].shape[0],xs[1].shape[0]]) if(debug): print(f'M1 : minlen {minlen}') newminlen = minlen targetlen=int(np.floor((x1.shape[0]) + beta * (xs[0].shape[0] - xs[1].shape[0]))) # check the sign of targetlen: if the new length == 0 , set it back to target len , if <0 , take abs if(targetlen==0): targetlen=x1.shape[0] elif(targetlen<0): targetlen=abs(targetlen) # check if new length is between mindepth and maxdepth if(targetlen < self.mindepth): targetlen = self.mindepth elif(targetlen > self.maxdepth): targetlen = self.maxdepth # new minimum length is min of minlen and targetlen if(targetlen < minlen): newminlen=targetlen if(debug): print(f'M1 : New Min Len :{newminlen}, Length Mutation :{targetlen}') # # B. 
Mutating the # of neurons # As lengths of x1, xs[0], xs[1] and new length can possibly be different, # 1) do the mutation for # of neurons for new minlen, # 2) apply the same rule to remaining if needed # xa = np.zeros((targetlen),dtype=int) # Mutating the number of neurons up to min len layers xa = x1[:newminlen] + beta * (xs[0][:newminlen] - xs[1][:newminlen]) # mutate on node with minlen # Mutating the number of neurons for the rest layers if(targetlen>minlen): xaa = np.zeros((targetlen-minlen)) a,b,c=None,None,None for i in range(targetlen-newminlen): # if number of neurons missing in vector, generate random from range (min) if(x1.shape[0]<=newminlen+i): a=np.random.choice(range(self.minneuron,self.maxneuron),1,replace=False)[0] elif(x1.shape[0]>newminlen+i): a=x1[newminlen+i] if(xs[0].shape[0]<=newminlen+i): b=np.random.choice(range(self.minneuron,self.maxneuron),1,replace=False)[0] elif(xs[0].shape[0]>newminlen+i): b=xs[0][newminlen+i] if(xs[1].shape[0]<=newminlen+i): c=np.random.choice(range(self.minneuron,self.maxneuron),1,replace=False)[0] elif(xs[1].shape[0]>newminlen+i): c=xs[1][newminlen+i] xaa[i]=a + beta * (b - c) xa = np.concatenate((xa, xaa), axis=None) # check if numbers of neurons are in allowed range for i in range(xa.shape[0]): if(xa[i]>self.maxneuron): xa[i]=self.maxneuron elif(xa[i]<self.minneuron): xa[i]=self.minneuron xa[i] = np.floor(xa[i]) xa = np.concatenate((np.array(indim,dtype=int),np.array(xa,dtype=int),np.array(self.outdim,dtype=int)), axis=None,dtype=int) return xa def crossoverMean(self,parent,u): order = [parent[1:-1],u[1:-1]] if(parent.shape[0] > u.shape[0]): order = [u[1:-1],parent[1:-1]] order[0] = np.resize(order[0],order[1].shape[0]) middle = np.mean(order,axis=0,dtype=int) child=np.insert(middle,0,parent[0]) child=np.append(child,parent[-1]) return child.copy() def crossoverRandomSwap(self,parent,u): # the first one is with min len order = [parent[1:-1],u[1:-1]] child = [parent[0]] if(parent.shape[0] > u.shape[0]): order = [u[1:-1],parent[1:-1]] order[0] = np.resize(order[0],order[1].shape[0]) swap = np.random.randint(0,2,order[0].shape[0]) for i in range(len(swap)): if(swap[i]==0): child.append(order[0][i]) else: child.append(order[1][i]) child.append(parent[-1]) return np.array(child).copy() def run(self,beta=0.5,p=0.2): current_gen=self.MLPlayerlist scores = np.zeros((self.pplSize)) accuracy = np.zeros((self.pplSize)) #initial Run print('JADE Initial Run Start') for i in range(len(self.MLPlayerlist)): b,a,_,_ = self.fit(self.MLPlayerlist[i],i) scores[i]=b accuracy[i]=a print('JADE Initial Run End') currentbest = np.min(scores) overallBest = currentbest currentmean = np.mean(scores) currentbestidx = np.argmin(scores) overallBestConfig = current_gen[currentbestidx] overallBestAccuracy = accuracy[currentbestidx] currentbestAccuracy = accuracy[currentbestidx] bestGen = 0 print(f'JADE Init Run Best: {currentbest}, Mean: {currentmean}, ID:{currentbestidx}, config: {current_gen[currentbestidx]}') #Generation Run for i in range(self.maxiter): structureStatistic=np.zeros((self.pplSize,5)) updatecount=0 start=time.time() print(f'JADE Gen {i} Run Start') betas = np.ones(self.pplSize)*beta for j in range(self.pplSize): parent = current_gen[j] # factors betas[j] = self.self_adaptive_beta(betas[j]) # mutation tidx = self.find_pbest(scores,p) idx1,idx2 = np.array(np.random.choice(np.delete(np.arange(self.pplSize),tidx),2,replace=False),dtype=int) target = current_gen[tidx] diff = [current_gen[idx1],current_gen[idx2]] unitvector = 
self.mutation_pbest_1_z(parent,target,diff,betas[j]) # crossover if(self.crossover==1): nextGen = self.crossoverMean(parent,unitvector) else: nextGen = self.crossoverRandomSwap(parent,unitvector) print(f'Next Gen: {nextGen}') structureStatistic[j,0]= nextGen.shape[0]-2 structureStatistic[j,1]= np.mean(nextGen[1:-1]) structureStatistic[j,2]= np.median(nextGen[1:-1]) structureStatistic[j,3]= np.quantile(nextGen[1:-1],0.25) structureStatistic[j,4]= np.quantile(nextGen[1:-1],0.75) s,a,_,_ = self.fit(nextGen,j) if(s<scores[j]): updatecount+=1 scores[j]=s accuracy[j]=a current_gen[j]=nextGen print(f'JADE Gen {i} Run End') end=time.time() currentbest = np.min(scores)
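# Hedged sketch of the fixed-length DE "current-to-pbest/1" mutation that
# mutation_pbest_1_z above generalizes to variable-depth layer vectors:
# v = x + beta*(x_pbest - x) + beta*(x_r1 - x_r2), with neuron counts floored and
# clipped into [minneuron, maxneuron]. The toy population and beta value are
# illustrative only; the in/out layer dimensions handled above are omitted here.
import numpy as np

def mutate_pbest(x, x_pbest, x_r1, x_r2, beta, minneuron=4, maxneuron=10):
    v = x + beta * (x_pbest - x) + beta * (x_r1 - x_r2)
    return np.clip(np.floor(v), minneuron, maxneuron).astype(int)

rng = np.random.default_rng(2)
pop = rng.integers(4, 10, size=(4, 6))               # four fixed-depth individuals (hidden layers only)
child = mutate_pbest(pop[0], pop[1], pop[2], pop[3], beta=0.5)
print(child)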
<gh_stars>1-10 #!/usr/bin/env python3 """Train with 1000""" from __future__ import annotations import argparse import functools import logging import pathlib import random import typing import albumentations as A import cv2 import horovod.tensorflow.keras as hvd import numpy as np import scipy import sklearn.metrics import tensorflow as tf logger = logging.getLogger(__name__) def _main(): try: import better_exceptions better_exceptions.hook() except Exception: pass parser = argparse.ArgumentParser() parser.add_argument( "--data", default="cifar10", choices=("mnist", "fashion_mnist", "cifar10", "cifar100"), ) parser.add_argument("--check", action="store_true", help="3epochだけお試し実行(動作確認用)") parser.add_argument( "--results-dir", default=pathlib.Path("results"), type=pathlib.Path ) args = parser.parse_args() hvd.init() gpus = tf.config.experimental.list_physical_devices("GPU") for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) if gpus: tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU") handlers = [logging.StreamHandler()] if hvd.rank() == 0: args.results_dir.mkdir(parents=True, exist_ok=True) handlers.append( logging.FileHandler( args.results_dir / f"{args.data}.log", mode="w", encoding="utf-8" ) ) logging.basicConfig( format="[%(levelname)-5s] %(message)s", level=logging.DEBUG if hvd.rank() == 0 else logging.WARNING, handlers=handlers, ) runs = 1 if args.check else 5 val_acc, val_ce, test_acc, test_ce = zip(*[_run(args) for _ in range(runs)]) if runs > 1: val_acc = np.mean(val_acc, axis=0) val_ce = np.mean(val_ce, axis=0) test_acc = np.mean(test_acc, axis=0) test_ce = np.mean(test_ce, axis=0) logger.info(f"Val Accuracy: {val_acc:.4f} ({runs} runs)") logger.info(f"Val Cross Entropy: {val_ce:.4f} ({runs} runs)") logger.info(f"Test Accuracy: {test_acc:.4f} ({runs} runs)") logger.info(f"Test Cross Entropy: {test_ce:.4f} ({runs} runs)") def _run(args): (X_train, y_train), (X_val, y_val), (X_test, y_test), num_classes = load_data( args.data ) epochs = 2 if args.check else 1800 refine_epoch = 2 if args.check else 100 batch_size = 64 global_batch_size = batch_size * hvd.size() base_lr = 1e-3 * global_batch_size steps_per_epoch = -(-len(X_train) // global_batch_size) input_shape = X_train.shape[1:] model = create_network(input_shape, num_classes) learning_rate = CosineAnnealing(base_lr, decay_steps=epochs * steps_per_epoch) optimizer = tf.keras.optimizers.SGD(learning_rate, momentum=0.9, nesterov=True) optimizer = hvd.DistributedOptimizer(optimizer, compression=hvd.Compression.fp16) def loss(y_true, logits): # categorical crossentropy log_p = logits - tf.math.reduce_logsumexp(logits, axis=-1, keepdims=True) loss = -tf.math.reduce_sum(y_true * log_p, axis=-1) # Label smoothing <https://myrtle.ai/learn/how-to-train-your-resnet-8-bag-of-tricks/> label_smoothing = 0.2 kl = -tf.math.reduce_mean(log_p, axis=-1) loss = (1 - label_smoothing) * loss + label_smoothing * kl return loss model.compile(optimizer, loss, metrics=["acc"], experimental_run_tf_function=False) model.summary(print_fn=logger.info) tf.keras.utils.plot_model( model, args.results_dir / f"{args.data}.png", show_shapes=True ) model.fit( create_dataset( X_train, y_train, batch_size, num_classes, shuffle=True, mode="train" ), steps_per_epoch=steps_per_epoch, validation_data=create_dataset( X_val, y_val, batch_size, num_classes, shuffle=True ), validation_steps=-(-len(X_val) * 3 // global_batch_size), validation_freq=[1] + list(range(epochs, 1, -int(epochs ** 0.5))), epochs=epochs, callbacks=[ 
hvd.callbacks.BroadcastGlobalVariablesCallback(0), hvd.callbacks.MetricAverageCallback(), ], verbose=1 if hvd.rank() == 0 else 0, ) # Refined Data Augmentation <https://arxiv.org/abs/1909.09148> # + freeze BNs <https://arxiv.org/abs/1709.01507> for layer in model.layers: if isinstance(layer, tf.keras.layers.BatchNormalization): layer.trainable = False optimizer = tf.keras.optimizers.SGD( learning_rate=base_lr / 100, momentum=0.9, nesterov=True ) optimizer = hvd.DistributedOptimizer(optimizer, compression=hvd.Compression.fp16) model.compile(optimizer, loss, metrics=["acc"], experimental_run_tf_function=False) model.fit( create_dataset( X_train, y_train, batch_size, num_classes, shuffle=True, mode="refine" ), steps_per_epoch=steps_per_epoch, validation_data=create_dataset( X_val, y_val, batch_size, num_classes, shuffle=True ), validation_steps=-(-len(X_val) * 3 // global_batch_size), validation_freq=[1] + list(range(refine_epoch, 1, -int(refine_epoch ** 0.5))), epochs=refine_epoch, callbacks=[ hvd.callbacks.BroadcastGlobalVariablesCallback(0), hvd.callbacks.MetricAverageCallback(), ], verbose=1 if hvd.rank() == 0 else 0, ) logger.info(f"Arguments: --data={args.data}") # 検証/評価 # 両方出してたら分けた意味ない気はするけど面倒なので両方出しちゃう # 普段はValを見ておいて最終評価はTestというお気持ち val_acc, val_ce = evaluate("Val", X_val, y_val, model, batch_size, num_classes) test_acc, test_ce = evaluate("Test", X_test, y_test, model, batch_size, num_classes) if hvd.rank() == 0: # 後で何かしたくなった時のために一応保存 model.save(args.results_dir / f"{args.data}.h5", include_optimizer=False) return val_acc, val_ce, test_acc, test_ce def evaluate(name, X_test, y_test, model, batch_size, num_classes): pred_test_list = [] for _ in range(64): # shardingして推論&TTA (再現性は無いので注意) shard_size = len(X_test) // hvd.size() shard_offset = shard_size * hvd.rank() s = X_test[shard_offset : shard_offset + shard_size] pred_test = model.predict( create_dataset( s, np.zeros((len(s),), dtype=np.int32), batch_size, num_classes, mode="refine", ), steps=-(-len(s) // batch_size), verbose=1 if hvd.rank() == 0 else 0, ) pred_test = hvd.allgather(pred_test).numpy() pred_test_list.append(pred_test) pred_test = np.mean(pred_test_list, axis=0) # 評価 acc = sklearn.metrics.accuracy_score(y_test, pred_test.argmax(axis=-1)) ce = sklearn.metrics.log_loss(y_test, scipy.special.softmax(pred_test, axis=-1)) logger.info(f"{name} Accuracy: {acc:.4f}") logger.info(f"{name} Cross Entropy: {ce:.4f}") return acc, ce def load_data(data): """データの読み込み。""" (X_train, y_train), (X_test, y_test) = { "mnist": tf.keras.datasets.mnist.load_data, "fashion_mnist": tf.keras.datasets.fashion_mnist.load_data, "cifar10": tf.keras.datasets.cifar10.load_data, "cifar100": tf.keras.datasets.cifar100.load_data, }[data]() y_train = np.squeeze(y_train) y_test = np.squeeze(y_test) num_classes = len(np.unique(y_train)) # 末尾1万件を検証データとする (本実装独自) X_val, y_val = X_train[-10000:], y_train[-10000:] # 訓練データはクラスごとに先頭から切り出す (Train with 1000準拠) X_train, y_train = extract1000(X_train, y_train, num_classes=num_classes) return (X_train, y_train), (X_val, y_val), (X_test, y_test), num_classes def extract1000(X, y, num_classes): """https://github.com/mastnk/train1000 を参考にクラスごとに均等に先頭から取得する処理。""" num_data = 1000 num_per_class = num_data // num_classes index_list = [] for c in range(num_classes): index_list.extend(np.where(y == c)[0][:num_per_class]) assert len(index_list) == num_data return X[index_list], y[index_list] def create_network(input_shape, num_classes): """ネットワークを作成して返す。""" conv2d = functools.partial( tf.keras.layers.Conv2D, kernel_size=3, 
padding="same", use_bias=False, kernel_initializer="he_uniform", kernel_regularizer=tf.keras.regularizers.l2(1e-4), ) bn = functools.partial( tf.keras.layers.BatchNormalization, gamma_regularizer=tf.keras.regularizers.l2(1e-4), ) act = functools.partial(tf.keras.layers.Activation, "relu") def blocks(filters, count, down=True): def layers(x): if down: x = conv2d(filters)(x) x = BlurPooling2D(taps=4)(x) x = bn()(x) for _ in range(count): sc = x x = conv2d(filters)(x) x = bn()(x) x = act()(x) x = conv2d(filters)(x) # resblockのadd前だけgammaの初期値を0にする。 <https://arxiv.org/abs/1812.01187> x = bn(gamma_initializer="zeros")(x) x = tf.keras.layers.add([sc, x]) x = bn()(x) x = act()(x) return x return layers inputs = x = tf.keras.layers.Input(input_shape) x = conv2d(128)(x) x = bn()(x) x = blocks(128, 8, down=False)(x) x = blocks(256, 8)(x) x = blocks(512, 8)(x) x = tf.keras.layers.GlobalAveragePooling2D()(x) x = tf.keras.layers.Dense( num_classes, use_bias=False, kernel_regularizer=tf.keras.regularizers.l2(1e-4) )(x) model = tf.keras.models.Model(inputs=inputs, outputs=x) return model def create_dataset(X, y, batch_size, num_classes, shuffle=False, mode="test"): """generator。""" if mode == "train": aug1 = A.Compose( [ RandomTransform((32, 32), p=1), RandomCompose( [ A.Equalize(mode="pil", by_channels=True, p=0.125), A.Equalize(mode="pil", by_channels=False, p=0.125), A.CLAHE(p=0.125), A.RandomBrightnessContrast(brightness_by_max=True, p=0.5), A.HueSaturationValue(val_shift_limit=0, p=0.5), A.Posterize(num_bits=(4, 7), p=0.125), A.Solarize(threshold=(50, 255 - 50), p=0.125), A.Blur(blur_limit=1, p=0.125), A.Sharpen(alpha=(0, 0.5), p=0.125), A.Emboss(alpha=(0, 0.5), p=0.125), A.GaussNoise(var_limit=(0, 10.0 ** 2), p=0.125), RandomErasing(alpha=0.125, p=0.25), ], p=1, ), ] ) aug2 = RandomErasing(p=0.5) elif mode == "refine": aug1 = RandomTransform.create_refine((32, 32)) aug2 = A.Compose([]) else: aug1 = A.Compose([]) aug2 = A.Compose([]) def process1(X_i, y_i): X_i = tf.numpy_function( lambda img: aug1(image=img)["image"], inp=[X_i], Tout=tf.uint8 ) X_i = tf.cast(X_i, tf.float32) y_i = tf.one_hot(y_i, num_classes, dtype=tf.float32) return X_i, y_i def process2(X_i, y_i): X_i = tf.numpy_function( lambda img: aug2(image=img)["image"], inp=[X_i], Tout=tf.float32 ) X_i = tf.ensure_shape(X_i, (None, None, None)) X_i = X_i / 127.5 - 1 return X_i, y_i ds = tf.data.Dataset.from_tensor_slices((X, y)) ds = ds.shuffle(buffer_size=len(X)) if shuffle else ds ds = ds.map( process1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=not shuffle ) if mode == "train": assert shuffle ds = mixup(ds, process2) else: ds = ds.map(process2) ds = ds.repeat() if shuffle else ds # シャッフル時はバッチサイズを固定するため先にrepeat ds = ds.batch(batch_size) ds = ds.prefetch(buffer_size=tf.data.AUTOTUNE) return ds def mixup( ds: tf.data.Dataset, postmix_fn: typing.Callable[..., typing.Any] = None, num_parallel_calls: int = None, ): """tf.dataでのmixup: <https://arxiv.org/abs/1710.09412> Args: ds: 元のデータセット postmix_fn: mixup後の処理 num_parallel_calls: premix_fnの並列数 """ @tf.function def mixup_fn(*data): r = tf.random.uniform(()) data = [ tf.cast(d[0], tf.float32) * r + tf.cast(d[1], tf.float32) * (1 - r) for d in data ] return data if postmix_fn is None else postmix_fn(*data) ds = ds.repeat() ds = ds.batch(2) ds = ds.map(mixup_fn, num_parallel_calls=num_parallel_calls) return ds class RandomCompose(A.Compose): """シャッフル付きCompose。""" def __call__(self, *args, force_apply=False, **data): """変換の適用。""" backup = self.transforms.transforms.copy() try: 
random.shuffle(self.transforms.transforms) return super().__call__(*args, force_apply=force_apply, **data) finally: self.transforms.transforms = backup class RandomTransform(A.DualTransform): """Flip, Scale, Resize, Rotateをまとめて処理。 Args: size: 出力サイズ(h, w) flip: 反転の有無(vertical, horizontal) translate: 平行移動の量(vertical, horizontal) border_mode: edge, reflect, wrap, zero, half, one clip_bboxes: はみ出すbboxをclippingするか否か with_bboxes: Trueならできるだけbboxを1つ以上含むようにcropする (bboxesが必須になってしまうため既定値False) mode: "normal", "preserve_aspect", "crop" """ @classmethod def create_refine( cls, size: tuple[int, int], flip: tuple[bool, bool] = (False, True), translate: tuple[float, float] = (0.0625, 0.0625), border_mode: str = "edge", clip_bboxes: bool = True, with_bboxes: bool = False, mode: str = "normal", always_apply: bool = False, p: float = 1.0, ) -> RandomTransform: """Refined Data Augmentation <https://arxiv.org/abs/1909.09148> 用の控えめバージョンを作成する。""" return cls( size=size, flip=flip, translate=translate, scale_prob=0.0, aspect_prob=0.0, rotate_prob=0.0, border_mode=border_mode, clip_bboxes=clip_bboxes, with_bboxes=with_bboxes, mode=mode, always_apply=always_apply, p=p, ) @classmethod def create_test( cls, size: tuple[int, int], border_mode: str = "edge", clip_bboxes: bool = True, mode: str = "normal", always_apply: bool = False, p: float = 1.0, ) -> RandomTransform: """Data Augmentation無しバージョン(リサイズのみ)を作成する。""" return cls( size=size, flip=(False, False), translate=(0.0, 0.0), scale_prob=0.0, aspect_prob=0.0, rotate_prob=0.0, border_mode=border_mode, clip_bboxes=clip_bboxes, mode=mode, always_apply=always_apply, p=p, ) def __init__( self, size: tuple[int, int], flip: tuple[bool, bool] = (False, True), translate: tuple[float, float] = (0.125, 0.125), scale_prob: float = 0.5, scale_range: tuple[float, float] = (2 / 3, 3 / 2), base_scale: float = 1.0, aspect_prob: float = 0.5, aspect_range: tuple[float, float] = (3 / 4, 4 / 3), rotate_prob: float = 0.25, rotate_range: tuple[int, int] = (-15, +15), border_mode: str = "edge", clip_bboxes: bool = True, with_bboxes: bool = False, mode: str = "normal", always_apply: bool = False, p: float = 1.0, ): super().__init__(always_apply=always_apply, p=p) self.size = size self.flip = flip self.translate = translate self.scale_prob = scale_prob self.base_scale = base_scale self.scale_range = scale_range self.aspect_prob = aspect_prob self.aspect_range = aspect_range self.rotate_prob = rotate_prob self.rotate_range = rotate_range self.border_mode = border_mode self.clip_bboxes = clip_bboxes self.with_bboxes = with_bboxes self.mode = mode def apply(self, img,
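# Hedged NumPy sketch of the mixup step that the mixup() helper above wires into the
# tf.data pipeline (<https://arxiv.org/abs/1710.09412>): one uniform ratio r blends a pair
# of images and their one-hot labels. The toy images and labels are placeholders; the real
# pipeline mixes the outputs of process1 and then applies process2 to rescale them.
import numpy as np

rng = np.random.default_rng(3)
x = rng.random((2, 32, 32, 3)).astype(np.float32)    # a pair of images
y = np.eye(10, dtype=np.float32)[[1, 7]]             # their one-hot labels

r = rng.uniform()
x_mixed = x[0] * r + x[1] * (1 - r)
y_mixed = y[0] * r + y[1] * (1 - r)
print(f"r={r:.3f}", y_mixed)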
dict): self._property_changed('trading_pnl') self.__trading_pnl = value @property def collateral_value_required(self) -> dict: return self.__collateral_value_required @collateral_value_required.setter def collateral_value_required(self, value: dict): self._property_changed('collateral_value_required') self.__collateral_value_required = value @property def given_plus_paid(self) -> dict: return self.__given_plus_paid @given_plus_paid.setter def given_plus_paid(self, value: dict): self._property_changed('given_plus_paid') self.__given_plus_paid = value @property def short_conviction_small(self) -> dict: return self.__short_conviction_small @short_conviction_small.setter def short_conviction_small(self, value: dict): self._property_changed('short_conviction_small') self.__short_conviction_small = value @property def price_to_earnings_positive(self) -> dict: return self.__price_to_earnings_positive @price_to_earnings_positive.setter def price_to_earnings_positive(self, value: dict): self._property_changed('price_to_earnings_positive') self.__price_to_earnings_positive = value @property def forecast(self) -> dict: return self.__forecast @forecast.setter def forecast(self, value: dict): self._property_changed('forecast') self.__forecast = value @property def pnl(self) -> dict: return self.__pnl @pnl.setter def pnl(self, value: dict): self._property_changed('pnl') self.__pnl = value @property def upfront_payment_currency(self) -> dict: return self.__upfront_payment_currency @upfront_payment_currency.setter def upfront_payment_currency(self, value: dict): self._property_changed('upfront_payment_currency') self.__upfront_payment_currency = value @property def date_index(self) -> dict: return self.__date_index @date_index.setter def date_index(self, value: dict): self._property_changed('date_index') self.__date_index = value @property def tcm_cost_horizon4_day(self) -> dict: return self.__tcm_cost_horizon4_day @tcm_cost_horizon4_day.setter def tcm_cost_horizon4_day(self, value: dict): self._property_changed('tcm_cost_horizon4_day') self.__tcm_cost_horizon4_day = value @property def asset_classifications_is_primary(self) -> dict: return self.__asset_classifications_is_primary @asset_classifications_is_primary.setter def asset_classifications_is_primary(self, value: dict): self._property_changed('asset_classifications_is_primary') self.__asset_classifications_is_primary = value @property def styles(self) -> dict: return self.__styles @styles.setter def styles(self, value: dict): self._property_changed('styles') self.__styles = value @property def short_name(self) -> dict: return self.__short_name @short_name.setter def short_name(self, value: dict): self._property_changed('short_name') self.__short_name = value @property def dwi_contribution(self) -> dict: return self.__dwi_contribution @dwi_contribution.setter def dwi_contribution(self, value: dict): self._property_changed('dwi_contribution') self.__dwi_contribution = value @property def reset_frequency1(self) -> dict: return self.__reset_frequency1 @reset_frequency1.setter def reset_frequency1(self, value: dict): self._property_changed('reset_frequency1') self.__reset_frequency1 = value @property def asset2_id(self) -> dict: return self.__asset2_id @asset2_id.setter def asset2_id(self, value: dict): self._property_changed('asset2_id') self.__asset2_id = value @property def reset_frequency2(self) -> dict: return self.__reset_frequency2 @reset_frequency2.setter def reset_frequency2(self, value: dict): self._property_changed('reset_frequency2') 
self.__reset_frequency2 = value @property def average_fill_price(self) -> dict: return self.__average_fill_price @average_fill_price.setter def average_fill_price(self, value: dict): self._property_changed('average_fill_price') self.__average_fill_price = value @property def price_notation_type2(self) -> dict: return self.__price_notation_type2 @price_notation_type2.setter def price_notation_type2(self, value: dict): self._property_changed('price_notation_type2') self.__price_notation_type2 = value @property def price_notation_type3(self) -> dict: return self.__price_notation_type3 @price_notation_type3.setter def price_notation_type3(self, value: dict): self._property_changed('price_notation_type3') self.__price_notation_type3 = value @property def bid_gspread(self) -> dict: return self.__bid_gspread @bid_gspread.setter def bid_gspread(self, value: dict): self._property_changed('bid_gspread') self.__bid_gspread = value @property def open_price(self) -> dict: return self.__open_price @open_price.setter def open_price(self, value: dict): self._property_changed('open_price') self.__open_price = value @property def depth_spread_score(self) -> dict: return self.__depth_spread_score @depth_spread_score.setter def depth_spread_score(self, value: dict): self._property_changed('depth_spread_score') self.__depth_spread_score = value @property def sub_account(self) -> dict: return self.__sub_account @sub_account.setter def sub_account(self, value: dict): self._property_changed('sub_account') self.__sub_account = value @property def notional_currency_leg1(self) -> dict: return self.__notional_currency_leg1 @notional_currency_leg1.setter def notional_currency_leg1(self, value: dict): self._property_changed('notional_currency_leg1') self.__notional_currency_leg1 = value @property def notional_currency_leg2(self) -> dict: return self.__notional_currency_leg2 @notional_currency_leg2.setter def notional_currency_leg2(self, value: dict): self._property_changed('notional_currency_leg2') self.__notional_currency_leg2 = value @property def fair_volatility(self) -> dict: return self.__fair_volatility @fair_volatility.setter def fair_volatility(self, value: dict): self._property_changed('fair_volatility') self.__fair_volatility = value @property def dollar_cross(self) -> dict: return self.__dollar_cross @dollar_cross.setter def dollar_cross(self, value: dict): self._property_changed('dollar_cross') self.__dollar_cross = value @property def portfolio_type(self) -> dict: return self.__portfolio_type @portfolio_type.setter def portfolio_type(self, value: dict): self._property_changed('portfolio_type') self.__portfolio_type = value @property def vendor(self) -> dict: return self.__vendor @vendor.setter def vendor(self, value: dict): self._property_changed('vendor') self.__vendor = value @property def currency(self) -> dict: return self.__currency @currency.setter def currency(self, value: dict): self._property_changed('currency') self.__currency = value @property def cluster_class(self) -> dict: return self.__cluster_class @cluster_class.setter def cluster_class(self, value: dict): self._property_changed('cluster_class') self.__cluster_class = value @property def queueing_time(self) -> dict: return self.__queueing_time @queueing_time.setter def queueing_time(self, value: dict): self._property_changed('queueing_time') self.__queueing_time = value @property def ann_return5_year(self) -> dict: return self.__ann_return5_year @ann_return5_year.setter def ann_return5_year(self, value: dict): 
self._property_changed('ann_return5_year') self.__ann_return5_year = value @property def bid_size(self) -> dict: return self.__bid_size @bid_size.setter def bid_size(self, value: dict): self._property_changed('bid_size') self.__bid_size = value @property def arrival_mid(self) -> dict: return self.__arrival_mid @arrival_mid.setter def arrival_mid(self, value: dict): self._property_changed('arrival_mid') self.__arrival_mid = value @property def business_sponsor(self) -> dict: return self.__business_sponsor @business_sponsor.setter def business_sponsor(self, value: dict): self._property_changed('business_sponsor') self.__business_sponsor = value @property def asset_parameters_exchange_currency(self) -> dict: return self.__asset_parameters_exchange_currency @asset_parameters_exchange_currency.setter def asset_parameters_exchange_currency(self, value: dict): self._property_changed('asset_parameters_exchange_currency') self.__asset_parameters_exchange_currency = value @property def unexplained(self) -> dict: return self.__unexplained @unexplained.setter def unexplained(self, value: dict): self._property_changed('unexplained') self.__unexplained = value @property def candidate_name(self) -> dict: return self.__candidate_name @candidate_name.setter def candidate_name(self, value: dict): self._property_changed('candidate_name') self.__candidate_name = value @property def metric(self) -> dict: return self.__metric @metric.setter def metric(self, value: dict): self._property_changed('metric') self.__metric = value @property def ask(self) -> dict: return self.__ask @ask.setter def ask(self, value: dict): self._property_changed('ask') self.__ask = value @property def implied_lognormal_volatility(self) -> dict: return self.__implied_lognormal_volatility @implied_lognormal_volatility.setter def implied_lognormal_volatility(self, value: dict): self._property_changed('implied_lognormal_volatility') self.__implied_lognormal_volatility = value @property def close_price(self) -> dict: return self.__close_price @close_price.setter def close_price(self, value: dict): self._property_changed('close_price') self.__close_price = value @property def absolute_strike(self) -> dict: return self.__absolute_strike @absolute_strike.setter def absolute_strike(self, value: dict): self._property_changed('absolute_strike') self.__absolute_strike = value @property def source(self) -> dict: return self.__source @source.setter def source(self, value: dict): self._property_changed('source') self.__source = value @property def asset_classifications_country_code(self) -> dict: return self.__asset_classifications_country_code @asset_classifications_country_code.setter def asset_classifications_country_code(self, value: dict): self._property_changed('asset_classifications_country_code') self.__asset_classifications_country_code = value @property def expense_ratio_net_bps(self) -> dict: return self.__expense_ratio_net_bps @expense_ratio_net_bps.setter def expense_ratio_net_bps(self, value: dict): self._property_changed('expense_ratio_net_bps') self.__expense_ratio_net_bps = value @property def data_set_sub_category(self) -> dict: return self.__data_set_sub_category @data_set_sub_category.setter def data_set_sub_category(self, value: dict): self._property_changed('data_set_sub_category') self.__data_set_sub_category = value @property def day_count_convention2(self) -> dict: return self.__day_count_convention2 @day_count_convention2.setter def day_count_convention2(self, value: dict): self._property_changed('day_count_convention2') 
self.__day_count_convention2 = value @property def quantity_bucket(self) -> dict: return self.__quantity_bucket @quantity_bucket.setter def quantity_bucket(self, value: dict): self._property_changed('quantity_bucket') self.__quantity_bucket = value @property def factor_two(self) -> dict: return self.__factor_two @factor_two.setter def factor_two(self, value: dict): self._property_changed('factor_two') self.__factor_two = value @property def oe_name(self) -> dict: return self.__oe_name @oe_name.setter def oe_name(self, value: dict): self._property_changed('oe_name') self.__oe_name = value @property def given(self) -> dict: return self.__given @given.setter def given(self, value: dict): self._property_changed('given') self.__given = value @property def delisting_date(self) -> dict: return self.__delisting_date @delisting_date.setter def delisting_date(self, value: dict): self._property_changed('delisting_date') self.__delisting_date = value @property def price_spot_target_value(self) -> dict: return self.__price_spot_target_value @price_spot_target_value.setter def price_spot_target_value(self, value: dict): self._property_changed('price_spot_target_value') self.__price_spot_target_value = value @property def weight(self) -> dict: return self.__weight @weight.setter def weight(self, value: dict): self._property_changed('weight') self.__weight = value @property def business_scope(self) -> dict: return self.__business_scope @business_scope.setter def business_scope(self, value: dict): self._property_changed('business_scope') self.__business_scope = value @property def market_data_point(self) -> dict: return self.__market_data_point @market_data_point.setter def market_data_point(self, value: dict): self._property_changed('market_data_point') self.__market_data_point = value @property def absolute_weight(self) -> dict: return self.__absolute_weight @absolute_weight.setter def absolute_weight(self, value: dict): self._property_changed('absolute_weight') self.__absolute_weight = value @property def measure(self) -> dict: return self.__measure @measure.setter def measure(self, value: dict): self._property_changed('measure') self.__measure = value @property def hedge_annualized_volatility(self) -> dict: return self.__hedge_annualized_volatility @hedge_annualized_volatility.setter def hedge_annualized_volatility(self, value: dict): self._property_changed('hedge_annualized_volatility') self.__hedge_annualized_volatility = value @property def benchmark_currency(self) -> dict: return self.__benchmark_currency @benchmark_currency.setter def benchmark_currency(self, value: dict): self._property_changed('benchmark_currency') self.__benchmark_currency = value @property def futures_contract(self) -> dict: return self.__futures_contract @futures_contract.setter def futures_contract(self, value: dict): self._property_changed('futures_contract') self.__futures_contract = value @property def name(self) -> dict: return self.__name @name.setter def name(self, value: dict): self._property_changed('name') self.__name = value @property def aum(self) -> dict: return self.__aum @aum.setter def aum(self, value: dict): self._property_changed('aum') self.__aum = value @property def folder_name(self) -> dict: return self.__folder_name @folder_name.setter def folder_name(self, value: dict): self._property_changed('folder_name') self.__folder_name = value @property def swaption_atm_fwd_rate(self) -> dict: return self.__swaption_atm_fwd_rate @swaption_atm_fwd_rate.setter def swaption_atm_fwd_rate(self, value: dict): 
self._property_changed('swaption_atm_fwd_rate') self.__swaption_atm_fwd_rate = value @property def live_date(self) -> dict: return self.__live_date @live_date.setter def live_date(self, value: dict): self._property_changed('live_date') self.__live_date = value @property def ask_high(self) -> dict: return self.__ask_high @ask_high.setter def ask_high(self, value: dict): self._property_changed('ask_high') self.__ask_high = value @property def corporate_action_type(self) -> dict: return self.__corporate_action_type @corporate_action_type.setter def corporate_action_type(self, value: dict): self._property_changed('corporate_action_type') self.__corporate_action_type = value @property def prime_id(self) -> dict: return self.__prime_id @prime_id.setter def prime_id(self, value: dict): self._property_changed('prime_id') self.__prime_id = value @property def region_name(self) -> dict: return self.__region_name @region_name.setter def region_name(self, value: dict): self._property_changed('region_name') self.__region_name = value @property def description(self) -> dict: return self.__description @description.setter def description(self, value: dict): self._property_changed('description') self.__description = value @property def asset_classifications_is_country_primary(self) -> dict: return self.__asset_classifications_is_country_primary @asset_classifications_is_country_primary.setter def asset_classifications_is_country_primary(self, value: dict): self._property_changed('asset_classifications_is_country_primary') self.__asset_classifications_is_country_primary = value @property def value_revised(self) -> dict: return self.__value_revised @value_revised.setter def value_revised(self, value: dict): self._property_changed('value_revised') self.__value_revised = value
if self.Value is not None: namespaceprefix_ = self.Value_nsprefix_ + ':' if (UseCapturedNS_ and self.Value_nsprefix_) else '' showIndent(outfile, level, pretty_print) outfile.write('<%sValue>%s</%sValue>%s' % (namespaceprefix_ , self.gds_format_decimal(self.Value, input_name='Value'), namespaceprefix_ , eol_)) def build(self, node, gds_collector_=None): self.gds_collector_ = gds_collector_ if SaveElementTreeNode: self.gds_elementtree_node_ = node already_processed = set() self.ns_prefix_ = node.prefix self.buildAttributes(node, node.attrib, already_processed) for child in node: nodeName_ = Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_, gds_collector_=gds_collector_) return self def buildAttributes(self, node, attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False, gds_collector_=None): if nodeName_ == 'Units': value_ = child_.text value_ = self.gds_parse_string(value_, node, 'Units') value_ = self.gds_validate_string(value_, node, 'Units') self.Units = value_ self.Units_nsprefix_ = child_.prefix # validate type WeightUnits self.validate_WeightUnits(self.Units) elif nodeName_ == 'Value' and child_.text: sval_ = child_.text fval_ = self.gds_parse_decimal(sval_, node, 'Value') fval_ = self.gds_validate_decimal(fval_, node, 'Value') self.Value = fval_ self.Value_nsprefix_ = child_.prefix # end class Weight class WebAuthenticationDetail(GeneratedsSuper): """Used in authentication of the sender's identity.""" __hash__ = GeneratedsSuper.__hash__ subclass = None superclass = None def __init__(self, ParentCredential=None, UserCredential=None, gds_collector_=None, **kwargs_): self.gds_collector_ = gds_collector_ self.gds_elementtree_node_ = None self.original_tagname_ = None self.parent_object_ = kwargs_.get('parent_object_') self.ns_prefix_ = None self.ParentCredential = ParentCredential self.ParentCredential_nsprefix_ = None self.UserCredential = UserCredential self.UserCredential_nsprefix_ = None def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, WebAuthenticationDetail) if subclass is not None: return subclass(*args_, **kwargs_) if WebAuthenticationDetail.subclass: return WebAuthenticationDetail.subclass(*args_, **kwargs_) else: return WebAuthenticationDetail(*args_, **kwargs_) factory = staticmethod(factory) def get_ns_prefix_(self): return self.ns_prefix_ def set_ns_prefix_(self, ns_prefix): self.ns_prefix_ = ns_prefix def get_ParentCredential(self): return self.ParentCredential def set_ParentCredential(self, ParentCredential): self.ParentCredential = ParentCredential def get_UserCredential(self): return self.UserCredential def set_UserCredential(self, UserCredential): self.UserCredential = UserCredential def hasContent_(self): if ( self.ParentCredential is not None or self.UserCredential is not None ): return True else: return False def export(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='WebAuthenticationDetail', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('WebAuthenticationDetail') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ = '\n' else: eol_ = '' if self.original_tagname_ is not None and name_ == 'WebAuthenticationDetail': name_ = self.original_tagname_ if UseCapturedNS_ and self.ns_prefix_: namespaceprefix_ = self.ns_prefix_ + ':' showIndent(outfile, level, pretty_print) outfile.write('<%s%s%s' % (namespaceprefix_, name_, namespacedef_ and 
' ' + namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespaceprefix_, name_='WebAuthenticationDetail') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespaceprefix_, namespacedef_, name_='WebAuthenticationDetail', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespaceprefix_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespaceprefix_='', name_='WebAuthenticationDetail'): pass def exportChildren(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='WebAuthenticationDetail', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\n' else: eol_ = '' if self.ParentCredential is not None: namespaceprefix_ = self.ParentCredential_nsprefix_ + ':' if (UseCapturedNS_ and self.ParentCredential_nsprefix_) else '' self.ParentCredential.export(outfile, level, namespaceprefix_, namespacedef_='', name_='ParentCredential', pretty_print=pretty_print) if self.UserCredential is not None: namespaceprefix_ = self.UserCredential_nsprefix_ + ':' if (UseCapturedNS_ and self.UserCredential_nsprefix_) else '' self.UserCredential.export(outfile, level, namespaceprefix_, namespacedef_='', name_='UserCredential', pretty_print=pretty_print) def build(self, node, gds_collector_=None): self.gds_collector_ = gds_collector_ if SaveElementTreeNode: self.gds_elementtree_node_ = node already_processed = set() self.ns_prefix_ = node.prefix self.buildAttributes(node, node.attrib, already_processed) for child in node: nodeName_ = Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_, gds_collector_=gds_collector_) return self def buildAttributes(self, node, attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False, gds_collector_=None): if nodeName_ == 'ParentCredential': obj_ = WebAuthenticationCredential.factory(parent_object_=self) obj_.build(child_, gds_collector_=gds_collector_) self.ParentCredential = obj_ obj_.original_tagname_ = 'ParentCredential' elif nodeName_ == 'UserCredential': obj_ = WebAuthenticationCredential.factory(parent_object_=self) obj_.build(child_, gds_collector_=gds_collector_) self.UserCredential = obj_ obj_.original_tagname_ = 'UserCredential' # end class WebAuthenticationDetail class WebAuthenticationCredential(GeneratedsSuper): """Two part authentication string used for the sender's identity""" __hash__ = GeneratedsSuper.__hash__ subclass = None superclass = None def __init__(self, Key=None, Password=None, gds_collector_=None, **kwargs_): self.gds_collector_ = gds_collector_ self.gds_elementtree_node_ = None self.original_tagname_ = None self.parent_object_ = kwargs_.get('parent_object_') self.ns_prefix_ = None self.Key = Key self.Key_nsprefix_ = None self.Password = Password self.Password_nsprefix_ = None def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, WebAuthenticationCredential) if subclass is not None: return subclass(*args_, **kwargs_) if WebAuthenticationCredential.subclass: return WebAuthenticationCredential.subclass(*args_, **kwargs_) else: return WebAuthenticationCredential(*args_, **kwargs_) factory = staticmethod(factory) def get_ns_prefix_(self): return self.ns_prefix_ def set_ns_prefix_(self, ns_prefix): self.ns_prefix_ = ns_prefix def get_Key(self): return self.Key def
set_Key(self, Key): self.Key = Key def get_Password(self): return self.Password def set_Password(self, Password): self.Password = Password def hasContent_(self): if ( self.Key is not None or self.Password is not None ): return True else: return False def export(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='WebAuthenticationCredential', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('WebAuthenticationCredential') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ = '\n' else: eol_ = '' if self.original_tagname_ is not None and name_ == 'WebAuthenticationCredential': name_ = self.original_tagname_ if UseCapturedNS_ and self.ns_prefix_: namespaceprefix_ = self.ns_prefix_ + ':' showIndent(outfile, level, pretty_print) outfile.write('<%s%s%s' % (namespaceprefix_, name_, namespacedef_ and ' ' + namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespaceprefix_, name_='WebAuthenticationCredential') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespaceprefix_, namespacedef_, name_='WebAuthenticationCredential', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespaceprefix_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespaceprefix_='', name_='WebAuthenticationCredential'): pass def exportChildren(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='WebAuthenticationCredential', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\n' else: eol_ = '' if self.Key is not None: namespaceprefix_ = self.Key_nsprefix_ + ':' if (UseCapturedNS_ and self.Key_nsprefix_) else '' showIndent(outfile, level, pretty_print) outfile.write('<%sKey>%s</%sKey>%s' % (namespaceprefix_ , self.gds_encode(self.gds_format_string(quote_xml(self.Key), input_name='Key')), namespaceprefix_ , eol_)) if self.Password is not None: namespaceprefix_ = self.Password_nsprefix_ + ':' if (UseCapturedNS_ and self.Password_nsprefix_) else '' showIndent(outfile, level, pretty_print) outfile.write('<%sPassword>%s</%sPassword>%s' % (namespaceprefix_ , self.gds_encode(self.gds_format_string(quote_xml(self.Password), input_name='Password')), namespaceprefix_ , eol_)) def build(self, node, gds_collector_=None): self.gds_collector_ = gds_collector_ if SaveElementTreeNode: self.gds_elementtree_node_ = node already_processed = set() self.ns_prefix_ = node.prefix self.buildAttributes(node, node.attrib, already_processed) for child in node: nodeName_ = Tag_pattern_.match(child.tag).groups()[-1] self.buildChildren(child, node, nodeName_, gds_collector_=gds_collector_) return self def buildAttributes(self, node, attrs, already_processed): pass def buildChildren(self, child_, node, nodeName_, fromsubclass_=False, gds_collector_=None): if nodeName_ == 'Key': value_ = child_.text value_ = self.gds_parse_string(value_, node, 'Key') value_ = self.gds_validate_string(value_, node, 'Key') self.Key = value_ self.Key_nsprefix_ = child_.prefix elif nodeName_ == 'Password': value_ = child_.text value_ = self.gds_parse_string(value_, node, 'Password') value_ = self.gds_validate_string(value_, node, 'Password') self.Password = value_ self.Password_nsprefix_ = child_.prefix # end class WebAuthenticationCredential class VersionId(GeneratedsSuper): """Identifies the version/level of a service operation expected by a caller (in each 
request) and performed by the callee (in each reply).""" __hash__ = GeneratedsSuper.__hash__ subclass = None superclass = None def __init__(self, ServiceId=None, Major=None, Intermediate=None, Minor=None, gds_collector_=None, **kwargs_): self.gds_collector_ = gds_collector_ self.gds_elementtree_node_ = None self.original_tagname_ = None self.parent_object_ = kwargs_.get('parent_object_') self.ns_prefix_ = None self.ServiceId = ServiceId self.ServiceId_nsprefix_ = None self.Major = Major self.Major_nsprefix_ = None self.Intermediate = Intermediate self.Intermediate_nsprefix_ = None self.Minor = Minor self.Minor_nsprefix_ = None def factory(*args_, **kwargs_): if CurrentSubclassModule_ is not None: subclass = getSubclassFromModule_( CurrentSubclassModule_, VersionId) if subclass is not None: return subclass(*args_, **kwargs_) if VersionId.subclass: return VersionId.subclass(*args_, **kwargs_) else: return VersionId(*args_, **kwargs_) factory = staticmethod(factory) def get_ns_prefix_(self): return self.ns_prefix_ def set_ns_prefix_(self, ns_prefix): self.ns_prefix_ = ns_prefix def get_ServiceId(self): return self.ServiceId def set_ServiceId(self, ServiceId): self.ServiceId = ServiceId def get_Major(self): return self.Major def set_Major(self, Major): self.Major = Major def get_Intermediate(self): return self.Intermediate def set_Intermediate(self, Intermediate): self.Intermediate = Intermediate def get_Minor(self): return self.Minor def set_Minor(self, Minor): self.Minor = Minor def hasContent_(self): if ( self.ServiceId is not None or self.Major is not None or self.Intermediate is not None or self.Minor is not None ): return True else: return False def export(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='VersionId', pretty_print=True): imported_ns_def_ = GenerateDSNamespaceDefs_.get('VersionId') if imported_ns_def_ is not None: namespacedef_ = imported_ns_def_ if pretty_print: eol_ = '\n' else: eol_ = '' if self.original_tagname_ is not None and name_ == 'VersionId': name_ = self.original_tagname_ if UseCapturedNS_ and self.ns_prefix_: namespaceprefix_ = self.ns_prefix_ + ':' showIndent(outfile, level, pretty_print) outfile.write('<%s%s%s' % (namespaceprefix_, name_, namespacedef_ and ' ' + namespacedef_ or '', )) already_processed = set() self.exportAttributes(outfile, level, already_processed, namespaceprefix_, name_='VersionId') if self.hasContent_(): outfile.write('>%s' % (eol_, )) self.exportChildren(outfile, level + 1, namespaceprefix_, namespacedef_, name_='VersionId', pretty_print=pretty_print) showIndent(outfile, level, pretty_print) outfile.write('</%s%s>%s' % (namespaceprefix_, name_, eol_)) else: outfile.write('/>%s' % (eol_, )) def exportAttributes(self, outfile, level, already_processed, namespaceprefix_='', name_='VersionId'): pass def exportChildren(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='VersionId', fromsubclass_=False, pretty_print=True): if pretty_print: eol_ = '\n' else: eol_ = '' if self.ServiceId is not None: namespaceprefix_ = self.ServiceId_nsprefix_ + ':' if (UseCapturedNS_ and self.ServiceId_nsprefix_) else '' showIndent(outfile, level, pretty_print) outfile.write('<%sServiceId>%s</%sServiceId>%s' % (namespaceprefix_ , self.gds_encode(self.gds_format_string(quote_xml(self.ServiceId), input_name='ServiceId')), namespaceprefix_ , eol_)) if self.Major is not None: namespaceprefix_ = self.Major_nsprefix_ + ':'
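The generated classes above all share the same export/build pattern. A hedged usage sketch, assuming the generated module (with its showIndent, quote_xml and related helpers) is importable under some name; the credential values are placeholders:

from io import StringIO
# from generated_bindings import WebAuthenticationDetail, WebAuthenticationCredential  # hypothetical module name

credential = WebAuthenticationCredential(Key='example-key', Password='example-password')
detail = WebAuthenticationDetail(UserCredential=credential)

buffer = StringIO()
detail.export(buffer, level=0)
print(buffer.getvalue())
# Expected shape of the output (exact indentation depends on pretty_print):
# <WebAuthenticationDetail>
#     <UserCredential>
#         <Key>example-key</Key>
#         <Password>example-password</Password>
#     </UserCredential>
# </WebAuthenticationDetail>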
<reponame>WithPrecedent/rankings_remix """ rankings_remix: US News Law School Rankings Done Better <NAME> <<EMAIL>> Copyright 2020-2021, <NAME> License: Apache-2.0 (https://www.apache.org/licenses/LICENSE-2.0) """ from __future__ import annotations import pathlib from typing import (Any, Callable, ClassVar, Dict, Hashable, Iterable, List, Mapping, MutableMapping, MutableSequence, Optional, Sequence, Set, Tuple, Type, Union) import warnings import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from sklearn import preprocessing DATA_FOLDER = pathlib.Path('..') / 'data' / 'rankings' VISUALS_FOLDER = pathlib.Path('..') / 'visuals' / 'rankings' IMPORT_DATA_PATH = pathlib.Path(DATA_FOLDER) / '2022rankings.csv' EXPORT_DATA_PATH = pathlib.Path(DATA_FOLDER) / 'remixed_rankings.csv' RENAMES = { 'Rank': 'US News Rank', 'Overall score': 'US News Score', 'Peer assessment score (5.0=highest)': 'Peer Score', 'Assessment score by lawyers/judges (5.0=highest)': 'Practitioner Score', '2020 undergrad GPA 25th-75th percentile': 'GPA', '2020 LSAT score 25th-75th percentile': 'LSAT', '2020 acceptance rate': 'Acceptance Rate', '2020 student/faculty ratio': 'Student/Faculty Ratio', '2019 grads employed at graduation²': 'Immediate Employed', '2019 grads employed 10 months after graduation²': '10-Month Employed', 'School\'s bar passage rate in jurisdiction': 'Bar Pass Rate', 'Jurisdiction\'s overall bar passage rate': 'Jurisdiction Bar Pass Rate', 'Propotion of 2020 J.D. graduates who borrowed at least one educational loan in law school': 'Student Percentage with Loan', 'Average indebtedness of 2020 J.D. graduates who incurred law school debt': 'Average Debt'} TYPES = { 'School': str, 'US News Rank': int, 'US News Score': int, 'Peer Score': float, 'Practitioner Score': float, 'GPA': float, 'LSAT': float, 'Acceptance Rate': 'percent', 'Student/Faculty Ratio': float, 'Immediate Employed': 'percent', '10-Month Employed': 'percent', 'Bar Pass Rate': 'percent', 'Jurisdiction Bar Pass Rate': 'percent', 'Bar Pass Ratio': float, 'Student Percentage with Loan': 'percent', 'Average Debt': int} LOW_IS_BETTER = [ 'Acceptance Rate', 'Student/Faculty Ratio', 'Student Percentage with Loan', 'Average Debt'] USNEWS_WEIGHTS = { 'Peer Score': 0.25, 'Practitioner Score': 0.15, 'GPA Estimated Median': 0.0875, 'LSAT Estimated Median': 0.1125, 'Acceptance Rate': .01, 'Immediate Employed': 0.04, '10-Month Employed': 0.14, 'Bar Pass Ratio': 0.0225, 'Student/Faculty Ratio': 0.02, 'Student Percentage with Loan': 0.02, 'Average Debt': 0.03} CORE_COLUMNS = [ 'Peer Score', 'Practitioner Score', 'GPA Estimated Median', 'LSAT Estimated Median', '10-Month Employed', 'Average Debt'] def minmax_scale(data: pd.Series, low_is_good: bool = False) -> pd.Series: if low_is_good: data = pd.Series(1 - data) return preprocessing.minmax_scale(data) def standard_scale(data: pd.Series, low_is_good: bool = False) -> pd.Series: if low_is_good: data = pd.Series(1 - data) return preprocessing.scale(data) scalers = { 'Percent': minmax_scale, 'Standardized': standard_scale} def import_rankings_data() -> pd.DataFrame: df = pd.read_csv(IMPORT_DATA_PATH, encoding = 'windows-1252') df = df.rename(columns = RENAMES) return df def fix_lower_ranks(df: pd.DataFrame) -> pd.DataFrame: df['US News Rank'] = df['US News Rank'].str.split('-').str[0] df['US News Rank'] = pd.to_numeric(df['US News Rank'], downcast = 'integer') df['US News Score Scaled'] = minmax_scale(data = df['US News Score']) df['US News Rank Scaled'] = minmax_scale(data = df['US News Rank'], 
low_is_good = True) return df def change_to_numerical(df: pd.DataFrame) -> pd.DataFrame: for column in df.columns: for suffix, kind in TYPES.items(): if column.endswith(suffix) and df.dtypes[column] in [object]: if kind in [int]: df[column] = df[column].str.replace('$', '') df[column] = df[column].str.replace(',', '') df[column] = df[column].astype('float') df[column] = pd.to_numeric(df[column], downcast = 'integer') elif suffix in ['GPA', 'LSAT']: low = f'{column} 25' high = f'{column} 75' median = f'{column} Estimated Median' df[[low, high]] = df[column].str.split('-', expand = True) df[low] = pd.to_numeric(df[low]) df[high] = pd.to_numeric(df[high]) df[median] = df[[low, high]].mean(axis = 1) elif kind in [float, 'percent']: df[column] = (df[column].str.split('%') .str[0] .astype('float')) if kind in ['percent']: df[column] = df[column]/100 return df def compute_bar_ratio(df: pd.DataFrame) -> pd.DataFrame: df['Bar Pass Ratio'] = ( df['Bar Pass Rate'] / df['Jurisdiction Bar Pass Rate']) return df def ordinal_rank(df: pd.DataFrame, column: str, method: str = 'max') -> pd.Series: return df[column].rank(method = method) def scale_all_usnews_columns(df: pd.DataFrame) -> pd.DataFrame: for name, scaler in scalers.items(): for column in USNEWS_WEIGHTS.keys(): kwargs = {} if column in LOW_IS_BETTER: kwargs['low_is_good'] = True scaled_column = f'{name} {column}' df[scaled_column] = scaler(data = df[column], **kwargs) return df def compute_scores(df: pd.DataFrame) -> pd.DataFrame: keys = tuple(USNEWS_WEIGHTS.keys()) for name in scalers.keys(): columns = [col for col in df if col.startswith(name)] columns = [col for col in columns if col.endswith(keys)] score_column = f'{name} Score' scaled_score_column = f'{score_column} Scaled' weights = list(USNEWS_WEIGHTS.values()) df[score_column] = df[columns].mul(weights).sum(1) df[scaled_score_column] = minmax_scale(data = df[score_column]) core_columns = [col for col in df if col.startswith('Percent')] core_columns = [ col for col in core_columns if col.endswith(tuple(CORE_COLUMNS))] core_weights = {k: v for k, v in USNEWS_WEIGHTS.items() if k in CORE_COLUMNS} core_weights = list(core_weights.values()) df['Core Score'] = df[core_columns].mul(core_weights).sum(1) df['Core Score Scaled'] = minmax_scale(data = df['Core Score']) return df def compute_ranks(df: pd.DataFrame) -> pd.DataFrame: keys = tuple(USNEWS_WEIGHTS.keys()) for name in scalers.keys(): columns = [col for col in df if col.startswith(name)] columns = [col for col in columns if col.endswith(keys)] score_column = f'{name} Score' rank_column = f'{name} Rank' df[rank_column] = df[score_column].rank(method = 'min', ascending = False) df['Core Rank'] = df['Core Score'].rank(method = 'min', ascending = False) df['US News Rank Adjusted'] = df['US News Rank'].rank(method = 'min', ascending = True) return df def add_comparisons(df: pd.DataFrame) -> pd.DataFrame: df['Hidden Data Rank Boost'] = (df['Standardized Rank'] - df['US News Rank Adjusted']) df['Standardization Rank Boost'] = (df['Percent Rank'] - df['Standardized Rank']) df['Questionable Data Categories Rank Boost'] = (df['Core Rank'] - df['Percent Rank']) df['Hidden Data Score Boost'] = (df['US News Score Scaled'] - df['Standardized Score Scaled']) df['Standardization Score Boost'] = (df['Standardized Score Scaled'] - df['Percent Score Scaled']) df['Questionable Data Categories Score Boost'] = (df['Percent Score Scaled'] - df['Core Score Scaled']) df['Total Rank Boost'] = (df['Hidden Data Rank Boost'] + df['Standardization Rank Boost'] + 
df['Questionable Data Categories Rank Boost']) df['Total Score Boost'] = (df['Hidden Data Score Boost'] + df['Standardization Score Boost'] + df['Questionable Data Categories Score Boost']) return df def visualize_rank_versus_score(df: pd.DataFrame) -> None: rank_score = sns.scatterplot(x = df['US News Score'], y = df['US News Rank Adjusted'], color = 'aqua') rank_score.set_xlim([df['US News Score'].min(), df['US News Score'].max()]) rank_score.set_ylim([df['US News Rank Adjusted'].min(), df['US News Rank Adjusted'].max()]) rank_score.figure.suptitle( 'US News Rank vs. US News Score', size = 16) plt.tight_layout() export_path = pathlib.Path(VISUALS_FOLDER) / 'rank_score.png' rank_score.figure.savefig(export_path) plt.close() return def visualize_standardization_effects(df: pd.DataFrame) -> None: percent_columns = [f'Percent {key}' for key in USNEWS_WEIGHTS] standardized_columns = [f'Standardized {key}' for key in USNEWS_WEIGHTS] column_groups = [percent_columns, standardized_columns] distributions, axes = plt.subplots(figsize = (12, 6), ncols = len(column_groups), sharey = True) distributions.suptitle( 'Effects of Standardization on Component Score Distributions', size = 16) for i, column_group in enumerate(column_groups): for column in column_group: distributions = sns.kdeplot(df[column], ax = axes[i], legend = False) axes[0].set_xlim([0, 1]) axes[1].set_xlim([-3, 3]) distributions.figure.legend(title = 'Categories', bbox_to_anchor = (1.01, 1), borderaxespad = 0, labels = list(USNEWS_WEIGHTS.keys()), prop = {'size': 10}) plt.tight_layout() export_path = pathlib.Path(VISUALS_FOLDER) / 'category_distributions.png' distributions.figure.savefig(export_path) plt.close() return def visualize_score_rank_distributions(df: pd.DataFrame) -> None: distributions, axis = plt.subplots() distributions.suptitle('Distributions of US News Scores and Rankings', size = 16) sns.histplot(df['US News Score Scaled'], ax = axis, color = 'blue', kde = True, label = 'US News Score') sns.histplot(df['US News Rank Scaled'], ax = axis, color = 'orange', kde = True, label = 'US News Rank') axis.set(xlabel = 'US News Scores and Ranks (percent scale)') axis.set_xlim([0, 1]) plt.legend() plt.tight_layout() export_path = pathlib.Path(VISUALS_FOLDER) / 'score_rank_distributions.png' distributions.figure.savefig(export_path) plt.close() return def visualize_comparisons(df: pd.DataFrame) -> None: public_scatter = sns.scatterplot(x = df['Standardized Rank'], y = df['US News Rank'], color = 'green') plt.xlabel('Estimated US News Rank Based on Public Data') public_scatter.set_xlim([df['US News Rank'].min(), df['US News Rank'].max()]) public_scatter.set_ylim([df['US News Rank'].min(), df['US News Rank'].max()]) public_scatter.figure.suptitle( 'Effects of Hidden Data on School Rankings', size = 16) plt.tight_layout() export_path = pathlib.Path(VISUALS_FOLDER) / 'public_scatter.png' public_scatter.figure.savefig(export_path) plt.close() score_comparison = sns.scatterplot(x = df['US News Score Scaled'], y = df['US News Score'], label = 'US News', color = 'orange', alpha = 0.35) score_comparison = sns.scatterplot(x = df['Percent Score Scaled'], y = df['US News Score'], label = 'Percent', color = 'olive', alpha = 0.5) score_comparison = sns.scatterplot(x = df['Core Score Scaled'], y = df['US News Score'], label = 'Core Metrics', color = 'purple', alpha = 0.35) score_comparison.set_xlim(0, 1) score_comparison.set_ylim([df['US News Score'].min() - 5, df['US News Score'].max() + 5]) plt.xlabel('Percent Score') 
score_comparison.figure.suptitle(
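The two scalers defined above drive the remix: minmax_scale maps a column onto [0, 1], and the low_is_good flag flips columns where a smaller raw value is better (rank, acceptance rate, debt) before scaling. A small worked sketch of that inversion:

import pandas as pd
from sklearn import preprocessing

ranks = pd.Series([1, 2, 5, 10])   # lower rank is better
flipped = pd.Series(1 - ranks)     # [0, -1, -4, -9], the same inversion used when low_is_good=True
print(preprocessing.minmax_scale(flipped))  # [1.0, 0.888..., 0.555..., 0.0] -- rank 1 maps to 1.0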
from tieler.tile_cpp import fill_mesh from tieler.periodicity import compute_vertex_periodicity from dolfin import Mesh, info, Timer, CompiledSubDomain from collections import namedtuple from copy import deepcopy import numpy as np try: from itertools import izip except ImportError: izip = zip # MAIN IDEA: # We have x, cells, ... representing a single cell. To get the tile # repeated ntimes I don't simply stack next to each other. # Consider 1+1+1+1+1+1+1+1 has 7 additions. But (4+4) where 4 = 2+2 where # 2 = 1+1 is three! So providided that `+` scales reasonably with input # size this is more efficient. Here + is really unary. For input other # than power of 2 it is necessary to extend it to binary, e.g. 5 = (2+2)+1 def powers2(num): '''List (descending) of powers of two in the number''' powers = [int(power) for power, value in enumerate(reversed(format(num, 'b'))) if value != '0'] info('Number of binary + %d' % (len(powers) - 1)) info('Largest tile does %d unary +' % (max(powers) - 1)) return powers[::-1] # Basic data structure representing tile consists of coordinates and # cells as indices in to the coord array. master/slave vertices define a # map for gluing tiles in certain direction. The periodic maps for the # remaining dirs are in vertex_mappings. Marked entities (encode by their # vertex numbering) are in data Tile = namedtuple('Tile', ('coords', 'cells', 'master_vertices', 'slave_vertices', 'mappings', 'data')) def make_tile(x, cells, master_vertices, slave_vertices, vertex_mappings, data): '''Freeze the tile from input data.''' # As the data further evolves I need to make copy return Tile(deepcopy(x), deepcopy(cells), np.copy(master_vertices), np.copy(slave_vertices), [vm.copy() for vm in vertex_mappings], data.copy()) def merge(x, x_cells, shift, master_vertices, slave_vertices, y=None, y_cells=None): ''' The + operation for tiles (x, x_cells) [(y, y_cells)]. Tiles are glued by shifting y by shift. Return x, cells for the resulting merged mesh and a map from tile y local vertex numbering to the global numbering of the mesh ''' if y is None and y_cells is None: y, y_cells = x, x_cells n = len(y) # To make the tile piece we add all but the master vertices (as these # are already (slaves) in x-tile new_vertices = set(range(n)) new_vertices.difference_update(master_vertices) new_vertices = np.fromiter(new_vertices, dtype=int) assert np.all(np.diff(new_vertices) > 0) # Offset the free translate = np.empty(n, dtype=int) translate[new_vertices] = len(x) + np.arange(len(new_vertices)) # Those at master positions take slave values translate[master_vertices] = slave_vertices # Verices of the glued tiles x = np.vstack([x, y[new_vertices] + shift]) # Cells of the glued tiles new_cells = np.empty_like(y_cells) new_cells.ravel()[:] = translate[y_cells.flat] x_cells = np.vstack([x_cells, new_cells]) return x, x_cells, translate def TileMesh(tile, shape, mesh_data={}, TOL=1E-9): ''' [tile tile; tile tile; tile tile; tile tile] The shape is an ntuple describing the number of pieces put next to each other in the i-th axis. mesh_data : (tdim, tag) -> [entities] is the way to encode mesh data of the tile. 
''' # Sanity for glueing gdim = tile.geometry().dim() assert len(shape) <= gdim, (shape, gdim) # While evolve is general mesh writing is limited to simplices only (FIXME) # so we bail out early assert str(tile.ufl_cell()) in ('interval', 'triangle', 'tetrahedron') t = Timer('evolve') # Do nothing if all(v == 1 for v in shape): return tile, mesh_data # We want to evolve cells, vertices of the mesh using geometry information # and periodicity info x = tile.coordinates() min_x = np.min(x, axis=0) max_x = np.max(x, axis=0) shifts = max_x - min_x shifts_x = [] # Geometry vertex_mappings = [] # Periodicity # Compute geometrical shift for EVERY direction: for axis in range(len(shape)): shift = shifts[axis] # Vector to shift cell vertices shift_x = np.zeros(gdim); shift_x[axis] = shift shifts_x.append(shift_x) # Compute periodicity in the vertices to_master = lambda x, shift=shift_x: x - shift # Mapping facets master = CompiledSubDomain('near(x[i], A, tol)', i=axis, A=min_x[axis], tol=TOL) slave = CompiledSubDomain('near(x[i], A, tol)', i=axis, A=max_x[axis], tol=TOL) error, vertex_mapping = compute_vertex_periodicity(tile, master, slave, to_master) # Fail when exended direction is no periodic assert error < 10*TOL, error vertex_mappings.append(vertex_mapping) # The final piece of information is cells cells = np.empty((tile.num_cells(), tile.ufl_cell().num_vertices()), dtype='uintp') cells.ravel()[:] = tile.cells().flat # Evolve the mesh by tiling along the last direction in shape while shape: x, cells, vertex_mappings, shape = \ evolve(x, cells, vertex_mappings, shape, shifts_x, mesh_data=mesh_data) info('\tEvolve took %g s ' % t.stop()) # We evolve data but the mesh function is made out of it outside mesh = make_mesh(x, cells, tdim=tile.topology().dim(), gdim=gdim) return mesh, mesh_data def evolve(x, cells, vertex_mappings, shape, shifts_x, mesh_data={}): '''Evolve tile [and its data]along the last axis''' axis, gdim = len(shape) - 1, x.shape[1] assert gdim > axis >= 0 # Do nothing for 1 if shape[axis] == 1: vertex_mappings.pop() # No longer needed shifts_x.pop() # Do not touch x and cells return x, cells, vertex_mappings, shape[:-1] # Glue how many times refine = shape[axis] # Get the sizes of tiles (the real sizes are powers of two of these) # The idea is to do unary + on each base tile and power. Then binary # + on the resulting tile tile_sizes = powers2(refine) # Iff refine was a power of two then there is no need to merge as # at the end of tile evolution is the final mesh data. Otherwise # we develop the tiles in their LOCAL numbering => they each start # from their copy vertex_mapping, shift_x = vertex_mappings.pop(), shifts_x.pop() master_vertices = list(vertex_mapping.values()) slave_vertices = list(vertex_mapping.keys()) tiles = [] # Are we even or odd (to keep the initial tile) if 0 in tile_sizes: tiles = [make_tile(x, cells, master_vertices, slave_vertices, vertex_mappings, mesh_data)] # Done with this size tile_sizes.pop() # We need to evolve tiles of sizes which are power of 2. 
The idea # is to get them while evolving to the largest one size = 1 max_size = max(tile_sizes) target_size = tile_sizes.pop() # The body of the loop corresponds to unary + while size <= max_size: x, cells, translate = merge(x, cells, shift_x, master_vertices, slave_vertices) # For the directions that do not evolve we add the new periodic pairs for vm in vertex_mappings: keys, values = np.array(list(vm.items())).T vm.update(dict(izip(translate[keys], translate[values]))) # Update the periodicty mapping - slaves are new slave_vertices = translate[slave_vertices] # Add the entities defined in terms of the vertices if mesh_data: mesh_data = evolve_data(mesh_data, translate) # We might be at a needed size if size == target_size: tiles.append(make_tile( x, cells, master_vertices, slave_vertices, vertex_mappings, mesh_data)) if tile_sizes: target_size = tile_sizes.pop() # Iterate size += 1 # Remember powers of 2 (like divide) shift_x *= 2 # so we double assert not tile_sizes # Now that the tiles are prepare we want to glue them tile0 = tiles.pop() # No glueing of one tile if not tiles: return tile0.coords, tile0.cells, tile0.mappings, shape[:-1] # Otherwise extend data of the largest tile x, cells, vertex_mappings = tile0.coords, tile0.cells, tile0.mappings slave_vertices = tile0.slave_vertices # Now the binary + dx = shift_x / 2**max_size # Recover length of the base tile shift_x = 0 shifts = (2**p for p in powers2(refine)) while tiles: next_tile = tiles.pop() # Shift for next tile to be added to x shift_x = shift_x + dx*next(shifts) x, cells, translate = merge(x, cells, shift_x, next_tile.master_vertices, slave_vertices, next_tile.coords, next_tile.cells) # Updata periodicity mappings of next tile using new global map for vm, next_vm in zip(vertex_mappings, next_tile.mappings): keys, values = np.array(list(next_vm.items())).T vm.update(dict(izip(translate[keys], translate[values]))) # Data evolve if mesh_data: mesh_data = evolve_data(mesh_data, translate, next_tile.data) # New global slabes slave_vertices = translate[next_tile.slave_vertices] return x, cells, vertex_mappings, shape[:-1] def evolve_data(data, mapping, other=None): ''' Add data of new tile. A new tile is this one or other. Mapping has local to global from new tile to the summed tile. ''' if other is None: other = data for key in data.keys(): old = data[key] new = np.empty_like(other[key]) new.ravel()[:] = mapping[other[key].flat] data[key] = np.vstack([old, new]) return data def make_mesh(coordinates, cells, tdim, gdim): '''Mesh by MeshEditor
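The tiling strategy above hinges on powers2: the requested tile count is split into powers of two so that most of the gluing happens by repeated doubling rather than one-by-one merging. A standalone sketch of the decomposition, reimplemented without the dolfin info() calls so it runs on its own:

def powers2_sketch(num):
    # Descending powers of two whose sum is num (mirrors powers2 above).
    return [power for power, bit in enumerate(reversed(format(num, 'b'))) if bit != '0'][::-1]

print(powers2_sketch(5))   # [2, 0] -> 5 tiles = one 4-tile block plus one single tile
print(powers2_sketch(8))   # [3]    -> a pure doubling chain: 1 -> 2 -> 4 -> 8
# Building 5 tiles therefore costs 2 doublings (1 -> 2 -> 4) plus 1 binary merge with the
# leftover tile, instead of 4 sequential merges.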
CountryRegion.objects.all() country_regions = {cr.id: cr.region for cr in country_regions} # values for r, obj in enumerate(query.all()): c = 0 for fld in obj._meta.fields: attr = fld.attname if attr == 'country': v = countries.alpha3(obj.country.code) elif attr == 'country_region_id': v = country_regions[obj.country_region_id] else: v = getattr(obj, attr) if isinstance(v, datetime.datetime): v = v.replace(tzinfo=None) # raw value try: basestring except NameError: basestring = str if isinstance(v, basestring): # if v is a URL and it contains unicode and it is # very long, we get an encoding error from the warning # message, so just force strings as strings ws.write_string(r + 1, c, str(v)) else: ws.write(r + 1, c, v) c += 1 # write the looked-up value actual_v = v if attr in lookups: choices = lookups[attr] if attr == 'country': v = choices.get(obj.country.code, v) else: v = choices.get(v, v) if v is not None: v = str(v) ws.write(r + 1, c, v) c += 1 if attr == 'topic': # major topics t = MAJOR_TOPICS[TOPIC_GROUPS[actual_v] - 1][1] ws.write(r + 1, c, t) c += 1 # women focus topic t = FOCUS_TOPIC_ID_MAP.get(actual_v) if t: t = FOCUS_TOPICS[t - 1][1] ws.write(r + 1, c, t) c += 1 if write_weights: # Write weight weight = weights[(obj.country, name)] # weight = [w.weight for w in weights if # w.country == obj.country and # w.media_type == name][0] ws.write(r + 1, c, weight) def recode_country(self, country): # some split countries must be "joined" at the global report level if self.report_type == 'global': return COUNTRY_RECODES.get(country, country) return country def dictfetchall(self, cursor): """ Returns all rows from a cursor as a dict """ desc = cursor.description return [ OrderedDict(list(zip([col[0] for col in desc], row))) for row in cursor.fetchall() ] def apply_weights(self, rows, db_table, media_type): """ param rows: Queryset to apply the weights to param db_table: name of relevant sheet table param: media_type: media type to weigh by """ query = rows.extra( tables=['reports_weights'], where=[ 'reports_weights.country = %s.country' % (db_table), 'reports_weights.media_type = \'%s\'' % (media_type), ]).annotate() raw_query, params = query.query.sql_with_params() if self.report_type == 'country': weights = 'SELECT count(1) AS "n",' else: weights = 'SELECT cast(SUM(reports_weights.weight) as float) AS "n",' raw_query = raw_query.replace('SELECT', weights) cursor = connection.cursor() cursor.execute(raw_query, params) return self.dictfetchall(cursor) def ws_01(self, ws): """ Cols: Media Type Rows: Region """ counts_list = [] for media_types, models in SHEET_MEDIA_GROUPS: counts = Counter() for media_type, model in models.items(): rows = model.objects\ .values('country_region__region')\ .filter(country_region__region__in=self.region_list)\ .annotate(n=Count('id')) rows = self.apply_weights(rows, model._meta.db_table, media_type) for row in rows: if row['region'] is not None: # Get media and region id's to assign to counts media_id = [media[0] for media in media_types if media[1] == media_type][0] region_id = [region[0] for region in self.regions if region[1] == row['region']][0] counts.update({(media_id, region_id): row['n']}) counts_list.append(counts) self.tabulate(ws, counts_list[0], TM_MEDIA_TYPES, self.regions, row_perc=True) c = ws.dim_colmax + 2 self.tabulate(ws, counts_list[1], DM_MEDIA_TYPES, self.regions, row_perc=True, c=c, write_row_headings=False) c = ws.dim_colmax + 2 self.tabulate_historical(ws, '01', MEDIA_TYPES, self.regions, c=c) def ws_02(self, ws): """ Cols: Media 
Type Rows: Region, Country """ r = 6 c = 2 weights = {(w.country, w.media_type): w.weight for w in Weights.objects.all()} first = True historical_c = None for region_id, region in self.regions: counts_list = [] for media_types, models in SHEET_MEDIA_GROUPS: counts = Counter() for media_type, model in models.items(): rows = model.objects\ .values('country')\ .filter(country__in=self.country_list)\ .annotate(n=Count('id')) # rows = self.apply_distinct_weights(rows, model._meta.db_table, media_type) for row in rows: if row['country'] is not None: weight = weights[(row['country'], media_type)] # Get media id's to assign to counts media_id = [media[0] for media in media_types if media[1] == media_type][0] counts.update({(media_id, self.recode_country(row['country'])): row['n'] * weight}) for key, value in counts.items(): counts[key] = int(round(value)) counts_list.append(counts) self.write_primary_row_heading(ws, region, r=r) region_countries = [(code, country) for code, country in self.countries if code in REGION_COUNTRY_MAP[region]] self.tabulate(ws, counts_list[0], TM_MEDIA_TYPES, region_countries, row_perc=True, write_col_headings=True, r=r) c = 7 self.tabulate(ws, counts_list[1], DM_MEDIA_TYPES, region_countries, row_perc=True, write_col_headings=True, write_row_headings=False, r=r, c=c) if historical_c is None: historical_c = ws.dim_colmax + 2 self.tabulate_historical(ws, '02', MEDIA_TYPES, region_countries, r=r, c=historical_c, write_year=first, write_col_headings=first) first = False r += (len(region_countries) + 2) def ws_03(self, ws): """ Cols: Media type Rows: Region """ # calculate total N for each media type for 2015, # then we'll compare it to 2010 and get a %age change # get the historical data for 2010 counts = {} for media_type, model in sheet_models.items(): rows = model.objects\ .values('country_region__region')\ .annotate(n=Count(get_sheet_model_name_field(media_type), distinct=True))\ .filter(country_region__region__in=self.region_list)\ .annotate(n=Count('id')) for row in rows: region = row['country_region__region'] if region is not None: # Get media and region id's to assign to counts media_id, media = [m for m in MEDIA_TYPES if m[1] == media_type][0] region_id, region = [r for r in self.regions if r[1] == region][0] counts.update({(media_id, region_id): row['n']}) self.tabulate(ws, counts, MEDIA_TYPES, self.regions, raw_values=True, write_col_totals=False, unweighted=True) self.tabulate_historical(ws, '03', MEDIA_TYPES, self.regions, values_N=True) def ws_04(self, ws): """ Cols: Region, Media type Rows: Major Topic """ counts_list = [] for media_types, models in SHEET_MEDIA_GROUPS: secondary_counts = OrderedDict() for region_id, region in self.regions: counts = Counter() for media_type, model in models.items(): rows = model.objects\ .values('topic')\ .filter(country_region__region=region)\ .filter(country__in=self.country_list)\ .annotate(n=Count('id')) rows = self.apply_weights(rows, model._meta.db_table, media_type) for r in rows: # Get media id's to assign to counts media_id = [media[0] for media in media_types if media[1] == media_type][0] major_topic = TOPIC_GROUPS[r['topic']] counts.update({(media_id, major_topic): r['n']}) if self.report_type == 'country': # we are showing a single country data so use the contry name for the column name secondary_counts[self.countries[0][1]] = counts else: secondary_counts[region] = counts counts_list.append(secondary_counts) self.tabulate_secondary_cols(ws, counts_list[0], TM_MEDIA_TYPES, MAJOR_TOPICS, row_perc=False, 
show_N=True) c = ws.dim_colmax + 2 self.tabulate_secondary_cols(ws, counts_list[1], DM_MEDIA_TYPES, MAJOR_TOPICS, row_perc=False, c=c, show_N=True) c = ws.dim_colmax + 2 self.tabulate_historical(ws, '04', self.regions, MAJOR_TOPICS, c=c, r=7, skip_major_col_heading=True) def ws_05(self, ws, gen_dataset=False): """ Cols: Subject sex Rows: Major Topic """ counts_list = [] overall_column = ws.dim_colmax for media_types, models in PERSON_MEDIA_GROUPS: media_title = ', '.join(m[1] for m in media_types) secondary_counts = OrderedDict() for media_type, model in models.items(): if not media_title in secondary_counts: secondary_counts[media_title] = Counter() counts = secondary_counts[media_title] topic_field = '%s__topic' % model.sheet_name() rows = model.objects\ .values('sex', topic_field)\ .filter(**{model.sheet_name() + '__country__in': self.country_list})\ .filter(sex__in=self.male_female_ids)\ .annotate(n=Count('id')) rows = self.apply_weights(rows, model.sheet_db_table(), media_type) for r in rows: counts.update({(r['sex'], TOPIC_GROUPS[r['topic']]): r['n']}) counts_list.append(secondary_counts) self.tabulate_secondary_cols(ws, counts_list[0], self.male_female, MAJOR_TOPICS, row_perc=True) c = ws.dim_colmax + 2 self.tabulate_secondary_cols(ws, counts_list[1], self.male_female, MAJOR_TOPICS, row_perc=True, c=c, write_row_headings=False) c = ws.dim_colmax + 2 self.tabulate_historical(ws, '05', self.male_female, MAJOR_TOPICS, c=c, r=7, skip_major_col_heading=True) overall_row = ws.dim_rowmax+2 write_overall = True for media_type in counts_list: for medium in media_type: counts = media_type[medium] value = sum([counts[x] for x in counts if x[0] in self.female_ids]) total = sum(counts.values()) self.write_overall_value(ws, value, total, overall_column, overall_row, write_overall) write_overall= False overall_column += 4 if gen_dataset: tabulate_dataset("ws_05", ["Topic", "Gender", "Medium", "Year", "Geography", "Count"], counts_list, ws_05_dataset) def ws_06(self, ws, gen_dataset=False): """ Cols: Region, Subject sex: female only Rows: Major Topics """ c = 1 for media_types, models in PERSON_MEDIA_GROUPS: self.write_primary_row_heading(ws, ', '.join([m[1] for m in media_types]), c=c+1, r=4) secondary_counts = OrderedDict() for region_id, region in self.regions: counts = Counter() for media_type, model in models.items(): topic_field = '%s__topic' % model.sheet_name() rows = model.objects\ .values('sex', topic_field)\ .filter(**{model.sheet_name() + '__country_region__region':region})\ .filter(sex__in=self.male_female_ids)\ .annotate(n=Count('id')) rows = self.apply_weights(rows, model.sheet_db_table(), media_type) for r in rows: counts.update({(r['sex'], TOPIC_GROUPS[r['topic']]): r['n']}) secondary_counts[region] = counts self.tabulate_secondary_cols(ws, secondary_counts, self.male_female, MAJOR_TOPICS, row_perc=True, filter_cols=self.female, show_N=True, c=c, r=8) c = ws.dim_colmax + 2 if gen_dataset: tabulate_dataset("ws_06", ["Topic", "Gender", "Year", "Geography", "Count"], secondary_counts, ws_06_dataset) self.tabulate_historical(ws, '06', self.female, MAJOR_TOPICS, major_cols=self.regions, show_N_and_P=True, r=7) def ws_07(self, ws): """ Cols: Media Type Rows: Subject Sex """ counts = Counter() for media_type, model in person_models.items(): rows = model.objects\ .values('sex')\ .filter(**{model.sheet_name() + '__country__in': self.country_list})\ .filter(sex__in=self.male_female_ids)\ .annotate(n=Count('id')) rows = self.apply_weights(rows, model.sheet_db_table(), media_type) for r in 
rows: # Get media id's to assign to counts media_id = [media[0] for media in MEDIA_TYPES if media[1] == media_type][0] counts.update({(media_id, r['sex']): r['n']}) self.tabulate(ws, counts, MEDIA_TYPES, self.male_female, row_perc=False) self.tabulate_historical(ws, '07', MEDIA_TYPES, self.male_female, write_row_headings=False) def ws_08(self, ws): """ Cols: Subject Sex Rows: Scope """ counts = Counter() for media_type, model in tm_person_models.items(): if 'scope' in [field_name.name for field_name in model.sheet_field().remote_field.model._meta.get_fields()]: scope = '%s__scope' % model.sheet_name() rows = model.objects\ .values('sex', scope)\ .filter(**{model.sheet_name() + '__country__in': self.country_list})\ .filter(sex__in=self.male_female_ids)\ .annotate(n=Count('id')) rows = self.apply_weights(rows, model.sheet_db_table(),
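Each ws_* writer above repeats the same aggregation shape: weighted query rows are folded into a Counter keyed by (column_id, row_id) tuples, which the tabulate helpers then lay out on the worksheet. A small sketch of that fold with made-up rows (the region names, ids and media id are illustrative only):

from collections import Counter

rows = [                                  # hypothetical dicts, shaped like apply_weights() output
    {'region': 'Africa', 'n': 12.0},
    {'region': 'Asia', 'n': 7.5},
]
regions = [(1, 'Africa'), (2, 'Asia')]    # (region_id, region_name)
media_id = 1                              # e.g. the id paired with one entry of MEDIA_TYPES

counts = Counter()
for row in rows:
    region_id = [rid for rid, name in regions if name == row['region']][0]
    counts.update({(media_id, region_id): row['n']})

print(counts)  # Counter({(1, 1): 12.0, (1, 2): 7.5}) -- the (col, row) keyed counts tabulate() consumes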
dataloader, index_file, config, checkpoint_save_path): global loss_train, loss_test global loss_mag_train, loss_mag_test, loss_phase_train, loss_phase_test, loss_angle_train, loss_angle_test global global_step, global_epoch n_epoch = config["training"]["n_epoch"] loss_best = 100 loss_mag_best = 100 if config["debug"]: checkpoint_interval = 100 save_result_interval = 10 else: checkpoint_interval = 10000 save_result_interval = 500 states_path = os.path.join(checkpoint_save_path, "training_state") current_lr = config["training"]["learning_rate"] while(global_epoch < n_epoch): if global_epoch % 10 == 0 and global_epoch != 0: current_lr = current_lr / 2 for param_group in optimizer.param_groups: param_group['lr'] = current_lr #if global_epoch % 50 == 0 and global_epoch > 50: # current_lr = current_lr / 5 # for param_group in optimizer.param_groups: # param_group['lr'] = current_lr for phase, loader in dataloader.items(): train = (phase == "train") running_loss = np.array([]) running_loss_mag = np.array([]) running_loss_phase = np.array([]) running_loss_angle = np.array([]) if train: save_test_state = False #print("epoch {} {} phase start".format(global_epoch, phase)) for step, (wav_reverb, wav_clean) in enumerate(loader): if train: model.train() else: model.eval() #print("step {} start".format(step)) if config["debug"]: print("epoch {} step {} start".format(global_epoch, step)) batch_size = wav_clean.size(0) spec_clean = util.wav_to_spectrogram(wav_clean, config['data']) spec_reverb = util.wav_to_spectrogram(wav_reverb, config['data']) #torch tensors length = spec_clean.size(2) if config["training"]["target"] == "cIRM": spec_clean = spec_clean.data.numpy() spec_reverb = spec_reverb.data.numpy() mask = (spec_clean[:,0,:,:] + 1j*spec_clean[:,1,:,:]) / (spec_reverb[:,0,:,:]+1e-8 + 1j*spec_reverb[:,1,:,:]) mask_real = np.real(mask).reshape(batch_size, 1, length, -1) mask_imag = np.imag(mask).reshape(batch_size, 1, length, -1) mask = np.concatenate((mask_real, mask_imag), axis = 1) mask = torch.FloatTensor(mask) spec_clean = torch.FloatTensor(spec_clean) spec_reverb = torch.FloatTensor(spec_reverb) y = util.target_compression(mask, config, device = 'cpu') elif config["training"]["target"] == "IRM": mag_clean = torch.sqrt(spec_clean[:,0,:,:]**2 + spec_clean[:,1,:,:]**2) mag_reverb = torch.sqrt(spec_reverb[:,0,:,:]**2 + spec_reverb[:,1,:,:]**2) mask = torch.reshape(mag_clean / (mag_reverb+1e-8), (batch_size, 1, length, -1)) y = util.target_compression(mask, config, device = 'cpu') #mask = util.target_compression(mask, self.config) elif config["training"]["target"] == "PSM": spec_clean = spec_clean.data.numpy() spec_reverb = spec_reverb.data.numpy() mask = (spec_clean[:,0,:,:] + 1j*spec_clean[:,1,:,:]) / (spec_reverb[:,0,:,:]+1e-8 + 1j*spec_reverb[:,1,:,:]) mask = np.real(mask).reshape(batch_size, 1, length, -1) mask = torch.FloatTensor(mask) spec_clean = torch.FloatTensor(spec_clean) spec_reverb = torch.FloatTensor(spec_reverb) y = util.target_compression(mask, config, device = 'cpu') #mask = util.target_compression(mask, self.config) elif config["training"]["target"] == "spec": y = spec_clean if config["training"]["input"] == "spec": x = spec_reverb elif config["training"]["input"] == "mag": x = torch.sqrt(spec_reverb[:,0,:,:]**2 + spec_reverb[:,1,:,:]**2) x = torch.reshape(x, (batch_size, 1, length, -1)) elif config["training"]["input"] == "mag_log": x = torch.sqrt(spec_reverb[:,0,:,:]**2 + spec_reverb[:,1,:,:]**2) x = torch.reshape(torch.log10(x + 1e-8), (batch_size, 1, length, -1)) if x.size(2) % 
2 == 0: x = x[:,:,0:-1,:] y = y[:,:,0:-1,:] x = x.cuda() y = y.cuda() x = util.normalization_0m_1v(x, axis = [1,3], device = 'cuda') optimizer.zero_grad() y_hat = torch.nn.parallel.data_parallel(model, x) if config["training"]["loss_function"] == "MSE": loss = criterion(y_hat, y) else: y_real_hat = y_hat[:,0,:,:] y_imag_hat = y_hat[:,1,:,:] y_real = y[:,0,:,:] y_imag = y[:,1,:,:] loss = criterion(y_real, y_imag, y_real_hat, y_imag_hat) if y.size(1) == 2: mag = torch.sqrt(y.data[:,0,:,:]**2 + y.data[:,1,:,:]**2) mag_hat = torch.sqrt(y_hat.data[:,0,:,:]**2 + y_hat.data[:,1,:,:]**2) dif_mag = torch.mean((mag - mag_hat)**2) theta = torch.atan2(y.data[:,1,:,:], y.data[:,0,:,:]) #atan2(b, a) == atan(b/a) theta_hat = torch.atan2(y_hat.data[:,1,:,:], y_hat.data[:,0,:,:]) dif_phase = torch.mean((mag * torch.sin((theta_hat - theta)/2))**2).data.cpu().numpy() dif_angle = (theta_hat - theta).data.cpu().numpy() dif_angle = dif_angle + 2*np.pi * (dif_angle < -np.pi) dif_angle = dif_angle - 2*np.pi * (dif_angle > np.pi) dif_angle = np.mean(np.abs(dif_angle)) else: dif_mag = torch.mean((y - y_hat)**2) dif_angle = 0 dif_phase = 0 running_loss = np.append(running_loss, loss.data.cpu().numpy()) running_loss_mag = np.append(running_loss_mag, dif_mag.data.cpu().numpy()) running_loss_phase = np.append(running_loss_phase, dif_phase) running_loss_angle = np.append(running_loss_angle, dif_angle) if train and global_step % checkpoint_interval == 0 and global_step > 0: save_checkpoint(checkpoint_save_path, model, optimizer) if train and global_step % save_result_interval == 0 and global_step > 0: #save_mask_figure(states_path, mask_real_hat, mask_real, mask_imag_hat, mask_imag, phase = "train") index = np.random.randint(0, batch_size) wav, wav_hat, spec, spec_hat, mask, mask_hat = do_eval(model, wav_clean[index, :], wav_reverb[index, :], config) save_result(states_path, wav_hat, wav, spec_hat, spec, mask_hat, mask, phase = "train_eval", config = config) save_test_state = True if not train and save_test_state == True: #save_mask_figure(states_path, mask_real_hat, mask_real, mask_imag_hat, mask_imag, phase = "test") index = np.random.randint(0, batch_size) wav, wav_hat, spec, spec_hat, mask, mask_hat = do_eval(model, wav_clean[index, :], wav_reverb[index, :], config) save_result(states_path, wav_hat, wav, spec_hat, spec, mask_hat, mask, phase = "eval", config = config) save_test_state = False if train: loss.backward() optimizer.step() global_step += 1 #print("step {} finish".format(step)) ############ end of for step, (x, y, noise, g) in enumerate(loader) ################################### if train: new_item = np.expand_dims(np.array([np.mean(running_loss), np.var(running_loss)]), 0) loss_train = new_item if loss_train.size == 0 else np.append(loss_train, new_item, axis = 0) loss_mag_train = np.append(loss_mag_train, np.mean(running_loss_mag)) loss_phase_train = np.append(loss_phase_train, np.mean(running_loss_phase)) loss_angle_train = np.append(loss_angle_train, np.mean(running_loss_angle)) else: new_item = np.expand_dims(np.array([np.mean(running_loss), np.var(running_loss)]), 0) loss_test = new_item if loss_test.size == 0 else np.append(loss_test, new_item, axis = 0) loss_mag_test = np.append(loss_mag_test, np.mean(running_loss_mag)) loss_phase_test = np.append(loss_phase_test, np.mean(running_loss_phase)) loss_angle_test = np.append(loss_angle_test, np.mean(running_loss_angle)) if loss_best > np.mean(running_loss): loss_best = np.mean(running_loss) save_checkpoint(checkpoint_save_path, model, optimizer, filename 
= "best_loss_model.pth") if loss_mag_best > np.mean(running_loss_mag): loss_mag_best = np.mean(running_loss_mag) save_checkpoint(checkpoint_save_path, model, optimizer, filename = "best_mag_model.pth") logger.info("epoch {}: loss of {} phase is {} (Var: {})".format(global_epoch, phase, new_item[0][0], new_item[0][1])) print("epoch {}: loss of {} phase is {} (Var: {})".format(global_epoch, phase, new_item[0][0], new_item[0][1])) ########## end of for phase, loader in dataloader.items(): ############################################# global_epoch += 1 save_checkpoint(checkpoint_save_path, model, optimizer, filename = "checkpoint_current.pth") #wav, wav_hat, spec, spec_hat, mask, mask_hat = do_eval(model, wav_clean[index, :], wav_reverb[index, :], config) #save_result(states_path, wav_hat, wav, spec_hat, spec, phase = "finish", config = config) loss_train = np.array([]) loss_test = np.array([]) loss_mag_train = np.array([]) loss_mag_test = np.array([]) loss_phase_train = np.array([]) loss_phase_test = np.array([]) loss_angle_train = np.array([]) loss_angle_test = np.array([]) global_step = 0 global_epoch = 0 if __name__ == "__main__": args = docopt(__doc__) config_path = args["--config"] checkpoint_save_path = args["--checkpoint-dir"] checkpoint_load_path = args["--checkpoint"] weight_loss_list = args["--weight-loss"] loss = args["--loss"] stride_mode = args["--stride-mode"] target_in = args["--target"] print(args) if config_path is None: config_path = "/user/HS228/jz00677/PYTHON_project/Unet/recurrent/src/config_5R.json" config = load_config(config_path) print("finish loading config file") if checkpoint_save_path is None: if config["debug"]: checkpoint_save_path = "/vol/research/Dereverb/debug" else: checkpoint_save_path = "/vol/vssp/msos/jz/cIRM_base/checkpoint/" original_path = checkpoint_save_path logger = logging.getLogger("logger") #dropout = config["training"]["dropout"] #weight_decay = config["training"]["weight_decay"] #hidden_layers = config["model"]["hidden_layers"] loss_function = config["training"]["loss_function"] if loss is None else [loss] weight_loss = weight_loss_list.split(',') if weight_loss_list is not None else config["training"]["weight_loss"] stride_mode = stride_mode.split(',') if stride_mode is not None else config["model"]["stride_mode"] target = target_in if target_in is not None else config["training"]["target"] weight_decay = config["training"]["weight_decay"] bidirectional = config["model"]["bidirectional"] recurrent_type = config["model"]["recurrent_type"] loss_feature_type = ["phase", "GD", "IF", "both"] for ii in range(0,1): for jj in range(0,1): #config["model"]["hidden_layers"] = hidden_layers[ii] if config["training"]["loss_function"] == "MSE" and jj > 0: break config["training"]["loss_function"] = loss_function[ii] config["training"]["target"] = target[0] #cIRM config["training"]["weight_loss"] = float(weight_loss[jj]) config["training"]["input"] = "spec" config["training"]["target"] = target config["model"]["recurrent_type"] = "LSTM" config["model"]["bidirectional"] = True config["model"]["stride_mode"] = int(stride_mode[0]) #if ii == 0 and jj > 0: # break loss_train = np.array([]) loss_test = np.array([]) loss_mag_train = np.array([]) loss_mag_test = np.array([]) loss_phase_train = np.array([]) loss_phase_test = np.array([]) loss_angle_train = np.array([]) loss_angle_test = np.array([]) global_step = 0 global_epoch = 0 checkpoint_save_path = original_path + "strideMode_{}".format(config["model"]["stride_mode"]) if config["training"]["loss_function"] == 
"MSE": checkpoint_save_path = checkpoint_save_path + "_MSE" else: checkpoint_save_path = checkpoint_save_path + "_{}_{}".format(config["training"]["loss_function"], config["training"]["weight_loss"]) os.makedirs(checkpoint_save_path, exist_ok = True) handler = setHandler(filename = os.path.join(checkpoint_save_path, "train.log")) logger.addHandler(handler) logger.setLevel(logging.INFO) model = Model.UNet_recurrent(config) optimizer = torch.optim.Adam(model.parameters(), lr=config["training"]["learning_rate"], betas=(0.9, 0.999), eps=1e-08, weight_decay=config["training"]["weight_decay"]) #rir_list = preprocess.get_rir(config["data"]["rir"]) print("start loading dataset") logger.info("start loading dataset") dataloader = dataset_5R.get_dataloader(config) print("finish loading dataset") logger.info("finish loading dataset") if checkpoint_load_path is not None: if not load_checkpoint(checkpoint_load_path, model, optimizer): logger.info("failed to load given checkpoint") model = Model.UNet_recurrent(config) else: try: if not load_checkpoint(os.path.join(checkpoint_save_path, "checkpoint_current.pth"), model, optimizer): logger.info("failed to checkpoint_current.pth") model = Model.UNet_recurrent(config) except FileNotFoundError: pass if config["training"]["loss_function"] == "MSE": criterion = nn.MSELoss().cuda() elif config["training"]["loss_function"] == "WPM": criterion = WPMLoss(config["training"]["weight_loss"]).cuda() else: criterion = GDMLoss(config["training"]["weight_loss"], loss_feature_type[0]).cuda() logger.info("config file is: " + config_path) logger.info(config) logger.info("checkpoint is saved at: " + checkpoint_save_path) try: logger.info(model) print("start training!") logger.info("starting training!") model = model.cuda() train(model, optimizer, criterion, dataloader, config["data"]["index_file"][1], config, checkpoint_save_path) except KeyboardInterrupt: print("Interrupted by Keyboard Input") pass finally: save_checkpoint(checkpoint_save_path, model, optimizer) plot_loss(loss_train[:,0], loss_test[:,0], loss_mag_train, loss_mag_test, loss_phase_train, loss_phase_test, loss_angle_train, loss_angle_test, checkpoint_save_path) logger.info("FINISH TRAINING!") logger.removeHandler(handler) print("FINISH TRAINING!") output_path = os.path.join(checkpoint_save_path, "output_result") os.makedirs(output_path, exist_ok = True) handler = setHandler(filename = os.path.join(output_path, "inference.log")) logger.addHandler(handler) best_model = model if not load_checkpoint(os.path.join(checkpoint_save_path, "best_loss_model.pth"), best_model, optimizer): logger.info("failed to load best_model.pth") best_model = model rir_file = "/user/HS228/jz00677/PYTHON_project/RIR_Generator/rir_5R_stepT60_test.csv" output_path = os.path.join(checkpoint_save_path, "output_result_step_T60_bestloss") os.makedirs(output_path, exist_ok = True) handler = setHandler(filename = os.path.join(output_path, "inference.log")) logger.addHandler(handler) inference.inference_step_T60(device = 'cuda', output_path = output_path, rir_file = rir_file, model = best_model, config = config, logger = logger) T60_list = ['0.3s', '0.4s', '0.5s', '0.6s', '0.7s', '0.8s', '0.9s', '1s', '1.1s', '1.2s', '1.3s', '1.4s', '1.5s'] rir_per_T60 =
sample c_neg (list): categorical values from C- sample """ suffixes = [ "{}{}".format(i, j) for i in string.ascii_lowercase for j in string.ascii_lowercase] c_pos = ["{}A".format(s) for s in suffixes][:int(cardinality / 2)] c_neg = ["{}B".format(s) for s in suffixes][:int(cardinality / 2)] return c_pos, c_neg def return_strong_features( y_vals: np.array, cardinality: int, z_pivot: int = 10 ) -> Tuple[np.array, np.array]: """Return strongly predictive features. Given a target variable values `y_vals`, create a categorical variable c and continuous variable z such that y is perfectly predictable from c and z, with y = 1 iff c takes a value from C+ OR z > z_pivot. Args: y_vals (np.array): targets cardinality (int): cardinality of the categorical variable, c z_pivot (float): mean of z Returns: c (np.array): strongly predictive categorical variable z (np.array): strongly predictive continuous variable """ z = np.random.normal(loc=z_pivot, scale=5, size=2 * len(y_vals)) z_pos, z_neg = z[z > z_pivot], z[z <= z_pivot] c_pos, c_neg = return_c_values(cardinality) c, z = list(), list() for y in y_vals: coin = np.random.binomial(1, 0.5) if y and coin: c.append(random.choice(c_pos + c_neg)) z.append(random.choice(z_pos)) elif y and not coin: c.append(random.choice(c_pos)) z.append(random.choice(z_neg)) else: c.append(random.choice(c_neg)) z.append(random.choice(z_neg)) return np.array(c), np.array(z) def return_main_dataset( num_weak: int, num_samp: int, cardinality: int = 100, mixing_factor: float = 0.025, ) -> Tuple[pd.DataFrame, np.array, np.array]: """Generate training samples. Generate a dataset with features c and z that are perfectly predictive of y and additional features x_i that are weakly predictive of y and correlated with eachother. Args: num_weak (int): number of weakly predictive features x_i to create num_samp (int): number of sample to create cardinality (int): cardinality of the predictive categorical variable. half of these values will be correlated with y=1 and the other with y=0. mixing_factor (float): see `return_weak_features_and_targets` Returns: df (pd.DataFrame): dataframe with y, z, c, and x_i columns """ X, y, cov, weights = return_weak_features_and_targets(num_weak, num_samp, mixing_factor) c, z = return_strong_features(y, cardinality) xcol_names = ['x{}'.format(i) for i in range(num_weak)] df = pd.DataFrame(X, columns=xcol_names) df['y'] = y df['z'] = z df['c'] = c df['c'] = df['c'].astype('category') df = df[['y', 'c', 'z'] + xcol_names] return df, cov, weights def encode_as_onehot(df_main: pd.DataFrame) -> pd.DataFrame: """Replace string values for c with one-hot encoding.""" df_onehot = pd.get_dummies(df_main, 'c') df_onehot['y'] = df_main['y'].copy() return df_onehot def encode_as_int(df_main: pd.DataFrame) -> pd.DataFrame: """Replace string values for c with integer encoding.""" ord_enc = OrdinalEncoder(dtype=np.int) c_encoded = ord_enc.fit_transform(df_main[['c']]) df_catnum = df_main.copy() df_catnum['c'] = c_encoded df_catnum['c'] = df_catnum['c'].astype('category') return df_catnum, ord_enc def encode_as_magic_int(df_main: pd.DataFrame) -> pd.DataFrame: """Replace string values for c with "magic" integer encoding. A magic encoding is one in which the sorted integer values keep all C+ values (values of c that end with "A") next to each other and all C- values (values of c that end with "B") next to eachother. 
""" values = sorted(df_main['c'].unique(), key=lambda x: x[-1]) ord_enc = OrdinalEncoder(categories=[values], dtype=np.int) c_encoded = ord_enc.fit_transform(df_main[['c']]) df_catnum = df_main.copy() df_catnum['c'] = c_encoded df_catnum['c'] = df_catnum['c'].astype('category') return df_catnum, ord_enc def get_feature_names(df, include_c=True): names = [f for f in df.columns if not f.startswith('y')] if not include_c: names = [f for f in names if not f.startswith('c')] return names def print_auc_mean_std(results): print(" AUC: mean={:4.4f}, sd={:4.4f}".format( np.mean(results['metric']), np.std(results['metric']))) def print_sorted_mean_importances(results, n=5): data = defaultdict(list) imps = results['importances'] for d in imps: for fname, imp in d.items(): data[fname].append(imp) mu = {fname: np.mean(vals) for fname, vals in data.items()} mu = sorted(mu.items(), key=itemgetter(1), reverse=True)[:n] print(" Importances:") for fname, val in mu: print("{:>20}: {:0.03f}".format(fname, val)) # @jit(nopython=True) def powerset(iterable): s = list(iterable) return itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(len(s) + 1)) # @jit(nopython=True) def get_partition(leaf_id, part, node_id, children_left, children_right, feature, threshold): left = np.where(children_left == leaf_id)[0] right = np.where(children_right == leaf_id)[0] if (len(left) == 0) * (len(right) == 0): return part, node_id else: if len(right) != 0: right = int(right[0]) node_id.append(feature[right]) part[feature[right]] = np.concatenate((part[feature[right]], np.array([[threshold[right], np.inf]]))) part[feature[right]] = np.array([[np.max(part[feature[right]][:, 0]), np.min(part[feature[right]][:, 1])]]) return get_partition(right, part, node_id, children_left, children_right, feature, threshold) else: left = int(left[0]) node_id.append(feature[left]) part[feature[left]] = np.concatenate((part[feature[left]], np.array([[-np.inf, threshold[left]]]))) part[feature[left]] = np.array([[np.max(part[feature[left]][:, 0]), np.min(part[feature[left]][:, 1])]]) return get_partition(left, part, node_id, children_left, children_right, feature, threshold) # # def get_tree_partition(x, fx, tx, tree, S, data=None): # """ # Compute the partition (L_m) of each compatible leaf of the condition X_s = x_S, then check for each # observations in data in which leaves it falls. # # Args: # x (array): observation # fx (float): tree(x) # tx (float): threshold of the classifier # tree (DecisionTreeClassifier.tree_): model # S (list): index of variables on which we want to compute the SDP # algo (string): name of the estimators, recommended 'pluging' # data (array): data used to compute the partion # # Returns: # (array, array): binary array of shape (data_size, compatible_leaves), if [i, j] = 1 then observation i fall in # leaf j. 
# (array): return number of observations that fall in each leaf # """ # # a = tree.children_left # b = tree.children_right # f = tree.features # t = tree.thresholds # # r_w = tree.node_samples_weight # v = tree.values.reshape(-1)/tree.scaling # index = range(x.shape[0]) # # y_pred = tree.predict(data) # dist = (y_pred - fx) ** 2 # # up_tx = np.array(dist > tx).reshape(-1) # down_tx = np.array(dist <= tx).reshape(-1) # # def explore_partition(i, tab, partition_leaves, partition_global, prob_global, s_global, S, S_bar, data, # intv=False): # if a[i] < 0: # # tab[i] = 1 # compatible_leaves.append(i) # partition_global[i] = partition_leaves # partition_leaves = np.squeeze(np.array(partition_leaves)) # # section_x = np.prod( # [(data[:, s] <= partition_leaves[s, 1]) * (data[:, s] >= partition_leaves[s, 0]) for s in # S], axis=0) # # section_x_bar = np.prod( # [(data[:, s] <= partition_leaves[s, 1]) * (data[:, s] >= partition_leaves[s, 0]) for s in # S_bar], axis=0) # # section_up = np.prod( # [(data[:, s] <= partition_leaves[s, 1]) * (data[:, s] >= partition_leaves[s, 0]) for s in # S], axis=0) * up_tx # # section_down = np.prod( # [(data[:, s] <= partition_leaves[s, 1]) * (data[:, s] >= partition_leaves[s, 0]) for s in # S], axis=0) * down_tx # # prob_all = section_x * section_x_bar # prob_up = section_up * section_x_bar # prob_down = section_down * section_x_bar # # s_all = section_x # s_up = section_up # s_down = section_down # # prob_global['all'].append(prob_all.reshape(1, -1)) # prob_global['up'].append(prob_up.reshape(1, -1)) # prob_global['down'].append(prob_down.reshape(1, -1)) # # s_global['all'].append(s_all.reshape(1, -1)) # s_global['up'].append(s_up.reshape(1, -1)) # s_global['down'].append(s_down.reshape(1, -1)) # # else: # if f[i] in S: # if x[f[i]] <= t[i]: # part = partition_leaves.copy() # part[f[i]] = np.concatenate((part[f[i]], np.array([[-np.inf, t[i]]]))) # part[f[i]] = np.array([[np.max(part[f[i]][:, 0]), np.min(part[f[i]][:, 1])]]) # explore_partition(a[i], tab, part, partition_global, prob_global, s_global, S, S_bar, data, # intv) # else: # part = partition_leaves.copy() # part[f[i]] = np.concatenate((part[f[i]], np.array([[t[i], np.inf]]))) # part[f[i]] = np.array([[np.max(part[f[i]][:, 0]), np.min(part[f[i]][:, 1])]]) # explore_partition(b[i], tab, part, partition_global, prob_global, s_global, S, S_bar, data, # intv) # else: # part = partition_leaves.copy() # part[f[i]] = np.concatenate((part[f[i]], np.array([[-np.inf, t[i]]]))) # part[f[i]] = np.array([[np.max(part[f[i]][:, 0]), np.min(part[f[i]][:, 1])]]) # # part_2 = partition_leaves.copy() # part_2[f[i]] = np.concatenate((part_2[f[i]], np.array([[t[i], np.inf]]))) # part_2[f[i]] = np.array([[np.max(part_2[f[i]][:, 0]), np.min(part_2[f[i]][:, 1])]]) # # explore_partition(a[i], tab, part, partition_global, prob_global, s_global, S, S_bar, data, intv) # explore_partition(b[i], tab, part_2, partition_global, prob_global, s_global, S, S_bar, data, intv) # # S_bar = [i for i in index if i not in S] # partition_leaves = [np.array([[-np.inf, np.inf]]) for i in range(data.shape[1])] # partition_global = {i: [np.array([[-np.inf, np.inf]]) for i in range(data.shape[1])] # for i in range(len(tree.features))} # prob_global = {'all': [], 'up': [], 'down': []} # s_global = {'all': [], 'up': [], 'down': []} # # part_final = {} # compatible_leaves = [] # explore_partition(0, compatible_leaves, partition_leaves, partition_global, prob_global, s_global, S, # S_bar, data) # # part_final['all'] = np.concatenate(prob_global['all'], 
axis=0) # part_final['up'] = np.concatenate(prob_global['up'], axis=0) # part_final['down'] = np.concatenate(prob_global['down'], axis=0) # # part_final['s_all'] = np.concatenate(s_global['all'], axis=0) # part_final['s_up'] = np.concatenate(s_global['up'], axis=0) # part_final['s_down'] = np.concatenate(s_global['down'], axis=0) # # return part_final, v[compatible_leaves] def explore_partition(i, x, children_left, children_right, features, thresholds, values, compatible_leaves, partition_leaves, partition_global, prob_global, s_global, S, S_bar, data, down_tx, up_tx, intv=False): if children_left[i] < 0: # tab[i] = 1 compatible_leaves.append(i) partition_global[i] = partition_leaves partition_leaves = np.squeeze(np.array(partition_leaves)) section_x = np.prod( [(data[:, s] <= partition_leaves[s, 1]) * (data[:, s] >= partition_leaves[s, 0]) for s in S], axis=0) section_x_bar = np.prod( [(data[:, s] <= partition_leaves[s, 1]) * (data[:, s] >= partition_leaves[s, 0]) for s in S_bar], axis=0) section_up = np.prod( [(data[:, s] <= partition_leaves[s, 1]) * (data[:, s] >= partition_leaves[s, 0]) for s in S], axis=0) * up_tx section_down = np.prod( [(data[:, s] <= partition_leaves[s, 1]) * (data[:, s] >= partition_leaves[s, 0]) for s in S], axis=0) * down_tx prob_all = section_x * section_x_bar prob_up = section_up * section_x_bar prob_down = section_down * section_x_bar s_all = section_x s_up = section_up s_down = section_down prob_global['all'].append(prob_all.reshape(1, -1)) prob_global['up'].append(prob_up.reshape(1, -1)) prob_global['down'].append(prob_down.reshape(1, -1)) s_global['all'].append(s_all.reshape(1, -1)) s_global['up'].append(s_up.reshape(1, -1)) s_global['down'].append(s_down.reshape(1, -1)) else: if
type == 'none': self.map_data[ps[4]] = type for p in ps: self.map_data_cover[p] = 'none' def RasterisePath(self, path, points, type): #draw_locs = [] ms = self.Options['map_size'] for i in xrange(len(path)-1): p1 = path[i] x1 = points[p1][0] y1 = points[p1][1] p2 = path[i+1] x2 = points[p2][0] y2 = points[p2][1] if x1 == x2: ix = x1 for iy in xrange(min(y1,y2),max(y1,y2)+1): self.draw_support(iy*ms[0]+ix,3,type) #draw_locs.append((ix,iy)) else: deltax = float(x2 - x1) if deltax > 0: rsign = 1 else: rsign = -1 deltay = float(y2 - y1) if deltay == 0: ysign = 0 elif deltay > 0: ysign = 1 else: ysign = -1 error = -1.0 deltaerr = fabs(deltay / deltax) iy = y1 for ix in xrange(x1,x2+rsign,rsign): self.draw_support(iy*ms[0]+ix,3,type) #draw_locs.append((ix,iy)) if ix == x2: continue error += deltaerr while error >= 0.0: iy += ysign self.draw_support(iy*ms[0]+ix,3,type) #draw_locs.append((ix,iy)) error -= 1.0 def ClearPath2Sea(self, point_ind, points, neighbours, distance2sea, distance2freshwater): # depth first search to coast came_from = [-1 for i in xrange(len(points))] came_from[point_ind] = 0 reached_sea = False current_point = point_ind while not reached_sea: print "current point: %d"%(current_point) cost = [] for ni in neighbours[current_point]: if distance2freshwater[ni] > 1 and came_from[ni] == -1: cost.append(distance2sea[ni]) else: cost.append(100) print "neighbours: ", neighbours[current_point] print "costs: ", cost min_cost = min(cost) if min_cost == 100: current_point = came_from[current_point] print "moved back to %d"%(current_point) if current_point == 0: print "failed to find a way" break else: next_point = neighbours[current_point][cost.index(min_cost)] print "moving to point: %d, distance: %f"%(next_point,min_cost) came_from[next_point] = current_point current_point = next_point if distance2sea[current_point] <= 2.0: reached_sea = True print "hit sea" if reached_sea: path = [current_point] while not current_point == point_ind: current_point = came_from[current_point] path.append(current_point) self.RasterisePath(path, points, 'none') def ClearPathBetweenPoints(self, point_ind1, point_ind2, points, neighbours, distance2sea, distance2freshwater): point_dists = [] for i in xrange(len(points)): #dist = sqrt(pow(points[i][0]-points[point_ind2][0],2)+pow(points[i][1]-points[point_ind2][1],2)) dist = abs(points[i][0]-points[point_ind2][0]) + abs(points[i][1]-points[point_ind2][1]) point_dists.append(dist) frontier = [] heapq.heappush(frontier, (0, point_ind1)) came_from = [-1 for i in xrange(len(points))] came_from[point_ind1] = None cost_so_far = [0 for i in xrange(len(points))] cost_so_far[point_ind1] = 0 while True: if len(frontier) == 0: # ran out of options, no path print "no path found" return [] current = heapq.heappop(frontier)[1] if current == point_ind2: # got it path = [current] while not came_from[current] == None: # step backwards through came froms to get path current = came_from[current] path.insert(0,current) print path self.RasterisePath(path, points, 'stone') return #return (path, draw_locs) for ni in neighbours[current]: if distance2sea[ni] <= 1 or distance2freshwater[ni] <= 1: next = (ni,10000) else: next = (ni,1) new_cost = cost_so_far[current] + next[1] if came_from[next[0]] == -1 or new_cost < cost_so_far[next[0]]: cost_so_far[next[0]] = new_cost priority = new_cost + point_dists[ni] heapq.heappush(frontier, (priority, next[0])) came_from[next[0]] = current def BuildOverworldMap(self, season='summer', verbose=False, seed=None): if not seed == None: 
random.seed(seed) np.random.seed(seed) # Initialise data self.map_data = ['water' for i in xrange(self.Options['map_size'][0]*self.Options['map_size'][1])] self.map_data_cover = ['none' for i in xrange(self.Options['map_size'][0]*self.Options['map_size'][1])] self.map_data_text = ['none' for i in xrange(self.Options['map_size'][0]*self.Options['map_size'][1])] # Generate initial random point set if verbose == True: print "generating points ..." points = [] Ni = int(self.Options['Np']/pow(self.Options['grid_div'],2)) xinc = (self.Options['map_size'][0]/self.Options['grid_div']) yinc = (self.Options['map_size'][1]/self.Options['grid_div']) for xi in xrange(self.Options['grid_div']): for yi in xrange(self.Options['grid_div']): start_x = xi*xinc start_y = yi*yinc points.extend([(np.random.randint(start_x,start_x+xinc),np.random.randint(start_y,start_y+yinc)) for i in xrange(Ni)]) points = list(set(points)) # remove existing points on boundaries and add own sea points pr = [] for i in xrange(len(points)): if points[i][0] == 0 or points[i][1] == 0 or points[i][0] == (self.Options['map_size'][0]-1) or points[i][1] == (self.Options['map_size'][1]-1): pr.append(points[i]) for p in pr: points.remove(p) Nwalls = int(sqrt(self.Options['Np'])) for i in xrange(Nwalls): step = i*self.Options['map_size'][0]/Nwalls points.append((0,step)) points.append((self.Options['map_size'][0]-1,step)) if i > 0: points.append((step,0)) points.append((step,self.Options['map_size'][1]-1)) # Compute delaunay trianglation of points to setup initial spatial networks if verbose == True: print "performing triangulation ..." triangles = pyshull.PySHull(np.array(points)) neighbours = [[] for i in xrange(len(points))] for t in triangles: neighbours[t[0]].extend([t[1],t[2]]) neighbours[t[1]].extend([t[0],t[2]]) neighbours[t[2]].extend([t[0],t[1]]) ################################################# # Use floodfills to built high-level coastlines if verbose == True: print "creating land ..." if verbose == True: print "coastal floodfills ..." 
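# Hedged aside (not from the original source): the triangulation step above
# turns pyshull.PySHull's triangle list into a per-point neighbour list. The
# sketch below shows the same derivation using scipy.spatial.Delaunay instead;
# scipy and the helper name build_neighbours are assumptions for illustration
# only, not code from this file.
import numpy as np
from scipy.spatial import Delaunay

def build_neighbours(points):
    """Return sorted neighbour indices per point from a Delaunay mesh."""
    tri = Delaunay(np.asarray(points, dtype=float))
    neighbours = [set() for _ in range(len(points))]
    for a, b, c in tri.simplices:
        # Each triangle links its three vertices pairwise.
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    return [sorted(n) for n in neighbours]

# Example: the four corners of the unit square each end up with 2-3 neighbours.
# build_neighbours([(0, 0), (1, 0), (0, 1), (1, 1)])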
point_types = [-1 for i in xrange(len(points))] # water long boundaries frontier_water = Queue.Queue() for i in xrange(len(points)-4*Nwalls,len(points)): point_types[i] = 0 frontier_water.put(i) # random land starting points frontier_land = Queue.Queue() potential_points = [i for i in xrange(len(points)-4*Nwalls) if len(neighbours[i]) > 0] random.shuffle(potential_points) land_start = [] i = 0 boundary_size = 0.1 for j in xrange(2): quadrats = [False,False,False,False] while not all(quadrats): ind = potential_points[i] if points[ind][0] < boundary_size*self.Options['map_size'][0] or points[ind][0] > (1.0-boundary_size)*self.Options['map_size'][0] or \ points[ind][1] < boundary_size*self.Options['map_size'][1] or points[ind][1] > (1.0-boundary_size)*self.Options['map_size'][1]: i += 1 continue if points[ind][0] < self.Options['map_size'][0]/2: if points[ind][1] < self.Options['map_size'][1]/2: quad = 0 else: quad = 2 else: if points[ind][1] < self.Options['map_size'][1]/2: quad = 1 else: quad = 3 if quadrats[quad] == False: quadrats[quad] = True land_start.append(ind) i += 1 for i in land_start: point_types[i] = 1 frontier_land.put(i) # perform flood fills to allocate land/sea while not frontier_water.empty() and not frontier_land.empty(): for i in xrange(frontier_land.qsize()): # empty queue of frontier objects only current_land = frontier_land.get() for next in neighbours[current_land]: if point_types[next] == -1: point_types[next] = 1 frontier_land.put(next) for i in xrange(frontier_water.qsize()): # empty queue of frontier objects only current_water = frontier_water.get() for next in neighbours[current_water]: if point_types[next] == -1: point_types[next] = 0 frontier_water.put(next) # Create distance to sea layer distance2sea = [0 for i in xrange(len(points))] touched = [-1 for i in xrange(len(points))] for i in xrange(len(points)): if len(neighbours[i]) == 0 or point_types[i] == 0: continue frontier = Queue.Queue() frontier.put((i,0)) touched[i] = i found_sea = False while not found_sea: (current_point,dist) = frontier.get() for next_point in neighbours[current_point]: if point_types[next_point] == 0: found_sea = True distance2sea[i] = dist+1 break elif touched[next_point] < i: touched[next_point] = i frontier.put((next_point,dist+1)) # Create distance to land layer (for sea points) distance2land = [0 for i in xrange(len(points))] touched = [-1 for i in xrange(len(points))] for i in xrange(len(points)): if len(neighbours[i]) == 0 or point_types[i] == 1: continue frontier = Queue.Queue() frontier.put((i,0)) touched[i] = i found_land = False while not found_land: (current_point,dist) = frontier.get() for next_point in neighbours[current_point]: if point_types[next_point] == 1: found_land = True distance2land[i] = dist+1 break elif touched[next_point] < i: touched[next_point] = i frontier.put((next_point,dist+1)) # Create some lakes if verbose == True: print "lakes ..." 
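# Hedged aside (not from the original source): the distance2sea and
# distance2land layers above run one breadth-first search per point. The same
# layer can be produced with a single multi-source BFS seeded from every sea
# point. The sketch below follows this file's conventions (point_types[i] == 0
# means sea, 1 means land), but the helper name distance_to_sea is an
# assumption for illustration only.
from collections import deque

def distance_to_sea(neighbours, point_types):
    """Hop count to the nearest sea point for every point (0 for sea itself)."""
    dist = [0] * len(point_types)
    frontier = deque(i for i, t in enumerate(point_types) if t == 0)
    seen = set(frontier)
    while frontier:
        current = frontier.popleft()
        for nxt in neighbours[current]:
            if nxt not in seen:
                seen.add(nxt)
                dist[nxt] = dist[current] + 1
                frontier.append(nxt)
    return dist

# A land point touching the coast gets distance 1, matching distance2sea above.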
potential_lakes = [i for i in xrange(len(points)) if point_types[i] == 1 and \ distance2sea[i] >= self.Options['lake_min_coast_dist'] and distance2sea[i] <= self.Options['lake_max_coast_dist']] random.shuffle(potential_lakes) touched = [-1 for i in xrange(len(points))] for i in xrange(self.Options['num_lakes']): lake_start = potential_lakes[i] frontier = Queue.Queue() frontier.put(lake_start) touched[lake_start] = i point_types[lake_start] = 2 for j in xrange(self.Options['lake_size']): for k in xrange(frontier.qsize()): # empty queue of frontier objects only current_lake = frontier.get() for next in neighbours[current_lake]: if point_types[next] == 1 and distance2sea[next] >= self.Options['lake_min_coast_dist'] and \ distance2sea[next] <= self.Options['lake_max_coast_dist'] and touched[next] < i: point_types[next] = 2 touched[next] = i frontier.put(next) # Create distance to freshwater layer distance2freshwater = [0 for i in xrange(len(points))] touched = [-1 for i in xrange(len(points))] for i in xrange(len(points)): if len(neighbours[i]) == 0 or not point_types[i] == 1: continue frontier = Queue.Queue() frontier.put((i,0)) touched[i] = i found_water = False while not found_water and not frontier.empty(): (current_point,dist) = frontier.get() for next_point in neighbours[current_point]: if point_types[next_point] == 2: found_water = True distance2freshwater[i] = dist+1 break elif touched[next_point] < i and point_types[next_point] == 1: touched[next_point] = i frontier.put((next_point,dist+1)) if not found_water: distance2freshwater[i] = -1 if verbose == True: print "rasterising ..." # Perform flood fills on full rasters def in_triangle(point, tv1, tv2, tv3): def sign(p1,p2,p3): return (p1[0] - p3[0])*(p2[1] - p3[1]) - (p2[0] - p3[0])*(p1[1] - p3[1]); if sign(point,tv1,tv2) <= 0.0 and sign(point,tv2,tv3) <= 0.0 and sign(point,tv3,tv1) <= 0.0: return True else: return False for t in triangles: xmin = min(points[t[0]][0],points[t[1]][0],points[t[2]][0]) xmax = max(points[t[0]][0],points[t[1]][0],points[t[2]][0]) ymin = min(points[t[0]][1],points[t[1]][1],points[t[2]][1]) ymax = max(points[t[0]][1],points[t[1]][1],points[t[2]][1]) if point_types[t[0]] > 0 and point_types[t[1]] > 0 and point_types[t[2]] > 0: # not sea if
1 PRBS_DIRECTION_CHECKER = 2', required=False, default="0", type=click.Choice(["0", "1", "2"])) @clicommon.pass_db def enable(db, port, target, mode_value, lane_mask, prbs_direction): """Enable PRBS mode on a port args port target mode_value lane_mask prbs_direction example sudo config mux prbs enable Ethernet48 0 3 3 0 """ port = platform_sfputil_helper.get_interface_name(port, db) delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_PRBS_CMD") delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_PRBS_CMD_ARG") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_PRBS_RSP") if port is not None: click.confirm(('Muxcable at port {} will be changed to PRBS mode {} state; disable traffic Continue?'.format( port, mode_value)), abort=True) res_dict = {} res_dict[0] = CONFIG_FAIL res_dict[1] = "unknown" param_dict = {} target = parse_target(target) param_dict["target"] = target param_dict["mode_value"] = mode_value param_dict["lane_mask"] = lane_mask param_dict["direction"] = prbs_direction res_dict = update_and_get_response_for_xcvr_cmd( "config_prbs", "status", "True", "XCVRD_CONFIG_PRBS_CMD", "XCVRD_CONFIG_PRBS_CMD_ARG", "XCVRD_CONFIG_PRBS_RSP", port, 30, param_dict, "enable") rc = res_dict[0] port = platform_sfputil_helper.get_interface_alias(port, db) delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_PRBS_CMD") delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_PRBS_CMD_ARG") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_PRBS_RSP") if rc == 0: click.echo("Success in PRBS mode port {} to {}".format(port, mode_value)) else: click.echo("ERR: Unable to set PRBS mode port {} to {}".format(port, mode_value)) sys.exit(CONFIG_FAIL) @prbs.command() @click.argument('port', required=True, default=None) @click.argument('target', metavar='<target> NIC TORA TORB LOCAL', required=True, default=None, type=click.Choice(["NIC", "TORA", "TORB", "LOCAL"])) @click.argument('prbs_direction', metavar='<PRBS_DIRECTION> PRBS_DIRECTION_BOTH = 0 PRBS_DIRECTION_GENERATOR = 1 PRBS_DIRECTION_CHECKER = 2', required=False, default="0", type=click.Choice(["0", "1", "2"])) @clicommon.pass_db def disable(db, port, target, prbs_direction): """Disable PRBS mode on a port example sudo config mux prbs disable Ethernet48 0 """ port = platform_sfputil_helper.get_interface_name(port, db) delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_PRBS_CMD") delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_PRBS_CMD_ARG") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_PRBS_RSP") if port is not None: click.confirm(('Muxcable at port {} will be changed to disable PRBS mode {} target; disable traffic Continue?'.format( port, target)), abort=True) res_dict = {} res_dict[0] = CONFIG_FAIL res_dict[1] = "unknown" param_dict = {} target = parse_target(target) param_dict["target"] = target param_dict["direction"] = prbs_direction res_dict = update_and_get_response_for_xcvr_cmd( "config_prbs", "status", "True", "XCVRD_CONFIG_PRBS_CMD", "XCVRD_CONFIG_PRBS_CMD_ARG", "XCVRD_CONFIG_PRBS_RSP", port, 30, param_dict, "disable") rc = res_dict[0] delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_PRBS_CMD") delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_PRBS_CMD_ARG") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_PRBS_RSP") port = platform_sfputil_helper.get_interface_alias(port, db) if rc == 0: click.echo("Success in disable PRBS mode port {} on target {}".format(port, target)) else: click.echo("ERR: Unable to disable PRBS mode port {} on target {}".format(port, target)) sys.exit(CONFIG_FAIL) 
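# Hedged aside (not part of the original module): each PRBS/loopback command in
# this file repeats the same request/response flow against xcvrd -- clear any
# stale command and response keys, push the command, wait for the reply, then
# clean up again. A hypothetical helper capturing that shared flow is sketched
# below; its name and argument layout are assumptions, and it simply reuses the
# module's existing delete_all_keys_in_db_table and
# update_and_get_response_for_xcvr_cmd helpers with the same arguments the
# commands above already pass.
def run_xcvrd_config_cmd(port, cmd_name, cmd_table, arg_table, rsp_table,
                         param_dict, action, timeout=30):
    """Run one xcvrd config command and return its integer result code."""
    delete_all_keys_in_db_table("APPL_DB", cmd_table)
    delete_all_keys_in_db_table("APPL_DB", arg_table)
    delete_all_keys_in_db_table("STATE_DB", rsp_table)
    res_dict = {0: CONFIG_FAIL, 1: "unknown"}
    res_dict = update_and_get_response_for_xcvr_cmd(
        cmd_name, "status", "True", cmd_table, arg_table, rsp_table,
        port, timeout, param_dict, action)
    # Clean up the command/response tables regardless of the outcome.
    delete_all_keys_in_db_table("APPL_DB", cmd_table)
    delete_all_keys_in_db_table("APPL_DB", arg_table)
    delete_all_keys_in_db_table("STATE_DB", rsp_table)
    return res_dict[0]

# e.g. the PRBS disable path above is equivalent to:
# rc = run_xcvrd_config_cmd(port, "config_prbs", "XCVRD_CONFIG_PRBS_CMD",
#                           "XCVRD_CONFIG_PRBS_CMD_ARG", "XCVRD_CONFIG_PRBS_RSP",
#                           param_dict, "disable")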
@muxcable.group(cls=clicommon.AbbreviationGroup) def loopback(): """Enable/disable loopback mode on a port""" pass @loopback.command() @click.argument('port', required=True, default=None) @click.argument('target', metavar='<target> NIC TORA TORB LOCAL', required=True, default=None, type=click.Choice(["NIC", "TORA", "TORB", "LOCAL"])) @click.argument('lane_mask', required=True, default=None, type=click.INT) @click.argument('mode_value', required=False, metavar='<Loop mode> 1 LOOPBACK_MODE_NEAR_END 2 LOOPBACK_MODE_FAR_END', default="1", type=click.Choice(["1", "2"])) @clicommon.pass_db def enable(db, port, target, lane_mask, mode_value): """Enable loopback mode on a port args port target lane_map mode_value""" port = platform_sfputil_helper.get_interface_name(port, db) delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_LOOP_CMD") delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_LOOP_CMD_ARG") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_LOOP_RSP") if port is not None: click.confirm(('Muxcable at port {} will be changed to LOOP mode {} state; disable traffic Continue?'.format( port, mode_value)), abort=True) res_dict = {} res_dict[0] = CONFIG_FAIL res_dict[1] = "unknown" param_dict = {} target = parse_target(target) param_dict["target"] = target param_dict["mode_value"] = mode_value param_dict["lane_mask"] = lane_mask res_dict = update_and_get_response_for_xcvr_cmd( "config_loop", "status", "True", "XCVRD_CONFIG_LOOP_CMD", "XCVRD_CONFIG_LOOP_CMD_ARG", "XCVRD_CONFIG_LOOP_RSP", port, 30, param_dict, "enable") rc = res_dict[0] delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_LOOP_CMD") delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_LOOP_CMD_ARG") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_LOOP_RSP") port = platform_sfputil_helper.get_interface_alias(port, db) if rc == 0: click.echo("Success in LOOP mode port {} to {}".format(port, mode_value)) else: click.echo("ERR: Unable to set LOOP mode port {} to {}".format(port, mode_value)) sys.exit(CONFIG_FAIL) @loopback.command() @click.argument('port', required=True, default=None) @click.argument('target', metavar='<target> NIC TORA TORB LOCAL', required=True, default=None, type=click.Choice(["NIC", "TORA", "TORB", "LOCAL"])) @clicommon.pass_db def disable(db, port, target): """Disable loopback mode on a port""" port = platform_sfputil_helper.get_interface_name(port, db) delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_LOOP_CMD") delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_LOOP_CMD_ARG") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_LOOP_RSP") if port is not None: click.confirm(('Muxcable at port {} will be changed to disable LOOP mode {} state; disable traffic Continue?'.format( port, target)), abort=True) res_dict = {} res_dict[0] = CONFIG_FAIL res_dict[1] = "unknown" param_dict = {} target = parse_target(target) param_dict["target"] = target res_dict = update_and_get_response_for_xcvr_cmd( "config_loop", "status", "True", "XCVRD_CONFIG_LOOP_CMD", "XCVRD_CONFIG_LOOP_CMD_ARG", "XCVRD_CONFIG_LOOP_RSP", port, 30, param_dict, "disable") rc = res_dict[0] delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_LOOP_CMD") delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_LOOP_CMD_ARG") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_LOOP_RSP") port = platform_sfputil_helper.get_interface_alias(port, db) if rc == 0: click.echo("Success in disable LOOP mode port {} to {}".format(port, target)) else: click.echo("ERR: Unable to set disable LOOP mode port {} to {}".format(port, target)) 
sys.exit(CONFIG_FAIL) @muxcable.group(cls=clicommon.AbbreviationGroup) def hwmode(): """Configure muxcable hardware directly""" pass @hwmode.command() @click.argument('state', metavar='<operation_status>', required=True, type=click.Choice(["active", "standby"])) @click.argument('port', metavar='<port_name>', required=True, default=None) @clicommon.pass_db def state(db, state, port): """Configure the muxcable mux state {active/standby}""" port = platform_sfputil_helper.get_interface_name(port, db) delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_HWMODE_DIR_CMD") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_HWMODE_DIR_RSP") if port is not None and port != "all": click.confirm(('Muxcable at port {} will be changed to {} state. Continue?'.format(port, state)), abort=True) res_dict = {} res_dict[0] = CONFIG_FAIL res_dict[1] = "unknown" res_dict = update_and_get_response_for_xcvr_cmd( "config", "result", "True", "XCVRD_CONFIG_HWMODE_DIR_CMD", None, "XCVRD_CONFIG_HWMODE_DIR_RSP", port, 1, None, state) rc = res_dict[0] delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_HWMODE_DIR_CMD") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_HWMODE_DIR_RSP") port = platform_sfputil_helper.get_interface_alias(port, db) if rc == 0: click.echo("Success in toggling port {} to {}".format(port, state)) else: click.echo("ERR: Unable to toggle port {} to {}".format(port, state)) sys.exit(CONFIG_FAIL) elif port == "all": click.confirm(('Muxcable at all ports will be changed to {} state. Continue?'.format(state)), abort=True) logical_port_list = platform_sfputil_helper.get_logical_list() rc_exit = 0 for port in logical_port_list: if platform_sfputil is not None: physical_port_list = platform_sfputil_helper.logical_port_name_to_physical_port_list(port) if not isinstance(physical_port_list, list): continue if len(physical_port_list) != 1: continue physical_port = physical_port_list[0] logical_port_list_for_physical_port = platform_sfputil_helper.get_physical_to_logical() logical_port_list_per_port = logical_port_list_for_physical_port.get(physical_port, None) """ This check is required for checking whether or not this logical port is the one which is actually mapped to physical port and by convention it is always the first port. 
TODO: this should be removed with more logic to check which logical port maps to actual physical port being used""" if port != logical_port_list_per_port[0]: continue res_dict = {} res_dict[0] = CONFIG_FAIL res_dict[1] = 'unknown' res_dict = update_and_get_response_for_xcvr_cmd( "config", "result", "True", "XCVRD_CONFIG_HWMODE_DIR_CMD", None, "XCVRD_CONFIG_HWMODE_DIR_RSP", port, 1, None, state) rc = res_dict[0] delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_HWMODE_DIR_CMD") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_HWMODE_DIR_RSP") port = platform_sfputil_helper.get_interface_alias(port, db) if rc == 0: click.echo("Success in toggling port {} to {}".format(port, state)) else: click.echo("ERR: Unable to toggle port {} to {}".format(port, state)) rc_exit = CONFIG_FAIL sys.exit(rc_exit) @hwmode.command() @click.argument('state', metavar='<operation_status>', required=True, type=click.Choice(["auto", "manual"])) @click.argument('port', metavar='<port_name>', required=True, default=None) @clicommon.pass_db def setswitchmode(db, state, port): """Configure the muxcable mux switching mode {auto/manual}""" port = platform_sfputil_helper.get_interface_name(port, db) delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_HWMODE_SWMODE_CMD") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_HWMODE_SWMODE_RSP") if port is not None and port != "all": click.confirm(('Muxcable at port {} will be changed to {} switching mode. Continue?'.format(port, state)), abort=True) res_dict = {} res_dict[0] = CONFIG_FAIL res_dict[1] = "unknown" res_dict = update_and_get_response_for_xcvr_cmd( "config", "result", "True", "XCVRD_CONFIG_HWMODE_SWMODE_CMD", None, "XCVRD_CONFIG_HWMODE_SWMODE_RSP", port, 1, None, state) rc = res_dict[0] delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_HWMODE_SWMODE_CMD") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_HWMODE_SWMODE_RSP") port = platform_sfputil_helper.get_interface_alias(port, db) if rc == 0: click.echo("Success in switch muxcable mode port {} to {}".format(port, state)) else: click.echo("ERR: Unable to switch muxcable mode port {} to {}".format(port, state)) sys.exit(CONFIG_FAIL) elif port == "all": click.confirm(('Muxcable at all ports will be changed to {} switching mode. Continue?'.format(state)), abort=True) logical_port_list = platform_sfputil_helper.get_logical_list() rc_exit = 0 for port in logical_port_list: if platform_sfputil is not None: physical_port_list = platform_sfputil_helper.logical_port_name_to_physical_port_list(port) if not isinstance(physical_port_list, list): continue if len(physical_port_list) != 1: continue physical_port = physical_port_list[0] logical_port_list_for_physical_port = platform_sfputil_helper.get_physical_to_logical() logical_port_list_per_port = logical_port_list_for_physical_port.get(physical_port, None) """ This check is required for checking whether or not this logical port is the one which is actually mapped to physical port and by convention it is always the first port. 
TODO: this should be removed with more logic to check which logical port maps to actual physical port being used""" if port != logical_port_list_per_port[0]: continue res_dict = {} res_dict[0] = CONFIG_FAIL res_dict[1] = "unknown" res_dict = update_and_get_response_for_xcvr_cmd( "config", "result", "True", "XCVRD_CONFIG_HWMODE_SWMODE_CMD", None, "XCVRD_CONFIG_HWMODE_SWMODE_RSP", port, 1, None, state) rc = res_dict[0] delete_all_keys_in_db_table("APPL_DB", "XCVRD_CONFIG_HWMODE_SWMODE_CMD") delete_all_keys_in_db_table("STATE_DB", "XCVRD_CONFIG_HWMODE_SWMODE_RSP") port = platform_sfputil_helper.get_interface_alias(port, db) if rc == 0: click.echo("Success in toggling port {} to {}".format(port, state)) else: click.echo("ERR: Unable to toggle port {} to {}".format(port, state)) rc_exit = CONFIG_FAIL sys.exit(rc_exit) @muxcable.group(cls=clicommon.AbbreviationGroup) def firmware(): """Configure muxcable firmware command""" pass @firmware.command() @click.argument('fwfile', metavar='<firmware_file>', required=True) @click.argument('port', metavar='<port_name>', required=True, default=None) @clicommon.pass_db def download(db, fwfile, port): """Config muxcable firmware download""" port = platform_sfputil_helper.get_interface_name(port, db) delete_all_keys_in_db_table("STATE_DB", "XCVRD_DOWN_FW_RSP") delete_all_keys_in_db_table("APPL_DB", "XCVRD_DOWN_FW_CMD") if port is not None and port != "all": res_dict = {} res_dict[0] = CONFIG_FAIL res_dict[1] = "unknown" res_dict = update_and_get_response_for_xcvr_cmd( "download_firmware", "status", "0", "XCVRD_DOWN_FW_CMD", None, "XCVRD_DOWN_FW_RSP", port, 1000, None, fwfile) rc = res_dict[0] delete_all_keys_in_db_table("STATE_DB", "XCVRD_DOWN_FW_RSP") delete_all_keys_in_db_table("APPL_DB", "XCVRD_DOWN_FW_CMD") port = platform_sfputil_helper.get_interface_alias(port, db) if rc == 0: click.echo("Success in downloading firmware port {} {}".format(port, fwfile)) else: click.echo("ERR: Unable to download firmware port {} {}".format(port, fwfile)) sys.exit(CONFIG_FAIL) elif port == "all": click.confirm(('Muxcable at all ports will be changed to {} switching mode. Continue?'.format(state)), abort=True) logical_port_list = platform_sfputil_helper.get_logical_list() rc_exit = True for port in logical_port_list: if platform_sfputil is not None: physical_port_list = platform_sfputil_helper.logical_port_name_to_physical_port_list(port) if not
<filename>src/sardana/pool/poolacquisition.py #!/usr/bin/env python ############################################################################## ## # This file is part of Sardana ## # http://www.sardana-controls.org/ ## # Copyright 2011 CELLS / ALBA Synchrotron, Bellaterra, Spain ## # Sardana is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. ## # Sardana is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. ## # You should have received a copy of the GNU Lesser General Public License # along with Sardana. If not, see <http://www.gnu.org/licenses/>. ## ############################################################################## """This module is part of the Python Pool library. It defines the class for an acquisition""" __all__ = ["get_acq_ctrls", "AcquisitionState", "AcquisitionMap", "PoolCTAcquisition", "Pool0DAcquisition", "PoolIORAcquisition", "PoolAcquisitionHardware", "PoolAcquisitionSoftware", "PoolAcquisitionSoftwareStart"] __docformat__ = 'restructuredtext' import time import weakref import datetime from taurus.core.util.log import DebugIt from taurus.core.util.enumeration import Enumeration from sardana import SardanaValue, State, ElementType, TYPE_TIMERABLE_ELEMENTS from sardana.sardanathreadpool import get_thread_pool from sardana.pool import AcqSynch, AcqMode from sardana.pool.poolaction import ActionContext, PoolAction from sardana.pool.poolsynchronization import PoolSynchronization #: enumeration representing possible motion states AcquisitionState = Enumeration("AcquisitionState", ( "Stopped", # "StoppedOnError", # "StoppedOnAbort", "Acquiring", "Invalid")) AS = AcquisitionState AcquiringStates = AS.Acquiring, StoppedStates = AS.Stopped, # MS.StoppedOnError, MS.StoppedOnAbort AcquisitionMap = { # AS.Stopped : State.On, AS.Acquiring: State.Moving, AS.Invalid: State.Invalid, } def is_value_error(value): if isinstance(value, SardanaValue) and value.error: return True return False def get_acq_ctrls(ctrls): """Converts configuration controllers into acquisition controllers. Takes care about converting their internals as well. :param ctrls: sequence of configuration controllers objects :type ctrls: sardana.pool.poolmeasurementgroup.ControllerConfiguration :return: sequence of acquisition controllers :rtype: :class:`~sardana.pool.poolacquisition.AcqController` .. note:: The get_acq_ctrls function has been included in Sardana on a provisional basis. Backwards incompatible changes (up to and including removal of the class) may occur if deemed necessary by the core developers. """ action_ctrls = [] for ctrl in ctrls: action_ctrl = AcqController(ctrl) action_ctrls.append(action_ctrl) return action_ctrls def get_timerable_ctrls(ctrls, acq_mode): """Converts timerable configuration controllers into acquisition controllers. Take care about converting their internals as well. Take care about assigning master according to acq_mode. 
:param ctrls: sequence of configuration controllers objects :type ctrls: sardana.pool.poolmeasurementgroup.ControllerConfiguration :param acq_mode: acquisition mode (timer/monitor) :type acq_mode: :class:`sardana.pool.AcqMode` :return: sequence of acquisition controllers :rtype: :class:`~sardana.pool.poolacquisition.AcqController` .. note:: The get_timerable_ctrls function has been included in Sardana on a provisional basis. Backwards incompatible changes (up to and including removal of the class) may occur if deemed necessary by the core developers. """ action_ctrls = [] for ctrl in ctrls: attrs = {} if acq_mode is not None: master = None if acq_mode is AcqMode.Timer: master = ctrl.timer elif acq_mode is AcqMode.Monitor: master = ctrl.monitor attrs = {'master': master} action_ctrl = AcqController(ctrl, attrs) action_ctrls.append(action_ctrl) return action_ctrls def get_timerable_items(ctrls, master, acq_mode=AcqMode.Timer): """Converts timerable configuration items into acquisition items. The timerable items are controllers and master. Convert these into the corresponding acquisition items. Take care about converting their internals as well. Take care about assigning master according to acq_mode. :param ctrls: sequence of configuration controllers objects :type ctrls: :obj:list<:class:`~sardana.pool.poolmeasurementgroup.ControllerConfiguration`> # noqa :param master: master configuration object :type master: :class:`~sardana.pool.poolmeasurementgroup.ChannelConfiguration` # noqa :param acq_mode: acquisition mode (timer/monitor) :type acq_mode: :class:`sardana.pool.AcqMode` :return: sequence of acquisition controllers :rtype: :class:`~sardana.pool.poolacquisition.AcqController` .. note:: The get_timerable_ctrls function has been included in Sardana on a provisional basis. Backwards incompatible changes (up to and including removal of the class) may occur if deemed necessary by the core developers. """ ctrls = get_timerable_ctrls(ctrls, acq_mode) # Search master AcqConfigurationItem obj for ctrl in ctrls: for channel in ctrl.get_channels(): if channel.configuration == master: master = channel break return ctrls, master class ActionArgs(object): def __init__(self, args, kwargs=None): self.args = args if kwargs is None: kwargs = {} self.kwargs = kwargs class AcqConfigurationItem(object): """Wrapper for configuration item that will be used in an action. .. note:: The AcqConfigurationItem function has been included in Sardana on a provisional basis. Backwards incompatible changes (up to and including removal of the class) may occur if deemed necessary by the core developers. """ def __init__(self, configuration, attrs=None): """Constructs action item from a configuration item. Eventually it can be enriched with attrs. :param configuration: item configuration object :type configuration: :class:`sardana.pool.poolmeasurementgroup.ConfigurationItem` :param attrs: extra attributes to be inserted :type attrs: dict """ self._configuration = weakref.ref(configuration) self.enabled = True if attrs is not None: self.__dict__.update(attrs) def __getattr__(self, item): return getattr(self.configuration, item) def get_configuration(self): """Returns the element associated with this item""" return self._configuration() def set_configuration(self, configuration): """Sets the element for this item""" self._configuration = weakref.ref(configuration) configuration = property(get_configuration) class AcqController(AcqConfigurationItem): """Wrapper for controller configuration that will be used in an action. .. 
note:: The AcqController class has been included in Sardana on a provisional basis. Backwards incompatible changes (up to and including removal of the class) may occur if deemed necessary by the core developers. """ def __init__(self, configuration, attrs=None): """Constructs action controller from a configuration controller. Optionally it can be enriched with attrs. :param configuration: controller configuration object :type configuration: :class:`sardana.pool.poolmeasurementgroup.ControllerConfiguration` :param attrs: extra attributes to be inserted :type attrs: dict """ master = None if attrs is not None: master = attrs.get('master') self._channels = [] self._channels_enabled = [] self._channels_disabled = [] ch_attrs = {'controller': self} for conf_channel in configuration.get_channels(): action_channel = AcqConfigurationItem(conf_channel, ch_attrs) self._channels.append(action_channel) if conf_channel in configuration.get_channels(enabled=True): self._channels_enabled.append(action_channel) if conf_channel in configuration.get_channels(enabled=False): self._channels_disabled.append(action_channel) if master is None: continue if master == conf_channel: attrs['master'] = action_channel master = None AcqConfigurationItem.__init__(self, configuration, attrs) def get_channels(self, enabled=None): if enabled is None: return list(self._channels) elif enabled: return list(self._channels_enabled) else: return list(self._channels_disabled) class PoolAcquisition(PoolAction): """Acquisition action which is internally composed of sub-actions. Handles acquisition of experimental channels of the following types: * timerable (C/T, 1D and 2D) synchronized by software or hardware trigger/gate/start * 0D synchronized by T/G elements or software synchronizer. """ def __init__(self, main_element, name="Acquisition"): PoolAction.__init__(self, main_element, name) zerodname = name + ".0DAcquisition" hwname = name + ".HardwareAcquisition" swname = name + ".SoftwareAcquisition" sw_start_name = name + ".SoftwareStartAcquisition" synchname = name + ".Synchronization" self._sw_acq_args = None self._sw_start_acq_args = None self._0d_acq_args = None self._hw_acq_args = None self._synch_args = None self._sw_acq = PoolAcquisitionSoftware(main_element, name=swname) self._sw_start_acq = PoolAcquisitionSoftwareStart( main_element, name=sw_start_name) self._0d_acq = Pool0DAcquisition(main_element, name=zerodname) self._hw_acq = PoolAcquisitionHardware(main_element, name=hwname) self._synch = PoolSynchronization(main_element, name=synchname) def event_received(self, *args, **kwargs): """Callback executed on event of software synchronizer.
Reacts on start, active, passive or end type of events """ timestamp = time.time() _, type_, index = args name = type_.name if name == "state": return t_fmt = '%Y-%m-%d %H:%M:%S.%f' t_str = datetime.datetime.fromtimestamp(timestamp).strftime(t_fmt) msg = '%s event with id: %d received at: %s' % (name, index, t_str) self.debug(msg) if name == "start": if self._sw_start_acq_args is not None: self.debug('Executing software start acquisition.') get_thread_pool().add(self._sw_start_acq.run, None, *self._sw_start_acq_args.args, **self._sw_start_acq_args.kwargs) elif name == "active": # this code is not thread safe, but for the moment we assume that # only one EventGenerator will work at the same time if self._sw_acq_args is not None: if self._sw_acq._is_started() or self._sw_acq.is_running(): msg = ('Skipping trigger: software acquisition is still' ' in progress.') self.debug(msg) return else: self.debug('Executing software acquisition.') self._sw_acq_args.kwargs.update({'index': index}) self._sw_acq._started = True get_thread_pool().add(self._sw_acq.run, None, *self._sw_acq_args.args, **self._sw_acq_args.kwargs) if self._0d_acq_args is not None: if self._0d_acq._is_started() or self._0d_acq.is_running(): msg = ('Skipping trigger: ZeroD acquisition is still in' ' progress.') self.debug(msg) return else: self.debug('Executing ZeroD acquisition.') self._0d_acq_args.kwargs.update({'index': index}) self._0d_acq._started = True self._0d_acq._stopped = False self._0d_acq._aborted = False get_thread_pool().add(self._0d_acq.run, None, *self._0d_acq_args.args, **self._0d_acq_args.kwargs) elif name == "passive": # TODO: _0d_acq_args comparison may not be necessary if (self._0d_acq_args is not None and (self._0d_acq._is_started() or self._0d_acq.is_running())): self.debug('Stopping ZeroD acquisition.') self._0d_acq.stop_action() def prepare(self, config, acq_mode, value, synchronization=None, moveable=None, sw_synch_initial_domain=None, nb_starts=1, **kwargs): """Prepare measurement process. Organize sub-action arguments and loads configuration parameters to the hardware controllers. """ self._sw_acq_args = None self._sw_start_acq_args = None self._0d_acq_args = None self._hw_acq_args = None self._synch_args = None ctrls_hw = [] ctrls_sw = [] ctrls_sw_start = [] repetitions = synchronization.repetitions latency = synchronization.passive_time # Prepare controllers synchronized by hardware acq_sync_hw = [AcqSynch.HardwareTrigger, AcqSynch.HardwareStart, AcqSynch.HardwareGate] ctrls = config.get_timerable_ctrls(acq_synch=acq_sync_hw, enabled=True) if len(ctrls) > 0: ctrls_hw = get_timerable_ctrls(ctrls, acq_mode) hw_args = (ctrls_hw, value, repetitions, latency) hw_kwargs = {} hw_kwargs.update(kwargs) self._hw_acq_args = ActionArgs(hw_args, hw_kwargs) # Prepare controllers synchronized
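# --- Illustrative sketch (not part of sardana) ------------------------------
# The AcqConfigurationItem/AcqController pattern above keeps only a weak
# reference to the configuration object and forwards unknown attribute
# lookups to it, so action items never keep a measurement-group configuration
# alive on their own.  A minimal standalone version of that idea, with
# hypothetical class and attribute names:
import weakref


class _ConfigWrapper(object):
    def __init__(self, configuration, attrs=None):
        self._configuration = weakref.ref(configuration)
        self.enabled = True
        if attrs is not None:
            self.__dict__.update(attrs)

    def __getattr__(self, item):
        # Called only when normal lookup fails; delegate to the wrapped object.
        return getattr(self._configuration(), item)


class _FakeChannelConf(object):
    name = "ct01"


if __name__ == "__main__":
    conf = _FakeChannelConf()
    item = _ConfigWrapper(conf, {"master": None})
    # 'enabled' and 'master' live on the wrapper; 'name' is resolved via the weakref
    print(item.enabled, item.master, item.name)
# -----------------------------------------------------------------------------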
N.array([1,0,0]) trans_den = N.array([2,1,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,0,1,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,0,0]) trans_den = N.array([2,1,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,1,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,0,1,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,0,-1,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,0,-1,0,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,0,-1,0,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,-1,0,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,1,0,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,1,0,0,0,1,0]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,0,0,1,1,0,0]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,0,0,-1,1,0,0]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,-1,0,0,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,0,0,1,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,-1,0,0,0,1,0]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,1,0,0,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,0,0,-1,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,-1,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,1,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,-1,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([0,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,-1,0,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,1,0,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, 
trans_num, trans_den)) rot = N.array([0,0,-1,0,1,0,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,0,1,0,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,0,-1,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,0,1,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,1,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,0,1,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,0,-1,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,0,-1,0,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,0,-1,0,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,-1,0,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,1,0,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,1,0,0,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,0,0,1,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,0,0,-1,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,-1,0,0,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,0,0,1,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,-1,0,0,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,1,0,0,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,0,0,-1,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,-1,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,1,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,-1,0,0,0,1]) rot.shape = (3, 3) 
trans_num = N.array([1,0,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,-1,0,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,1,0,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,0,1,0,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,0,1,0,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,0,-1,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,0,1,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,1,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,0,1,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,0,-1,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,0,-1,0,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,0,-1,0,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,-1,0,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,1,0,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,1,0,0,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,0,0,1,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,0,0,-1,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,-1,0,0,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,0,0,1,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,-1,0,0,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,1,0,0,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,0,0,-1,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) 
transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,-1,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,1,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,-1,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,-1,0,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,1,0,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,0,1,0,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,0,1,0,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,0,-1,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,0,1,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) sg = SpaceGroup(219, 'F -4 3 c', transformations) space_groups[219] = sg space_groups['F -4 3 c'] = sg transformations = [] rot = N.array([1,0,0,0,1,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([0,0,0]) trans_den = N.array([1,1,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,0,1,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,3]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,0,-1,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,3,3]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,0,-1,0,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,3,3]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,0,-1,0,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,3,1]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,-1,0,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,3,1]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,1,0,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,1,3]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,1,0,0,0,1,0]) rot.shape = (3, 3) trans_num = N.array([0,0,0]) trans_den = N.array([1,1,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,0,0,1,1,0,0]) rot.shape = (3, 3) trans_num = N.array([0,0,0]) trans_den = N.array([1,1,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,0,0,-1,1,0,0]) rot.shape = (3, 3) trans_num = N.array([0,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,-1,0,0,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([0,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,0,0,1,-1,0,0]) rot.shape = (3, 3) trans_num = 
N.array([1,0,0]) trans_den = N.array([2,1,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,-1,0,0,0,1,0]) rot.shape = (3, 3) trans_num = N.array([0,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,1,0,0,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,0,0]) trans_den = N.array([2,1,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,0,0,-1,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([0,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,-1,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([0,0,1]) trans_den = N.array([1,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,1,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,0,0]) trans_den = N.array([2,1,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,-1,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([0,1,0]) trans_den = N.array([1,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,-1,0,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,3,3]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,1,0,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,0,1,0,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,3]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,0,1,0,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,0,-1,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,3,1]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,0,1,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,1,0,0,0,1]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,0,1,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([3,3,5]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,0,-1,0,1,0]) rot.shape = (3, 3) trans_num = N.array([3,5,5]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,0,-1,0,1,0,0]) rot.shape = (3, 3) trans_num = N.array([3,5,5]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,0,-1,0,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([3,5,3]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,-1,0,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([3,5,3]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,1,0,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([3,3,5]) trans_den = N.array([4,4,4]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,1,0,0,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,0,0,1,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,2]) transformations.append((rot, 
trans_num, trans_den)) rot = N.array([0,-1,0,0,0,-1,1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,1,-1,0,0,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,-1,0,0,0,1,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,-1,0,0,0,1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,1,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,0,-1,1,0,0,0,-1,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([0,1,0,0,0,-1,-1,0,0]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([1,0,0,0,-1,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([2,2,1]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,1,0,0,0,-1]) rot.shape = (3, 3) trans_num = N.array([1,1,1]) trans_den = N.array([1,2,2]) transformations.append((rot, trans_num, trans_den)) rot = N.array([-1,0,0,0,-1,0,0,0,1]) rot.shape = (3, 3) trans_num
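# --- Illustrative sketch (not part of the space-group tables above) ---------
# Every entry appended to `transformations` is a (rot, trans_num, trans_den)
# triple: a 3x3 integer rotation matrix plus a fractional translation stored
# as separate numerator/denominator vectors.  Assuming the usual convention,
# applying one such operation to a fractional coordinate amounts to
# rot @ x + trans_num / trans_den, folded back into the unit cell:
import numpy as np


def apply_symmetry_op(rot, trans_num, trans_den, frac_coord):
    """Map a fractional coordinate with one (rot, trans_num, trans_den) operation."""
    shifted = rot.dot(frac_coord) + trans_num.astype(float) / trans_den
    return shifted % 1.0


if __name__ == "__main__":
    rot = np.array([-1, 0, 0, 0, 0, 1, 0, -1, 0]).reshape(3, 3)
    trans_num = np.array([1, 1, 3])
    trans_den = np.array([4, 4, 4])
    print(apply_symmetry_op(rot, trans_num, trans_den, np.array([0.1, 0.2, 0.3])))
# -----------------------------------------------------------------------------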
= download(name) base_dir = os.path.dirname(fname) data_dir, ext = os.path.splitext(fname) if ext == '.zip': fp = zipfile.ZipFile(fname, 'r') elif ext in ('.tar', '.gz'): fp = tarfile.open(fname, 'r') else: assert False, 'Only zip/tar files can be extracted.' fp.extractall(base_dir) return os.path.join(base_dir, folder) if folder else data_dir # Defined in file: ./chapter_multilayer-perceptrons/kaggle-house-price.md def download_all(): #@save """Download all files in the DATA_HUB.""" for name in DATA_HUB: download(name) # Defined in file: ./chapter_multilayer-perceptrons/kaggle-house-price.md DATA_HUB['kaggle_house_train'] = ( #@save DATA_URL + 'kaggle_house_pred_train.csv', '585e9cc93e70b39160e7921475f9bcd7d31219ce') # Defined in file: ./chapter_multilayer-perceptrons/kaggle-house-price.md DATA_HUB['kaggle_house_test'] = ( #@save DATA_URL + 'kaggle_house_pred_test.csv', 'fa19780a7b011d9b009e8bff8e99922a8ee2eb90') # Defined in file: ./chapter_deep-learning-computation/use-gpu.md def try_gpu(i=0): #@save """Return gpu(i) if exists, otherwise return cpu().""" if torch.cuda.device_count() >= i + 1: return torch.device(f'cuda:{i}') return torch.device('cpu') # Defined in file: ./chapter_deep-learning-computation/use-gpu.md def try_all_gpus(): #@save """Return all available GPUs, or [cpu(),] if no GPU exists.""" devices = [torch.device(f'cuda:{i}') for i in range(torch.cuda.device_count())] return devices if devices else [torch.device('cpu')] # Defined in file: ./chapter_convolutional-neural-networks/conv-layer.md def corr2d(X, K): #@save """Compute 2D cross-correlation.""" h, w = K.shape Y = d2l.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1)) for i in range(Y.shape[0]): for j in range(Y.shape[1]): Y[i, j] = d2l.reduce_sum((X[i: i + h, j: j + w] * K)) return Y # Defined in file: ./chapter_convolutional-neural-networks/lenet.md def evaluate_accuracy_gpu(net, data_iter, device=None): #@save """Compute the accuracy for a model on a dataset using a GPU.""" net.eval() # Set the model to evaluation mode if not device: device = next(iter(net.parameters())).device # No. of correct predictions, no. of predictions metric = d2l.Accumulator(2) for X, y in data_iter: X, y = X.to(device), y.to(device) metric.add(d2l.accuracy(net(X), y), d2l.size(y)) return metric[0] / metric[1] # Defined in file: ./chapter_convolutional-neural-networks/lenet.md def train_ch6(net, train_iter, test_iter, num_epochs, lr, device=d2l.try_gpu()): """Train a model with a GPU (defined in Chapter 6).""" def init_weights(m): if type(m) == nn.Linear or type(m) == nn.Conv2d: torch.nn.init.xavier_uniform_(m.weight) net.apply(init_weights) print('training on', device) net.to(device) optimizer = torch.optim.SGD(net.parameters(), lr=lr) loss = nn.CrossEntropyLoss() animator = d2l.Animator(xlabel='epoch', xlim=[0, num_epochs], legend=['train loss', 'train acc', 'test acc']) timer = d2l.Timer() for epoch in range(num_epochs): # Sum of training loss, sum of training accuracy, no. 
of examples metric = d2l.Accumulator(3) for i, (X, y) in enumerate(train_iter): timer.start() net.train() optimizer.zero_grad() X, y = X.to(device), y.to(device) y_hat = net(X) l = loss(y_hat, y) l.backward() optimizer.step() with torch.no_grad(): metric.add(l * X.shape[0], d2l.accuracy(y_hat, y), X.shape[0]) timer.stop() train_loss = metric[0]/metric[2] train_acc = metric[1]/metric[2] if (i + 1) % 50 == 0: animator.add(epoch + i / len(train_iter), (train_loss, train_acc, None)) test_acc = evaluate_accuracy_gpu(net, test_iter) animator.add(epoch+1, (None, None, test_acc)) print(f'loss {train_loss:.3f}, train acc {train_acc:.3f}, ' f'test acc {test_acc:.3f}') print(f'{metric[2] * num_epochs / timer.sum():.1f} examples/sec ' f'on {str(device)}') # Defined in file: ./chapter_convolutional-modern/resnet.md class Residual(nn.Module): #@save def __init__(self, input_channels, num_channels, use_1x1conv=False, strides=1): super().__init__() self.conv1 = nn.Conv2d(input_channels, num_channels, kernel_size=3, padding=1, stride=strides) self.conv2 = nn.Conv2d(num_channels, num_channels, kernel_size=3, padding=1) if use_1x1conv: self.conv3 = nn.Conv2d(input_channels, num_channels, kernel_size=1, stride=strides) else: self.conv3 = None self.bn1 = nn.BatchNorm2d(num_channels) self.bn2 = nn.BatchNorm2d(num_channels) self.relu = nn.ReLU(inplace=True) def forward(self, X): Y = F.relu(self.bn1(self.conv1(X))) Y = self.bn2(self.conv2(Y)) if self.conv3: X = self.conv3(X) Y += X return F.relu(Y) # Defined in file: ./chapter_recurrent-neural-networks/text-preprocessing.md d2l.DATA_HUB['time_machine'] = (d2l.DATA_URL + 'timemachine.txt', '090b5e7e70c295757f55df93cb0a180b9691891a') # Defined in file: ./chapter_recurrent-neural-networks/text-preprocessing.md def read_time_machine(): #@save """Load the time machine book into a list of sentences.""" with open(d2l.download('time_machine'), 'r') as f: lines = f.readlines() return [re.sub('[^A-Za-z]+', ' ', line.strip().lower()) for line in lines] # Defined in file: ./chapter_recurrent-neural-networks/text-preprocessing.md def tokenize(lines, token='word'): #@save """Split sentences into word or char tokens.""" if token == 'word': return [line.split(' ') for line in lines] elif token == 'char': return [list(line) for line in lines] else: print('ERROR: unknown token type '+token) # Defined in file: ./chapter_recurrent-neural-networks/text-preprocessing.md class Vocab: #@save def __init__(self, tokens, min_freq=0, reserved_tokens=None): if reserved_tokens is None: reserved_tokens = [] # Sort according to frequencies counter = count_corpus(tokens) self.token_freqs = sorted(counter.items(), key=lambda x: x[0]) self.token_freqs.sort(key=lambda x: x[1], reverse=True) self.unk, uniq_tokens = 0, ['<unk>'] + reserved_tokens uniq_tokens += [token for token, freq in self.token_freqs if freq >= min_freq and token not in uniq_tokens] self.idx_to_token, self.token_to_idx = [], dict() for token in uniq_tokens: self.idx_to_token.append(token) self.token_to_idx[token] = len(self.idx_to_token) - 1 def __len__(self): return len(self.idx_to_token) def __getitem__(self, tokens): if not isinstance(tokens, (list, tuple)): return self.token_to_idx.get(tokens, self.unk) return [self.__getitem__(token) for token in tokens] def to_tokens(self, indices): if not isinstance(indices, (list, tuple)): return self.idx_to_token[indices] return [self.idx_to_token[index] for index in indices] # Defined in file: ./chapter_recurrent-neural-networks/text-preprocessing.md def count_corpus(sentences): #@save # 
Flatten a list of token lists into a list of tokens tokens = [tk for line in sentences for tk in line] return collections.Counter(tokens) # Defined in file: ./chapter_recurrent-neural-networks/text-preprocessing.md def load_corpus_time_machine(max_tokens=-1): #@save lines = read_time_machine() tokens = tokenize(lines, 'char') vocab = Vocab(tokens) corpus = [vocab[tk] for line in tokens for tk in line] if max_tokens > 0: corpus = corpus[:max_tokens] return corpus, vocab # Defined in file: ./chapter_recurrent-neural-networks/language-models-and-dataset.md def seq_data_iter_random(corpus, batch_size, num_steps): #@save # Offset the iterator over the data for uniform starts corpus = corpus[random.randint(0, num_steps):] # Subtract 1 extra since we need to account for label num_examples = ((len(corpus) - 1) // num_steps) example_indices = list(range(0, num_examples * num_steps, num_steps)) random.shuffle(example_indices) def data(pos): # This returns a sequence of length `num_steps` starting from `pos` return corpus[pos: pos + num_steps] # Discard half empty batches num_batches = num_examples // batch_size for i in range(0, batch_size * num_batches, batch_size): # `batch_size` indicates the random examples read each time batch_indices = example_indices[i:(i+batch_size)] X = [data(j) for j in batch_indices] Y = [data(j + 1) for j in batch_indices] yield d2l.tensor(X), d2l.tensor(Y) # Defined in file: ./chapter_recurrent-neural-networks/language-models-and-dataset.md def seq_data_iter_consecutive(corpus, batch_size, num_steps): #@save # Offset for the iterator over the data for uniform starts offset = random.randint(0, num_steps) # Slice out data: ignore `num_steps` and just wrap around num_indices = ((len(corpus) - offset - 1) // batch_size) * batch_size Xs = d2l.tensor(corpus[offset:offset+num_indices]) Ys = d2l.tensor(corpus[offset+1:offset+1+num_indices]) Xs, Ys = Xs.reshape(batch_size, -1), Ys.reshape(batch_size, -1) num_batches = Xs.shape[1] // num_steps for i in range(0, num_batches * num_steps, num_steps): X = Xs[:, i:(i+num_steps)] Y = Ys[:, i:(i+num_steps)] yield X, Y # Defined in file: ./chapter_recurrent-neural-networks/language-models-and-dataset.md class SeqDataLoader: #@save """An iterator to load sequence data.""" def __init__(self, batch_size, num_steps, use_random_iter, max_tokens): if use_random_iter: self.data_iter_fn = d2l.seq_data_iter_random else: self.data_iter_fn = d2l.seq_data_iter_consecutive self.corpus, self.vocab = d2l.load_corpus_time_machine(max_tokens) self.batch_size, self.num_steps = batch_size, num_steps def __iter__(self): return self.data_iter_fn(self.corpus, self.batch_size, self.num_steps) # Defined in file: ./chapter_recurrent-neural-networks/language-models-and-dataset.md def load_data_time_machine(batch_size, num_steps, #@save use_random_iter=False, max_tokens=10000): data_iter = SeqDataLoader( batch_size, num_steps, use_random_iter, max_tokens) return data_iter, data_iter.vocab # Defined in file: ./chapter_recurrent-neural-networks/rnn-scratch.md class RNNModelScratch: #@save """A RNN Model based on scratch implementations.""" def __init__(self, vocab_size, num_hiddens, device, get_params, init_state, forward): self.vocab_size, self.num_hiddens = vocab_size, num_hiddens self.params = get_params(vocab_size, num_hiddens, device) self.init_state, self.forward_fn = init_state, forward def __call__(self, X, state): X = F.one_hot(X.T.long(), self.vocab_size).type(torch.float32) return self.forward_fn(X, state, self.params) def begin_state(self, batch_size, 
device): return self.init_state(batch_size, self.num_hiddens, device) # Defined in file: ./chapter_recurrent-neural-networks/rnn-scratch.md def predict_ch8(prefix, num_predicts, model, vocab, device): #@save state = model.begin_state(batch_size=1, device=device) outputs = [vocab[prefix[0]]] get_input = lambda: torch.tensor( [outputs[-1]], device=device).reshape(1, 1) for y in prefix[1:]: # Warmup state with prefix _, state = model(get_input(), state) outputs.append(vocab[y]) for _ in range(num_predicts): # Predict num_predicts steps Y, state = model(get_input(), state) outputs.append(int(Y.argmax(dim=1).reshape(1))) return ''.join([vocab.idx_to_token[i] for i in outputs]) # Defined in file: ./chapter_recurrent-neural-networks/rnn-scratch.md def grad_clipping(model, theta): #@save if isinstance(model, nn.Module): params = [p for p in model.parameters() if p.requires_grad] else: params = model.params norm = torch.sqrt(sum(torch.sum((p.grad ** 2)) for p in params)) if norm > theta: for param in params: param.grad[:] *= theta / norm # Defined in file: ./chapter_recurrent-neural-networks/rnn-scratch.md def train_epoch_ch8(model, train_iter, loss, updater, device, #@save use_random_iter): state, timer = None, d2l.Timer() metric = d2l.Accumulator(2) # loss_sum, num_examples for X, Y in train_iter: if state is None or use_random_iter: # Initialize state when either it is the first iteration or # using random sampling. state = model.begin_state(batch_size=X.shape[0], device=device) else: for s in state: s.detach_() y = Y.T.reshape(-1) X, y = X.to(device), y.to(device) py, state = model(X, state) l = loss(py, y.long()).mean() if isinstance(updater, torch.optim.Optimizer): updater.zero_grad() l.backward() grad_clipping(model, 1) updater.step() else: l.backward() grad_clipping(model, 1) updater(batch_size=1) # Since used mean already metric.add(l * d2l.size(y), d2l.size(y)) return math.exp(metric[0] / metric[1]), metric[1] / timer.stop() # Defined in file: ./chapter_recurrent-neural-networks/rnn-scratch.md def train_ch8(model, train_iter, vocab, lr, num_epochs, device, use_random_iter=False): # Initialize loss = nn.CrossEntropyLoss() animator = d2l.Animator(xlabel='epoch', ylabel='perplexity', legend=['train'], xlim=[1, num_epochs]) if isinstance(model, nn.Module): trainer = torch.optim.SGD(model.parameters(),
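# --- Illustrative sketch (not part of the d2l module above) -----------------
# grad_clipping above rescales all parameter gradients in place so that their
# global L2 norm does not exceed theta, which train_epoch_ch8 uses to keep RNN
# training stable.  A small self-contained demonstration with plain PyTorch
# (no d2l helpers assumed):
import torch
from torch import nn


def clip_gradients(params, theta):
    """Rescale gradients in place so their global L2 norm is at most theta."""
    norm = torch.sqrt(sum(torch.sum(p.grad ** 2) for p in params))
    if norm > theta:
        for p in params:
            p.grad[:] *= theta / norm
    return norm  # the norm as it was before clipping


if __name__ == "__main__":
    layer = nn.Linear(4, 1)
    loss = layer(torch.randn(8, 4)).pow(2).sum()
    loss.backward()
    params = list(layer.parameters())
    before = clip_gradients(params, theta=1.0)
    after = torch.sqrt(sum(torch.sum(p.grad ** 2) for p in params))
    print("norm before: %.3f, after: %.3f" % (before.item(), after.item()))
# -----------------------------------------------------------------------------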
""" Base classes for collections of samples. | Copyright 2017-2020, Voxel51, Inc. | `voxel51.com <https://voxel51.com/>`_ | """ import inspect import logging import os import random import string import eta.core.serial as etas import eta.core.utils as etau from fiftyone.core.aggregations import Aggregation import fiftyone.core.fields as fof import fiftyone.core.labels as fol import fiftyone.core.media as fom from fiftyone.core.odm.frame import DatasetFrameSampleDocument from fiftyone.core.odm.sample import ( DatasetSampleDocument, default_sample_fields, ) import fiftyone.core.stages as fos import fiftyone.core.utils as fou foua = fou.lazy_import("fiftyone.utils.annotations") foud = fou.lazy_import("fiftyone.utils.data") logger = logging.getLogger(__name__) def _make_registrar(): """Makes a decorator that keeps a registry of all functions decorated by it. Usage:: my_decorator = _make_registrar() my_decorator.all # dictionary mapping names to functions """ registry = {} def registrar(func): registry[func.__name__] = func # Normally a decorator returns a wrapped function, but here we return # `func` unmodified, after registering it return func registrar.all = registry return registrar # Keeps track of all view stage methods view_stage = _make_registrar() class SampleCollection(object): """Abstract class representing an ordered collection of :class:`fiftyone.core.sample.Sample` instances in a :class:`fiftyone.core.dataset.Dataset`. """ def __str__(self): return repr(self) def __repr__(self): return self.summary() def __bool__(self): return len(self) > 0 def __len__(self): raise NotImplementedError("Subclass must implement __len__()") def __contains__(self, sample_id): try: self[sample_id] except KeyError: return False return True def __getitem__(self, sample_id_or_slice): raise NotImplementedError("Subclass must implement __getitem__()") def __iter__(self): return self.iter_samples() @property def name(self): """The name of the collection.""" raise NotImplementedError("Subclass must implement name") @property def media_type(self): """The media type of the collection.""" raise NotImplementedError("Subclass must implement media_type") @property def info(self): """The :meth:`fiftyone.core.dataset.Dataset.info` dict of the dataset underlying the collection. """ raise NotImplementedError("Subclass must implement info") def _build_aggregation(self, aggregations): scalar_result = isinstance(aggregations, Aggregation) if scalar_result: aggregations = [aggregations] elif not aggregations: return False, [], None # pylint: disable=no-member schema = self.get_field_schema() if self.media_type == fom.VIDEO: frame_schema = self.get_frame_field_schema() else: frame_schema = None pipelines = {} for agg in aggregations: if not isinstance(agg, Aggregation): raise TypeError("'%s' is not a an Aggregation" % agg.__class__) field = agg._get_output_field(self) pipelines[field] = agg._to_mongo( self._dataset, schema, frame_schema ) result_d = {} return scalar_result, aggregations, [{"$facet": pipelines}] def _process_aggregations(self, aggregations, result, scalar_result): results = [] for agg in aggregations: try: results.append( agg._get_result(result[agg._get_output_field(self)][0]) ) except: results.append(agg._get_default_result()) return results[0] if scalar_result else results def aggregate(self, aggregations, _attach_frames=True): """Aggregates one or more :class:`fiftyone.core.aggregations.Aggregation` instances. 
Note that it is best practice to group aggregations into a single call to :meth:`aggregate() <aggregate>`, as this will be more efficient than performing multiple aggregations in series. Args: aggregations: an :class:`fiftyone.core.aggregations.Aggregation` or iterable of :class:`<fiftyone.core.aggregations.Aggregation>` instances Returns: an :class:`fiftyone.core.aggregations.AggregationResult` or list of :class:`fiftyone.core.aggregations.AggregationResult` instances corresponding to the input aggregations """ scalar_result, aggregations, facets = self._build_aggregation( aggregations ) if len(aggregations) == 0: return [] # pylint: disable=no-member pipeline = self._pipeline( pipeline=facets, attach_frames=_attach_frames ) try: # pylint: disable=no-member result = next(self._dataset._sample_collection.aggregate(pipeline)) except StopIteration: pass return self._process_aggregations(aggregations, result, scalar_result) async def _async_aggregate(self, coll, aggregations): scalar_result, aggregations, facets = self._build_aggregation( aggregations ) if not aggregations: return [] # pylint: disable=no-member pipeline = self._pipeline(pipeline=facets) try: # pylint: disable=no-member result = await coll.aggregate(pipeline).to_list(1) result = result[0] except StopIteration: pass return self._process_aggregations(aggregations, result, scalar_result) def summary(self): """Returns a string summary of the collection. Returns: a string summary """ raise NotImplementedError("Subclass must implement summary()") def first(self): """Returns the first sample in the collection. Returns: a :class:`fiftyone.core.sample.Sample` or :class:`fiftyone.core.sample.SampleView` Raises: ValueError: if the collection is empty """ try: return next(iter(self)) except StopIteration: raise ValueError("%s is empty" % self.__class__.__name__) def last(self): """Returns the last sample in the collection. Returns: a :class:`fiftyone.core.sample.Sample` or :class:`fiftyone.core.sample.SampleView` Raises: ValueError: if the collection is empty """ return self[-1:].first() def head(self, num_samples=3): """Returns a list of the first few samples in the collection. If fewer than ``num_samples`` samples are in the collection, only the available samples are returned. Args: num_samples (3): the number of samples Returns: a list of :class:`fiftyone.core.sample.Sample` objects """ return [s for s in self[:num_samples]] def tail(self, num_samples=3): """Returns a list of the last few samples in the collection. If fewer than ``num_samples`` samples are in the collection, only the available samples are returned. Args: num_samples (3): the number of samples Returns: a list of :class:`fiftyone.core.sample.Sample` objects """ return [s for s in self[-num_samples:]] def iter_samples(self): """Returns an iterator over the samples in the collection. Returns: an iterator over :class:`fiftyone.core.sample.Sample` or :class:`fiftyone.core.sample.SampleView` instances """ raise NotImplementedError("Subclass must implement iter_samples()") def get_field_schema( self, ftype=None, embedded_doc_type=None, include_private=False ): """Returns a schema dictionary describing the fields of the samples in the collection. Args: ftype (None): an optional field type to which to restrict the returned schema. Must be a subclass of :class:`fiftyone.core.fields.Field` embedded_doc_type (None): an optional embedded document type to which to restrict the returned schema. 
Must be a subclass of :class:`fiftyone.core.odm.BaseEmbeddedDocument` include_private (False): whether to include fields that start with `_` in the returned schema Returns: a dictionary mapping field names to field types """ raise NotImplementedError("Subclass must implement get_field_schema()") def get_frame_field_schema( self, ftype=None, embedded_doc_type=None, include_private=False ): """Returns a schema dictionary describing the fields of the frames of the samples in the collection. Only applicable for video collections. Args: ftype (None): an optional field type to which to restrict the returned schema. Must be a subclass of :class:`fiftyone.core.fields.Field` embedded_doc_type (None): an optional embedded document type to which to restrict the returned schema. Must be a subclass of :class:`fiftyone.core.odm.BaseEmbeddedDocument` include_private (False): whether to include fields that start with `_` in the returned schema Returns: a dictionary mapping field names to field types, or ``None`` if the collection is not a video collection """ raise NotImplementedError( "Subclass must implement get_frame_field_schema()" ) def make_unique_field_name(self, root=""): """Makes a unique field name with the given root name for the collection. Args: root (""): an optional root for the output field name Returns: the field name """ if not root: root = _get_random_characters(6) fields = self.get_field_schema() field_name = root if field_name in fields: field_name += "_" + _get_random_characters(6) while field_name in fields: field_name += _get_random_characters(1) return field_name def validate_fields_exist(self, field_or_fields): """Validates that the collection has fields with the given names. If ``field_or_fields`` contains an embedded field name such as ``field_name.document.field``, only the root ``field_name`` is checked for existence. Args: field_or_fields: a field name or iterable of field names Raises: ValueError: if one or more of the fields do not exist """ if etau.is_str(field_or_fields): field_or_fields = [field_or_fields] if self.media_type == fom.VIDEO: frame_fields = list( filter(lambda n: n.startswith("frames."), field_or_fields) ) field_or_fields = list( filter(lambda n: not n.startswith("frames."), field_or_fields) ) else: frame_fields = [] schema = self.get_field_schema(include_private=True) default_fields = set( default_sample_fields( DatasetSampleDocument, include_private=True, include_id=True ) ) for field in field_or_fields: # We only validate that the root field exists field_name = field.split(".", 1)[0] if field_name not in schema and field_name not in default_fields: raise ValueError("Field '%s' does not exist" % field_name) if self.media_type != fom.VIDEO: return frame_schema = self.get_frame_field_schema(include_private=True) default_frame_fields = set( default_sample_fields( DatasetFrameSampleDocument, include_private=True, include_id=True, ) ) for field in frame_fields: # We only validate that the root field exists field_name = field.split(".", 2)[1] if ( field_name not in frame_schema and field_name not in default_frame_fields ): raise ValueError("Field '%s' does not exist" % field_name) def validate_field_type( self, field_name, ftype, embedded_doc_type=None, subfield=None ): """Validates that the collection has a field of the given type. Args: field_name: the field name ftype: the expected field type. Must be a subclass of :class:`fiftyone.core.fields.Field` embedded_doc_type (None): the :class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the field. 
Used only when ``ftype`` is an embedded :class:`fiftyone.core.fields.EmbeddedDocumentField` subfield (None): the type of the contained field. Used only when ``ftype`` is a :class:`fiftyone.core.fields.ListField` or :class:`fiftyone.core.fields.DictField` Raises: ValueError: if the field does not exist or does not have the expected type """ schema = self.get_field_schema() frames = self.media_type == fom.VIDEO and field_name.startswith( "frames." ) if frames: field_name = field_name[len("frames.") :] if frames: frame_schema = self.get_frame_field_schema() if field_name not in frame_schema: raise ValueError("Field '%s' does not exist" % field_name) field = frame_schema[field_name] else: field = schema[field_name] if embedded_doc_type is not None: if not isinstance(field, fof.EmbeddedDocumentField) or ( field.document_type is not embedded_doc_type ): raise ValueError( "Field '%s' must be an instance of %s; found %s" % (field_name, ftype(embedded_doc_type), field) ) elif subfield is not None: if not isinstance(field, (fof.ListField, fof.DictField)): raise ValueError( "Field type
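# --- Illustrative usage sketch (not part of this module) --------------------
# The aggregate() docstring above recommends batching aggregations into a
# single call so that only one database pipeline is executed.  A minimal
# hedged example, assuming fiftyone is installed and that the Count and
# Distinct aggregations from fiftyone.core.aggregations are available; the
# file paths below are hypothetical and used only as sample metadata:
import fiftyone as fo
from fiftyone.core.aggregations import Count, Distinct

dataset = fo.Dataset()
dataset.add_sample(fo.Sample(filepath="/tmp/img1.png", tags=["train"]))
dataset.add_sample(fo.Sample(filepath="/tmp/img2.png", tags=["test"]))

# A single Aggregation yields a scalar result ...
num_samples = dataset.aggregate(Count())
# ... while a list of Aggregations yields a list of results from one pipeline.
num_again, tags = dataset.aggregate([Count(), Distinct("tags")])
print(num_samples, num_again, tags)
# -----------------------------------------------------------------------------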
_preload_content=params.get('_preload_content', True), _request_timeout=params.get('_request_timeout'), collection_formats=collection_formats) def search_device_datafiles(self, uri, **kwargs): # noqa: E501 """Search device datafiles descriptions # noqa: E501 # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.search_device_datafiles(uri, async_req=True) >>> result = thread.get() :param async_req bool :param str uri: Device URI (required) :param str authorization: Authentication token (required) :param str rdf_type: Search by rdf type uri :param str start_date: Search by minimal date :param str end_date: Search by maximal date :param str timezone: Precise the timezone corresponding to the given dates :param list[str] experiment: Search by experiments :param list[str] scientific_objects: Search by object uris list :param list[str] provenances: Search by provenance uris list :param str metadata: Search by metadata :param list[str] order_by: List of fields to sort as an array of fieldName=asc|desc :param int page: Page number :param int page_size: Page size :param str accept_language: Request accepted language :return: list[DataGetDTO] If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('async_req'): return self.search_device_datafiles_with_http_info(uri, **kwargs) # noqa: E501 else: (data) = self.search_device_datafiles_with_http_info(uri, **kwargs) # noqa: E501 return data def search_device_datafiles_with_http_info(self, uri, **kwargs): # noqa: E501 """Search device datafiles descriptions # noqa: E501 # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.search_device_datafiles_with_http_info(uri, async_req=True) >>> result = thread.get() :param async_req bool :param str uri: Device URI (required) :param str authorization: Authentication token (required) :param str rdf_type: Search by rdf type uri :param str start_date: Search by minimal date :param str end_date: Search by maximal date :param str timezone: Precise the timezone corresponding to the given dates :param list[str] experiment: Search by experiments :param list[str] scientific_objects: Search by object uris list :param list[str] provenances: Search by provenance uris list :param str metadata: Search by metadata :param list[str] order_by: List of fields to sort as an array of fieldName=asc|desc :param int page: Page number :param int page_size: Page size :param str accept_language: Request accepted language :return: list[DataGetDTO] If the method is called asynchronously, returns the request thread. 
""" all_params = ['uri', 'rdf_type', 'start_date', 'end_date', 'timezone', 'experiment', 'scientific_objects', 'provenances', 'metadata', 'order_by', 'page', 'page_size', ] # noqa: E501 all_params.append('async_req') all_params.append('_return_http_data_only') all_params.append('_preload_content') all_params.append('_request_timeout') params = locals() for key, val in six.iteritems(params['kwargs']): if key not in all_params: raise TypeError( "Got an unexpected keyword argument '%s'" " to method search_device_datafiles" % key ) params[key] = val del params['kwargs'] # verify the required parameter 'uri' is set if ('uri' not in params or params['uri'] is None): raise ValueError("Missing the required parameter `uri` when calling `search_device_datafiles`") # noqa: E501 if 'page' in params and params['page'] < 0: # noqa: E501 raise ValueError("Invalid value for parameter `page` when calling `search_device_datafiles`, must be a value greater than or equal to `0`") # noqa: E501 if 'page_size' in params and params['page_size'] < 0: # noqa: E501 raise ValueError("Invalid value for parameter `page_size` when calling `search_device_datafiles`, must be a value greater than or equal to `0`") # noqa: E501 collection_formats = {} path_params = {} if 'uri' in params: path_params['uri'] = params['uri'] # noqa: E501 query_params = [] if 'rdf_type' in params: query_params.append(('rdf_type', params['rdf_type'])) # noqa: E501 if 'start_date' in params: query_params.append(('start_date', params['start_date'])) # noqa: E501 if 'end_date' in params: query_params.append(('end_date', params['end_date'])) # noqa: E501 if 'timezone' in params: query_params.append(('timezone', params['timezone'])) # noqa: E501 if 'experiment' in params: query_params.append(('experiment', params['experiment'])) # noqa: E501 collection_formats['experiment'] = 'multi' # noqa: E501 if 'scientific_objects' in params: query_params.append(('scientific_objects', params['scientific_objects'])) # noqa: E501 collection_formats['scientific_objects'] = 'multi' # noqa: E501 if 'provenances' in params: query_params.append(('provenances', params['provenances'])) # noqa: E501 collection_formats['provenances'] = 'multi' # noqa: E501 if 'metadata' in params: query_params.append(('metadata', params['metadata'])) # noqa: E501 if 'order_by' in params: query_params.append(('order_by', params['order_by'])) # noqa: E501 collection_formats['order_by'] = 'multi' # noqa: E501 if 'page' in params: query_params.append(('page', params['page'])) # noqa: E501 if 'page_size' in params: query_params.append(('page_size', params['page_size'])) # noqa: E501 header_params = {} #if 'authorization' in params: # header_params['Authorization'] = params['authorization'] # noqa: E501 #if 'accept_language' in params: # header_params['Accept-Language'] = params['accept_language'] # noqa: E501 form_params = [] local_var_files = {} body_params = None # HTTP header `Accept` header_params['Accept'] = self.api_client.select_header_accept( ['application/json']) # noqa: E501 # HTTP header `Content-Type` header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501 ['application/json']) # noqa: E501 # Authentication setting auth_settings = [] # noqa: E501 return self.api_client.call_api( '/core/devices/{uri}/datafiles', 'GET', path_params, query_params, header_params, body=body_params, post_params=form_params, files=local_var_files, response_type='list[DataGetDTO]', # noqa: E501 auth_settings=auth_settings, async_req=params.get('async_req'), 
_return_http_data_only=params.get('_return_http_data_only'), _preload_content=params.get('_preload_content', True), _request_timeout=params.get('_request_timeout'), collection_formats=collection_formats) def search_devices(self, **kwargs): # noqa: E501 """Search devices # noqa: E501 # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.search_devices(async_req=True) >>> result = thread.get() :param async_req bool :param str authorization: Authentication token (required) :param str rdf_type: RDF type filter :param bool include_subtypes: Set this param to true when filtering on rdf_type to also retrieve sub-types :param str name: Regex pattern for filtering by name :param int year: Search by year :param date existence_date: Date to filter device existence :param str brand: Regex pattern for filtering by brand :param str model: Regex pattern for filtering by model :param str serial_number: Regex pattern for filtering by serial number :param str metadata: Search by metadata :param list[str] order_by: List of fields to sort as an array of fieldName=asc|desc :param int page: Page number :param int page_size: Page size :param str accept_language: Request accepted language :return: list[DeviceGetDTO] If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('async_req'): return self.search_devices_with_http_info(**kwargs) # noqa: E501 else: (data) = self.search_devices_with_http_info(**kwargs) # noqa: E501 return data def search_devices_with_http_info(self, **kwargs): # noqa: E501 """Search devices # noqa: E501 # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.search_devices_with_http_info(async_req=True) >>> result = thread.get() :param async_req bool :param str authorization: Authentication token (required) :param str rdf_type: RDF type filter :param bool include_subtypes: Set this param to true when filtering on rdf_type to also retrieve sub-types :param str name: Regex pattern for filtering by name :param int year: Search by year :param date existence_date: Date to filter device existence :param str brand: Regex pattern for filtering by brand :param str model: Regex pattern for filtering by model :param str serial_number: Regex pattern for filtering by serial number :param str metadata: Search by metadata :param list[str] order_by: List of fields to sort as an array of fieldName=asc|desc :param int page: Page number :param int page_size: Page size :param str accept_language: Request accepted language :return: list[DeviceGetDTO] If the method is called asynchronously, returns the request thread. 
""" all_params = ['rdf_type', 'include_subtypes', 'name', 'year', 'existence_date', 'brand', 'model', 'serial_number', 'metadata', 'order_by', 'page', 'page_size', ] # noqa: E501 all_params.append('async_req') all_params.append('_return_http_data_only') all_params.append('_preload_content') all_params.append('_request_timeout') params = locals() for key, val in six.iteritems(params['kwargs']): if key not in all_params: raise TypeError( "Got an unexpected keyword argument '%s'" " to method search_devices" % key ) params[key] = val del params['kwargs'] if 'year' in params and params['year'] > 10000: # noqa: E501 raise ValueError("Invalid value for parameter `year` when calling `search_devices`, must be a value less than or equal to `10000`") # noqa: E501 if 'year' in params and params['year'] < 999: # noqa: E501 raise ValueError("Invalid value for parameter `year` when calling `search_devices`, must be a value greater than or equal to `999`") # noqa: E501 if 'page' in params and params['page'] < 0: # noqa: E501 raise ValueError("Invalid value for parameter `page` when calling `search_devices`, must be a value greater than or equal to `0`") # noqa: E501 if 'page_size' in params and params['page_size'] < 0: # noqa: E501 raise ValueError("Invalid value for parameter `page_size` when calling `search_devices`, must be a value greater than or equal to `0`") # noqa: E501 collection_formats = {} path_params
#!/usr/bin/env python2 from __future__ import print_function import pygame from pygame.locals import * import sys import random import copy WINDOW_WIDTH = 640 WINDOW_HEIGHT = 480 BOARD_WIDTH = 7 BOARD_HEIGHT = 7 ROTATE_TIME = 3 BLACK = (0, 0, 0) WHITE = (255, 255, 255) RED = (250, 0, 0) GREEN = (0, 255, 0) BLUE = (50, 50, 155) YELLOW = (220, 220, 0) DARK_YELLOW = (200, 200, 0) DARK_GREEN = (0, 200, 0) VICTORY_LINE_COLOR = DARK_GREEN VICTORY_LINE_SIZE = 5 PLAYER_COLORS = [0, BLACK, RED] BG_COLOR = BLUE MAX_FPS = 30 class AI(object): def __init__(self, team): self.team = team def check_victory(self, board, x, rotate): """Add a piece to the board at column x, and check if it would give victory. Right afterwards, remove the piece. Returns a list of the form (winning team, line) where line is of the form [(x1, y1), (x2, y2), ...].""" # TODO: detect the enemy team instead of assuming it's 1 enemy_team = 1 y = board.lowest_in_column(x) victory = [] if y != -1: if rotate: # rotation isn't reversible, so we need to make copies of the # board to preserve the original for team in (self.team, enemy_team): tmp_board = copy.deepcopy(board) tmp_board.grid[x][y] = team tmp_board.rotate() tmp_board.make_pieces_fall() victory.extend(tmp_board.check_victory()) else: # not rotating board.grid[x][y] = self.team victory.extend(board.check_victory()) board.grid[x][y] = enemy_team victory.extend(board.check_victory()) board.grid[x][y] = 0 return victory def get_move(self, board, turns_til_rotation): """Returns the column # to drop the piece in.""" # if the board is about to rotate, we want the AI to simulate that and # catch 4-in-a-rows that result from this. rotate = turns_til_rotation == 1 board = copy.deepcopy(board) # see if any team could win by dropping a piece into a column line_completions = [self.check_victory(board, x, rotate) for x in range(board.width)] # first of all, if we can make a 4-in-a-row, do it! 
for x, c in enumerate(line_completions): if self.team in c: return x # blocking an opponent's 4-in-a-row is the 2nd highest piority for x, c in enumerate(line_completions): if c: return x # otherwise, just go in a random spot return random.randint(0, board.width - 1) class Board(object): """Two-dimensional Connect 4 board.""" def __init__(self, width, height): self.width = width self.height = height self.grid = [[0 for y in range(BOARD_HEIGHT)] for x in range(BOARD_WIDTH)] board_size = min(WINDOW_WIDTH, WINDOW_HEIGHT) * 14 / 16 x = (WINDOW_WIDTH - board_size) / 2 y = (WINDOW_HEIGHT - board_size) / 2 self.rect = Rect((x, y), (board_size, board_size)) self.image = pygame.Surface( (board_size, board_size)).convert() self.image.set_colorkey(BG_COLOR) def rotate(self): old_grid = self.grid self.grid = [[old_grid[self.height - y - 1][x] for y in range(BOARD_HEIGHT)] for x in range(BOARD_WIDTH)] def make_pieces_fall(self): for y in range(self.height - 2, -1, -1): for x in range(self.width): cy = y while cy < self.height - 1 and not self.grid[x][cy + 1] \ and self.grid[x][cy]: self.grid[x][cy + 1] = self.grid[x][cy] self.grid[x][cy] = 0 cy += 1 def iterate_pieces_falling(self): """Yields the position and destination of each piece falling""" for y in range(self.height - 2, -1, -1): for x in range(self.width): cy = y while cy < self.height - 1 and not self.grid[x][cy + 1] \ and self.grid[x][cy]: self.grid[x][cy + 1] = self.grid[x][cy] self.grid[x][cy] = 0 cy += 1 if cy != y: yield ((x, y), (x, cy)) def get_string(self): printboard = [''] * len(self.grid[0]) for x in range(self.width): for y in range(self.height): printboard[y] += str(self.grid[x][y]) return '\n'.join(printboard) def lowest_in_column(self, column): """Returns the y-coordinate for the lowest empty position in the given column. Returns -1 if the column is full""" for y in range(self.height - 1, -1, -1): if self.grid[column][y] == 0: return y return -1 def drop_piece(self, column, player): y = self.lowest_in_column(column) self.grid[column][y] = player def column_blocked(self, column): if self.grid[column][0] != 0: return True return False def get_column_relative_x(self, column_number): radius = self.get_circle_radius() return column_number * self.rect.width / BOARD_WIDTH + radius + 4 def get_row_relative_y(self, row_number): radius = self.get_circle_radius() return row_number * self.rect.height / BOARD_HEIGHT + radius + 2 def get_circle_radius(self): return self.rect.height / BOARD_HEIGHT / 2 - 5 def clear_image(self): self.image.fill(YELLOW) radius = self.get_circle_radius() for x in range(self.width): for y in range(self.height): pygame.draw.circle( self.image, BG_COLOR, ( self.get_column_relative_x(x), self.get_row_relative_y(y)), radius) def update_image(self): self.image.fill(YELLOW) radius = self.get_circle_radius() for x in range(self.width): for y in range(self.height): if self.grid[x][y] == 0: # empty spot color = BG_COLOR elif self.grid[x][y] == 1: color = BLACK elif self.grid[x][y] == 2: color = RED pygame.draw.circle( self.image, color, ( self.get_column_relative_x(x), self.get_row_relative_y(y)), radius) def check_victory(self): """ Yields (winning team, list of positions in 4-in-a-row) list of positions is in the form [(x1, y1), (x2, y2), ...] 
""" # for each position in the grid, checks lines extending from it # pointing right, down, down-right, and down-left for x in range(self.width): for y in range(self.height): # ignore positions that are empty if self.grid[x][y] == 0: continue # based on where the position is, determine which lines can be # drawn that fit inside the grid lines = [] if x < self.width - 3: # check horizontal line lines.append([(x1, y) for x1 in range(x, x + 4)]) if y < self.height - 3: # check vertical line lines.append([(x, y1) for y1 in range(y, y + 4)]) if x < self.width - 3 and y < self.height - 3: # check diagonal \ line lines.append([(x + d, y + d) for d in range(4)]) if x >= 3 and y < self.height - 3: # check diagonal / line lines.append([(x - d, y + d) for d in range(4)]) # go through each line, and, if all pieces in the line match, # yield the team who won the line and the line itself for line in lines: match = True for (x1, y1) in line[1:]: if self.grid[x][y] != self.grid[x1][y1]: match = False break if match: yield (self.grid[x][y], line) class Game(object): """Singleton that manages input, rendering, and game logic.""" def __init__(self, screen, ai=True): self.screen = screen if ai: self.ai = AI(2) else: self.ai = None self.timer = pygame.time.Clock() self.board = Board(BOARD_WIDTH, BOARD_HEIGHT) self.bg = pygame.Surface((WINDOW_WIDTH, WINDOW_HEIGHT)) self.bg.convert() self.bg.fill(BG_COLOR) self.board.update_image() self.num_pieces_dropped = 0 self.winner = None self.player_points = [0, 0, 0] self.victory_lines = [] self.font = pygame.font.Font(None, 50) # make a table image and fill it with squares self.table_image = pygame.surface.Surface((1000, 1000)) self.table_image.fill((0, 128, 200)) for n in range(100): c = random.randint(64, 127) x = random.randint(0, 900) y = random.randint(0, 900) pygame.draw.rect( self.table_image, (0, c, c * 2), (x, y, 100, 100), 0) self.column_selected = 0 self.active_player = 1 # player 1 and 2 alternate turns draw_circle_window_icon(BLACK) def run(self): """Runs the game. 
Limits FPS, handles input, and renders, in a loop.""" while True: self.timer.tick(MAX_FPS) self.handle_input() self.render_all() def get_column_clicked(self, position): """Takes a mouse position and returns the column of the board that it was in.""" x = position[0] width_per_row = self.board.rect.width / BOARD_WIDTH x_relative_to_board = x - self.board.rect.left return int(round(x_relative_to_board) / width_per_row - 1 / 2) def animate_circle_movement(self, x1, y1, x2, y2, num_seconds=1, color=BLACK): """Animates a circle smoothly moving from one position to another.""" g = self.iterate_circle_movement(x1, y1, x2, y2, num_seconds, color) for d in range(int(MAX_FPS * num_seconds)): pygame.event.get() self.render_background() g.next() self.render_board() pygame.display.flip() self.timer.tick(MAX_FPS) def iterate_circle_movement(self, x1, y1, x2, y2, num_seconds=1, color=BLACK): """Animates a circle smoothly moving from one position to another, one iteration at a time.""" # to be called once per frame for d in range(int(MAX_FPS * num_seconds)): x = x1 + (x2 - x1) * d / int(MAX_FPS * num_seconds) y = y1 + (y2 - y1) * d / int(MAX_FPS * num_seconds) pygame.draw.circle(self.screen, color, (x, y), self.board.get_circle_radius()) yield def animate_drop_piece(self): """Animates a piece in the selected column falling to the lowest empty board position in that column.""" starting_y = self.board.rect.top - 4 x = self.board.rect.left + \ self.board.get_column_relative_x(self.column_selected) end_y = self.board.rect.top + \ self.board.get_row_relative_y( self.board.lowest_in_column(self.column_selected)) time = (end_y - starting_y) / self.board.get_circle_radius() / 50.0 self.animate_circle_movement(x, starting_y, x, end_y, time, PLAYER_COLORS[self.active_player]) def do_ai_turn(self): """Gets a move from the AI, and drops its piece.""" time_til_rotation = ROTATE_TIME - self.num_pieces_dropped % ROTATE_TIME self.column_selected = self.ai.get_move(self.board, time_til_rotation) self.drop_piece() def drop_piece(self): """Drops a game piece for the active team into the currently-selected column, animates
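The AI in the Connect 4 code above ranks candidate columns in three tiers: take an immediate four-in-a-row if one exists, otherwise drop into any column whose completion list is non-empty (blocking the opponent), otherwise pick a random column. The stand-alone sketch below isolates that priority logic without pygame or the rotating-board handling; the simplified line_completions input is an assumption that mirrors check_victory's output reduced to team numbers.

import random

def choose_column(line_completions, my_team, board_width):
    """Pick a column using the same three-tier priority as AI.get_move above.

    line_completions[x] is the list of teams that would get four-in-a-row
    if a piece were dropped into column x (simplified stand-in for
    check_victory's output)."""
    # 1) win immediately if possible
    for x, teams in enumerate(line_completions):
        if my_team in teams:
            return x
    # 2) otherwise block any column that completes a line for someone (the opponent)
    for x, teams in enumerate(line_completions):
        if teams:
            return x
    # 3) otherwise play randomly
    return random.randint(0, board_width - 1)

# Example: column 3 would let team 2 (the opponent) win, so it gets blocked.
print(choose_column([[], [], [], [2], []], my_team=1, board_width=5))  # -> 3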
new_ws_nab.cell(row=1, column=31).value = "JustifyIfResultisInvalid" # create a counter for keep track on row number for the new xlsx file row_counter = 2 for file_name in file_list: valid_spreadsheet = [] sample_number = [] # cur_wb = load_workbook(file_name, read_only=True, data_only=True) cur_wb = load_workbook(file_name, data_only=True) cur_ws = cur_wb['Sample information'] row_start = 8 spreadsheet_counter = 1 while cur_ws.cell(row=row_start, column=2).value is not None: valid_spreadsheet.append('P' + str(spreadsheet_counter)) sample_number.append('S' + str(spreadsheet_counter)) row_start = row_start + 1 spreadsheet_counter = spreadsheet_counter + 1 for spreadsheet in valid_spreadsheet: cur_ws = cur_wb[spreadsheet] for j in range(1, 32, 1): if j == 1: new_ws_nab.cell(row=row_counter, column=j).value = cur_ws['C3'].value elif j == 2: new_ws_nab.cell(row=row_counter, column=j).value = sample_number[ valid_spreadsheet.index(spreadsheet)] elif j == 3: new_ws_nab.cell(row=row_counter, column=j).value = cur_ws['A6'].value elif j == 4: new_ws_nab.cell(row=row_counter, column=j).value = cur_ws['C6'].value elif j == 5: if cur_ws['D6'].value == 'NA': new_ws_nab.cell(row=row_counter, column=j).value = str(cur_ws['D6'].value) elif isinstance(cur_ws['D6'].value, datetime.datetime): extraction_date = str(cur_ws['D6'].value).split() # formal_date = datetime.datetime.strptime(extraction_date[0], '%Y-%m-%d').strftime('%d%b%Y') formal_date = datetime.datetime.strptime(extraction_date[0], '%Y-%m-%d').date() new_ws_nab.cell(row=row_counter, column=j).value = formal_date else: new_ws_nab.cell(row=row_counter, column=j).value = str(cur_ws['D6'].value) elif 6 <= j <= 9: new_ws_nab.cell(row=row_counter, column=j).value = \ round(float(cur_ws.cell(row=j + 26, column=3).value), 1) elif 10 <= j <= 13: new_ws_nab.cell(row=row_counter, column=j).value = \ round(float(cur_ws.cell(row=j + 22, column=5).value), 1) elif 14 <= j <= 17: new_ws_nab.cell(row=row_counter, column=j).value = \ round(float(cur_ws.cell(row=j + 18, column=9).value), 1) elif 18 <= j <= 23: new_ws_nab.cell(row=row_counter, column=j).value = \ round(float(cur_ws.cell(row=j + 39, column=3).value)) elif 24 <= j <= 29: new_ws_nab.cell(row=row_counter, column=j).value = \ round(float(cur_ws.cell(row=j + 33, column=4).value)) elif j == 31: new_ws_nab.cell(row=row_counter, column=j).value = cur_ws['F57'].value else: new_ws_nab.cell(row=row_counter, column=j).value = cur_ws['E57'].value row_counter = row_counter + 1 nab_data_list = [] keys = [] for j in range(1, 32, 1): keys.append(new_ws_nab.cell(row=1, column=j).value) for row_number in range(1, row_counter): if row_number == 1: continue row_data = {} for j in range(1, 32, 1): row_data[keys[j - 1]] = new_ws_nab.cell(row=row_number, column=j).value nab_data_list.append(row_data) print(json.dumps(nab_data_list)) def tissue_wes_extract_excels(folder_name): file_list = [] p = Path(folder_name) tissue_wes_file_list = list(p.glob('Tissue_Wes*.xlsx')) for tissue_wes_file in tissue_wes_file_list: file_list.append(tissue_wes_file) return file_list def tissue_wes_linear_regression(folder_name): file_list = tissue_wes_extract_excels(folder_name) result = [] header = ['RunNumber', 'Slope', 'Intercept', 'RSquare'] result.append(header) for file_name in file_list: cur_wb = load_workbook(file_name, read_only=True, data_only=True) cur_ws = cur_wb['Data Analysis '] cur_result = [] for j in range(1, len(header) + 1, 1): if j == 1: cur_result.append(cur_ws.cell(row=2, column=2).value) else: 
cur_result.append(round(float(cur_ws.cell(row=24, column=j - 1).value))) result.append(cur_result) transfer_to_json(result, len(header)) def tissue_wes_standard_curve(folder_path): file_list = tissue_wes_extract_excels(folder_path) result = [] header = ['RunNumber', 'Std', 'TPP1ConcngPermL', 'Area', 'BackCalculatedConcngPermL', 'PercentRE'] result.append(header) for file_name in file_list: cur_wb = load_workbook(file_name, read_only=True, data_only=True) cur_ws = cur_wb['Data Analysis '] row_start = 27 while row_start != 36: cur_result = [] for j in range(1, len(header) + 1, 1): if j == 1: cur_result.append(cur_ws.cell(row=2, column=2).value) else: if cur_ws.cell(row=row_start, column=j - 1).value == "Masked" or cur_ws.cell(row=row_start, column=j - 1).value == "NA": cur_result.append(cur_ws.cell(row=row_start, column=j - 1).value) else: if j == 3 or j == 5: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j - 1).value), 3)) elif j == 4: cur_result.append((round(float(cur_ws.cell(row=row_start, column=j - 1).value)))) elif j == 6: if row_start == 35: cur_result.append(None) else: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j - 1).value), 1)) else: cur_result.append(cur_ws.cell(row=row_start, column=j - 1).value) result.append(cur_result) row_start = row_start + 1 transfer_to_json(result, len(header)) def tissue_wes_upper_and_lower_bond(folder_path): file_list = tissue_wes_extract_excels(folder_path) result = [] header = ['RunNumber', 'ULOQ', 'LLOQ'] result.append(header) for file_name in file_list: cur_wb = load_workbook(file_name, read_only=True, data_only=True) cur_ws = cur_wb['Data Analysis '] cur_result = [] for j in range(1, len(header) + 1, 1): if j == 1: cur_result.append(cur_ws.cell(row=2, column=2).value) elif j == 2: if cur_ws.cell(row=37, column=3).value == "NA": cur_result.append(cur_ws.cell(row=37, column=3).value) else: cur_result.append(round(float(cur_ws.cell(row=37, column=3).value), 3)) else: if cur_ws.cell(row=37, column=5).value == "NA": cur_result.append(cur_ws.cell(row=37, column=5).value) else: cur_result.append(round(float(cur_ws.cell(row=37, column=5).value), 3)) result.append(cur_result) transfer_to_json(result, len(header)) def tissue_wes_qc_data(folder_path): file_list = tissue_wes_extract_excels(folder_path) result = [] header = ['RunNumber', 'QCIn1To10CSF', 'SpikedConcngPermL', 'Area', 'ConcngPermL', 'PercentRE'] result.append(header) for file_name in file_list: cur_wb = load_workbook(file_name, read_only=True, data_only=True) cur_ws = cur_wb['Data Analysis '] row_start = 42 while row_start != 45: cur_result = [] for j in range(1, len(header) + 1, 1): if j == 1: cur_result.append(cur_ws.cell(row=2, column=2).value) elif j == 2: cur_result.append(cur_ws.cell(row=row_start, column=j - 1).value) elif j == 3 or j == 5: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j - 1).value), 3)) elif j == 6: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j - 1).value), 1)) else: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j - 1).value))) result.append(cur_result) row_start = row_start + 1 transfer_to_json(result, len(header)) def tissue_wes_sample_analysis(folder_path): file_list = tissue_wes_extract_excels(folder_path) result = [] header = ['RunNumber', 'AnimalID', 'TimePoint', 'ROA', 'TissueType', 'PunchNumber', 'SampleLocation', 'CollectionDate', 'PeakArea', 'ConcngPermL', 'TotalProtein', 'AdjustedConcngPermL', 'ReportedCon', 'LoadingIssue', 'ActinLoadingCtrlArea'] 
result.append(header) for file_name in file_list: cur_wb = load_workbook(file_name, read_only=True, data_only=True) cur_ws = cur_wb['Data Analysis '] row_start = 67 while cur_ws.cell(row=row_start, column=2).value is not None: cur_result = [] for j in range(1, len(header) + 1, 1): if j == 1: cur_result.append(cur_ws.cell(row=2, column=2).value) elif j == 9 or j == 11: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j).value))) elif j == 10 or j == 12: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j).value), 3)) elif j == 13: if cur_ws.cell(row=row_start, column=j).value == "Run Failure, retest" or cur_ws.cell( row=row_start, column=j).value == "AQL, retest needed" or cur_ws.cell( row=row_start, column=j).value == "BQL, no retest" or cur_ws.cell( row=row_start, column=j).value == "BQL, retest" or cur_ws.cell( row=row_start, column=j).value == "Loading issue": cur_result.append(cur_ws.cell(row=row_start, column=j).value) else: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j).value), 3)) elif j == 15: cur_result.append(round(float(cur_ws.cell(row=row_start - 16, column=j - 11).value))) else: cur_result.append(cur_ws.cell(row=row_start, column=j).value) result.append(cur_result) row_start = row_start + 1 transfer_to_json(result, len(header)) def tissue_wes_sample_analysis_88(folder_path): file_list = tissue_wes_extract_excels(folder_path) result = [] header = ['RunNumber', 'AnimalID', 'TimePoint', 'ROA', 'TissueType', 'PunchNumber', 'SampleLocation', 'CollectionDate', 'PeakArea', 'ConcngPermL', 'TotalProtein', 'AdjustedConcngPermL', 'ReportedCon', 'LoadingIssue', 'ActinLoadingCtrlArea'] result.append(header) for file_name in file_list: cur_wb = load_workbook(file_name, read_only=True, data_only=True) cur_ws = cur_wb['Peak 88 DA'] row_start = 67 while cur_ws.cell(row=row_start, column=2).value is not None: cur_result = [] for j in range(1, len(header) + 1, 1): if j == 1: cur_ws = cur_wb['Data Analysis '] cur_result.append(cur_ws.cell(row=2, column=2).value) cur_ws = cur_wb['Peak 88 DA'] elif j == 9 or j == 11: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j).value))) elif j == 10 or j == 12: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j).value), 3)) elif j == 13: if cur_ws.cell(row=row_start, column=j).value == "Run Failure, retest" or cur_ws.cell( row=row_start, column=j).value == "AQL, retest needed" or cur_ws.cell( row=row_start, column=j).value == "BQL, no retest" or cur_ws.cell( row=row_start, column=j).value == "BQL, retest" or cur_ws.cell( row=row_start, column=j).value == "Loading issue": cur_result.append(cur_ws.cell(row=row_start, column=j).value) else: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j).value), 3)) elif j == 15: cur_result.append(round(float(cur_ws.cell(row=row_start - 16, column=j - 11).value))) else: cur_result.append(cur_ws.cell(row=row_start, column=j).value) result.append(cur_result) row_start = row_start + 1 transfer_to_json(result, len(header)) def gaa_enzymatic_extract_excels(folder_name): file_list = [] p = Path(folder_name) tissue_wes_file_list = list(p.glob('GAA_Enzymatic_Assay*.xlsx')) for tissue_wes_file in tissue_wes_file_list: file_list.append(tissue_wes_file) return file_list def gaa_enzymatic_standard_curve(folder_path): file_list = gaa_enzymatic_extract_excels(folder_path) result = [] header = ['RunNumber', 'Std', 'Conc', 'MeanConc', 'CV', 'PercentRE'] result.append(header) for file_name in file_list: cur_wb = load_workbook(file_name, 
read_only=True, data_only=True) cur_ws = cur_wb['Data analysis'] row_start = 6 while row_start != 16: cur_result = [] for j in range(1, len(header) + 1, 1): if j == 1: cur_result.append(cur_ws.cell(row=3, column=2).value) elif j == 2: cur_result.append(cur_ws.cell(row=row_start, column=j - 1).value) elif j == 3: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j).value), 3)) elif j == 4: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j + 1).value), 3)) else: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j + 2).value), 3)) result.append(cur_result) row_start = row_start + 1 transfer_to_json(result, len(header)) def gaa_enzymatic_qc_analysis(folder_path): file_list = gaa_enzymatic_extract_excels(folder_path) result = [] header = ['RunNumber', 'QC', 'MeanResult', 'CVPercentage', 'AdjResults'] result.append(header) for file_name in file_list: cur_wb = load_workbook(file_name, read_only=True, data_only=True) cur_ws = cur_wb['Data analysis'] row_start = 19 while row_start != 25: cur_result = [] for j in range(1, len(header) + 1, 1): if j == 1: cur_result.append(cur_ws.cell(row=3, column=2).value) elif j == 2: cur_result.append(cur_ws.cell(row=row_start, column=j - 1).value) elif j == 3: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j + 2).value), 3)) else: cur_result.append(round(float(cur_ws.cell(row=row_start, column=j + 3).value), 3)) result.append(cur_result) row_start = row_start + 1 transfer_to_json(result, len(header)) def gaa_enzymatic_lower_and_upper_bond(folder_path): file_list = gaa_enzymatic_extract_excels(folder_path) result = [] header = ['RunNumber', 'ULOQ', 'LLOQ'] result.append(header) for file_name in file_list: cur_wb = load_workbook(file_name, read_only=True, data_only=True) cur_ws = cur_wb['Data analysis'] cur_result = [] for j in range(1, len(header) + 1, 1): if j == 1: cur_result.append(cur_ws.cell(row=3, column=2).value) elif j == 2: cur_result.append(cur_ws.cell(row=26, column=j).value) # cur_result.append(round(float(cur_ws.cell(row=26, column=j).value), 3)) else: cur_result.append(cur_ws.cell(row=26, column=j + 1).value) # cur_result.append(round(float(cur_ws.cell(row=26, column=j + 1).value), 3)) result.append(cur_result) transfer_to_json(result, len(header)) def gaa_enzymatic_sample_analysis(folder_path): file_list = gaa_enzymatic_extract_excels(folder_path) result = [] header = ['RunNumber', 'SampleNo', 'AnimalID', 'Group', 'Dose', 'Sex', 'Timepoint', 'CollectionDate', 'PreDilutionFactor', 'DilutionFactorMRD', 'MeanResult', 'PercentageCV', 'AdjustedResult', 'ReportedResult'] result.append(header) for file_name in file_list: cur_wb = load_workbook(file_name, read_only=True, data_only=True) cur_ws = cur_wb['Data analysis'] row_start = 33 while cur_ws.cell(row=row_start, column=2).value is not None: cur_result = [] for j in
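Each extractor above appends a header row followed by data rows and passes the list to transfer_to_json, whose definition is not part of this excerpt. A plausible reconstruction, consistent with the json.dumps(nab_data_list) output produced earlier for the NAb sheets, is sketched below; it is an assumption about the helper, not the original implementation.

import json

def transfer_to_json(result, num_columns):
    """Hypothetical reconstruction: turn [header_row, data_row, ...] into a
    JSON array of objects keyed by the header names and print it, mirroring
    the json.dumps(nab_data_list) output used earlier in this module."""
    header = result[0]
    records = []
    for row in result[1:]:
        records.append({header[j]: row[j] for j in range(num_columns)})
    print(json.dumps(records, default=str))  # default=str copes with datetime/date cells

# Example:
transfer_to_json([["RunNumber", "ULOQ", "LLOQ"], ["Run01", 1.234, 0.012]], 3)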
hash) def get_sample_hash(self): known = list(self.iter_known_hashes()) return self.getRandom().choice(known) def check_verify(self, secret, hash, msg=None, negate=False): result = self.do_verify(secret, hash) self.assertTrue(result is True or result is False, 'verify() returned non-boolean value: %r' % (result,)) if self.handler.is_disabled or negate: if not result: return if not msg: msg = 'verify incorrectly returned True: secret=%r, hash=%r' % ( secret, hash) raise self.failureException(msg) else: if result: return if not msg: msg = 'verify failed: secret=%r, hash=%r' % (secret, hash) raise self.failureException(msg) def check_returned_native_str(self, result, func_name): self.assertIsInstance(result, str, '%s() failed to return native string: %r' % (func_name, result)) def populate_settings(self, kwds): handler = self.handler if 'rounds' in handler.setting_kwds and 'rounds' not in kwds: mn = handler.min_rounds df = handler.default_rounds if TEST_MODE(max='quick'): kwds['rounds'] = max(3, mn) else: factor = 3 if getattr(handler, 'rounds_cost', None) == 'log2': df -= factor else: df //= 1 << factor kwds['rounds'] = max(3, mn, df) return def populate_context(self, secret, kwds): return secret def do_encrypt(self, secret, use_encrypt=False, handler=None, context=None, **settings): self.populate_settings(settings) if context is None: context = {} secret = self.populate_context(secret, context) if use_encrypt: warnings = [] if settings: context.update(**settings) warnings.append('passing settings to.*is deprecated') with self.assertWarningList(warnings): return (handler or self.handler).encrypt(secret, **context) else: return (handler or self.handler).using(**settings).hash(secret, **context) return def do_verify(self, secret, hash, handler=None, **kwds): secret = self.populate_context(secret, kwds) return (handler or self.handler).verify(secret, hash, **kwds) def do_identify(self, hash): return self.handler.identify(hash) def do_genconfig(self, **kwds): self.populate_settings(kwds) return self.handler.genconfig(**kwds) def do_genhash(self, secret, config, **kwds): secret = self.populate_context(secret, kwds) return self.handler.genhash(secret, config, **kwds) def do_stub_encrypt(self, handler=None, context=None, **settings): handler = (handler or self.handler).using(**settings) if context is None: context = {} secret = self.populate_context('', context) with patch_calc_min_rounds(handler): return handler.hash(secret, **context) return BACKEND_NOT_AVAILABLE = 'backend not available' @classmethod def _get_skip_backend_reason(cls, backend): handler = cls.handler if not is_default_backend(handler, backend) and not TEST_MODE('full'): return 'only default backend is being tested' if handler.has_backend(backend): return None return cls.BACKEND_NOT_AVAILABLE @classmethod def create_backend_case(cls, backend): handler = cls.handler name = handler.name bases = ( cls,) if backend == 'os_crypt': bases += (OsCryptMixin,) subcls = type('%s_%s_test' % (name, backend), bases, dict(descriptionPrefix='%s (%s backend)' % (name, backend), backend=backend, __module__=cls.__module__)) skip_reason = cls._get_skip_backend_reason(backend) if skip_reason: subcls = skip(skip_reason)(subcls) return subcls def setUp(self): super(HandlerCase, self).setUp() handler = self.handler backend = self.backend if backend: if not hasattr(handler, 'set_backend'): raise RuntimeError("handler doesn't support multiple backends") self.addCleanup(handler.set_backend, handler.get_backend()) handler.set_backend(backend) from 
otp.ai.passlib.utils import handlers self.patchAttr(handlers, 'rng', self.getRandom('salt generator')) def test_01_required_attributes(self): handler = self.handler def ga(name): return getattr(handler, name, None) name = ga('name') self.assertTrue(name, 'name not defined:') self.assertIsInstance(name, str, 'name must be native str') self.assertTrue(name.lower() == name, 'name not lower-case:') self.assertTrue(re.match('^[a-z0-9_]+$', name), 'name must be alphanum + underscore: %r' % (name,)) settings = ga('setting_kwds') self.assertTrue(settings is not None, 'setting_kwds must be defined:') self.assertIsInstance(settings, tuple, 'setting_kwds must be a tuple:') context = ga('context_kwds') self.assertTrue(context is not None, 'context_kwds must be defined:') self.assertIsInstance(context, tuple, 'context_kwds must be a tuple:') return def test_02_config_workflow(self): config = self.do_genconfig() self.check_returned_native_str(config, 'genconfig') result = self.do_genhash('stub', config) self.check_returned_native_str(result, 'genhash') self.do_verify('', config) self.assertTrue(self.do_identify(config), 'identify() failed to identify genconfig() output: %r' % ( config,)) def test_02_using_workflow(self): handler = self.handler subcls = handler.using() self.assertIsNot(subcls, handler) self.assertEqual(subcls.name, handler.name) def test_03_hash_workflow(self, use_16_legacy=False): wrong_secret = 'stub' for secret in self.stock_passwords: result = self.do_encrypt(secret, use_encrypt=use_16_legacy) self.check_returned_native_str(result, 'hash') self.check_verify(secret, result) self.check_verify(wrong_secret, result, negate=True) other = self.do_genhash(secret, result) self.check_returned_native_str(other, 'genhash') if self.handler.is_disabled and self.disabled_contains_salt: self.assertNotEqual(other, result, 'genhash() failed to salt result hash: secret=%r hash=%r: result=%r' % ( secret, result, other)) else: self.assertEqual(other, result, 'genhash() failed to reproduce hash: secret=%r hash=%r: result=%r' % ( secret, result, other)) other = self.do_genhash(wrong_secret, result) self.check_returned_native_str(other, 'genhash') if self.handler.is_disabled and not self.disabled_contains_salt: self.assertEqual(other, result, 'genhash() failed to reproduce disabled-hash: secret=%r hash=%r other_secret=%r: result=%r' % ( secret, result, wrong_secret, other)) else: self.assertNotEqual(other, result, 'genhash() duplicated hash: secret=%r hash=%r wrong_secret=%r: result=%r' % ( secret, result, wrong_secret, other)) self.assertTrue(self.do_identify(result)) def test_03_legacy_hash_workflow(self): self.test_03_hash_workflow(use_16_legacy=True) def test_04_hash_types(self): result = self.do_encrypt(tonn('stub')) self.check_returned_native_str(result, 'hash') self.check_verify('stub', tonn(result)) self.check_verify(tonn('stub'), tonn(result)) other = self.do_genhash('stub', tonn(result)) self.check_returned_native_str(other, 'genhash') if self.handler.is_disabled and self.disabled_contains_salt: self.assertNotEqual(other, result) else: self.assertEqual(other, result) other = self.do_genhash(tonn('stub'), tonn(result)) self.check_returned_native_str(other, 'genhash') if self.handler.is_disabled and self.disabled_contains_salt: self.assertNotEqual(other, result) else: self.assertEqual(other, result) self.assertTrue(self.do_identify(tonn(result))) def test_05_backends(self): handler = self.handler if not hasattr(handler, 'set_backend'): raise self.skipTest('handler only has one backend') 
self.addCleanup(handler.set_backend, handler.get_backend()) for backend in handler.backends: self.assertIsInstance(backend, str) self.assertNotIn(backend, RESERVED_BACKEND_NAMES, 'invalid backend name: %r' % (backend,)) ret = handler.has_backend(backend) if ret is True: handler.set_backend(backend) self.assertEqual(handler.get_backend(), backend) elif ret is False: self.assertRaises(MissingBackendError, handler.set_backend, backend) else: raise TypeError('has_backend(%r) returned invalid value: %r' % ( backend, ret)) def require_salt(self): if 'salt' not in self.handler.setting_kwds: raise self.skipTest("handler doesn't have salt") def require_salt_info(self): self.require_salt() if not has_salt_info(self.handler): raise self.skipTest("handler doesn't provide salt info") def test_10_optional_salt_attributes(self): self.require_salt_info() AssertionError = self.failureException cls = self.handler mx_set = cls.max_salt_size is not None if mx_set and cls.max_salt_size < 1: raise AssertionError('max_salt_chars must be >= 1') if cls.min_salt_size < 0: raise AssertionError('min_salt_chars must be >= 0') if mx_set and cls.min_salt_size > cls.max_salt_size: raise AssertionError('min_salt_chars must be <= max_salt_chars') if cls.default_salt_size < cls.min_salt_size: raise AssertionError('default_salt_size must be >= min_salt_size') if mx_set and cls.default_salt_size > cls.max_salt_size: raise AssertionError('default_salt_size must be <= max_salt_size') if 'salt_size' not in cls.setting_kwds and (not mx_set or cls.default_salt_size < cls.max_salt_size): warn("%s: hash handler supports range of salt sizes, but doesn't offer 'salt_size' setting" % ( cls.name,)) if cls.salt_chars: if not cls.default_salt_chars: raise AssertionError('default_salt_chars must not be empty') for c in cls.default_salt_chars: if c not in cls.salt_chars: raise AssertionError('default_salt_chars must be subset of salt_chars: %r not in salt_chars' % (c,)) else: if not cls.default_salt_chars: raise AssertionError('default_salt_chars MUST be specified if salt_chars is empty') return @property def salt_bits(self): handler = self.handler from math import log return int(handler.default_salt_size * log(len(handler.default_salt_chars), 2)) def test_11_unique_salt(self): self.require_salt() samples = max(1, 16 - self.salt_bits) def sampler(func): value1 = func() for _ in irange(samples): value2 = func() if value1 != value2: return raise self.failureException('failed to find different salt after %d samples' % ( samples,)) sampler(self.do_genconfig) sampler(lambda : self.do_encrypt('stub')) def test_12_min_salt_size(self): self.require_salt_info() handler = self.handler salt_char = handler.salt_chars[0:1] min_size = handler.min_salt_size s1 = salt_char * min_size self.do_genconfig(salt=s1) self.do_encrypt('stub', salt_size=min_size) if min_size > 0: self.assertRaises(ValueError, self.do_genconfig, salt=s1[:-1]) self.assertRaises(ValueError, self.do_encrypt, 'stub', salt_size=min_size - 1) def test_13_max_salt_size(self): self.require_salt_info() handler = self.handler max_size = handler.max_salt_size salt_char = handler.salt_chars[0:1] if max_size is None or max_size > 1048576: s1 = salt_char * 1024 c1 = self.do_stub_encrypt(salt=s1) c2 = self.do_stub_encrypt(salt=s1 + salt_char) self.assertNotEqual(c1, c2) self.do_stub_encrypt(salt_size=1024) else: s1 = salt_char * max_size c1 = self.do_stub_encrypt(salt=s1) self.do_stub_encrypt(salt_size=max_size) s2 = s1 + salt_char self.assertRaises(ValueError, self.do_stub_encrypt, salt=s2) 
self.assertRaises(ValueError, self.do_stub_encrypt, salt_size=max_size + 1) if has_relaxed_setting(handler): with warnings.catch_warnings(record=True): c2 = self.do_stub_encrypt(salt=s2, relaxed=True) self.assertEqual(c2, c1) if handler.min_salt_size < max_size: c3 = self.do_stub_encrypt(salt=s1[:-1]) self.assertNotEqual(c3, c1) return fuzz_salts_need_bcrypt_repair = False def prepare_salt(self, salt): if self.fuzz_salts_need_bcrypt_repair: from otp.ai.passlib.utils.binary import bcrypt64 salt = bcrypt64.repair_unused(salt) return salt def test_14_salt_chars(self): self.require_salt_info() handler = self.handler mx = handler.max_salt_size mn = handler.min_salt_size cs = handler.salt_chars raw = isinstance(cs, bytes) for salt in batch(cs, mx or 32): if len(salt) < mn: salt = repeat_string(salt, mn) salt = self.prepare_salt(salt) self.do_stub_encrypt(salt=salt) source = u('\x00\xff') if raw: source = source.encode('latin-1') chunk = max(mn, 1) for c in source: if c not in cs: self.assertRaises(ValueError, self.do_stub_encrypt, salt=c * chunk, __msg__='invalid salt char %r:' % (c,)) @property def salt_type(self): if getattr(self.handler, '_salt_is_bytes', False): return bytes return unicode def test_15_salt_type(self): self.require_salt() salt_type = self.salt_type salt_size = getattr(self.handler, 'min_salt_size', 0) or 8 class fake(object): pass self.assertRaises(TypeError, self.do_encrypt, 'stub', salt=fake()) if salt_type is not unicode: self.assertRaises(TypeError, self.do_encrypt, 'stub', salt=u('x') * salt_size) if not (salt_type is bytes or PY2 and salt_type is unicode): self.assertRaises(TypeError, self.do_encrypt, 'stub', salt='x' * salt_size) def test_using_salt_size(self): self.require_salt_info() handler = self.handler mn = handler.min_salt_size mx = handler.max_salt_size df = handler.default_salt_size self.assertRaises(ValueError, handler.using, default_salt_size=-1) with self.assertWarningList([PasslibHashWarning]): temp = handler.using(default_salt_size=-1, relaxed=True) self.assertEqual(temp.default_salt_size, mn) if mx: self.assertRaises(ValueError, handler.using, default_salt_size=mx + 1) with self.assertWarningList([PasslibHashWarning]): temp = handler.using(default_salt_size=mx + 1, relaxed=True) self.assertEqual(temp.default_salt_size, mx) if mn != mx: temp = handler.using(default_salt_size=mn + 1) self.assertEqual(temp.default_salt_size, mn + 1) self.assertEqual(handler.default_salt_size, df) temp = handler.using(default_salt_size=mn + 2) self.assertEqual(temp.default_salt_size, mn + 2) self.assertEqual(handler.default_salt_size, df) if mn == mx: ref = mn else: ref = mn + 1 temp = handler.using(default_salt_size=str(ref)) self.assertEqual(temp.default_salt_size, ref) self.assertRaises(ValueError, handler.using, default_salt_size=str(ref) + 'xxx') temp =
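The HandlerCase methods above all exercise the same handler contract: hash() produces a salted hash, verify() accepts the right secret and rejects a wrong one, identify() recognises the format, and repeated hashing yields different salts. The snippet below demonstrates that contract against the public passlib package (the vendored otp.ai.passlib copy under test exposes the same interface); the specific handler chosen is only an example.

# Illustration of the hash/verify/identify round trip the tests above assert,
# using the public passlib package rather than the vendored copy.
from passlib.hash import pbkdf2_sha256

h = pbkdf2_sha256.hash("stub")                  # new salted hash each call
assert pbkdf2_sha256.verify("stub", h)          # correct secret verifies
assert not pbkdf2_sha256.verify("wrong", h)     # wrong secret fails
assert pbkdf2_sha256.identify(h)                # handler recognises its own format
assert pbkdf2_sha256.hash("stub") != h          # unique salt per hash, as test_11 checks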
makeNewUvloop(me, "Uvloop", True) n = 0 for uvface in uvfaces: for uv in uvface: uvloop.data[n].uv = uv n += 1 for mat in mats: me.materials.append(mat) for fn,mn in enumerate(mnums): f = me.polygons[fn] f.material_index = mn f.use_smooth = True vgnames = [vgrp.name for vgrp in ob.vertex_groups] weights = dict([(vn,{}) for vn in range(self.nverts)]) for vn,v in enumerate(ob.data.vertices): nvn = self.vertmap[vn] if nvn >= 0: for g in v.groups: weights[nvn][g.group] = g.weight skeys = [] if ob.data.shape_keys: for skey in ob.data.shape_keys.key_blocks: data = dict([(vn, skey.data[vn].co) for vn in range(self.nverts)]) skeys.append((skey.name, skey.value, skey.slider_min, skey.slider_max, data)) from .driver import getShapekeyDrivers, copyShapeKeyDrivers drivers = getShapekeyDrivers(ob) ob.data = me ob.vertex_groups.clear() vgrps = {} for gn,vgname in enumerate(vgnames): vgrps[gn] = ob.vertex_groups.new(name=vgname) for vn,grp in weights.items(): for gn,w in grp.items(): vgrps[gn].add([vn], w, 'REPLACE') for (sname, value, min, max, data) in skeys: skey = ob.shape_key_add(name=sname) skey.slider_min = min skey.slider_max = max skey.value = value for vn,co in data.items(): nvn = self.vertmap[vn] if nvn >= 0: skey.data[nvn].co = co copyShapeKeyDrivers(ob, drivers) def changeFace(self, vn, fn1, newface): for fn2 in newface: if (fn2 != fn1 and vn in self.faceverts[fn2]): return fn2 return -1 def getVert(self, vn): nvn = self.vertmap[vn] if nvn < 0: self.verts.append(self.object.data.vertices[vn].co) nvn = self.vertmap[vn] = self.lastvert self.lastvert += 1 return nvn def findTaken(self, newface): taken = dict([vn,False] for fn in newface for vn in self.faceverts[fn]) hits = dict([vn,0] for fn in newface for vn in self.faceverts[fn]) for fn in newface: for vn in self.faceverts[fn]: hits[vn] += 1 if hits[vn] > 2: taken[vn] = True return taken def combineFaces(self, newfaces): ob = self.object maxmnum = self.colorFaces(newfaces) print("Max material number:", maxmnum) print("Adding faces") bpy.ops.object.mode_set(mode='EDIT') bpy.ops.mesh.select_mode(type='FACE') bpy.ops.mesh.select_all(action='DESELECT') count = 0 for mn in range(maxmnum): if count % 25 == 0: print(" ", count) if mn % self.matOffset == 0: continue bpy.ops.object.mode_set(mode='OBJECT') ob.active_material_index = mn bpy.ops.object.mode_set(mode='EDIT') bpy.ops.object.material_slot_select() try: bpy.ops.mesh.edge_face_add() except RuntimeError: pass bpy.ops.mesh.select_all(action='DESELECT') bpy.ops.object.mode_set(mode='OBJECT') count += 1 printStatistics(ob) def mergeNextFaces(self, face, newfaces): me = self.object.data if len(face) < 2: return nextfaces = [face] while nextfaces: faces = nextfaces nextfaces = [] for face in faces: for fn0 in face: mn = self.origMnums[fn0] for fn1 in face: if (fn1 in self.neighbors[fn0] and mn == self.origMnums[fn1]): newface = self.mergeSide(fn0, fn1, newfaces, mn) if newface: if len(newface) == 4: for fn in newface: me.polygons[fn].select = True nextfaces.append(newface) break def mergeSide(self, fn0, fn1, newfaces, mn): for fn2 in self.neighbors[fn0]: if (self.dirty[fn2] or fn2 in self.seams[fn0] or fn2 in self.seams[fn1] ): continue for fn3 in self.neighbors[fn1]: if (fn3 == fn2 or self.dirty[fn3] or fn3 not in self.neighbors[fn2] or fn3 in self.seams[fn0] or fn3 in self.seams[fn1] or fn3 in self.seams[fn2] ): continue self.dirty[fn2] = True self.dirty[fn3] = True newface = self.mergeFacePair([fn2,fn3], newfaces, mn) return newface return None def mergeFaces(self, fn0, newfaces): newface = 
[fn0] self.dirty[fn0] = True mn = self.origMnums[fn0] for fn1 in self.neighbors[fn0]: if (fn1 not in self.seams[fn0] and not self.dirty[fn1] and mn == self.origMnums[fn1]): newface.append(fn1) self.dirty[fn1] = True break if len(newface) == 2: return self.mergeFacePair(newface, newfaces, mn) else: newfaces.append(newface) return newface def mergeFacePair(self, newface, newfaces, mn): fn0,fn1 = newface for fn2 in self.neighbors[fn0]: if (fn2 != fn1 and self.sharedVertex(fn1, fn2) and fn2 not in self.seams[fn0] and not self.dirty[fn2] and mn == self.origMnums[fn2]): newface.append(fn2) self.dirty[fn2] = True break if len(newface) == 3: fn2 = newface[2] for fn3 in self.neighbors[fn1]: if (fn3 != fn0 and fn3 != fn2 and fn3 in self.neighbors[fn2] and not self.dirty[fn3] and mn == self.origMnums[fn3]): newface.append(fn3) self.dirty[fn3] = True break if len(newface) == 3: fn0,fn1,fn2 = newface self.dirty[fn2] = False newface = [fn0,fn1] newfaces.append(newface) return newface def sharedVertex(self, fn1, fn2): for vn in self.faceverts[fn1]: if vn in self.faceverts[fn2]: return True return False def colorFaces(self, newfaces): me = self.object.data matnums = dict((fn,0) for fn in range(self.nfaces)) maxmnum = 0 for newface in newfaces: mnums = [] for fn in newface: mnums += [matnums[fn2] for fn2 in self.neighbors[fn]] mn = 1 while mn in mnums: mn += 1 if mn > maxmnum: maxmnum = mn for fn in newface: f = me.polygons[fn] f.material_index = matnums[fn] = mn return maxmnum def createMaterials(self): me = self.object.data mats = [mat for mat in me.materials] me.materials.clear() n = 0 for r in range(3): for g in range(3): for b in range(3): mat = bpy.data.materials.new("Mat-%02d" % n) n += 1 mat.diffuse_color[0:3] = (r/2, g/2, b/2) me.materials.append(mat) def selectRandomComponents(self, context): import random ob = context.object scn = context.scene deselectEverything(ob, context) self.faceverts, self.vertfaces = getVertFaces(ob) self.neighbors = findNeighbors(range(self.nfaces), self.faceverts, self.vertfaces) comps,taken = self.getConnectedComponents() for comp in comps.values(): if random.random() > scn.DazRandomKeepFraction: for fn in comp: f = ob.data.polygons[fn] if not f.hide: f.select = True bpy.ops.object.mode_set(mode='EDIT') def getUvData(ob): from collections import OrderedDict uvtex = getUvTextures(ob.data) uvloop = ob.data.uv_layers[0] uvdata = OrderedDict() m = 0 for fn,f in enumerate(ob.data.polygons): n = len(f.vertices) uvdata[fn] = [uvloop.data[j].uv for j in range(m,m+n)] m += n return uvtex,uvloop,uvdata def deleteMidpoints(ob): edgeverts, vertedges = getVertEdges(ob) faceverts, vertfaces = getVertFaces(ob) uvtex,uvloop,uvdata = getUvData(ob) for vn,v in enumerate(ob.data.vertices): if (len(vertedges[vn]) == 2 and len(vertfaces[vn]) <= 2): e = vertedges[vn][0] vn1,vn2 = e.vertices if vn1 == vn: v.co = ob.data.vertices[vn2].co moveUv(vn, vn2, vertfaces[vn], faceverts, uvdata) elif vn2 == vn: v.co = ob.data.vertices[vn1].co moveUv(vn, vn1, vertfaces[vn], faceverts, uvdata) else: halt m = 0 for uvs in uvdata.values(): for j,uv in enumerate(uvs): uvloop.data[m+j].uv = uv m += len(uvs) def moveUv(vn1, vn2, fnums, faceverts, uvdata): for fn in fnums: fverts = faceverts[fn] n1 = getIndex(vn1, fverts) n2 = getIndex(vn2, fverts) uvdata[fn][n1] = uvdata[fn][n2] def getIndex(vn, verts): for n,vn1 in enumerate(verts): if vn1 == vn: return n #------------------------------------------------------------- # Insert seams #------------------------------------------------------------- def 
insertSeams(hum, pxy): for pe in pxy.data.edges: pe.use_seam = False humPxy,pxyHum = identifyVerts(hum, pxy) pvn = pvn0 = len(pxy.data.vertices) pen = len(pxy.data.edges) newVerts = {} newEdges = {} seams = [e for e in hum.data.edges if e.use_seam] nseams = {} for e in seams: vn1,vn2 = e.vertices old1 = (vn1 in humPxy.keys()) old2 = (vn2 in humPxy.keys()) if old1 and old2: pvn1 = humPxy[vn1] pvn2 = humPxy[vn2] if (pvn1 in nseams.keys() and pvn2 not in nseams[pvn1]): newEdges[pen] = (pvn1, pvn2) pen += 1 elif old1: pvn1 = humPxy[vn1] pvn2 = pvn newVerts[pvn2] = hum.data.vertices[vn2].co humPxy[vn2] = pvn2 pvn += 1 newEdges[pen] = (pvn1, pvn2) pen += 1 elif old2: pvn1 = pvn newVerts[pvn1] = hum.data.vertices[vn1].co humPxy[vn1] = pvn1 pvn2 = humPxy[vn2] pvn += 1 newEdges[pen] = (pvn1, pvn2) pen += 1 else: pvn1 = pvn newVerts[pvn1] = hum.data.vertices[vn1].co humPxy[vn1] = pvn1 pvn2 = pvn+1 newVerts[pvn2] = hum.data.vertices[vn2].co humPxy[vn2] = pvn2 pvn += 2 newEdges[pen] = (pvn1, pvn2) pen += 1 if pvn1 not in nseams.keys(): nseams[pvn1] = [pvn2] else: nseams[pvn1].append(pvn2) if pvn2 not in nseams.keys(): nseams[pvn2] = [pvn1] else: nseams[pvn2].append(pvn1) if 1367 in [pvn1,pvn2]: print("O", vn1, vn2, pvn, pvn1, pvn2, old1, old2) print(" ", hum.data.vertices[vn1].co) print(" ", hum.data.vertices[vn2].co) print(" ", nseams[1367]) print(" ", pxyHum[1367]) pvn0 = len(pxy.data.vertices) pxy.data.vertices.add(len(newVerts)) for pvn,co in newVerts.items(): pxy.data.vertices[pvn].co = co #for pvn in range(pvn0, pvn0+3): # print(" ", pvn, pxy.data.vertices[pvn].co) pxy.data.edges.add(len(newEdges)) for pen,pverts in newEdges.items(): pe = pxy.data.edges[pen] pe.vertices = pverts pe.select = True for pe in pxy.data.edges: pvn1,pvn2 = pe.vertices if (pvn1 in nseams.keys() and pvn2 in nseams[pvn1]): pe.use_seam = True def identifyVerts(hum, pxy): ''' for e in hum.data.edges: if e.use_seam: vn1,vn2 = e.vertices if vn1 < vn2: v1 = hum.data.vertices[vn1] v2 = hum.data.vertices[vn2] verts += [(v1.co, ("E", vn1, vn2, e.index)), (v2.co, ("E", vn2, vn1, e.index))] ''' hverts = [(v.co, ("H", v.index, v.co)) for v in hum.data.vertices] pverts = [(v.co, ("P", v.index, v.co)) for v in pxy.data.vertices] verts = hverts + pverts verts.sort() humPxy = {} pxyHum = {} nverts = len(verts) for m,vert in enumerate(verts): co1,data1 = vert if data1[0] == "P": mindist = 1e7 pvn = data1[1] for j in range(-20,20): n = min(max(0, m+j), nverts-1) co2,data2 = verts[n] dist = (co1-co2).length if data2[0] == "H" and dist < mindist: mindist = dist vn = data2[1] humPxy[vn] = pvn pxyHum[pvn] = vn if mindist > 1e-7: pco = pxy.data.vertices[pvn] co = hum.data.vertices[vn] print("DIST", pvn, vn, pco,
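identifyVerts above pairs each proxy vertex with the nearest base-mesh vertex by tagging both vertex sets, sorting them together by coordinate, and scanning a small window of the sorted list. The bpy-free sketch below isolates that idea; it uses plain coordinate tuples and squared distances instead of Blender's mathutils vectors, so treat it as a simplified illustration rather than the original routine.

def match_verts(hum_coords, pxy_coords, window=20):
    """Map each proxy ('P') vertex index to the nearest human ('H') vertex
    index by sorting both sets together and searching a small window."""
    verts = [(co, ("H", i)) for i, co in enumerate(hum_coords)]
    verts += [(co, ("P", i)) for i, co in enumerate(pxy_coords)]
    verts.sort()
    pxy_to_hum = {}
    n = len(verts)
    for m, (co1, (kind, idx)) in enumerate(verts):
        if kind != "P":
            continue
        best, best_dist = None, float("inf")
        for j in range(-window, window + 1):
            k = min(max(0, m + j), n - 1)
            co2, (kind2, idx2) = verts[k]
            dist = sum((a - b) ** 2 for a, b in zip(co1, co2))  # squared distance
            if kind2 == "H" and dist < best_dist:
                best, best_dist = idx2, dist
        pxy_to_hum[idx] = best
    return pxy_to_hum

# Example: each proxy vertex maps to the closest base-mesh vertex.
print(match_verts([(0, 0, 0), (1, 0, 0)], [(0.01, 0, 0), (0.99, 0, 0)]))  # {0: 0, 1: 1}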
y_arr, ls=dic['ls'], lw=dic['lw'], color=dic['color'], # # label='{}'.format(sim.replace('_', '\_'))) # # else: # # ax.plot(x_arr, y_arr, ls=dic['ls'], lw=dic['lw'], color=dic['color']) # del(x_arr) # else: # raise NameError("plot type dic['dtype'] is not recognised (given: {})".format(dic["dtype"])) # # def plot_tcoll_vert_line(self, ax, dic): # # o_data = dic['data'] # try: # value = o_data.get_par("tcoll_gw") # value = self.treat_time_acis(value, dic) # # # print(value); exit(1) # if 'label' in dic.keys(): # if dic['label'] != None: # ax.axvline(x=value, linestyle=dic['ls'], color=dic['color'], linewidth=dic['lw']) # else: # ax.axvline(x=value, linestyle=dic['ls'], color=dic['color'], linewidth=dic['lw'], label=dic['label']) # else: # ax.axvline(x=value, linestyle=dic['ls'], color=dic['color'], linewidth=dic['lw']) # except: # print("Warning! tcoll failed to be plotted") # # def plot_histogram_1d(self, ax, dic): # # o_data = dic['data'] # if dic['v_n_x'] == 'hist_theta' and dic['v_n_y'] == 'hist_theta_m': # tht = o_data.get_arr('hist_theta', dic['criterion']) # M = o_data.get_arr('hist_theta_m', dic['criterion']) # # if dic['norm']: M /= np.sum(M) # # # ax.step(90. - (tht / np.pi * 180.), M, color=dic['color'], where='mid', label=dic['label']) # # ax.plot(90. - (tht / np.pi * 180.), M, color=dic['color'], ls=dic['ls'], drawstyle=dic['ds'], label=dic['label']) # self.plot_generic_line(ax, dic, 90. - (tht / np.pi * 180.), M) # dtht = tht[1] - tht[0] # # # ax.set_xlim(xmin=0 - dtht / np.pi * 180, xmax=90.) # # xmajorticks = np.arange(5) * 90. / 4. # # xminorticks = np.arange(17) * 90. / 16 # # xmajorlabels = [r"$0^\circ$", r"$22.5^\circ$", r"$45^\circ$", # # r"$67.5^\circ$", r"$90^\circ$"] # # ax.xaxis.set_major_locator(FixedLocator(xmajorticks)) # # ax.xaxis.set_minor_locator(FixedLocator(xminorticks)) # # ax.set_xticklabels(xmajorlabels) # # # # ax.set_xlabel(r"Angle from orbital plane") # # elif dic['v_n_x'] == 'hist_ye' and dic['v_n_y'] == 'hist_ye_m': # o_data = dic['data'] # ye = o_data.get_arr('hist_ye', dic['criterion']) # M = o_data.get_arr('hist_ye_m', dic['criterion']) # # if dic['norm']: M /= np.sum(M) # # # ax.step(ye, M, color=dic['color'], where='mid', label=dic['label']) # # # ax.plot(ye, M, color=dic['color'], ls=dic['ls'], drawstyle=dic['ds'], label=dic['label']) # self.plot_generic_line(ax, dic, ye, M) # # elif dic['v_n_x'] == 'hist_vel_inf' and dic['v_n_y'] == 'hist_vel_inf_m': # # o_data = dic['data'] # vel_inf = o_data.get_arr('hist_vel_inf', dic['criterion']) # M = o_data.get_arr('hist_vel_inf_m', dic['criterion']) # # print(M) # if dic['norm']: M /= np.sum(M) # # # print(M) # # ax.step(ye, M, color=dic['color'], where='mid', label=dic['label']) # # ax.plot(vel_inf, M, color=dic['color'], ls=dic['ls'], drawstyle=dic['ds'], label=dic['label']) # self.plot_generic_line(ax, dic, vel_inf, M) # # def plot_ejecta_profile(self, ax, dic): # # o_data = dic['data'] # # if 'extrapolation' in dic.keys(): # dic_ext = dic['extrapolation'] # x_arr, y_arr = o_data.get_extrapolated_arr(dic['v_n_x'], dic['v_n_y'], dic['criterion'], # dic_ext['method'], dic_ext['depth'], # dic_ext['x_left'], dic_ext['x_right'], # dic_ext['x_start'], dic_ext['x_stop']) # # # print(y_arr) # else: # x_arr = o_data.get_arr(dic['v_n_x'], dic['criterion']) # y_arr = o_data.get_arr(dic['v_n_y'], dic['criterion']) # # # if 'ymod' in dic.keys() and dic['ymod'] != None: # y_arr = self.modify_arr(y_arr, dic) # # x_arr = self.treat_time_acis(x_arr, dic) # y_arr = self.treat_mass_acis(y_arr, dic) # # 
self.plot_generic_line(ax, dic, x_arr, y_arr) # # # ax.plot(x_arr, y_arr, ls = dic['ls'], color=dic['color'], lw = dic['lw']) # # def plot_nucleo_yeilds_line(self, ax, dic): # # o_data = dic['data'] # if dic['v_n_x'] in o_data.list_sims_v_ns: # x_arr = o_data.get_normalized_sim_data(dic['v_n_x'], criterion=dic['criterion'], method=dic['method']) # elif dic['v_n_x'] in o_data.list_sol_v_ns: # x_arr = o_data.get_nored_sol_abund(dic['v_n_x'], method=dic['method']) # else: # raise NameError("v_n_x:{} is not in available v_ns lists" # .format(dic["v_n_x"])) # # o_data = dic['data'] # if dic['v_n_y'] in o_data.list_sims_v_ns: # y_arr = o_data.get_normalized_sim_data(dic['v_n_y'], criterion=dic['criterion'], method=dic['method']) # elif dic['v_n_y'] in o_data.list_sol_v_ns: # y_arr = o_data.get_nored_sol_abund(dic['v_n_y'], method=dic['method']) # else: # raise NameError("v_n_y:{} is not in available v_ns lists" # .format(dic["v_n_y"])) # # self.plot_generic_line(ax, dic, x_arr, y_arr) # # def plot_mkn_lightcurve(self, ax, dic): # print("color {}".format(dic['color'])) # data = dic['data'] # m_time, m_min, m_max = data.get_model_min_max(dic['band'], fname=dic['fname']) # if dic["label"] == None: # ax.fill_between(m_time, m_min, m_max, alpha=dic['alpha'], color=dic['color']) # else: # ax.fill_between(m_time, m_min, m_max, alpha=dic['alpha'], color=dic['color'], label=dic['label']) # # def plot_mkn_obs_data(self, ax, dic): # # data = dic['data'] # data_list = data.get_obs_data(dic["band"], fname="AT2017gfo.h5") # # for i_, arr in enumerate(data_list): # if dic["label"] == None: # self.plot_generic_errorbar(ax, dic, arr[:, 0], arr[:, 1], yerr=arr[:, 2]) # else: # if i_ == 0: # self.plot_generic_errorbar(ax, dic, arr[:, 0], arr[:, 1], yerr=arr[:, 2]) # else: # dic['label'] = None # self.plot_generic_errorbar(ax, dic, arr[:, 0], arr[:, 1], yerr=arr[:, 2]) # # def plot_mkn_mismatch(self, ax, dic): # # data = dic['data'] # times, min_mismatch, max_mismatch = data.get_mismatch(dic['band'], dic['fname']) # # self.plot_generic_line(ax, dic, times, min_mismatch) # # def plot_mkn_model_middle_line(self, ax, dic): # # data = dic['data'] # # times, mags = data.get_model_median(dic['band'], dic['fname']) # # self.plot_generic_line(ax, dic, times, mags) # # def plot_ejecta_band_2_objects(self, ax, dic): # # o_data1 = dic["data1"] # o_data2 = dic["data2"] # # tmerg1 = o_data1.get_par("tmerger_gw") # time_arr1 = o_data1.get_arr(dic["v_n_x"], dic["criterion1"]) # time_arr1 = time_arr1 - tmerg1 # mass_flux_arr1 = o_data1.get_arr(dic["v_n_y"], dic["criterion1"]) # mass_flux_arr1 = self.treat_mass_acis(mass_flux_arr1, dic) # # tmerg2 = o_data2.get_par("tmerger_gw") # time_arr2 = o_data2.get_arr(dic["v_n_x"], dic["criterion2"]) # time_arr2 = time_arr2 - tmerg2 # mass_flux_arr2 = o_data2.get_arr(dic["v_n_y"], dic["criterion2"]) # mass_flux_arr2 = self.treat_mass_acis(mass_flux_arr2, dic) # # from scipy import interpolate # # print(time_arr1[-1], time_arr2[-1]) # # if time_arr2[-1] > time_arr1[-1]: # # mass_flux_arr2 = interpolate.interp1d(time_arr2, mass_flux_arr2, kind='linear', bounds_error=False)(time_arr1) # # print(len(time_arr1)) # time_arr1 = self.treat_time_acis(time_arr1, dic) # self.plot_generic_band(ax, dic, time_arr1, mass_flux_arr1, mass_flux_arr2) # else: # # time_arr2[-1] < time_arr1[-1] # mass_flux_arr1 = interpolate.interp1d(time_arr1, mass_flux_arr1, kind='linear', bounds_error=False)(time_arr2) # time_arr2 = self.treat_time_acis(time_arr2, dic) # self.plot_generic_band(ax, dic, time_arr2, 
mass_flux_arr1, mass_flux_arr2) # # # return 0 # # # # def plot_summed_correlation_with_time(self, ax, dic): # # data = dic['data'] # times = [] # total_masses = [] # for it in data.list_iterations: # try: # table = data.get_res_corr(int(it), dic['v_n_x'], dic['v_n_y']) # time_ = data.get_time(int(it)) # table = np.array(table) # x_arr = table[0, 1:] # * 6.176269145886162e+17 # y_arr = table[1:, 0] # z_arr = table[1:, 1:] # total_mass = np.sum(z_arr) # times.append(time_) # total_masses.append(total_mass) # except IOError: # print("Warning: data for it:{} not found".format(it)) # # times, total_masses = zip(*sorted(zip(times, total_masses))) # # times, total_masses = x_y_z_sort(times, total_masses) # # total_masses = self.treat_mass_acis(np.array(total_masses), dic) # times = self.treat_time_acis(np.array(times), dic) # # # self.plot_generic_band(ax, dic, times, total_masses) # # self.plot_generic_line(ax, dic, times, total_masses) # # def plot_summed_correlation_with_time_band(self, ax, dic): # # data1 = dic['data1'] # data2 = dic['data2'] # times = [] # total_masses1 = [] # total_masses2 = [] # for it in data1.list_iterations: # try: # table1 = data1.get_res_corr(int(it), dic['v_n_x1'], dic['v_n_y1']) # time_ = data1.get_time(int(it)) # table1 = np.array(table1) # x_arr1 = table1[0, 1:] # * 6.176269145886162e+17 # y_arr1 = table1[1:, 0] # z_arr1 = table1[1:, 1:] # total_mass1 = np.sum(z_arr1) # times.append(time_) # total_masses1.append(total_mass1) # # table2 = data2.get_res_corr(int(it), dic['v_n_x2'], dic['v_n_y2']) # # time_ = data2.get_time(int(it)) # table2 = np.array(table2) # x_arr2 = table2[0, 1:] # * 6.176269145886162e+17 # y_arr2 = table2[1:, 0] # z_arr2 = table2[1:, 1:] # total_mass2 = np.sum(z_arr2) # # times.append(time_) # total_masses2.append(total_mass2) # # except IOError: # print("Warning: data for it:{} not found".format(it)) # # times, total_masses1 = zip(*sorted(zip(times, total_masses1))) # times, total_masses2 = zip(*sorted(zip(times, total_masses2))) # # times, total_masses = x_y_z_sort(times, total_masses) # # total_masses1 = self.treat_mass_acis(np.array(total_masses1), dic) # total_masses2 = self.treat_mass_acis(np.array(total_masses2), dic) # times = self.treat_time_acis(np.array(times), dic) # # self.plot_generic_band(ax, dic, times, total_masses1, total_masses2) # # # def plot_outflowed_correlation(self, ax, dic): # # data = dic['data'] # # x_arr, y_arr, mass = data.get_corr_x_y_mass(dic['v_n_x'], dic['v_n_y'], dic['criterion']) # # # if dic['v_n_x'] == 'theta': # # ax.set_xlim(0, 90) # x_arr = 90 - (180 * x_arr / np.pi) # if dic['v_n_y'] == 'theta': # # ax.set_ylim(0, 90) # y_arr = 90 - (180 * y_arr / np.pi) # # if 'normalize' in dic.keys() and dic['normalize']: # mass = mass / np.sum(mass) # mass = np.maximum(mass, 1e-15) # WHAT'S THAT? # # # print(mass) # # return self.plot_colormesh(ax, dic, x_arr, y_arr, mass) # # # def plot_2d_movie_plot_xy(self, ax, dic): # # o_data = dic["data"] # x_arr, y_arr, z_arr = o_data.get_modified_2d_data( # dic["it"], dic['plane'], dic["v_n_x"], dic["v_n_y"], dic["v_n"], dic["mod"] # ) # # im = self.plot_colormesh(ax, dic, y_arr, x_arr, z_arr) # phi, r, data # return im # # def plot_2d_movie_plot_xz(self, ax, dic): # # o_data = dic["data"] # phi_arr = o_data.get_int_grid(dic['plane'], dic["v_n_x"]) # z_arr = o_data.get_int_grid(dic['plane'], dic["v_n_y"]) # data_arr = o_data.get_int_data(dic["it"], dic['plane'], dic["v_n"]) #
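The commented-out plotting methods above all funnel through the same dictionary-driven pattern: each dic bundles the data source, the variable names and the line style, and a generic helper performs the actual ax.plot call. A minimal stand-in for that pattern with matplotlib is sketched below; the helper name and default style keys are simplifications, not the original class.

import numpy as np
import matplotlib.pyplot as plt

def plot_generic_line(ax, dic, x_arr, y_arr):
    # Read the style keys the commented-out code passes around in its dics.
    ax.plot(x_arr, y_arr,
            ls=dic.get('ls', '-'),
            lw=dic.get('lw', 1.0),
            color=dic.get('color', 'black'),
            drawstyle=dic.get('ds', 'default'),
            label=dic.get('label'))

fig, ax = plt.subplots()
dic = {'ls': '--', 'lw': 1.5, 'color': 'red', 'label': 'hist_ye'}
x = np.linspace(0.05, 0.45, 50)
plot_generic_line(ax, dic, x, np.exp(-((x - 0.25) / 0.05) ** 2))
ax.legend()
plt.show()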
import glob import gzip import json import logging import os import re import shutil import subprocess import sys import tarfile import time import psutil import requests import rpmfile import tenacity import yaml from docker import APIClient __tarantool_version = None STATUS_NOT_STARTED = 'NOT STARTED' STATUS_RUNNING = 'RUNNING' STATUS_STOPPED = 'STOPPED' STATUS_FAILED = 'FAILED' DEFAULT_CLUSTER_COOKIE = 'secret-cluster-cookie' def get_logs(output): rgx = re.compile(r'^\s+\S+\s+(?P<msg>\S+.*)$') logs = [] for line in output.split('\n'): if line == '': continue m = rgx.match(line) assert m is not None logs.append(m.group("msg").strip()) return logs # ############# # Class Archive # ############# class Archive: def __init__(self, filepath, project): self.filepath = filepath self.filename = os.path.basename(filepath) self.project = project # ########### # Class Image # ########### class Image: def __init__(self, name, project): self.name = name self.project = project # ##################### # Class InstanceProcess # ##################### class InstanceProcess(): def __init__(self, pid): self._env = {} self._process = None self.name = None self.cmd = None self.pid_not_exists = False if not psutil.pid_exists(pid): self.pid_not_exists = True return self._process = psutil.Process(pid) self.name = self._process.name() self.cmd = self._process.cmdline() self.cwd = self._process.cwd() env = self._process.environ() self._env = { 'TARANTOOL_APP_NAME': env.get('TARANTOOL_APP_NAME'), 'TARANTOOL_INSTANCE_NAME': env.get('TARANTOOL_INSTANCE_NAME'), 'TARANTOOL_CFG': env.get('TARANTOOL_CFG'), 'TARANTOOL_CONSOLE_SOCK': env.get('TARANTOOL_CONSOLE_SOCK'), 'TARANTOOL_PID_FILE': env.get('TARANTOOL_PID_FILE'), 'TARANTOOL_WORKDIR': env.get('TARANTOOL_WORKDIR'), 'NOTIFY_SOCKET': env.get('NOTIFY_SOCKET') } def is_running(self): if self.pid_not_exists: return False return self._process.is_running() and self._process.status() != psutil.STATUS_ZOMBIE def getenv(self, name): return self._env.get(name) def get_instance_id_by_pid_filepath(pid_filepath): filename = os.path.basename(pid_filepath) instance_id = filename.replace(".pid", "") return instance_id # ######### # Class Cli # ######### class Cli(): def __init__(self, cartridge_cmd): self._cartridge_cmd = cartridge_cmd self._processes = [] self._instance_pids = set() self._subprocess = None def start(self, project, instances=[], daemonized=False, stateboard=False, stateboard_only=False, cfg=None, script=None, run_dir=None, data_dir=None, log_dir=None, timeout=None, capture_output=False, env=None, exp_rc=0): cmd = [self._cartridge_cmd, 'start'] if daemonized: cmd.append('-d') if stateboard: cmd.append('--stateboard') if stateboard_only: cmd.append('--stateboard-only') if timeout is not None: cmd.extend(['--timeout', timeout]) if cfg is not None: cmd.extend(['--cfg', cfg]) if script is not None: cmd.extend(['--script', script]) if run_dir is not None: cmd.extend(['--run-dir', run_dir]) if data_dir is not None: cmd.extend(['--data-dir', data_dir]) if log_dir is not None: cmd.extend(['--log-dir', log_dir]) cmd.extend(instances) if not capture_output: self._subprocess = subprocess.Popen( cmd, cwd=project.path, env=env, stdout=sys.stdout, stderr=sys.stderr, ) else: self._subprocess = subprocess.Popen( cmd, cwd=project.path, env=env, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, ) self._process = psutil.Process(self._subprocess.pid) self._processes.append(self._process) run_dir = project.get_run_dir(run_dir) if daemonized: rc = self.wait(project, run_dir=run_dir) assert 
rc == exp_rc if capture_output: output = self._subprocess.stdout.read().decode('utf-8') logs = get_logs(output) return logs def wait(self, project, run_dir=None): self._subprocess.wait(timeout=10) self.get_instance_procs(project, run_dir=run_dir) return self._subprocess.returncode def stop(self, project, instances=[], run_dir=None, cfg=None, force=False, stateboard=False, stateboard_only=False): cmd = [self._cartridge_cmd, 'stop'] if force: cmd.append('--force') if stateboard: cmd.append('--stateboard') if stateboard_only: cmd.append('--stateboard-only') if run_dir is not None: cmd.extend(['--run-dir', run_dir]) if cfg is not None: cmd.extend(['--cfg', cfg]) cmd.extend(instances) process = subprocess.run( cmd, cwd=project.path, stdout=sys.stdout, stderr=sys.stderr, ) assert process.returncode == 0 def get_status(self, project, instances=[], run_dir=None, cfg=None, stateboard=False, stateboard_only=False): cmd = [self._cartridge_cmd, 'status'] if stateboard: cmd.append('--stateboard') if stateboard_only: cmd.append('--stateboard-only') if run_dir is not None: cmd.extend(['--run-dir', run_dir]) if cfg is not None: cmd.extend(['--cfg', cfg]) cmd.extend(instances) rc, output = run_command_and_get_output(cmd, cwd=project.path) assert rc == 0 status = {} logs = get_logs(output) for msg in logs: m = re.match(r'^(\S+):\s+(.+)$', msg) assert m is not None instance_id = m.group(1) instance_status = m.group(2) assert instance_id not in status status[instance_id] = instance_status return status def get_logs(self, project, instances=[], n=None, log_dir=None, run_dir=None, cfg=None, stateboard=False, stateboard_only=False): cmd = [self._cartridge_cmd, 'log'] if n is not None: cmd.append('-n{}'.format(n)) if stateboard: cmd.append('--stateboard') if stateboard_only: cmd.append('--stateboard-only') if log_dir is not None: cmd.extend(['--log-dir', log_dir]) if run_dir is not None: cmd.extend(['--run-dir', run_dir]) if cfg is not None: cmd.extend(['--cfg', cfg]) cmd.extend(instances) rc, output = run_command_and_get_output(cmd, cwd=project.path) assert rc == 0 logs = {} for line in output.split('\n'): m = re.match(r'^(\S+)\s+\|\s+(.+)$', line) if m is None: continue instance_id = m.group(1) instance_log_line = m.group(2) if instance_log_line == "entering the event loop": continue if instance_id not in logs: logs[instance_id] = [] logs[instance_id].append(instance_log_line) return logs def clean(self, project, instances=[], log_dir=None, run_dir=None, cfg=None, data_dir=None, stateboard=False, stateboard_only=False, exp_rc=0): cmd = [self._cartridge_cmd, 'clean'] if stateboard: cmd.append('--stateboard') if stateboard_only: cmd.append('--stateboard-only') if cfg is not None: cmd.extend(['--cfg', cfg]) if run_dir is not None: cmd.extend(['--run-dir', run_dir]) if data_dir is not None: cmd.extend(['--data-dir', data_dir]) if log_dir is not None: cmd.extend(['--log-dir', log_dir]) cmd.extend(instances) process = subprocess.Popen( cmd, cwd=project.path, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, ) process.wait(timeout=10) assert process.returncode == exp_rc output = process.stdout.read().decode('utf-8') logs = get_logs(output) return logs def get_instance_procs(self, project, run_dir=None): instances = dict() run_dir = project.get_run_dir(run_dir) for pid_filepath in glob.glob(os.path.join(run_dir, "*.pid")): with open(pid_filepath) as pid_file: pid = int(pid_file.read().strip()) self._instance_pids.add(pid) instance = InstanceProcess(pid) instance_id = get_instance_id_by_pid_filepath(pid_filepath) assert 
instance_id not in instances instances[instance_id] = instance return instances def is_running(self): return self._process.is_running() and self._process.status() != psutil.STATUS_ZOMBIE def terminate(self): for process in self._processes: if process.is_running(): process.kill() # kill all instance processes for pid in self._instance_pids: if not psutil.pid_exists(pid): continue process = psutil.Process(pid) if process.is_running(): process.kill() # ####################### # Class InstanceContainer # ####################### class InstanceContainer: def __init__(self, container, instance_name, http_port, advertise_port): self.container = container self.instance_name = instance_name self.http_port = http_port self.advertise_port = advertise_port class ProjectContainer: def __init__(self, container, project, http_port): self.container = container self.project = project self.http_port = http_port # ####### # Helpers # ####### def tarantool_version(): global __tarantool_version if __tarantool_version is None: __tarantool_version = subprocess.check_output(['tarantool', '-V']).decode('ascii').split('\n')[0] return __tarantool_version def tarantool_short_version(): m = re.search(r'(\d+).(\d+)', tarantool_version()) assert m is not None major, minor = m.groups() short_version = '{}.{}'.format(major, minor) return short_version def tarantool_enterprise_is_used(): return tarantool_version().startswith('Tarantool Enterprise') def create_project(cartridge_cmd, module_tmpdir, project_name, template): cmd = [ cartridge_cmd, "create", "--name", project_name, "--template", template ] process = subprocess.run(cmd, cwd=module_tmpdir) assert process.returncode == 0, \ "Error during creating the project" return os.path.join(module_tmpdir, project_name) def find_archive(path, project_name, arch_ext): with os.scandir(path) as it: for entry in it: if entry.name.startswith(project_name) and entry.name.endswith('.' 
+ arch_ext) and entry.is_file(): return os.path.join(path, entry.name) def recursive_listdir(root_dir): files = set() for root, directories, filenames in os.walk(root_dir): rel_dir = os.path.relpath(root, root_dir) if rel_dir == '.': rel_dir = '' for directory in directories: files.add(os.path.join(rel_dir, directory)) for filename in filenames: files.add(os.path.join(rel_dir, filename)) return files def assert_distribution_dir_contents(dir_contents, project, exclude_files=set()): without_rocks = {x for x in dir_contents if not x.startswith('.rocks')} assert without_rocks == project.distribution_files.difference(exclude_files) assert all(x in dir_contents for x in project.rocks_content) def assert_filemode(project, filepath, filemode): filepath = os.path.join('/', filepath) if filepath == os.path.join('/usr/share/tarantool/', project.name, 'VERSION'): assert filemode & 0o777 == 0o644 elif filepath.startswith('/etc/systemd/system/'): assert filemode & 0o777 == 0o644 elif filepath.startswith('/usr/lib/tmpfiles.d/'): assert filemode & 0o777 == 0o644 elif filepath.startswith('/usr/share/tarantool/'): # a+r for files, a+rx for directories required_bits = 0o555 if os.path.isdir(filepath) else 0o444 assert filemode & required_bits == required_bits def assert_filemodes(project, basedir): known_dirs = { 'etc', 'etc/systemd', 'etc/systemd/system', 'usr', 'usr/share', 'usr/share/tarantool', 'usr/lib', 'usr/lib/tmpfiles.d' } filenames = recursive_listdir(basedir) - known_dirs for filename in filenames: # we don't check fileowner here because it's set in postinst script # check filemode if filename.startswith(os.path.join('usr/share/tarantool/', project.name, '.rocks')): continue # get filestat file_stat = os.stat(os.path.join(basedir, filename)) filemode = file_stat.st_mode assert_filemode(project, filename, filemode) def validate_version_file(project, distribution_dir): version_filepath = os.path.join(distribution_dir, 'VERSION') assert os.path.exists(version_filepath) version_file_content = {} with open(version_filepath, 'r') as version_file: file_lines = version_file.read().strip().split('\n') for line in file_lines: m = re.match(r'^([^=]+)=([^=]+)$', line) assert m is not None key, version = m.groups() version_file_content[key] = version for key in project.version_file_keys: assert key in version_file_content def assert_files_mode_and_owner_rpm(project, filename): DIRNAMES_TAG = 1118 DIRINDEXES_TAG = 1116 PAYLOADDIGEST_TAG = 5092 PAYLOADDIGESTALGO_TAG = 5093 expected_tags = [ 'basenames', DIRNAMES_TAG, DIRINDEXES_TAG, 'filemodes', 'fileusername', 'filegroupname', PAYLOADDIGEST_TAG, PAYLOADDIGESTALGO_TAG, ] with rpmfile.open(filename) as rpm: for key in expected_tags: assert key in rpm.headers for i, basename in enumerate(rpm.headers['basenames']): # get filepath basename = basename.decode("utf-8") dirindex = rpm.headers[DIRINDEXES_TAG][i] dirname = rpm.headers[DIRNAMES_TAG][dirindex].decode("utf-8") filepath = os.path.join(dirname, basename) # check fileowner assert rpm.headers['fileusername'][i].decode("utf-8") == 'root' assert rpm.headers['filegroupname'][i].decode("utf-8") == 'root' # check filemodes if filepath.startswith(os.path.join('/usr/share/tarantool/', project.name, '.rocks')): continue filemode = rpm.headers['filemodes'][i] assert_filemode(project, filepath, filemode) def assert_dependencies_deb(filename, deps, tarantool_versions, tmpdir): if not tarantool_enterprise_is_used(): deps += ( "tarantool (>= {})".format(tarantool_versions["min"]["deb"]), "tarantool (<< 
{})".format(tarantool_versions["max"]["deb"]), ) cmd = [ "dpkg", "-I", filename, ] rc, output = run_command_and_get_output(cmd, cwd=tmpdir) assert rc == 0 assert all(dep in output for dep in deps) def assert_tarantool_dependency_deb(filename, tarantool_versions): with open(filename) as control: control_info = control.read() depends_str = re.search('Depends: (.*)', control_info) assert depends_str is not None deps = depends_str.group(1) assert 'tarantool (>= {})'.format(tarantool_versions["min"]["deb"]) in deps assert 'tarantool (<< {})'.format(tarantool_versions["max"]["deb"]) in deps def assert_dependencies_rpm(filename, deps, tarantool_versions): with rpmfile.open(filename) as rpm: dependency_keys = ['requirename', 'requireversion', 'requireflags'] for key in dependency_keys: assert key in rpm.headers if not tarantool_enterprise_is_used(): deps += ( ("tarantool", 0x08 | 0x04, tarantool_versions["min"]["rpm"]), # >= ("tarantool", 0x02, tarantool_versions["max"]["rpm"]), ) assert len(rpm.headers['requirename']) == len(deps) assert len(rpm.headers['requireversion']) == len(deps) assert len(rpm.headers['requireversion']) == len(deps) for i, dep in enumerate(deps): assert rpm.headers['requirename'][i].decode('ascii') == dep[0] assert rpm.headers['requireflags'][i] == dep[1] assert rpm.headers['requireversion'][i].decode('ascii') == dep[2] def assert_tarantool_dependency_rpm(filename,
self.center, self.radius, 5) def get_center(self): return self.center def get_radius(self): return self.radius def is_alive(self): return self.alive # Define max shield HP SHIELD_HP_MAX = 5 # Shield Class class Shield(object): def __init__(self, center, width, height): self.center = center self.x, self.y = self.center self.width = width self.height = height self.radius = 125 self.hp = SHIELD_HP_MAX bad_sound.play() def update(self, pressed_keys): # Move with player if pressed_keys[K_UP] | pressed_keys[K_w]: self.y -= PLAYER_MOVE_STEP if pressed_keys[K_DOWN] | pressed_keys[K_s]: self.y += PLAYER_MOVE_STEP if pressed_keys[K_LEFT] | pressed_keys[K_a]: self.x -= PLAYER_MOVE_STEP if pressed_keys[K_RIGHT] | pressed_keys[K_d]: self.x += PLAYER_MOVE_STEP # Keep the Shield aligned with the player if self.x <= (self.width / 2): self.x = (self.width / 2) if self.x >= (SCREEN_WIDTH - (self.width / 2)): self.x = SCREEN_WIDTH - (self.width / 2) if self.y <= (self.height / 2): self.y = (self.height / 2) if self.y >= SCREEN_HEIGHT - (self.height / 2): self.y = SCREEN_HEIGHT - (self.height / 2) # Update the center position self.center = (self.x, self.y) def blit(self, surf): self.circle = pygame.draw.circle(surf, COLOR_BLUE, self.center, self.radius, self.hp + 1) def get_center(self): return self.center def get_radius(self): return self.radius def is_alive(self): return self.hp > 0 def hit(self): self.hp -= 1 if self.hp < 0: self.hp = 0 # Bullet Class class Bullet(pygame.sprite.Sprite): def __init__(self, x, y, path): super(Bullet, self).__init__() self.frame_cnt = 0 self.img_cnt = 0 self.surf = pygame.image.load(bullet_animation_imgs[self.img_cnt]).convert() self.surf.set_colorkey(COLOR_BLACK, RLEACCEL) self.rect = self.surf.get_rect( center = ( # Spawn based on location of player x, y ) ) self.mask = pygame.mask.from_surface(self.surf) self.speed = 20 self.dmg = 10 self.path = path def update(self): # Linear if self.path == movement_pattern[0]: self.rect.move_ip(self.speed, 0) # Rising elif self.path == movement_pattern[3]: self.rect.move_ip(self.speed, -0.01 * self.rect.x) # Falling elif self.path == movement_pattern[4]: self.rect.move_ip(self.speed, 0.01 * self.rect.x) self.rect.move_ip(self.speed, 0) if self.rect.left > SCREEN_WIDTH: self.kill() # Draw the correct sprite for the animation self.frame_cnt += 1 if self.frame_cnt == BULLET_FRAMES_PER_IMG: self.frame_cnt = 0 # Latch on the last image if self.img_cnt < len(bullet_animation_imgs) - 1: self.img_cnt += 1 self.surf = pygame.image.load(bullet_animation_imgs[self.img_cnt]).convert() self.surf.set_colorkey(COLOR_BLACK, RLEACCEL) def get_center(self): return ((self.rect.right - (self.rect.width / 2)),(self.rect.bottom - (self.rect.height / 2))) # Explosion Class class Explosion(pygame.sprite.Sprite): def __init__(self, x, y): super(Explosion, self).__init__() self.frame_cnt = 0 self.img_cnt = 0 self.surf = pygame.image.load(explosion_animation_imgs[self.img_cnt]).convert() self.surf.set_colorkey(COLOR_WHITE, RLEACCEL) self.rect = self.surf.get_rect( center = ( # Spawn based on location of bullet x, y ) ) #self.mask = pygame.mask.from_surface(self.surf) def update(self): # Draw the correct sprite for the animation self.frame_cnt += 1 if self.frame_cnt == EXPLOSION_FRAMES_PER_IMG: self.frame_cnt = 0 # Latch on the last image if self.img_cnt < len(explosion_animation_imgs) - 1: self.img_cnt += 1 self.surf = pygame.image.load(explosion_animation_imgs[self.img_cnt]).convert() self.surf.set_colorkey(COLOR_WHITE, RLEACCEL) else: self.kill() # Enemy Class 
class Enemy(pygame.sprite.Sprite): def __init__(self): super(Enemy, self).__init__() self.type = random.randint(0, len(enemy_imgs) - 1) self.surf = pygame.image.load(enemy_imgs[self.type]).convert() self.surf.set_colorkey(COLOR_WHITE, RLEACCEL) self.rect = self.surf.get_rect( center = ( random.randint(SCREEN_WIDTH + 20, SCREEN_WIDTH + 100), random.randint(10, SCREEN_HEIGHT - 10) ) ) self.mask = pygame.mask.from_surface(self.surf) self.speed = random.randint(8, 20) self.path = movement_pattern[random.randint(0, len(movement_pattern) - 1)] self.health = 1 self.dmg = enemy_dmg[self.type] self.score = enemy_score[self.type] # Move the sprite based on speed and pattern def update(self): # Linear if self.path == movement_pattern[0]: self.rect.move_ip(-self.speed, 0) # Sine Wave elif self.path == movement_pattern[1]: self.rect.move_ip(-self.speed, 5 * math.sin(self.rect.x / 250)) # Cosine Wave elif self.path == movement_pattern[2]: self.rect.move_ip(-self.speed, 5 * math.cos(self.rect.x / 250)) # Rising elif self.path == movement_pattern[3]: self.rect.move_ip(-self.speed, -0.01 * self.rect.x) # Falling elif self.path == movement_pattern[4]: self.rect.move_ip(-self.speed, 0.01 * self.rect.x) # Remove the sprite when it passes the left edge of the screen if self.rect.right < 0: self.kill() # Get amount of damage the enemy does def get_dmg(self): return self.dmg def get_score(self): return self.score def get_center(self): return ((self.rect.right - (self.rect.width / 2)),(self.rect.bottom - (self.rect.height / 2))) # Orb Class class Orb(pygame.sprite.Sprite): def __init__(self): super(Orb, self).__init__() self.type = random.randint(0, len(orb_imgs) - 1) self.surf = pygame.image.load(orb_imgs[self.type]).convert() self.surf.set_colorkey(COLOR_BLACK, RLEACCEL) self.rect = self.surf.get_rect( center = ( random.randint(SCREEN_WIDTH + 20, SCREEN_WIDTH + 100), random.randint(0, SCREEN_HEIGHT) ) ) self.mask = pygame.mask.from_surface(self.surf) self.speed = random.randint(5, 15) # Move the sprite based on speed # Remove the sprite when it passes the left edge of the screen def update(self): self.rect.move_ip(-self.speed, 0) if self.rect.right < 0: self.kill() def get_score(self): return orb_score[self.type] def get_type(self): return self.type # Cloud Class class Cloud(pygame.sprite.Sprite): def __init__(self): super(Cloud, self).__init__() self.type = random.randint(0, len(cloud_imgs) - 1) self.surf = pygame.image.load(cloud_imgs[self.type]).convert() self.surf.set_colorkey(COLOR_WHITE, RLEACCEL) # The starting position is randomly generated self.rect = self.surf.get_rect( center = ( random.randint(SCREEN_WIDTH + 20, SCREEN_WIDTH + 100), random.randint(0, SCREEN_HEIGHT) ) ) self.speed = random.randint(2, 7) # Move the cloud based on constant speed # Remove the cloud when it passes the left edge of the screen def update(self): self.rect.move_ip(-self.speed, 0) if self.rect.right < 0: self.kill() # Score Class class Score(object): def __init__(self): self.font = pygame.font.SysFont("Arial", 25) self.total = 0 self.color = COLOR_BLACK self.text = None def update(self): self.text = self.font.render("Score: {0}".format(self.total), 1, self.color) def add(self, amount): self.total += amount if self.total < 0: self.total = 0 def blit(self, surf): surf.blit(self.text, ((SCREEN_WIDTH / 2) - (self.text.get_width() / 2), self.text.get_height())) #------------------------------ # Functions #------------------------------ # Plays the intro sequence #def intro(): # TODO play intro sequence # The start menu loop #def 
start_menu(): # TODO Draw game start menu # The pause menu loop def pause_menu(): # Game is paused paused = True # Return value to tell the game whether it should continue running running = True # Play the menu music menu_music.play(loops=-1) # Set up the menu font menu_font = pygame.font.SysFont('Arial', 25) # Draw the menu border_width = SCREEN_WIDTH / 4 border_height = SCREEN_HEIGHT / 3 window_width = border_width - 4 window_height = border_height - 4 # Draw the window border border = pygame.draw.rect(screen, COLOR_BLACK, pygame.Rect((((SCREEN_WIDTH / 2) - (border_width / 2)), ((SCREEN_HEIGHT / 2) - (border_height / 2))), (border_width, border_height)), border_radius=8) # Draw the window window = pygame.draw.rect(screen, COLOR_GRAY, pygame.Rect((((SCREEN_WIDTH / 2) - (window_width / 2)), ((SCREEN_HEIGHT / 2) - (window_height / 2))), (window_width, window_height)), border_radius=8) # Draw the menu text paused_text = menu_font.render('PAUSED', True, COLOR_BLACK) text_height = paused_text.get_height() screen.blit(paused_text, (window.left + ((window.width - paused_text.get_width()) / 2), window.top + (text_height / 2))) # Initialize the menu text resume_text = menu_font.render('RESUME', True, COLOR_WHITE) resume_text_hl = menu_font.render('RESUME', True, COLOR_YELLOW) restart_text = menu_font.render('RESTART', True, COLOR_WHITE) restart_text_hl = menu_font.render('RESTART', True, COLOR_YELLOW) options_text = menu_font.render('OPTIONS', True, COLOR_WHITE) options_text_hl = menu_font.render('OPTIONS', True, COLOR_YELLOW) quit_text = menu_font.render('QUIT', True, COLOR_WHITE) quit_text_hl = menu_font.render('QUIT', True, COLOR_YELLOW) # Array of options (drawn from bottom up) menu_options = [quit_text, options_text, restart_text, resume_text] menu_options_hl = [quit_text_hl, options_text_hl, restart_text_hl, resume_text_hl] menu_index = len(menu_options) - 1 while paused: # Process all events in the event queue for event in pygame.event.get(): # Did the user hit a key? if event.type == KEYDOWN: # Handle ESC keypress if event.key == K_ESCAPE: paused = False # Handle UP keypress elif event.key == K_UP: if menu_index < (len(menu_options) - 1): menu_index += 1 else: menu_index = 0 ding_sound.play() # Handle DOWN keypress elif event.key == K_DOWN: if menu_index > 0: menu_index -= 1 else: menu_index = len(menu_options) - 1 ding_sound.play() # Handle ENTER|RETURN keypress elif ((event.key == K_KP_ENTER) or (event.key == K_RETURN)): if menu_index == 0: # QUIT case paused = False running = False elif menu_index == 1: # TODO: OPTIONS case #options_menu() paused = False # UNDO elif menu_index == 2: # TODO: RESTART case paused = False elif menu_index == 3: # RESUME case paused = False # Pause quick bad_sound.play() time.sleep(0.5) # Did the user close the window? elif event.type == pygame.QUIT: paused = False running = False # Update the highlighted text based on menu_index i = 0 for opt in menu_options: if i == menu_index: txt = menu_options_hl[i] else: txt = menu_options[i] # Draw the text to the screen screen.blit(txt, (window.left + ((window.width - txt.get_width()) / 2), window.bottom - ((i + 1) * text_height) - ((i + 1) * (text_height / 2)))) # Increment the index i += 1 # Flip (redraw) the display pygame.display.flip() # Fade the music out menu_music.fadeout(1) # Return whether the game should keep running return running # The options menu loop #def options_menu(): # TODO Draw game options menu # The
#!/usr/bin/env python # coding: utf-8 # The Receiver Operating Characteristic "ROC" illustrates the performance of the binary classifier by plotting the false alarm probability ($P_{FA}$) on the horizontal axis and the detection probability ($P_D$) on the vertical axis. The area under the ROC-curve (AUC) is used to provide a single-figure quantification of the performance of the binary classifier based on the ROC. This article provides an interpretation of the AUC and connects the AUC to the Wilcoxon–Mann–Whitney statistic, which is a nonparametric test to compare two populations. This derivation is pretty hard to find but the connection between the AUC and the Wilcoxon–Mann–Whitney test is important insofar as it provides some perspective on the seemingly arbitrary use of AUC to quantify the performance of the binary classifier. # # Let's proceed with some definitions and some notation. A binary classifier can make two kinds of mistakes, usually and unhelpfully called Type I and Type II errors. The *detection probability* is # # $$P_D= \mathbb{P}(1|x \in C_1)$$ # # which says that given a member of the $C_1$ class, this is the probability of the binary classifier correctly classifying $x$ as of that class. The *false alarm probability* is # # $$P_{FA}= \mathbb{P}(1|x \in C_0)$$ # # which is the probability that an element of the $C_0$ class will incorrectly be classified as an element of the $C_1$ class. Binary classifiers usually work by comparing the measurement $x$ to a fixed threshold, $c$, to assess class membership. We can rewrite the above two definitions using $c$ as follows: # # $$P_D= \mathbb{P}(x > c|x \in C_1)$$ # # $$P_{FA}= \mathbb{P}(x > c|x \in C_0)$$ # # Thus, the ROC-curve is really a plot of the contour # # $$ (P_{FA}(c),P_D(c)) $$ # # so that drawing the curve actually means changing the value of the threshold, $c$. As a concrete example, we take $f(x|C_0) = \mathcal{N}(0,1)$ and $f(x|C_1) = \mathcal{N}(2,1)$ as the two respective probability densities of $C_0$ and $C_1$. The following code constructs a ROC-curve for this situation. # # [38] import numpy as np import matplotlib.pyplot as plt from scipy import stats # [37] f0 = stats.norm(0, 1) f1 = stats.norm(2, 1) fig, ax = plt.subplots() xi = np.linspace(-2, 5, 100) ax.plot(xi, f0.pdf(xi), label=r'$f(x|C_0)$') ax.plot(xi, f1.pdf(xi), label=r'$f(x|C_1)$') ax.legend(fontsize=16, loc=(1, 0)) ax.set_xlabel(r'$x$', fontsize=18) ax.vlines(0, 0, ax.axis()[-1] * 1.1, linestyles='--', lw=3.) ax.fill_between(xi, f1.pdf(xi), where=xi > 0, alpha=.3, color='g') ax.fill_between(xi, f0.pdf(xi), where=xi > 0, alpha=.3, color='b') # In the above figure, the blue shaded area is the false-alarm probability and the green shaded area is the probability of detection. These two values are what is plotted on the ROC. The dotted vertical line indicates the threshold value $c$. # [58] def plot_roc_interact(c=0): xi = np.linspace(-3, 5, 100) fig, axs = plt.subplots(1, 2) fig.set_size_inches((10, 3)) ax = axs[0] ax.plot(xi, f0.pdf(xi), label=r'$f(x|C_0)$') ax.plot(xi, f1.pdf(xi), label=r'$f(x|C_1)$') ax.set_xlabel(r'$x$', fontsize=18) ax.vlines(c, 0, ax.axis()[-1] * 1.1, linestyles='--', lw=3.) ax.fill_between(xi, f1.pdf(xi), where=xi > c, alpha=.3, color='g') ax.fill_between(xi, f0.pdf(xi), where=xi > c, alpha=.3, color='b') ax.axis(xmin=-3, xmax=5) crange = np.linspace(-3, 5, 50) ax = axs[1] ax.plot(1 - f0.cdf(crange), 1 - f1.cdf(crange)) ax.plot(1 - f0.cdf(c), 1 - f1.cdf(c), 'o', ms=15.)
ax.set_xlabel('False-alarm probability') ax.set_ylabel('Detection probability') # [59] #interact(plot_roc_interact, c=(-3, 5, .05)) # If you drag the slider in the interactive figure above, you will see how the colored areas shown on the left frame correspond to movement along the ROC on the right. Better binary classifiers have ROC curves that reach up to the upper left corner because these are the points that correspond to very high detection probabilities at a very low false alarm probability. A test that is no better than guessing would just be a diagonal line on the ROC chart. This would correspond to a 100% overlap between the two density functions. You can try this for yourself by changing the values of the respective means of the two density functions to see what happens when they overlap more (or less). # The AUC is independent of the particular threshold value because the ROC-curve is drawn by sweeping over this value. This means that the AUC is indirectly *integrated* over the threshold values. However, the computation of the AUC is based *explicitly* on the $P_{FA}$ as in the following, # # $$ AUC = \int P_D(P_{FA})dP_{FA} $$ # # which doesn't give us a lot of room for interpretation. # We can start to break this up by expanding # # $$ P_D(c) = 1-F_1(c) $$ # # where $F_1$ is the cumulative distribution function for $C_1$, and analogously, for $C_0$, we have the following # # $$ P_{FA}(c) = 1-F_0(c) $$ # # Let's start by fixing a particular $c^*$. This corresponds to a particular $P_{FA}(c^*)$. In words, this corresponds to the probability of a member of the $C_0$ class such that $x_0>c^*$, or, equivalently $\mathbb{P}(x_0>c^*|C_0)$. Note that I have introduced a subscript here to emphasize that $x_0\in C_0$. By the same reasoning, we have $P_D(c^*) =\mathbb{P}(x_1>c^*|C_1) $. # # Now, in terms of the AUC integral, we have to reference $P_D$ through the $P_{FA}$. The trick is to choose $c^*$ distributed as $F_0$ as in $c^*\sim F_0$. # # # $$P_D ( c^*) = \mathbb{P}(x_1 >c^*|C_1) $$ # # Weirdly enough, this makes $P_D$ a random variable in its own right, with corresponding expectation as # # $$ \mathbb{E}(P_D) =\int P_D dP_{FA} = AUC $$ # # Now, we finally have an interpretation. The AUC is the probability that an element $ x_1 \in C_1 $ is more likely to be assigned to class $C_1$ than an element drawn from $C_0$ is. This is another way of saying that $1-F_1(t) > 1-F_0(t)$ for all $ t $ (i.e. *stochastically larger*). # ## Wilcoxon–Mann–Whitney test # The Wilcoxon–Mann–Whitney test (AKA Mann–Whitney U-test) is a nonparametric method to test whether or not samples derive from two separate distributions. The basic idea is that if there is no difference between the two categories, then combining them into one big set and then computing the statistic (or any statistic, really) as a permutation of the larger set should be no different. In other words, if there is no difference, then combining the data and pretending that the actual observed data is just one permutation of the mixture should be statistically indistinguishable. # # Suppose we need to compare two populations using the median, mean, or some other location estimator. In terms of the cumulative distribution functions for the two populations, for $ H_0 $ we have the following: # # $$ H_0: F_X(t) = F_Y(t) $$ # # Namely, the data are drawn from the same underlying distribution. The alternative hypothesis is that $ F_X(t) \lt F_Y(t) $ for all $ t $.
For example, this could happen if one of the distributions is shifted with respect to the other. We have two labeled independent samples $ \lbrace X_i \rbrace_{i=1}^n$ and $ \lbrace Y_i \rbrace_{i=1}^m$. Note that the sample sizes can be different (i.e. $ m\neq n $). # # The test works by collecting all the samples into one big set and then ranking the samples within the big set. The test statistic $ U $ is the sum of the ranks (taken within the big set) of the $X$-samples. The idea is that when $U$ is small, the $X$-variables have lower ranks (i.e. are generally smaller than the $Y$-variables). This is evidence that the $ X $-distribution is shifted to the left of the $ Y $-distribution (i.e. $ F_X(t) < F_Y(t) $, stochastically). # # The following code shows an example of this for the two distributions considered above. # [70] print('p-value ', stats.mannwhitneyu(f1.rvs(30), f0.rvs(30), alternative='two-sided')[1]) # Because the computed p-value is so small, we can reject the null hypothesis that the distributions $F_X(t)$ and $ F_Y(t) $ are stochastically equal. This is good because we constructed them to be different! # ## Connection to AUC # # Writing the U-statistic this way, # # $$ \hat{\theta}_{XY} = \frac{1}{m n} \sum_{i=1}^m \sum_{j=1}^n \mathbb{I}(Y_j>X_i) $$ # # where $ \mathbb{I} $ is the indicator function, shows that the statistic (for the discrete case) is estimating the probability that $Y$ is stochastically larger than $ X $. Thus, this correspondence means that the value of this (in the large
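# As a quick numerical sketch of this correspondence (not from the original article), the AUC obtained by integrating $P_D$ against $P_{FA}$ over a threshold sweep and the U-statistic estimate $\hat{\theta}_{XY}$ computed from samples should both land near $\Phi(\sqrt{2}) \approx 0.92$ for the $\mathcal{N}(0,1)$ versus $\mathcal{N}(2,1)$ densities used above.
import numpy as np
from scipy import stats

f0, f1 = stats.norm(0, 1), stats.norm(2, 1)   # same densities as above

# AUC by trapezoidal integration of P_D against P_FA over a sweep of thresholds
c = np.linspace(-6, 8, 2001)
p_fa, p_d = 1 - f0.cdf(c), 1 - f1.cdf(c)      # both decrease as c increases
auc = np.sum(0.5 * (p_d[:-1] + p_d[1:]) * (p_fa[:-1] - p_fa[1:]))

# U-statistic estimate of P(x1 > x0) from independent samples
x0, x1 = f0.rvs(5000, random_state=1), f1.rvs(5000, random_state=2)
theta_hat = np.mean(x1[:, None] > x0[None, :])

print(auc, theta_hat)   # both close to stats.norm.cdf(np.sqrt(2)) ~ 0.921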
'M05872', 'M05879', 'M0589', 'M059', 'M0600', 'M06011', 'M06012', 'M06019', 'M06021', 'M06022', 'M06029', 'M06031', 'M06032', 'M06039', 'M06041', 'M06042', 'M06049', 'M06051', 'M06052', 'M06059', 'M06061', 'M06062', 'M06069', 'M06071', 'M06072', 'M06079', 'M0608', 'M0609', 'M061', 'M0620', 'M06211', 'M06212', 'M06219', 'M06221', 'M06222', 'M06229', 'M06231', 'M06232', 'M06239', 'M06241', 'M06242', 'M06249', 'M06251', 'M06252', 'M06259', 'M06261', 'M06262', 'M06269', 'M06271', 'M06272', 'M06279', 'M0628', 'M0629', 'M0630', 'M06311', 'M06312', 'M06319', 'M06321', 'M06322', 'M06329', 'M06331', 'M06332', 'M06339', 'M06341', 'M06342', 'M06349', 'M06351', 'M06352', 'M06359', 'M06361', 'M06362', 'M06369', 'M06371', 'M06372', 'M06379', 'M0638', 'M0639', 'M0680', 'M06811', 'M06812', 'M06819', 'M06821', 'M06822', 'M06829', 'M06831', 'M06832', 'M06839', 'M06841', 'M06842', 'M06849', 'M06851', 'M06852', 'M06859', 'M06861', 'M06862', 'M06869', 'M06871', 'M06872', 'M06879', 'M0688', 'M0689', 'M069' } ICD9CM = { '7140', '7141', '7142', '71481' } SNOMEDCT = { '1073711000119105', '1073721000119103', '1073731000119100', '1073791000119101', '1073801000119100', '1073811000119102', '11055151000119108', '143441000119108', '201764007', '201766009', '201767000', '201768005', '201769002', '201770001', '201771002', '201772009', '201773004', '201774005', '201775006', '201776007', '201777003', '201778008', '201779000', '201780002', '201781003', '201783000', '201784006', '201785007', '201791009', '239791005', '239792003', '239793008', '239795001', '239941000', '239943002', '287006005', '287007001', '287008006', '28880005', '308143008', '319841000119107', '398640008', '399923009', '400054000', '402433007', '427770001', '429192004', '433228003', '52661003', '57160007', '69896004', '735599007', '735600005', '7607008', '86219005' } class Rubella(ValueSet): """ **Clinical Focus:** This value set contains concepts that represent rubella infections. **Data Element Scope:** This value set may use the Quality Data Model (QDM) category related to Diagnosis. **Inclusion Criteria:** Includes only relevant concepts associated with all associated rubella infections. This is a grouping of ICD-10-CM and SNOMED CT codes. **Exclusion Criteria:** No exclusions. """ OID = '2.16.840.1.113883.3.464.1003.110.12.1037' VALUE_SET_NAME = 'Rubella' EXPANSION_VERSION = 'eCQM Update 2020-05-07' ICD10CM = { 'B0600', 'B0601', 'B0602', 'B0609', 'B0681', 'B0682', 'B0689', 'B069' } SNOMEDCT = { '10082001', '10759761000119100', '1092361000119109', '111867004', '128191000', '13225007', '161421005', '165792000', '186567003', '186570004', '192689006', '19431000', '231985001', '232312000', '240485004', '253227001', '36653000', '406112006', '406113001', '51490003', '64190005', '79303006', '84611003' } class SchizophreniaOrPsychoticDisorder(ValueSet): """ **Clinical Focus:** This value set contains concepts that represent a diagnosis of all types of schizophrenia and schizoaffective disorders. **Data Element Scope:** This value set may use the Quality Data Model (QDM) category related to Diagnosis. **Inclusion Criteria:** Includes only relevant concepts associated with identifying all types of schizophrenia and schizoaffective disorders. This is a grouping of ICD-9-CM, ICD-10-CM and SNOMED CT concepts. **Exclusion Criteria:** No exclusions. 
""" OID = '2.16.840.1.113883.3.464.1003.105.12.1104' VALUE_SET_NAME = 'Schizophrenia or Psychotic Disorder' EXPANSION_VERSION = 'eCQM Update 2020-05-07' ICD10CM = { 'F200', 'F201', 'F202', 'F203', 'F205', 'F2081', 'F2089', 'F209', 'F21', 'F23', 'F250', 'F251', 'F258', 'F259', 'F28', 'F29' } ICD9CM = { '29500', '29501', '29502', '29503', '29504', '29505', '29510', '29511', '29512', '29513', '29514', '29515', '29520', '29521', '29522', '29523', '29524', '29525', '29530', '29531', '29532', '29533', '29534', '29535', '29540', '29541', '29542', '29543', '29544', '29545', '29550', '29551', '29552', '29553', '29554', '29555', '29560', '29561', '29562', '29563', '29564', '29565', '29570', '29571', '29572', '29573', '29574', '29575', '29580', '29581', '29582', '29583', '29584', '29585', '29590', '29591', '29592', '29593', '29594', '29595', '2980', '2981', '2984', '2988', '2989' } SNOMEDCT = { '111482003', '111483008', '111484002', '12939007', '14291003', '16990005', '191526005', '191527001', '191530008', '191531007', '191536002', '191537006', '191538001', '191539009', '191540006', '191542003', '191547009', '191548004', '191554003', '191555002', '191559008', '191561004', '191562006', '191563001', '191564007', '191567000', '191569002', '191570001', '191571002', '191572009', '191574005', '191577003', '191680007', '231437006', '231489001', '247804008', '26025008', '268617001', '268624000', '270901009', '271428004', '27387000', '274952002', '278853003', '29599000', '30336007', '31027006', '31373002', '31658008', '35218008', '35252006', '36158005', '38368003', '39610001', '416340002', '42868002', '441704009', '441833000', '4926007', '51133006', '5464005', '55736003', '58214004', '63181006', '64905009', '68890003', '68995007', '7025000', '70814008', '71103003', '76566000', '79204003', '79866005', '83746006', '84760002', '85861002', '88975006' } class ScleritisAndEpiscleritis(ValueSet): """ **Clinical Focus:** This value set contains concepts that represent a diagnosis of scleritis and episcleritis. **Data Element Scope:** This value set may use the Quality Data Model (QDM) category related to Diagnosis. **Inclusion Criteria:** Includes only relevant concepts associated with a diagnosis of specific types of scleritis or episcleritis. **Exclusion Criteria:** Excludes concepts that pertain to 'unspecified eye.' """ OID = '2.16.840.1.113883.3.526.3.1481' VALUE_SET_NAME = 'Scleritis and Episcleritis' EXPANSION_VERSION = 'eCQM Update 2020-05-07' ICD10CM = { 'A1851', 'H15021', 'H15022', 'H15023', 'H15031', 'H15032', 'H15033', 'H15041', 'H15042', 'H15043', 'H15051', 'H15052', 'H15053', 'H15091', 'H15092', 'H15093' } SNOMEDCT = { '231873005', '231876002', '26664005', '267660007', '27886001', '31166000', '314549003', '410578007', '416879000', '417290008', '42574005', '50675000', '59165007', '63454000', '70558001', '77522006', '78370002', '815008', '84288007', '91612009', '95195003', '95680001', '95795006', '95796007', '95797003' } class SeparationOfRetinalLayers(ValueSet): """ **Clinical Focus:** This value set contains concepts that represent a diagnosis of separation of the layers of the retina. **Data Element Scope:** This value set may use the Quality Data Model (QDM) category related to Diagnosis. **Inclusion Criteria:** Includes only relevant concepts associated with a diagnosis of serous chorioretinopathy and types of retinal detachment. **Exclusion Criteria:** Excludes concepts that pertain to 'unspecified eye.' 
""" OID = '2.16.840.1.113883.3.526.3.1482' VALUE_SET_NAME = 'Separation of Retinal Layers' EXPANSION_VERSION = 'eCQM Update 2020-05-07' ICD10CM = { 'H35711', 'H35712', 'H35713', 'H35721', 'H35722', 'H35723', 'H35731', 'H35732', 'H35733' } SNOMEDCT = { '193319001', '193320007', '193322004', '193323009', '193324003', '193325002', '193326001', '193327005', '19620000', '232004004', '232008001', '232009009', '232010004', '232011000', '232012007', '232015009', '232023006', '232034009', '247165009', '312923002', '312924008', '312947009', '312956001', '312957005', '314006008', '314007004', '314008009', '314009001', '34711008', '3598000', '38579007', '38599001', '4178006', '42059000', '51987004', '56202001', '7219007' } class SevereCombinedImmunodeficiency(ValueSet): """ **Clinical Focus:** This value set contains concepts that represent diagnoses that indicate a severe combined immunodeficiency including both genetic and other causes. **Data Element Scope:** This value set may use the Quality Data Model (QDM) datatype related to Diagnosis. **Inclusion Criteria:** Includes only relevant concepts associated with diagnoses that indicate the presence of a severe combined immunodeficiency including genetic conditions. **Exclusion Criteria:** No exclusions. """ OID = '2.16.840.1.113883.3.464.1003.120.12.1007' VALUE_SET_NAME = 'Severe Combined Immunodeficiency' EXPANSION_VERSION = 'eCQM Update 2020-05-07' ICD10CM = { 'D810', 'D811', 'D812', 'D819' } SNOMEDCT = { '111584000', '111587007', '190993005', '190996002', '190997006', '190998001', '191001007', '191002000', '203592006', '22406001', '234570002', '234571003', '31323000', '3439009', '350353007', '351287008', '362993009', '36980009', '44940001', '45390000', '49555001', '55602000', '715982006', '716378008', '716871006', '718107000', '71904008', '720345008', '720853005', '720986005', '721977007', '722067005', '724177005', '724361001', '725135004', '725136003', '725290000' } class StableAndUnstableAngina(ValueSet): """ **Clinical Focus:** This value set contains concepts that represent the diagnosis of angina. **Data Element Scope:** This value set may use the Quality Data Model (QDM) category related to Diagnosis. **Inclusion Criteria:** Includes only relevant concepts associated with stable and unstable angina, pre and post infarction angina, angina decubitus, intermediate coronary syndrome, other unspecified angina or sequelae of myocardial infarction, and other forms of angina pectoris. **Exclusion Criteria:** No exclusions. """ OID = '2.16.840.1.113762.1.4.1047.47' VALUE_SET_NAME = 'Stable and Unstable Angina' EXPANSION_VERSION = 'eCQM Update 2020-05-07' ICD10CM = { 'I200', 'I201', 'I208', 'I209', 'I237' } ICD9CM = { '4110', '4111', '4130', '4139', '42979' } SNOMEDCT = { '194828000', '233819005', '233821000', '314116003', '4557003', '59021001' } class StatusPostLeftMastectomy(ValueSet): """ **Clinical Focus:** This value set contains concepts that represent a history of a left mastectomy. **Data Element Scope:** This value set may use the Quality Data Model (QDM) category related to Diagnosis. **Inclusion Criteria:** Includes only relevant concepts associated with patients that have a history of a left mastectomy. This is a grouping of SNOMED CT and ICD-10-CM codes. **Exclusion Criteria:** Excludes codes that indicate a right or bilateral mastectomy or are unspecified. 
""" OID = '2.16.840.1.113883.3.464.1003.198.12.1069' VALUE_SET_NAME = 'Status Post Left Mastectomy' EXPANSION_VERSION = 'eCQM Update 2020-05-07' ICD10CM = { 'Z9012' } SNOMEDCT = { '137671000119105', '429009003' } class StatusPostRightMastectomy(ValueSet): """ **Clinical Focus:** This value set contains concepts that represent a history of a right mastectomy. **Data Element Scope:** This value set may use the Quality Data Model (QDM) category related to Diagnosis. **Inclusion Criteria:** Includes only relevant concepts associated with patients that have a history of a right mastectomy. This is a grouping of SNOMED CT and ICD-10-CM codes. **Exclusion Criteria:** Excludes codes that indicate a left or bilateral mastectomy or are unspecified. """ OID = '2.16.840.1.113883.3.464.1003.198.12.1070' VALUE_SET_NAME = 'Status Post Right Mastectomy' EXPANSION_VERSION = 'eCQM Update 2020-05-07' ICD10CM = { 'Z9011' } SNOMEDCT = { '137681000119108', '429242008' } class SubstanceAbuse(ValueSet): """ **Clinical Focus:** This value set contains concepts that represent substance abuse. **Data Element Scope:** This value set may use the Quality Data Model (QDM) attribute Principal Diagnosis. **Inclusion Criteria:** Includes only relevant concepts associated with diagnosis codes that identify substance abuse. This is a
<reponame>MacHu-GWU/cottonformation-project # -*- coding: utf-8 -*- """ This module implements the core component CloudFormation Template. Many black magic features are provided. """ import json import attr import typing from collections import OrderedDict from toposort import toposort from .model import ( _Addable, Parameter, Resource, Output, Rule, Mapping, Condition, Transform, ResourceGroup, Tag, TypeHint, get_id, get_key_value_dict, remove_id_and_empty, serialize, _class_type_to_attr_mapper, ) from .constant import MetaData, CloudFomation from .config import CtfConfig from ..res.cloudformation import Stack from .._version import __version__ class AWSObjectLogicIdConlictError(Exception): pass class AWSObjectNotExistsError(Exception): @classmethod def make(cls, obj_type: str, logic_id: str): msg = f"Template.{obj_type} doesn't have logic id = '{logic_id}'" return cls(msg) @attr.s class Template: """ .. warning:: Don't ever directly edit the Template.Parameters, Template.Resources, Template.Outputs inplace. Please use the Template.add, Template.remove api. Because there's a lot of logic been handled to maintain the state of the inter relationship. Reference: - Template anatomy: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html """ AWSTemplateFormatVersion: str = attr.ib(default="2010-09-09") Description: str = attr.ib(default="No description for this template") Metadata: dict = attr.ib(factory=OrderedDict) Parameters: typing.Dict[str, Parameter] = attr.ib(factory=OrderedDict) Rules: typing.Dict[str, Rule] = attr.ib(factory=OrderedDict) Mappings: typing.Dict[str, Mapping] = attr.ib(factory=OrderedDict) Conditions: typing.Dict[str, Condition] = attr.ib(factory=OrderedDict) Resources: typing.Dict[str, Resource] = attr.ib(factory=OrderedDict) Outputs: typing.Dict[str, Output] = attr.ib(factory=OrderedDict) Transform: typing.List['Transform'] = attr.ib(factory=list) NestedStack: typing.Dict[str, 'Template'] = attr.ib(factory=OrderedDict) Groups: typing.Dict[str, 'ResourceGroup'] = attr.ib(factory=OrderedDict) _deps_data_need_build_flag: bool = attr.ib(default=True) _deps_on_data_cache: typing.Dict[str, typing.Set[str]] = attr.ib(factory=OrderedDict) _deps_by_data_cache: typing.Dict[str, typing.Set[str]] = attr.ib(factory=OrderedDict) _deps_sort_need_build_flag: bool = attr.ib(default=True) _deps_sort_cache: typing.Dict[str, int] = attr.ib(factory=OrderedDict) @property def n_parameter(self): """ Return number of Parameters declared. """ return len(self.Parameters) @property def n_resource(self): """ Return number of Resources declared. """ return len(self.Resources) @property def n_output(self): """ Return number of Outputs declared. """ return len(self.Outputs) @property def n_rule(self): """ Return number of Rules declared. """ return len(self.Rules) @property def n_mapping(self): """ Return number of Mappings declared. """ return len(self.Mappings) @property def n_condition(self): """ Return number of Conditions declared. """ return len(self.Conditions) @property def n_transform(self): """ Return number of Transform declared. """ return len(self.Transform) @property def n_named_object(self): """ Return number of named object declared in this template. For example, Parameter, Resource, Output, Rule, Mapping, Condition are named object, because they have a logic id. 
""" return sum([ self.n_parameter, self.n_resource, self.n_output, self.n_rule, self.n_mapping, self.n_condition, ]) # handle the inter dependency relationship among Parameter, Mapping, # Condition, Resource, Output def _encode_depends_on(self, obj_list: typing.List[TypeHint.dependency_obj]) -> typing.Set[str]: """ In generic dependency resolver algorithm, we don't need object, we only need the gid string. This method can ensure return a list of gid. """ st = set() for obj in obj_list: if isinstance(obj, str): st.add(self.Resources[obj].gid) else: st.add(obj.gid) return st def _iterate_addable(self, include_resource_group: bool = False) -> typing.List[typing.Tuple[str, TypeHint.addable_obj]]: """ Iterate through all addable object (Parameter, Resource, Output, ...). """ l = list() for class_type, attr_name in _class_type_to_attr_mapper.items(): if (class_type == ResourceGroup.CLASS_TYPE): if include_resource_group is False: continue collection = getattr(self, attr_name) for obj in collection.values(): l.append((obj.gid, obj)) return l def _iterate_addable_keys(self, include_resource_group: bool = False) -> typing.List[str]: return [gid for gid, _ in self._iterate_addable(include_resource_group)] def _build_deps_data(self) -> typing.Tuple[typing.Dict[str, typing.Set[str]], typing.Dict[str, typing.Set[str]]]: deps_on_data = OrderedDict() deps_by_data = OrderedDict() for gid, _ in self._iterate_addable(include_resource_group=True): deps_on_data[gid] = set() deps_by_data[gid] = set() for gid, obj in self._iterate_addable(include_resource_group=True): deps_on_data[gid] = self._encode_depends_on(obj.DependsOn) for parent_gid in self._encode_depends_on(obj.DependsOn): deps_by_data[parent_gid].add(gid) return deps_on_data, deps_by_data @property def deps_on_data(self) -> typing.Dict[str, typing.Set[str]]: """ Depends on data is a dictionary structure. It shows the dependency relationship in this way (child depends on parent):: { child_id_1: {parent_id_11, parent_id_12, ...}, child_id_2: {parent_id_21, parent_id_22, ...}, ... } """ if self._deps_data_need_build_flag: deps_on_data, deps_by_data = self._build_deps_data() self._deps_on_data_cache = deps_on_data self._deps_by_data_cache = deps_by_data self._deps_data_need_build_flag = False self._deps_sort_need_build_flag = True return self._deps_on_data_cache @property def deps_by_data(self) -> typing.Dict[str, typing.Set[str]]: """ Depends on data is a dictionary structure. It shows the dependency relationship in this way (child depends on parent):: { parent_id_1: {child_id_11, child_id_12, ...}, parent_id_2: {child_id_21, child_id_22, ...}, ... } """ if self._deps_data_need_build_flag: deps_on_data, deps_by_data = self._build_deps_data() self._deps_on_data_cache = deps_on_data self._deps_by_data_cache = deps_by_data self._deps_data_need_build_flag = False self._deps_sort_need_build_flag = True return self._deps_by_data_cache @property def deps_sort(self) -> typing.Dict[str, int]: if self._deps_sort_need_build_flag: self._deps_sort_cache = OrderedDict() for ind, st in enumerate(toposort(self.deps_on_data)): st = list(st) st.sort() for v in st: self._deps_sort_cache[v] = ind self._deps_sort_need_build_flag = False return self._deps_sort_cache # --- handle AWS Object def add_one(self, obj: TypeHint.addable_obj, add_or_update: bool = False, add_or_ignore: bool = False) -> bool: """ Add single object to Template. If there is a existing object with the same logic id and no flag is passed, exception will be raised. 
:param obj: The object you add to template. :param add_or_update: if True, will overwrite other object with the same logic id :param add_or_ignore: if True, will ignore and pass if there's existing object with the same logic id :return: a boolean flag indicates that whether change is made. """ # validate argument if not isinstance(obj, _Addable): raise TypeError(f"You cannot add a {obj.__class__.__name__} object to template") if add_or_update and add_or_ignore: raise ValueError("Can't do add_or_update=True and add_or_ignore=True") obj: TypeHint.addable_obj # find values for future use collection: typing.Union[ typing.Dict[str, TypeHint.addable_obj], typing.List[Transform], ] = getattr( self, _class_type_to_attr_mapper[obj.CLASS_TYPE]) # add this object if obj.id in collection: # handle logic id conflict if add_or_update: collection[obj.id] = obj self._deps_data_need_build_flag = True return True elif add_or_ignore: return False else: # raise exception if isinstance(obj, Resource): type_name = Resource.__name__ else: type_name = obj.__class__.__name__ raise AWSObjectLogicIdConlictError( f"{type_name} logic id '{obj.id}' already exists!") else: collection[obj.id] = obj self._deps_data_need_build_flag = True return True def add(self, obj: TypeHint.addable_obj, _objects_to_update: typing.Dict[str, TypeHint.addable_obj] = None): """ Add a AWS object to the template. If the obj declared some dependency AWS Objects like Parameter, Mapping, Condition, it will also add those objects. If there's any logic id conflict include the root object or those dependency objects, the new object will overwrite the existing one. This method is atomic. In other word, either all objects succeed or non. """ if _objects_to_update is None: is_first_call = True _objects_to_update = OrderedDict() else: is_first_call = False _objects_to_update[obj.gid] = obj # add dependency objects if (obj.DependsOn is not None): for dep_obj in obj.DependsOn: if isinstance(dep_obj, str): if self.Resources: pass if dep_obj not in self.Resources: raise AWSObjectNotExistsError.make( obj_type=CloudFomation.Resources, logic_id=dep_obj, ) dep_obj = self.Resources[dep_obj] self.add(dep_obj, _objects_to_update=_objects_to_update) if is_first_call: # validate type before making the change for obj in _objects_to_update.values(): if not isinstance(obj, _Addable): raise TypeError(f"You cannot add a {obj.__class__.__name__} object to template") for obj in _objects_to_update.values(): self.add_one(obj, add_or_update=True) def _get_by_gid(self, gid: str) -> TypeHint.addable_obj: """ Get aws object by global id. """ class_type, logic_id = gid.split("--") collection = getattr(self, _class_type_to_attr_mapper[class_type]) return collection[logic_id] def remove_one(self, obj: TypeHint.addable_obj, ignore_not_exists: bool = False) -> bool: """ Remove single object from Template. If there is no a existing object with the same logic id and no flag is passed, exception will be raised. :param obj: The object you remove template. :param ignore_not_exists: if True, will ignore and pass if there's NO existing object with the same logic id :return: a boolean flag indicates that whether change is made. 
""" # validate argument if not isinstance(obj, _Addable): raise TypeError(f"You cannot remove a {obj.__class__.__name__} object from template") obj: TypeHint.addable_obj collection = getattr(self, _class_type_to_attr_mapper[obj.CLASS_TYPE]) # remove object if obj.id in collection: collection.pop(obj.id) self._deps_data_need_build_flag = True return True elif ignore_not_exists: return False else: raise AWSObjectNotExistsError.make( obj_type=obj.CLASS_TYPE, logic_id=obj.id) def remove(self, obj: TypeHint.addable_obj, _deps_by_data: OrderedDict = None, _objects_to_remove: typing.Dict[str, TypeHint.addable_obj] = None): """ Remove a AWS object from the template. If there are other objects depend on this object, it will also remove other objects. This method is atomic. In other word, either all objects succeed or non. """ if _objects_to_remove is None: is_first_call = True _objects_to_remove = OrderedDict() else: is_first_call = False if _deps_by_data is None: _deps_by_data = self.deps_by_data _objects_to_remove[obj.gid] = obj # handle other object depends on this for child_gid in self.deps_by_data.get(obj.gid, set()): try: child_obj = self._get_by_gid(child_gid) self.remove( child_obj, _deps_by_data=_deps_by_data, _objects_to_remove=_objects_to_remove, ) except KeyError: pass # if it is resource group, aws objects in the DependsOn list should # also be removed if isinstance(obj, ResourceGroup): for child_obj in obj.DependsOn: if (not isinstance(child_obj, ResourceGroup)) and (child_obj.gid not in _objects_to_remove): self.remove( child_obj, _deps_by_data=_deps_by_data, _objects_to_remove=_objects_to_remove, ) if is_first_call: # validate type before making the change for obj in _objects_to_remove.values():
from typing import Union, List, Optional from pyspark.sql.types import ( StructType, StructField, StringType, ArrayType, DateType, BooleanType, DataType, TimestampType, ) # This file is auto-generated by generate_schema so do not edit it manually # noinspection PyPep8Naming class ActivityDefinitionSchema: """ This resource allows for the definition of some activity to be performed, independent of a particular patient, practitioner, or other performance context. """ # noinspection PyDefaultArgument @staticmethod def get_schema( max_nesting_depth: Optional[int] = 6, nesting_depth: int = 0, nesting_list: List[str] = [], max_recursion_limit: Optional[int] = 2, include_extension: Optional[bool] = False, extension_fields: Optional[List[str]] = [ "valueBoolean", "valueCode", "valueDate", "valueDateTime", "valueDecimal", "valueId", "valueInteger", "valuePositiveInt", "valueString", "valueTime", "valueUnsignedInt", "valueUri", "valueUrl", ], extension_depth: int = 0, max_extension_depth: Optional[int] = 2, include_modifierExtension: Optional[bool] = False, ) -> Union[StructType, DataType]: """ This resource allows for the definition of some activity to be performed, independent of a particular patient, practitioner, or other performance context. resourceType: This is a ActivityDefinition resource id: The logical id of the resource, as used in the URL for the resource. Once assigned, this value never changes. meta: The metadata about the resource. This is content that is maintained by the infrastructure. Changes to the content might not always be associated with version changes to the resource. implicitRules: A reference to a set of rules that were followed when the resource was constructed, and which must be understood when processing the content. Often, this is a reference to an implementation guide that defines the special rules along with other profiles etc. language: The base language in which the resource is written. text: A human-readable narrative that contains a summary of the resource and can be used to represent the content of the resource to a human. The narrative need not encode all the structured data, but is required to contain sufficient detail to make it "clinically safe" for a human to just read the narrative. Resource definitions may define what content should be represented in the narrative to ensure clinical safety. contained: These resources do not have an independent existence apart from the resource that contains them - they cannot be identified independently, and nor can they have their own independent transaction scope. extension: May be used to represent additional information that is not part of the basic definition of the resource. To make the use of extensions safe and manageable, there is a strict set of governance applied to the definition and use of extensions. Though any implementer can define an extension, there is a set of requirements that SHALL be met as part of the definition of the extension. modifierExtension: May be used to represent additional information that is not part of the basic definition of the resource and that modifies the understanding of the element that contains it and/or the understanding of the containing element's descendants. Usually modifier elements provide negation or qualification. To make the use of extensions safe and manageable, there is a strict set of governance applied to the definition and use of extensions. 
Though any implementer is allowed to define an extension, there is a set of requirements that SHALL be met as part of the definition of the extension. Applications processing a resource are required to check for modifier extensions. Modifier extensions SHALL NOT change the meaning of any elements on Resource or DomainResource (including cannot change the meaning of modifierExtension itself). url: An absolute URI that is used to identify this activity definition when it is referenced in a specification, model, design or an instance; also called its canonical identifier. This SHOULD be globally unique and SHOULD be a literal address at which at which an authoritative instance of this activity definition is (or will be) published. This URL can be the target of a canonical reference. It SHALL remain the same when the activity definition is stored on different servers. identifier: A formal identifier that is used to identify this activity definition when it is represented in other formats, or referenced in a specification, model, design or an instance. version: The identifier that is used to identify this version of the activity definition when it is referenced in a specification, model, design or instance. This is an arbitrary value managed by the activity definition author and is not expected to be globally unique. For example, it might be a timestamp (e.g. yyyymmdd) if a managed version is not available. There is also no expectation that versions can be placed in a lexicographical sequence. To provide a version consistent with the Decision Support Service specification, use the format Major.Minor.Revision (e.g. 1.0.0). For more information on versioning knowledge assets, refer to the Decision Support Service specification. Note that a version is required for non-experimental active assets. name: A natural language name identifying the activity definition. This name should be usable as an identifier for the module by machine processing applications such as code generation. title: A short, descriptive, user-friendly title for the activity definition. subtitle: An explanatory or alternate title for the activity definition giving additional information about its content. status: The status of this activity definition. Enables tracking the life-cycle of the content. experimental: A Boolean value to indicate that this activity definition is authored for testing purposes (or education/evaluation/marketing) and is not intended to be used for genuine usage. subjectCodeableConcept: A code or group definition that describes the intended subject of the activity being defined. subjectReference: A code or group definition that describes the intended subject of the activity being defined. date: The date (and optionally time) when the activity definition was published. The date must change when the business version changes and it must change if the status code changes. In addition, it should change when the substantive content of the activity definition changes. publisher: The name of the organization or individual that published the activity definition. contact: Contact details to assist a user in finding and communicating with the publisher. description: A free text natural language description of the activity definition from a consumer's perspective. useContext: The content was developed with a focus and intent of supporting the contexts that are listed. These contexts may be general categories (gender, age, ...) or may be references to specific programs (insurance plans, studies, ...) 
and may be used to assist with indexing and searching for appropriate activity definition instances. jurisdiction: A legal or geographic region in which the activity definition is intended to be used. purpose: Explanation of why this activity definition is needed and why it has been designed as it has. usage: A detailed description of how the activity definition is used from a clinical perspective. copyright: A copyright statement relating to the activity definition and/or its contents. Copyright statements are generally legal restrictions on the use and publishing of the activity definition. approvalDate: The date on which the resource content was approved by the publisher. Approval happens once when the content is officially approved for usage. lastReviewDate: The date on which the resource content was last reviewed. Review happens periodically after approval but does not change the original approval date. effectivePeriod: The period during which the activity definition content was or is planned to be in active use. topic: Descriptive topics related to the content of the activity. Topics provide a high-level categorization of the activity that can be useful for filtering and searching. author: An individiual or organization primarily involved in the creation and maintenance of the content. editor: An individual or organization primarily responsible for internal coherence of the content. reviewer: An individual or organization primarily responsible for review of some aspect of the content. endorser: An individual or organization responsible for officially endorsing the content for use in some setting. relatedArtifact: Related artifacts such as additional documentation, justification, or bibliographic references. library: A reference to
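As a usage note: the field-by-field description above documents the Spark schema that get_schema builds for FHIR ActivityDefinition resources. The following is a hedged sketch of calling it; the SparkSession setup and the JSON path are illustrative assumptions and not part of the generated file, while the keyword arguments shown are taken from the method signature above.

# Hedged usage sketch: build the auto-generated schema and (optionally) apply it
# when reading ActivityDefinition resources exported as JSON.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").appName("fhir-schema-demo").getOrCreate()

schema = ActivityDefinitionSchema.get_schema(
    max_nesting_depth=6,        # cap on nested complex types
    max_recursion_limit=2,      # cap on recursive references
    include_extension=False,    # omit FHIR extension columns
)

# The path below is a placeholder; uncomment with a real export to use the schema.
# df = spark.read.schema(schema).json("/path/to/activity_definitions.json")
# df.printSchema()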
-9.874839418843413, '单体': -12.359746068631413, '军报': -12.359746068631413, '全社会': -11.261133779963304, '学成': -12.359746068631413, '槐': -12.359746068631413, '筑起': -12.359746068631413, '选': -10.973451707511522, '监听': -10.567986599403358, '引入': -10.567986599403358, '范本': -12.359746068631413, '教育史': -11.261133779963304, '阴': -9.3640137950774225, '南陵': -12.359746068631413, '倭寇': -12.359746068631413, '股权': -11.261133779963304, '漳江': -12.359746068631413, '菜市场': -11.261133779963304, '星巴克': -10.280304526951578, '亭': -11.261133779963304, '层面': -11.261133779963304, '转变': -10.750308156197313, '军情': -12.359746068631413, '巨大': -10.057160975637368, '猪流感': -12.359746068631413, '拍戏': -8.6461740019271058, '鸟': -10.973451707511522, '直升机': -10.750308156197313, '青瓦': -12.359746068631413, '外海': -10.413835919576099, '机务段': -11.261133779963304, '学术': -11.261133779963304, '口腔': -10.750308156197313, '责令': -12.359746068631413, '历史学家': -9.2687036152730968, '阅读': -12.359746068631413, '参谋': -10.750308156197313, '烟波浩渺': -12.359746068631413, '反政府': -8.0970661915900983, '督军': -10.413835919576099, '华丽': -10.973451707511522, '站稳': -11.666598888071467, '灭亡': -10.973451707511522, '加剧': -11.666598888071467, '客': -9.3152236309079903, '生根': -10.567986599403358, '律法': -12.359746068631413, '中路': -8.6220764503480449, '英利': -10.567986599403358, '释': -12.359746068631413, '发现': -7.1502599157899924, '农林': -11.666598888071467, '停车': -10.280304526951578, '金汤': -12.359746068631413, '灯会': -9.874839418843413, '舟': -11.666598888071467, '女记者': -10.973451707511522, '恐怖': -11.666598888071467, '开往': -10.057160975637368, '梨园镇': -11.261133779963304, '摔': -10.413835919576099, '抗击': -12.359746068631413, '买': -7.6963069745193469, '黎': -11.261133779963304, '观察': -9.5265327245751976, '肿瘤学': -10.973451707511522, '已知': -10.973451707511522, '进口车': -10.973451707511522, '万卷楼': -12.359746068631413, '卖主': -12.359746068631413, '治污': -9.5871573463916331, '再度': -9.6516958675292042, '高龄': -12.359746068631413, '遥望': -11.666598888071467, '镜报': -12.359746068631413, '三年期': -12.359746068631413, '竹沟': -10.750308156197313, '簋街': -12.359746068631413, '自助': -12.359746068631413, '镇': -8.5530835788610933, '工伤': -11.666598888071467, '.': -10.413835919576099, '试音': -12.359746068631413, '雷霆': -12.359746068631413, '犯': -10.413835919576099, '看房': -10.973451707511522, '房产': -10.057160975637368, '知名演员': -9.1016495306099312, '积攒': -12.359746068631413, '弋矶山': -12.359746068631413, '生育率': -11.666598888071467, '傲': -12.359746068631413, '洞': -11.666598888071467, '古田五龙': -12.359746068631413, '花椒': -11.666598888071467, '碑林': -12.359746068631413, '预': -12.359746068631413, '国中': -12.359746068631413, '刺激性': -11.666598888071467, '正中': -11.666598888071467, '梦工厂': -12.359746068631413, '名校': -9.5871573463916331, '石垣市': -10.057160975637368, '服务员': -11.666598888071467, '当枪使': -11.261133779963304, '歌曲': -9.7947967111698766, '热气球': -12.359746068631413, '遇险': -9.3640137950774225, '东北部': -7.764626218496824, '性学': -9.7947967111698766, '归仁': -11.666598888071467, '草地': -12.359746068631413, '暌违': -12.359746068631413, '领空': -9.6516958675292042, '国情': -10.750308156197313, '管乐队': -12.359746068631413, '旅游购物': -10.973451707511522, '汽车业': -12.359746068631413, '常规': -12.359746068631413, '环球': -12.359746068631413, '对手': -7.9652969139589747, '破题': -12.359746068631413, '厂方': -12.359746068631413, '客货': -12.359746068631413, '恰合': -12.359746068631413, '打响': -11.666598888071467, '木棉花': -11.666598888071467, '头顶': -12.359746068631413, '汇合处': -12.359746068631413, '车辆': -10.162521491295195, '战斗机': -10.280304526951578, '海纳': 
-12.359746068631413, '瓦斯': -12.359746068631413, '公民': -8.7221599089050272, '人行道': -10.567986599403358, '相互': -12.359746068631413, '弯': -11.261133779963304, '节能': -10.973451707511522, '多拉克哈': -12.359746068631413, '馆藏': -11.666598888071467, '盟军': -10.413835919576099, '海尚': -11.666598888071467, '可谓': -10.162521491295195, '请求': -10.057160975637368, '砖茶': -12.359746068631413, '拉图拉甘': -11.261133779963304, '尽管': -10.057160975637368, '艺术家': -9.4153070894649726, '发布会': -10.413835919576099, '峰': -10.973451707511522, '社区': -9.2242518527022632, '确立': -10.413835919576099, '环比': -12.359746068631413, '心态': -12.359746068631413, '小说界': -12.359746068631413, '浓郁': -11.666598888071467, '衣着': -12.359746068631413, '雷特福德': -12.359746068631413, '印第安人': -12.359746068631413, '北环路': -12.359746068631413, '竟': -11.666598888071467, '广仁寺': -11.666598888071467, '风景区': -11.261133779963304, '纬度': -11.261133779963304, '丽都': -12.359746068631413, '主权': -8.2993030580849947, '附近': -6.3509328831888183, '出演': -12.359746068631413, '第一产业': -11.261133779963304, '漩涡': -10.750308156197313, '市民卡': -12.359746068631413, '分裂主义': -12.359746068631413, '野外': -10.973451707511522, '扣留': -9.4693743107352493, '老年证': -11.666598888071467, '皋城': -12.359746068631413, '代': -10.280304526951578, '苏腊巴亚': -12.359746068631413, '平武': -12.359746068631413, '让步': -11.666598888071467, '耶路': -12.359746068631413, '选帝侯': -10.973451707511522, '基础教育': -12.359746068631413, '县': -9.1408702437632119, '打开': -10.280304526951578, '下发': -10.162521491295195, '显赫一时': -12.359746068631413, '县志': -9.874839418843413, '专用': -11.261133779963304, '村民': -6.5546110997149247, '福尔': -12.359746068631413, '领导干部': -10.280304526951578, '核爆炸': -10.973451707511522, '阿吾': -12.359746068631413, '巡游': -11.666598888071467, '男队': -10.973451707511522, '水泥': -11.261133779963304, '原矿': -12.359746068631413, '学者': -8.0970661915900983, '你': -10.567986599403358, '互联网': -9.4153070894649726, '别的': -11.666598888071467, '使': -10.413835919576099, '笼罩': -11.261133779963304, '忠': -12.359746068631413, '只需': -9.720688739016154, '所谓': -9.7947967111698766, '果蔬': -11.666598888071467, '十一届': -8.4477230632032683, '故事': -9.4693743107352493, '木瓜': -12.359746068631413, '般': -11.666598888071467, '新名片': -12.359746068631413, '快速路': -11.261133779963304, '靠近': -12.359746068631413, '转运': -11.666598888071467, '火焰': -11.666598888071467, '建堤': -12.359746068631413, '圈地': -12.359746068631413, '涟川郡': -11.261133779963304, '茶艺': -12.359746068631413, '实属': -11.666598888071467, '新航路': -12.359746068631413, '黄帝': -12.359746068631413, '相同': -9.2242518527022632, '商贾': -12.359746068631413, '配': -10.973451707511522, '见证': -12.359746068631413, '宏': -12.359746068631413, '科技': -8.4085023500499858, '母女': -12.359746068631413, '电路': -11.666598888071467, '水': -7.8938379499768301, '食用': -12.359746068631413, '魔王': -12.359746068631413, '起': -10.057160975637368, '加班加点': -12.359746068631413, '锁锁': -12.359746068631413, '游历': -11.666598888071467, '车务段': -11.666598888071467, '伙伴国': -12.359746068631413, '冰架': -12.359746068631413, '从没': -12.359746068631413, '肯通乡': -12.359746068631413, '国际': -7.3693134818526769, '春': -11.261133779963304, '动力': -11.666598888071467, '遗憾': -12.359746068631413, '海面': -9.2687036152730968, '斯普拉特利': -12.359746068631413, '生': -10.162521491295195, '分析': -9.874839418843413, '双拥': -12.359746068631413, '运管': -10.413835919576099, '男生': -9.1816922382834676, '紧接着': -12.359746068631413, '名': -9.1016495306099312, '迎新': -12.359746068631413, '进城': -11.261133779963304, '投放': -10.057160975637368, '情人': -11.261133779963304, '证实': 
-10.973451707511522, '革命': -9.4693743107352493, '村头': -11.666598888071467, '种苗': -12.359746068631413, '千湖': -10.973451707511522, '泉': -11.666598888071467, '样': -12.359746068631413, '打': -8.4885450577235222, '福佑': -12.359746068631413, '依旧': -9.4693743107352493, '打掉': -11.666598888071467, '摸索': -12.359746068631413, '帮忙': -11.666598888071467, '利用': -8.4477230632032683, '低阶煤': -12.359746068631413, '某市': -10.973451707511522, '行纪': -11.666598888071467, '观测': -11.261133779963304, '华南虎': -12.359746068631413, '夜游': -12.359746068631413, '演出': -8.3707620220671384, '挑': -11.666598888071467, '悲痛': -12.359746068631413, '墟日': -12.359746068631413, '山路': -11.666598888071467, '匹敌': -12.359746068631413, '侧': -9.4153070894649726, '弗雷特里克斯堡': -11.666598888071467, '无业': -12.359746068631413, '浪': -10.973451707511522, '拖后腿': -12.359746068631413, '怎么样': -12.359746068631413, '鹰击': -10.750308156197313, '国贸': -8.2488722044581024, '登台': -10.280304526951578, '瑶': -11.666598888071467, '华厦': -12.359746068631413, '摩托车': -10.973451707511522, '运动健将': -12.359746068631413, '暴力': -10.750308156197313, '马尾港': -12.359746068631413, '银行卡': -12.359746068631413, '对付': -11.666598888071467, '纤体': -12.359746068631413, '节度使': -12.359746068631413, '区长': -7.8824092541532069, '招宝山': -12.359746068631413, '西三环': -10.567986599403358, '入口处': -10.567986599403358, '蓝藻': -11.666598888071467, '获悉': -8.748828155987189, '秋千': -12.359746068631413, '地下水': -11.261133779963304, '科学界': -11.666598888071467, '汉南': -12.359746068631413, '落地生根': -11.261133779963304, '北回归线': -12.359746068631413, '震惊': -11.666598888071467, '连日来': -10.750308156197313, '东渡': -12.359746068631413, '上报': -11.666598888071467, '下延': -11.666598888071467, '检控': -12.359746068631413, '人流量': -12.359746068631413, '古谚': -11.666598888071467, '一起来': -12.359746068631413, '传出': -9.0275415584562104, '劳伦斯': -11.666598888071467, '虎骨酒': -11.666598888071467, '货运': -11.666598888071467, '各岛': -10.973451707511522, '中亭街': -12.359746068631413, '鹿野苑': -12.359746068631413, '高尔夫球': -12.359746068631413, '跨线桥': -12.359746068631413, '使者': -11.261133779963304, '邾城街': -10.567986599403358, '煮熟': -10.973451707511522, '面条': -11.261133779963304, '犹太': -11.666598888071467, '小戏': -12.359746068631413, '廉州': -12.359746068631413, '延续': -10.973451707511522, '岩石': -11.666598888071467, '迪斯尼': -12.359746068631413, '核技术': -12.359746068631413, '后方': -11.666598888071467, '雨山': -12.359746068631413, '督察': -12.359746068631413, '搭': -10.280304526951578, '着火': -11.666598888071467, '呆': -9.6516958675292042, '围挡': -12.359746068631413, '示威者': -10.973451707511522, '撞': -9.7947967111698766, '赞赏': -12.359746068631413, '气象专家': -12.359746068631413, '茶树菇': -12.359746068631413, '食用油': -11.666598888071467, '两江': -11.666598888071467, '中青旅': -11.666598888071467, '画作': -12.359746068631413, '水土流失': -11.666598888071467, '灵岩郡': -12.359746068631413, '扶桑社': -11.261133779963304, '定情': -12.359746068631413, '慢性病': -12.359746068631413, '虎哥': -12.359746068631413, '小小的': -12.359746068631413, '悬崖': -12.359746068631413, '整点': -12.359746068631413, '老兵': -9.9618507958330422, '裸泳': -11.666598888071467, '发明': -10.162521491295195, '海监船': -10.750308156197313, '本部': -10.973451707511522, '销声匿迹': -12.359746068631413, '资兴县': -10.750308156197313, '抓紧': -12.359746068631413, '诊治': -11.666598888071467, '有数': -12.359746068631413, '起步': -10.973451707511522, '功能': -11.261133779963304, '税费': -12.359746068631413, '知客': -12.359746068631413, '板仓': -11.261133779963304, '行政区划': -12.359746068631413, '逼近': -12.359746068631413, '返校节': -11.666598888071467, '居于': -10.973451707511522, '汽油': 
-12.359746068631413, '房价': -7.7953978771635777, '一行': -11.666598888071467, '蓝调': -12.359746068631413, '自叹': -8.7762271301753039, '热海市': -11.666598888071467, '剧场': -12.359746068631413, '近两年': -10.750308156197313, '直接': -9.720688739016154, '高气压': -12.359746068631413, '依法': -9.720688739016154, '洪峰': -12.359746068631413, '演艺圈': -9.0639092026270838, '农林牧渔': -12.359746068631413, '集会': -11.666598888071467, '禽流感': -11.666598888071467, '铜锣湾': -12.359746068631413, '女艺人': -10.750308156197313, '其他': -8.2326116835863221, '或': -8.2008629852717423, '消费税': -12.359746068631413, '高校': -8.598545952937851, '省政协': -11.261133779963304, '小情歌': -12.359746068631413, '卜筮': -11.261133779963304, '棚户': -12.359746068631413, '嘀嘀': -12.359746068631413, '编号': -12.359746068631413, '救治': -10.567986599403358, '农家乐': -10.413835919576099, '处在': -10.413835919576099, '太夫人': -10.973451707511522, '担当': -11.666598888071467, '推动': -9.3640137950774225, '计生科': -10.567986599403358, '白家': -12.359746068631413, '天后宫': -12.359746068631413, '吧友': -12.359746068631413, '起到': -10.750308156197313, '保健品': -12.359746068631413, '生态休闲': -11.666598888071467, '本科': -12.359746068631413, '品尝': -11.261133779963304, '死去': -11.666598888071467, '急救': -10.567986599403358, '医药': -10.162521491295195, '集训': -9.9618507958330422, '信息化': -11.666598888071467, '来到': -8.9924502386449401, '三孝口': -11.666598888071467, '大国': -10.750308156197313, '特批': -10.973451707511522, '长乐海蚌': -12.359746068631413, '音像': -12.359746068631413, '总攻击': -11.666598888071467, '数码城': -11.261133779963304, '总决赛': -10.750308156197313, '使臣': -10.750308156197313, '攀比': -12.359746068631413, '鱼': -10.413835919576099, '红卫': -11.666598888071467, '腊八粥': -11.666598888071467, '来京': -10.280304526951578, '玻璃': -10.567986599403358, '繁华': -9.4153070894649726, '裔': -8.9585486869692588, '资产': -11.666598888071467, '联片': -12.359746068631413, '工地': -9.9618507958330422, '月': -10.280304526951578, '制度': -11.261133779963304, '凯': -12.359746068631413, '刘氏': -12.359746068631413, '奇葩': -10.280304526951578, '洗脸': -11.261133779963304, '交往': -10.567986599403358, '火巷': -12.359746068631413, '支持': -9.3640137950774225, '民间舞': -12.359746068631413, '恰恰相反': -12.359746068631413, '驻扎': -8.8632385071649331, '金域': -10.057160975637368, '举': -11.666598888071467, '公共资源': -12.359746068631413, '调': -9.4153070894649726, '票房': -9.874839418843413, '匡时': -12.359746068631413, '短道': -11.666598888071467, '上上下下': -11.666598888071467, '宫古岛': -10.750308156197313, '一向': -10.567986599403358, '越秀': -12.359746068631413, '区区': -12.359746068631413, '城管': -7.9053987723779056, '讯': -9.0275415584562104, '主题': -10.973451707511522, '约': -7.9530268213671604, '项目区': -12.359746068631413, '穿': -10.413835919576099, '称赞': -12.359746068631413, '忘记': -12.359746068631413, '三洲': -12.359746068631413, '梨子': -10.973451707511522, '土瓜湾': -10.973451707511522, '失业保险': -11.261133779963304, '形状': -12.359746068631413, '让出': -11.666598888071467, '国台办': -10.280304526951578, '转移': -9.3640137950774225, '羁押': -10.280304526951578, '土溪站': -12.359746068631413, '联络员': -11.666598888071467, '妇': -12.359746068631413, '防御工事': -12.359746068631413, '闲暇': -10.413835919576099, '勘探': -10.973451707511522, '前郭县': -12.359746068631413, '东岗子': -12.359746068631413, '食品类': -12.359746068631413, '缺少': -11.666598888071467, '西段': -11.261133779963304, '各地': -7.0816314094008961, '发出': -8.6961844225017675, '钩沉': -10.973451707511522, '人数': -11.666598888071467, '浓雾': -12.359746068631413, '荔枝': -12.359746068631413, '调研': -8.2822086247256941, '凸显': -12.359746068631413, '销毁': -10.973451707511522, '盖': 
-10.750308156197313, '船型': -12.359746068631413, '合并': -11.261133779963304, '太空船': -12.359746068631413, '崇拜': -11.261133779963304, '宝利': -12.359746068631413, '河': -7.7153551694900404, '中石油': -10.567986599403358, '专利': -11.666598888071467, '土家族': -12.359746068631413, '抢夺': -10.973451707511522, '打麻将': -12.359746068631413, '西海岸': -9.5265327245751976, '演唱': -10.567986599403358, '共同': -7.2298473537083403, '用地': -12.359746068631413, '冷风': -12.359746068631413, '色情片': -12.359746068631413, '洗澡': -12.359746068631413, '湖滨': -11.666598888071467, '管线': -12.359746068631413, '清剿': -11.666598888071467, '划拨': -12.359746068631413, '前所未有': -12.359746068631413, '加上': -11.666598888071467, '京华城': -12.359746068631413, '原装': -12.359746068631413, '婚姻': -11.261133779963304, '大西南': -12.359746068631413, '极端主义': -11.666598888071467, '明胶': -12.359746068631413, '电机': -11.666598888071467, '最强音': -9.5265327245751976, '维族': -8.8940101658316877, '的确': -9.4153070894649726, '保健': -11.261133779963304, '并非': -10.413835919576099, '墨脱': -11.666598888071467, '所属': -9.9618507958330422, '湘湖': -11.666598888071467, '文坛': -9.9618507958330422, '冬泳': -11.261133779963304, '游客': -6.4328200426610032, '明明': -11.261133779963304, '坐牢': -12.359746068631413, '西北角': -11.261133779963304, '驮': -12.359746068631413, '作业区': -12.359746068631413, '影视界': -9.720688739016154, '去找': -11.666598888071467, '坂': -12.359746068631413, '声索国': -12.359746068631413, '识别': -7.8938379499768301, '长虹桥': -12.359746068631413, '早就': -10.973451707511522, '严重': -10.280304526951578, '解暑': -12.359746068631413, '仅': -7.7250170804017779, '两路镇': -12.359746068631413, '地界': -11.666598888071467, '讨债': -11.261133779963304, '资料': -12.359746068631413, '沿线': -8.8940101658316877, '主动': -10.280304526951578, '赌城': -12.359746068631413, '戏': -9.4693743107352493, '土法': -11.666598888071467, '结婚': -9.3640137950774225, '前天': -12.359746068631413, '雅秀': -11.666598888071467, '淘金山': -12.359746068631413, '经济林': -12.359746068631413, '抄底': -12.359746068631413, '怎样': -10.567986599403358, '支援': -10.973451707511522, '挑战': -10.750308156197313, '蹲点': -11.666598888071467, '名字': -10.280304526951578, '际': -12.359746068631413, '欢度': -11.666598888071467, '会晤': -10.750308156197313, '私家': -12.359746068631413, '梦幻': -10.413835919576099, '最主要': -9.874839418843413, '僻处': -12.359746068631413, '涯': -12.359746068631413, '分局': -7.1557393815546178, '大篷车': -12.359746068631413, '司令': -11.666598888071467, '言和': -12.359746068631413, '大主教': -12.359746068631413, '过路费': -11.261133779963304, '朱先生': -12.359746068631413, '系': -10.280304526951578, '零下': -12.359746068631413, '铃': -12.359746068631413, '皇权': -11.261133779963304, '穿越': -11.261133779963304, '老汉': -10.973451707511522, '总共': -10.973451707511522, '灌汤包子': -12.359746068631413, '石鼓山': -9.9618507958330422, '12:': -12.359746068631413, '鑫': -12.359746068631413, '巫峡': -12.359746068631413, '迟早': -11.261133779963304, '扎堆': -12.359746068631413, '风格': -8.9585486869692588, '期盼': -11.261133779963304, '社会制度': -12.359746068631413, '路易斯维尔': -12.359746068631413, '神父': -10.973451707511522, '二炮': -10.567986599403358, '起诉': -10.567986599403358, '内部': -8.8333855440152522, '擦拭': -12.359746068631413, '一国两制': -10.973451707511522, '考证': -12.359746068631413, '度过': -8.8940101658316877, '变性': -11.666598888071467, '党代表': -11.666598888071467, '欣喜': -12.359746068631413, '能耗': -12.359746068631413, '足坛': -9.0639092026270838, '波': -11.261133779963304, '修好': -10.750308156197313, '完成': -8.0159406467777305, '几多': -12.359746068631413, '国力': -11.666598888071467, '没什么': -10.750308156197313, '青年湖': 
-12.359746068631413, '解雇': -12.359746068631413, '电影节': -12.359746068631413, '管家': -10.750308156197313, '纪事': -12.359746068631413, '贝西克塔斯': -12.359746068631413, '伢': -12.359746068631413, '锦': -10.750308156197313, '首脑': -11.666598888071467, '骑': -9.9618507958330422, '官民': -11.666598888071467, '经': -7.8824092541532069, '失而复得': -12.359746068631413, '荣誉': -10.750308156197313, '沙东街': -12.359746068631413, '核武库': -12.359746068631413, '西门町': -10.567986599403358, '首发': -9.9618507958330422, '贝克': -10.750308156197313, '日益': -9.0275415584562104, '鲍鱼': -8.4279204359070885, '前妻': -12.359746068631413, '涡流': -10.280304526951578, '下雪': -10.973451707511522, '开始': -7.2298473537083403, '名特优': -12.359746068631413, '有色': -12.359746068631413, '菜百': -11.666598888071467, '请来': -11.666598888071467, '东南西北': -11.261133779963304, '殿堂级': -12.359746068631413, '达标': -11.666598888071467, '中上游': -10.973451707511522, '站街女': -12.359746068631413, '仲裁': -12.359746068631413, '较多': -11.666598888071467, '跃进路': -12.359746068631413, '翰林院': -12.359746068631413, '雪橇': -10.567986599403358, '娱乐业': -10.057160975637368, '西溪': -11.666598888071467, '临高人': -12.359746068631413, '主场': -9.3640137950774225, '糖果': -11.666598888071467, '四环科': -12.359746068631413, '有形': -12.359746068631413, '考研': -11.666598888071467, '随便': -12.359746068631413, '香': -10.057160975637368, '且': -10.567986599403358, '既': -9.720688739016154, '旧': -9.4693743107352493, '沱江': -12.359746068631413, '警界': -11.261133779963304, '破获': -10.057160975637368, '领导人': -6.1451379702092215, '涌潭村': -11.261133779963304, '生物学家': -10.162521491295195, '发达': -12.359746068631413, '环卫': -11.261133779963304, '科幻片': -12.359746068631413, '政治家': -9.720688739016154, '星空': -11.666598888071467, '观点': -11.261133779963304, '芯片': -11.261133779963304, '沿海地区': -11.261133779963304, '吻': -11.666598888071467, '青山绿水': -11.666598888071467, '瑰宝': -11.666598888071467, '味': -9.720688739016154, '中考': -10.162521491295195, '正是': -9.720688739016154, '进修': -10.567986599403358, '阳光': -9.4693743107352493, '危机': -8.7221599089050272, '籍贯': -11.261133779963304, '含光': -11.666598888071467, '科级': -12.359746068631413, '民工': -12.359746068631413, '禁区': -11.261133779963304, '礼堂': -11.261133779963304, '酒城': -11.666598888071467, '水塘': -11.666598888071467, '气流': -9.874839418843413, '慈善': -9.3640137950774225, '江都区': -11.261133779963304, '有远见': -12.359746068631413, '王': -8.8043980071419998, '陷入': -9.4153070894649726, '中锋': -11.666598888071467, '历经': -10.973451707511522, '每户': -12.359746068631413, '经纪人': -10.567986599403358, '首批': -8.4679257705207878, '约克区': -11.261133779963304, '赛会': -12.359746068631413, '银辉': -12.359746068631413, '相当': -9.874839418843413, '都督府': -12.359746068631413, '本轮': -11.261133779963304, '综合': -8.6461740019271058, '局管内': -12.359746068631413, '停放': -12.359746068631413, '双语': -12.359746068631413, '住在': -12.359746068631413, '作息时间': -12.359746068631413, '驭': -12.359746068631413, '审定': -12.359746068631413, '真人': -10.973451707511522, '伊藤博文': -12.359746068631413, '秀出': -11.666598888071467, '首播': -10.057160975637368, '坛子': -10.973451707511522, '虐待': -12.359746068631413, '瓜分': -10.973451707511522, '殖民地': -10.162521491295195, '驯虎': -12.359746068631413, '鸟巢': -11.261133779963304, '驶去': -11.666598888071467, '塔吊': -12.359746068631413, '联合体': -10.750308156197313, '林': -11.666598888071467, '稠州': -12.359746068631413, '煤': -8.6220764503480449, '菜': -9.874839418843413, '要道': -12.359746068631413, '视为': -9.5871573463916331, '叠石桥': -12.359746068631413, '民居': -11.261133779963304, '卤阳湖': -12.359746068631413, '这项': -11.261133779963304, 
'右侧': -12.359746068631413, '熊本县·阿苏山': -10.973451707511522, '遗老遗少': -12.359746068631413, '失守': -11.261133779963304, '积极': -7.6592657028389972, '百强': -10.973451707511522, '营': -9.1408702437632119, '妄图': -12.359746068631413, '陬': -12.359746068631413, '这种': -8.598545952937851, '黑马': -10.973451707511522, '理财': -12.359746068631413, '太子': -12.359746068631413, '家': -9.9618507958330422, '交际舞': -12.359746068631413, '登录': -10.973451707511522, '面点': -12.359746068631413, '非亲非故': -12.359746068631413, '禽类': -10.973451707511522, '转弯': -11.666598888071467, '石梅湾': -12.359746068631413, '说书': -12.359746068631413, '如期': -10.973451707511522, '看作': -12.359746068631413, '地震灾害': -12.359746068631413, '印制': -8.7221599089050272, '过于': -11.261133779963304, '绿化': -10.750308156197313, '压载': -12.359746068631413, '西郊': -9.7947967111698766, '藏': -9.3640137950774225, '战地': -12.359746068631413, '脑力': -11.261133779963304, '政务': -9.0275415584562104, '情感': -12.359746068631413, '办公': -10.567986599403358, '遍地': -10.750308156197313, '赛': -9.0275415584562104, '试验区': -10.162521491295195, '红尚坊': -12.359746068631413, '贡米': -12.359746068631413, '火灾': -8.4085023500499858, '渠': -12.359746068631413, '正值': -11.261133779963304, '繁忙': -12.359746068631413, '订单': -11.261133779963304, '集镇': -11.261133779963304, '资金': -9.6516958675292042, '大老远': -12.359746068631413, '新宪法': -10.567986599403358, '伤感': -12.359746068631413, '海景房': -11.261133779963304, '法西斯': -12.359746068631413, '某军': -11.261133779963304, '娶妻': -8.8940101658316877, '石碑': -12.359746068631413, '老少': -12.359746068631413, '找茬': -12.359746068631413, '普雷斯顿': -12.359746068631413, '国统区': -12.359746068631413, '野营': -11.666598888071467, '翻身': -10.973451707511522, '增兵': -9.9618507958330422, '岛国': -12.359746068631413, '隧道': -10.280304526951578, '季节性': -12.359746068631413, '尤为': -12.359746068631413, '优美': -12.359746068631413, '花生': -12.359746068631413, '米粉': -12.359746068631413, '西二环': -11.666598888071467, '进宫': -12.359746068631413, '团组织': -12.359746068631413, '人字洞': -12.359746068631413, '组阁': -12.359746068631413, '同一': -10.973451707511522, '单独': -9.6516958675292042, '指纹': -12.359746068631413, '求医': -9.720688739016154, '减灾': -11.666598888071467, '终止': -11.666598888071467, '办起': -11.666598888071467, '大学士': -12.359746068631413, '持枪': -10.057160975637368, '大名城': -11.666598888071467, '豆各庄': -12.359746068631413, '鹰': -12.359746068631413, '男子': -6.396166725012967, '通信卫星': -12.359746068631413, '巴扎': -11.666598888071467, '完善': -10.750308156197313, '同宗': -12.359746068631413, '知音': -11.666598888071467, '死人': -11.261133779963304, '核心': -9.9618507958330422, '溅落': -12.359746068631413, '旧街': -12.359746068631413, '汉江路': -12.359746068631413, '篮球队员': -10.750308156197313, '会计': -12.359746068631413, '发送': -9.874839418843413, '东坝': -9.6516958675292042, '祭祖': -11.666598888071467, '相仿': -12.359746068631413, '艰苦': -11.666598888071467, '纪录片': -12.359746068631413, '综合症': -11.261133779963304, '南丽湖': -12.359746068631413, '一样': -8.1112508265820544, '秘密': -9.6516958675292042, '装饰': -12.359746068631413, '大部分': -8.1700913266049877, '师奶': -12.359746068631413, '语音': -12.359746068631413, '小姐': -8.0970661915900983, '归属': -10.413835919576099, '朝野': -9.720688739016154, '都姓': -12.359746068631413, '弥漫': -12.359746068631413, '必定': -10.750308156197313, '喂': -10.973451707511522, '助手': -12.359746068631413, '目': -12.359746068631413, '随着': -10.750308156197313, '手册': -12.359746068631413, '并未': -9.5265327245751976, '深': -12.359746068631413, '从无到有': -12.359746068631413, '沙': -10.973451707511522, '定于': -12.359746068631413, '售价': 
-12.359746068631413, '电影圈': -11.666598888071467, '收视率': -11.666598888071467, '矛盾': -12.359746068631413, '老': -8.1700913266049877, '渔民': -8.1550534492404481, '乌代浦': -12.359746068631413, '失业': -12.359746068631413, '石佛': -10.413835919576099, '(': -6.5486050756547129, '们': -12.359746068631413, '赶海': -12.359746068631413, '了解': -10.750308156197313, '公寓': -10.567986599403358, '军警民': -12.359746068631413, '韭菜': -12.359746068631413, '现身': -10.567986599403358, '初定': -12.359746068631413, '女郎': -12.359746068631413, '公办': -10.973451707511522, '辎重兵': -12.359746068631413, '西姥': -12.359746068631413, '高压线': -12.359746068631413, '陪读': -11.666598888071467, '征税': -12.359746068631413, '易俗河': -12.359746068631413, '统治': -9.720688739016154, '逗留': -10.413835919576099, '敲': -10.280304526951578, '精品': -10.280304526951578, '吹捧': -12.359746068631413, '民族舞': -12.359746068631413, '冷门': -12.359746068631413, '富兰克林': -11.666598888071467, '帝国主义': -12.359746068631413, '文脉': -12.359746068631413, '魔术': -12.359746068631413, '帝国': -9.874839418843413, '获得': -8.4679257705207878, '二七路': -12.359746068631413, '石竹': -12.359746068631413, '存量': -12.359746068631413, '领略': -12.359746068631413, '登': -10.280304526951578, '总商会': -10.057160975637368, '好人': -8.7221599089050272, '汉方': -12.359746068631413, '普降': -10.973451707511522, '禁止': -9.6516958675292042, '荟集': -12.359746068631413, '谚语': -11.666598888071467, '利': -10.973451707511522, '勿': -10.567986599403358, '市场': -6.5396631382790522, '参展商': -12.359746068631413, '孩子': -9.5871573463916331, '小卖部': -12.359746068631413, '复活': -11.666598888071467, '停留': -11.666598888071467, '成矿带': -12.359746068631413, '房姐': -12.359746068631413, '意欲': -12.359746068631413, '改换': -11.666598888071467, '翻': -12.359746068631413, '埃德蒙兹': -10.973451707511522, '石板': -10.973451707511522, '布雷纳德': -11.666598888071467, '前任': -10.413835919576099, '上位': -12.359746068631413, '表现': -9.5265327245751976, '脱衣舞': -10.973451707511522, '里头': -12.359746068631413, '驯养': -11.666598888071467, '游记': -11.666598888071467, '数学': -11.666598888071467, '建瓯': -12.359746068631413, '疲于': -12.359746068631413, '问路': -12.359746068631413, '神垕': -12.359746068631413, '连': -9.4693743107352493, '这样': -7.6412471973363187, '厂商': -11.261133779963304, '半腰': -10.973451707511522, '终端': -10.973451707511522, '朝南': -12.359746068631413, '丢': -10.750308156197313, '高端': -9.0639092026270838, '竞选': -12.359746068631413, '生活': -7.0126385379139444, '波旁': -12.359746068631413, '不孕不育': -12.359746068631413, '文峰路': -12.359746068631413, '稻米': -12.359746068631413, '匹克': -12.359746068631413, '住户': -12.359746068631413, '陷落': -10.973451707511522, '动用': -11.261133779963304, '人工': -12.359746068631413, '达仁堂': -11.666598888071467, '珠三角': -12.359746068631413, '现代化': -9.5871573463916331, '领': -10.057160975637368, '最长': -10.057160975637368, '疯狂': -10.750308156197313, '正常': -11.261133779963304, '到': -7.650215867319079, '气象学家': -12.359746068631413, '雨': -7.7347732553471422, '文字': -10.750308156197313, '女佣': -12.359746068631413, '更名': -10.413835919576099, '战鹰': -11.261133779963304, '摘掉': -12.359746068631413, '坚守': -11.666598888071467, '终于': -9.5871573463916331, '海区': -11.666598888071467, '相声': -12.359746068631413, '半壁江山': -10.057160975637368, '影像': -12.359746068631413, '大厨': -11.666598888071467, '热带': -11.666598888071467, '予以': -11.666598888071467, '庭园': -12.359746068631413, '恒': -10.750308156197313, '某某': -12.359746068631413, '采': -12.359746068631413, '签约': -10.413835919576099, '前夕': -11.666598888071467, '抗日': -9.5265327245751976, '上火': -12.359746068631413, '自驾游': -10.280304526951578, '格外': 
-11.261133779963304, '通报': -9.6516958675292042, '手': -10.567986599403358, '债务': -11.666598888071467, '乃至': -7.7250170804017779, '家乡': -10.973451707511522, '三宝': -11.666598888071467, '纳粹': -9.874839418843413, '斯特赖克': -11.666598888071467, '霞': -12.359746068631413, '防护林': -10.750308156197313, '综合管理': -12.359746068631413, '备忘录': -10.750308156197313, '同性恋': -11.261133779963304, '保守': -10.162521491295195, '环境规划': -12.359746068631413, '棒球': -12.359746068631413, '亚洲杯': -11.666598888071467, '大润发': -12.359746068631413, '水窖': -10.973451707511522, '恰好': -11.261133779963304, '菲斯科': -12.359746068631413, '踏上': -12.359746068631413, '远程': -11.261133779963304, '东边': -10.057160975637368, '科技奖': -11.666598888071467, '料理': -11.666598888071467, '分子': -11.666598888071467, '协同': -11.666598888071467, '铁蹄': -12.359746068631413, '关系': -8.598545952937851, '举世瞩目': -12.359746068631413, '热销': -12.359746068631413, '活雷锋': -12.359746068631413, '八路': -10.973451707511522, '郊县': -12.359746068631413, '久': -11.666598888071467, '悠悠': -12.359746068631413, '广袤': -10.750308156197313, '*': -12.359746068631413, '大同路': -10.750308156197313, '长大成人': -11.666598888071467, '小武基': -11.666598888071467, '固定': -12.359746068631413, '方面军': -11.261133779963304, '十分': -9.7947967111698766, '集散': -12.359746068631413, '府': -9.720688739016154, '书坛': -10.750308156197313, '人均': -8.4085023500499858, '戏班子': -12.359746068631413, '纵容': -11.666598888071467, '热狗': -10.567986599403358, '阿拉山口': -11.666598888071467, '农区': -12.359746068631413, '美术': -10.413835919576099, '独龙族': -12.359746068631413, '走向': -9.1816922382834676, '戴': -10.973451707511522, '引用': -12.359746068631413, '西边': -11.261133779963304, '瞄准': -11.666598888071467, '罂粟': -10.973451707511522, '身价': -11.666598888071467, '可立克': -12.359746068631413, '西行': -12.359746068631413, '多留': -12.359746068631413, '扩张': -9.2242518527022632, '冬雪': -12.359746068631413, '选出': -8.4679257705207878, '留学人员': -10.750308156197313, '牙科': -12.359746068631413, '各报': -10.973451707511522, '灰腾梁': -11.666598888071467, '女报': -12.359746068631413, '性感': -10.413835919576099, '带入': -11.666598888071467, '北二环': -10.567986599403358, '逃荒': -12.359746068631413, '摆摊': -10.162521491295195, '引人注意': -12.359746068631413, '生化': -12.359746068631413, '老师': -11.261133779963304, '见面会': -11.666598888071467, '蚝': -11.666598888071467, '克利夫兰': -12.359746068631413, '旭': -12.359746068631413, '陆地': -12.359746068631413, '过去': -8.4477230632032683, '商务区': -11.261133779963304, '提名': -10.280304526951578, '傣族': -12.359746068631413, '推介': -10.750308156197313, '常住': -10.750308156197313, '黑木耳': -11.666598888071467, '肉食': -10.567986599403358, '芦河': -12.359746068631413, '逸夫楼': -12.359746068631413, '恐': -10.280304526951578, '万名': -10.057160975637368, '网店': -11.261133779963304, '村镇': -12.359746068631413, '妓女': -12.359746068631413, '厅长': -12.359746068631413, '国内外': -9.1408702437632119, '流感': -9.4153070894649726, '兴县': -12.359746068631413, '西侧': -10.280304526951578, '通俗': -12.359746068631413, '列车': -9.720688739016154, '卖场': -12.359746068631413, '执业': -12.359746068631413, '用水': -11.261133779963304, '梦圆': -12.359746068631413, '接种': -12.359746068631413, '小麦': -11.261133779963304, '枪战': -12.359746068631413, '派遣': -8.7762271301753039, '讲解': -12.359746068631413, '实': -11.261133779963304, '退票': -11.666598888071467, '街边': -12.359746068631413, '表面': -9.720688739016154, '已经': -6.3833951593334799, '存': -12.359746068631413, '菜篮子': -11.261133779963304, '巴州': -11.666598888071467, '夜生活': -11.666598888071467, '撕毁': -10.973451707511522, '三峰塔': -12.359746068631413, '相近': 
-12.359746068631413, '民办': -10.413835919576099, '美影': -11.666598888071467, '《': -4.2241061652770275, '汉': -9.7947967111698766, '司空见惯': -10.567986599403358, '龙': -8.5530835788610933, '芦苇': -12.359746068631413, '大汗': -11.666598888071467, '富二代': -10.973451707511522, '双引擎': -11.666598888071467, '合营': -11.261133779963304, '茵': -12.359746068631413, '略': -10.413835919576099, '珠市口': -11.261133779963304, '换牌': -11.666598888071467, '颂': -12.359746068631413, '然后': -11.666598888071467, '祖传': -11.666598888071467, '老年痴呆症': -12.359746068631413, '长卷': -12.359746068631413, '屋里': -12.359746068631413, '海啸': -11.666598888071467, '临时': -9.9618507958330422, '到达': -10.567986599403358, '乡下': -9.9618507958330422, '天网': -11.666598888071467, '喜': -9.9618507958330422, '尚': -10.413835919576099, '一头': -11.666598888071467, '鸦片': -12.359746068631413, '公司员工': -12.359746068631413, '前来': -10.413835919576099, '危局': -12.359746068631413, '舰艇': -10.567986599403358, '悍将': -12.359746068631413, '公告': -12.359746068631413, '隆重': -8.334394377896265, '明珠': -10.567986599403358, '地脉': -12.359746068631413, '黄酒': -12.359746068631413, '火炬': -11.666598888071467, '发大财': -12.359746068631413, '该项': -12.359746068631413, '黑客': -10.280304526951578, '殖民统治': -9.5265327245751976, '渊源': -10.057160975637368, '绿眼睛': -12.359746068631413, '荧屏': -12.359746068631413, '选址': -11.666598888071467, '莓': -12.359746068631413, '逢年过节': -12.359746068631413, '酒类': -10.750308156197313, '妙高山': -11.666598888071467, '狼人': -12.359746068631413, '封建王朝': -11.261133779963304, '某部': -10.750308156197313, '商铺': -11.666598888071467, '陆上': -12.359746068631413, '汇业': -12.359746068631413, '煤化': -12.359746068631413, '边陲': -10.162521491295195, '座谈会': -11.666598888071467, '银饰品': -12.359746068631413, '妇女': -8.1700913266049877, '天': -8.3707620220671384, '尼崎市': -12.359746068631413, '首创': -10.280304526951578, '工农路': -12.359746068631413, '尽力': -10.973451707511522, '成功': -8.4085023500499858, '据称': -11.666598888071467, '相机': -12.359746068631413, '犯罪率': -12.359746068631413, '随机': -10.750308156197313, '一切': -10.162521491295195, '召回': -10.750308156197313, '排斥': -11.261133779963304, '士兵们': -12.359746068631413, '建交': -9.720688739016154, '班德哈瓦': -12.359746068631413, '肉价': -12.359746068631413, '因地制宜': -11.666598888071467, '铂金': -12.359746068631413, '劳务市场': -11.666598888071467, '献金': -11.666598888071467, '经纬': -12.359746068631413, '谈判': -10.973451707511522, '三者': -12.359746068631413, '相当大': -11.261133779963304, '工兵': -12.359746068631413, '山场': -12.359746068631413, '航天界': -12.359746068631413, '人体': -11.261133779963304, '无法': -9.1816922382834676, '风气': -12.359746068631413, '吉尔福德': -10.750308156197313, '督查室': -12.359746068631413, '电池': -12.359746068631413, '碑': -11.666598888071467, '豪门': -10.162521491295195, '传媒界': -11.666598888071467, '大会': -10.280304526951578, '叫法': -11.666598888071467, '电子商务': -10.162521491295195, '饮用水': -10.280304526951578, '旋转': -12.359746068631413, '湖畔':
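The mapping above reads as a token-to-log-probability table; the frequently repeated value -12.359746068631413 looks like a shared floor assigned to the rarest tokens. Assuming that interpretation, the sketch below shows how such a table could score a tokenized sequence under a unigram model. The excerpted entries are copied from the table itself; the fallback constant for unknown tokens is an assumption for demonstration only.

import math

# Illustrative excerpt of the token -> log-probability table above.
log_prob = {
    '游客': -6.4328200426610032,
    '市场': -6.5396631382790522,
    '附近': -6.3509328831888183,
    '领导人': -6.1451379702092215,
}
UNKNOWN_LOG_PROB = -12.359746068631413  # apparent floor value in the table (assumption)

def sequence_log_prob(tokens):
    """Score a tokenized sentence under a unigram model: sum of per-token log-probs."""
    return sum(log_prob.get(tok, UNKNOWN_LOG_PROB) for tok in tokens)

print(sequence_log_prob(['游客', '附近']))      # two known tokens
print(math.exp(sequence_log_prob(['游客'])))    # convert a score back to a probability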
# This file is part of Viper - https://github.com/botherder/viper # See the file 'LICENSE' for copying permission. import os import time import getopt import fnmatch import tempfile import shutil from zipfile import ZipFile from viper.common.out import * from viper.common.objects import File from viper.common.network import download from viper.core.session import __sessions__ from viper.core.project import __project__ from viper.core.plugins import __modules__ from viper.core.database import Database from viper.core.storage import store_sample, get_sample_path class Commands(object): def __init__(self): # Open connection to the database. self.db = Database() # Map commands to their related functions. self.commands = dict( help=dict(obj=self.cmd_help, description="Show this help message"), open=dict(obj=self.cmd_open, description="Open a file"), close=dict(obj=self.cmd_close, description="Close the current session"), info=dict(obj=self.cmd_info, description="Show information on the opened file"), notes=dict(obj=self.cmd_notes, description="View, add and edit notes on the opened file"), clear=dict(obj=self.cmd_clear, description="Clear the console"), store=dict(obj=self.cmd_store, description="Store the opened file to the local repository"), delete=dict(obj=self.cmd_delete, description="Delete the opened file"), find=dict(obj=self.cmd_find, description="Find a file"), tags=dict(obj=self.cmd_tags, description="Modify tags of the opened file"), sessions=dict(obj=self.cmd_sessions, description="List or switch sessions"), projects=dict(obj=self.cmd_projects, description="List or switch existing projects"), export=dict(obj=self.cmd_export, description="Export the current session to file or zip"), ) ## # CLEAR # # This command simply clears the shell. def cmd_clear(self, *args): os.system('clear') ## # HELP # # This command simply prints the help message. # It lists both embedded commands and loaded modules. def cmd_help(self, *args): print(bold("Commands:")) rows = [] for command_name, command_item in self.commands.items(): rows.append([command_name, command_item['description']]) rows = sorted(rows, key=lambda entry: entry[0]) print(table(['Command', 'Description'], rows)) print("") print(bold("Modules:")) rows = [] for module_name, module_item in __modules__.items(): rows.append([module_name, module_item['description']]) rows = sorted(rows, key=lambda entry: entry[0]) print(table(['Command', 'Description'], rows)) ## # OPEN # # This command is used to open a session on a given file. # It either can be an external file path, or a SHA256 hash of a file which # has been previously imported and stored. # While the session is active, every operation and module executed will be # run against the file specified. 
def cmd_open(self, *args): def usage(): print("usage: open [-h] [-f] [-u] [-l] [-t] <target|md5|sha256>") def help(): usage() print("") print("Options:") print("\t--help (-h)\tShow this help message") print("\t--file (-f)\tThe target is a file") print("\t--url (-u)\tThe target is a URL") print("\t--last (-l)\tThe target is the entry number from the last find command's results") print("\t--tor (-t)\tDownload the file through Tor") print("") print("You can also specify a MD5 or SHA256 hash to a previously stored") print("file in order to open a session on it.") print("") try: opts, argv = getopt.getopt(args, 'hfult', ['help', 'file', 'url', 'last', 'tor']) except getopt.GetoptError as e: print(e) usage() return arg_is_file = False arg_is_url = False arg_last = False arg_use_tor = False for opt, value in opts: if opt in ('-h', '--help'): help() return elif opt in ('-f', '--file'): arg_is_file = True elif opt in ('-u', '--url'): arg_is_url = True elif opt in ('-l', '--last'): arg_last = True elif opt in ('-t', '--tor'): arg_use_tor = True if len(argv) == 0: usage() return else: target = argv[0] # If it's a file path, open a session on it. if arg_is_file: target = os.path.expanduser(target) if not os.path.exists(target) or not os.path.isfile(target): print_error("File not found") return __sessions__.new(target) # If it's a URL, download it and open a session on the temporary # file. elif arg_is_url: data = download(url=target, tor=arg_use_tor) if data: tmp = tempfile.NamedTemporaryFile(delete=False) tmp.write(data) tmp.close() __sessions__.new(tmp.name) # Try to open the specified file from the list of results from # the last find command. elif arg_last: if __sessions__.find: count = 1 for item in __sessions__.find: if count == int(target): __sessions__.new(get_sample_path(item.sha256)) break count += 1 else: print_warning("You haven't performed a find yet") # Otherwise we assume it's an hash of an previously stored sample. else: target = argv[0].strip().lower() if len(target) == 32: key = 'md5' elif len(target) == 64: key = 'sha256' else: usage() return rows = self.db.find(key=key, value=target) if not rows: print_warning("No file found with the given hash {0}".format(target)) return path = get_sample_path(rows[0].sha256) if path: __sessions__.new(path) ## # CLOSE # # This command resets the open session. # After that, all handles to the opened file should be closed and the # shell should be restored to the default prompt. def cmd_close(self, *args): __sessions__.close() ## # INFO # # This command returns information on the open session. It returns details # on the file (e.g. hashes) and other information that might available from # the database. def cmd_info(self, *args): if __sessions__.is_set(): print(table( ['Key', 'Value'], [ ('Name', __sessions__.current.file.name), ('Tags', __sessions__.current.file.tags), ('Path', __sessions__.current.file.path), ('Size', __sessions__.current.file.size), ('Type', __sessions__.current.file.type), ('Mime', __sessions__.current.file.mime), ('MD5', __sessions__.current.file.md5), ('SHA1', __sessions__.current.file.sha1), ('SHA256', __sessions__.current.file.sha256), ('SHA512', __sessions__.current.file.sha512), ('SSdeep', __sessions__.current.file.ssdeep), ('CRC32', __sessions__.current.file.crc32) ] )) ## # NOTES # # This command allows you to view, add, modify and delete notes associated # with the currently opened file. 
def cmd_notes(self, *args): def usage(): print("usage: notes [-h] [-l] [-a] [-e <note id>] [-d <note id>]") def help(): usage() print("") print("Options:") print("\t--help (-h)\tShow this help message") print("\t--list (-l)\tList all notes available for the current file") print("\t--add (-a)\tAdd a new note to the current file") print("\t--view (-v)\tView the specified note") print("\t--edit (-e)\tEdit an existing note") print("\t--delete (-d)\tDelete an existing note") print("") try: opts, argv = getopt.getopt(args, 'hlav:e:d:', ['help', 'list', 'add', 'view=', 'edit=', 'delete=']) except getopt.GetoptError as e: print(e) usage() return arg_list = False arg_add = False arg_view = None arg_edit = None arg_delete = None for opt, value in opts: if opt in ('-h', '--help'): help() return elif opt in ('-l', '--list'): arg_list = True elif opt in ('-a', '--add'): arg_add = True elif opt in ('-v', '--view'): arg_view = value elif opt in ('-e', '--edit'): arg_edit = value elif opt in ('-d', '--delete'): arg_delete = value if not __sessions__.is_set(): print_error("No session opened") return if arg_list: # Retrieve all notes for the currently opened file. malware = Database().find(key='sha256', value=__sessions__.current.file.sha256) if not malware: print_error("The opened file doesn't appear to be in the database, have you stored it yet?") return notes = malware[0].note if not notes: print_info("No notes available for this file yet") return # Populate table rows. rows = [] for note in notes: rows.append([note.id, note.title]) # Display list of existing notes. print(table(header=['ID', 'Title'], rows=rows)) elif arg_add: title = raw_input("Enter a title for the new note: ") # Create a new temporary file. tmp = tempfile.NamedTemporaryFile(delete=False) # Open the temporary file with the default editor, or with nano. os.system('"${EDITOR:-nano}" ' + tmp.name) # Once the user is done editing, we need to read the content and # store it in the database. body = tmp.read() Database().add_note(__sessions__.current.file.sha256, title, body) # Finally, remove the temporary file. os.remove(tmp.name) print_info("New note with title \"{0}\" added to the current file".format(bold(title))) elif arg_view: # Retrieve note wth the specified ID and print it. note = Database().get_note(arg_view) if note: print_info(bold('Title: ') + note.title) print_info(bold('Body:')) print(note.body) else: print_info("There is no note with ID {0}".format(arg_view)) elif arg_edit: # Retrieve note with the specified ID. note = Database().get_note(arg_edit) if note: # Create a new temporary file. tmp = tempfile.NamedTemporaryFile(delete=False) # Write the old body to the temporary file. tmp.write(note.body) tmp.close() # Open the old body with the text editor. os.system('"${EDITOR:-nano}" ' + tmp.name) # Read the new body from the temporary file. body = open(tmp.name, 'r').read() # Update the note entry with the new body. Database().edit_note(arg_edit, body) # Remove the temporary file. os.remove(tmp.name) print_info("Updated note with ID {0}".format(arg_edit)) elif arg_delete: # Delete the note with the specified ID. Database().delete_note(arg_delete) else: usage() ## # STORE # # This command stores the opened file in the local repository and tries # to store details in the database. 
def cmd_store(self, *args): def usage(): print("usage: store [-h] [-d] [-f <path>] [-s <size>] [-y <type>] [-n <name>] [-t]") def help(): usage() print("") print("Options:") print("\t--help (-h)\tShow this help message") print("\t--delete (-d)\tDelete the original file") print("\t--folder (-f)\tSpecify a folder to import") print("\t--file-size (-s)\tSpecify a maximum file size") print("\t--file-type (-y)\tSpecify a file type pattern") print("\t--file-name (-n)\tSpecify a file name pattern") print("\t--tags (-t)\tSpecify a list of comma-separated tags") print("") try: opts, argv = getopt.getopt(args, 'hdf:s:y:n:t:', ['help', 'delete', 'folder=', 'file-size=', 'file-type=', 'file-name=', 'tags=']) except getopt.GetoptError as e: print(e) usage() return arg_delete =
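All of the cmd_* handlers above share the same getopt-based structure: a local usage()/help(), a try/except around getopt.getopt, a loop that collects option flags, and then the positional-argument logic. Below is a standalone sketch of that shared pattern with an invented command and options rather than Viper's real ones.

# Standalone sketch of the getopt parsing pattern used by the cmd_* methods above.
# The command name and options here are illustrative, not part of Viper.
import getopt

def cmd_example(*args):
    def usage():
        print("usage: example [-h] [-t <tag>] <target>")

    try:
        opts, argv = getopt.getopt(args, 'ht:', ['help', 'tag='])
    except getopt.GetoptError as e:
        print(e)
        usage()
        return

    tag = None
    for opt, value in opts:
        if opt in ('-h', '--help'):
            usage()
            return
        elif opt in ('-t', '--tag'):
            tag = value

    if not argv:
        usage()
        return
    print("target:", argv[0], "tag:", tag)

cmd_example('-t', 'malware', '/tmp/sample.bin')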
<reponame>mdabrowski1990/uds import pytest from mock import patch from uds.can.consecutive_frame import CanConsecutiveFrameHandler, \ InconsistentArgumentsError, CanDlcHandler, DEFAULT_FILLER_BYTE from uds.can import CanAddressingFormat class TestCanConsecutiveFrameHandler: """Unit tests for `CanConsecutiveFrameHandler` class.""" SCRIPT_LOCATION = "uds.can.consecutive_frame" def setup(self): self._patcher_validate_nibble = patch(f"{self.SCRIPT_LOCATION}.validate_nibble") self.mock_validate_nibble = self._patcher_validate_nibble.start() self._patcher_validate_raw_byte = patch(f"{self.SCRIPT_LOCATION}.validate_raw_byte") self.mock_validate_raw_byte = self._patcher_validate_raw_byte.start() self._patcher_validate_raw_bytes = patch(f"{self.SCRIPT_LOCATION}.validate_raw_bytes") self.mock_validate_raw_bytes = self._patcher_validate_raw_bytes.start() self._patcher_encode_dlc = patch(f"{self.SCRIPT_LOCATION}.CanDlcHandler.encode_dlc") self.mock_encode_dlc = self._patcher_encode_dlc.start() self._patcher_decode_dlc = patch(f"{self.SCRIPT_LOCATION}.CanDlcHandler.decode_dlc") self.mock_decode_dlc = self._patcher_decode_dlc.start() self._patcher_get_min_dlc = patch(f"{self.SCRIPT_LOCATION}.CanDlcHandler.get_min_dlc") self.mock_get_min_dlc = self._patcher_get_min_dlc.start() self._patcher_encode_ai_data_bytes = \ patch(f"{self.SCRIPT_LOCATION}.CanAddressingInformationHandler.encode_ai_data_bytes") self.mock_encode_ai_data_bytes = self._patcher_encode_ai_data_bytes.start() self._patcher_get_ai_data_bytes_number = \ patch(f"{self.SCRIPT_LOCATION}.CanAddressingInformationHandler.get_ai_data_bytes_number") self.mock_get_ai_data_bytes_number = self._patcher_get_ai_data_bytes_number.start() def teardown(self): self._patcher_validate_nibble.stop() self._patcher_validate_raw_byte.stop() self._patcher_validate_raw_bytes.stop() self._patcher_encode_dlc.stop() self._patcher_decode_dlc.stop() self._patcher_get_min_dlc.stop() self._patcher_encode_ai_data_bytes.stop() self._patcher_get_ai_data_bytes_number.stop() # create_valid_frame_data @pytest.mark.parametrize("addressing_format, target_address, address_extension", [ ("some format", "TA", "SA"), ("another format", None, None), ]) @pytest.mark.parametrize("dlc, filler_byte, sequence_number", [ (CanDlcHandler.MIN_BASE_UDS_DLC, 0x66, "some sequence number"), (CanDlcHandler.MIN_BASE_UDS_DLC + 2, 0x99, 0xF), ]) @pytest.mark.parametrize("payload, data_bytes_number, ai_data_bytes, sn_data_bytes", [ ([0x54], 2, [], [0xFA]), (range(50, 110), 64, [0x98], [0x12, 0x34]), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler._CanConsecutiveFrameHandler__encode_sn") def test_create_valid_frame_data__valid_with_dlc(self, mock_encode_sn, addressing_format, target_address, address_extension, payload, sequence_number, dlc, filler_byte, data_bytes_number, ai_data_bytes, sn_data_bytes): self.mock_decode_dlc.return_value = data_bytes_number self.mock_encode_ai_data_bytes.return_value = ai_data_bytes mock_encode_sn.return_value = sn_data_bytes cf_frame_data = CanConsecutiveFrameHandler.create_valid_frame_data(addressing_format=addressing_format, target_address=target_address, address_extension=address_extension, payload=payload, sequence_number=sequence_number, dlc=dlc, filler_byte=filler_byte) self.mock_validate_raw_byte.assert_called_once_with(filler_byte) self.mock_validate_raw_bytes.assert_called_once_with(payload, allow_empty=False) self.mock_decode_dlc.assert_called_once_with(dlc) self.mock_encode_ai_data_bytes.assert_called_once_with(addressing_format=addressing_format, 
target_address=target_address, address_extension=address_extension) mock_encode_sn.assert_called_once_with(sequence_number=sequence_number) assert isinstance(cf_frame_data, list) assert len(cf_frame_data) == data_bytes_number @pytest.mark.parametrize("addressing_format, target_address, address_extension", [ ("some format", "TA", "SA"), ("another format", None, None), ]) @pytest.mark.parametrize("dlc, filler_byte, sequence_number", [ (CanDlcHandler.MIN_BASE_UDS_DLC, 0x66, "some sequence number"), (CanDlcHandler.MIN_BASE_UDS_DLC + 2, 0x99, 0xF), ]) @pytest.mark.parametrize("payload, data_bytes_number, ai_data_bytes, sn_data_bytes", [ ([0x54], 2, [], [0xFA]), (range(50, 110), 64, [0x98], [0x12, 0x34]), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler.get_min_dlc") @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler._CanConsecutiveFrameHandler__encode_sn") def test_create_valid_frame_data__valid_without_dlc(self, mock_encode_sn, mock_get_min_dlc, addressing_format, target_address, address_extension, payload, sequence_number, dlc, filler_byte, data_bytes_number, ai_data_bytes, sn_data_bytes): mock_get_min_dlc.return_value = dlc self.mock_decode_dlc.return_value = data_bytes_number self.mock_encode_ai_data_bytes.return_value = ai_data_bytes mock_encode_sn.return_value = sn_data_bytes cf_frame_data = CanConsecutiveFrameHandler.create_valid_frame_data(addressing_format=addressing_format, target_address=target_address, address_extension=address_extension, payload=payload, sequence_number=sequence_number, dlc=None, filler_byte=filler_byte) self.mock_validate_raw_byte.assert_called_once_with(filler_byte) self.mock_validate_raw_bytes.assert_called_once_with(payload, allow_empty=False) mock_get_min_dlc.assert_called_once_with(addressing_format=addressing_format, payload_length=len(payload)) self.mock_decode_dlc.assert_called_once_with(dlc) self.mock_encode_ai_data_bytes.assert_called_once_with(addressing_format=addressing_format, target_address=target_address, address_extension=address_extension) mock_encode_sn.assert_called_once_with(sequence_number=sequence_number) assert isinstance(cf_frame_data, list) assert len(cf_frame_data) == data_bytes_number @pytest.mark.parametrize("addressing_format, target_address, address_extension", [ ("some format", "TA", "SA"), ("another format", None, None), ]) @pytest.mark.parametrize("filler_byte, sequence_number", [ (0x66, "some sequence number"), (0x99, 0xF), ]) @pytest.mark.parametrize("dlc, payload, data_bytes_number, ai_data_bytes, sn_data_bytes", [ (CanDlcHandler.MIN_BASE_UDS_DLC - 1, range(60), 100, [0xFF], [0x00, 0xFA]), (CanDlcHandler.MIN_BASE_UDS_DLC - 2, [0x3E], 7, [], [0x01]), (CanDlcHandler.MIN_BASE_UDS_DLC + 1, [0x20, 0x30, 0x44], 4, [0xAA], [0x03]), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler._CanConsecutiveFrameHandler__encode_sn") def test_create_valid_frame_data__inconsistent_args(self, mock_encode_sn, addressing_format, target_address, address_extension, payload, sequence_number, dlc, filler_byte, data_bytes_number, ai_data_bytes, sn_data_bytes): self.mock_decode_dlc.return_value = data_bytes_number self.mock_encode_ai_data_bytes.return_value = ai_data_bytes mock_encode_sn.return_value = sn_data_bytes with pytest.raises(InconsistentArgumentsError): CanConsecutiveFrameHandler.create_valid_frame_data(addressing_format=addressing_format, target_address=target_address, address_extension=address_extension, payload=payload, sequence_number=sequence_number, dlc=dlc, filler_byte=filler_byte) 
self.mock_validate_raw_byte.assert_called_once_with(filler_byte) self.mock_validate_raw_bytes.assert_called_once_with(payload, allow_empty=False) self.mock_decode_dlc.assert_called_once_with(dlc) self.mock_encode_ai_data_bytes.assert_called_once_with(addressing_format=addressing_format, target_address=target_address, address_extension=address_extension) mock_encode_sn.assert_called_once_with(sequence_number=sequence_number) # create_any_frame_data @pytest.mark.parametrize("addressing_format, target_address, address_extension", [ ("some format", "TA", "SA"), ("another format", None, None), ]) @pytest.mark.parametrize("dlc, filler_byte, sequence_number", [ (CanDlcHandler.MIN_BASE_UDS_DLC - 2, 0x66, "some sequence number"), (CanDlcHandler.MIN_BASE_UDS_DLC + 2, 0x99, 0xF), ]) @pytest.mark.parametrize("payload, data_bytes_number, ai_data_bytes, sn_data_bytes", [ ([], 8, [], [0x0C]), (range(50, 110), 64, [0x98], [0x12, 0x34]), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler._CanConsecutiveFrameHandler__encode_sn") def test_create_any_frame_data__valid(self, mock_encode_sn, addressing_format, target_address, address_extension, payload, sequence_number, dlc, filler_byte, data_bytes_number, ai_data_bytes, sn_data_bytes): self.mock_decode_dlc.return_value = data_bytes_number self.mock_encode_ai_data_bytes.return_value = ai_data_bytes mock_encode_sn.return_value = sn_data_bytes cf_frame_data = CanConsecutiveFrameHandler.create_any_frame_data(addressing_format=addressing_format, target_address=target_address, address_extension=address_extension, payload=payload, sequence_number=sequence_number, dlc=dlc, filler_byte=filler_byte) self.mock_validate_raw_byte.assert_called_once_with(filler_byte) self.mock_validate_raw_bytes.assert_called_once_with(payload, allow_empty=True) self.mock_decode_dlc.assert_called_once_with(dlc) self.mock_encode_ai_data_bytes.assert_called_once_with(addressing_format=addressing_format, target_address=target_address, address_extension=address_extension) mock_encode_sn.assert_called_once_with(sequence_number=sequence_number) assert isinstance(cf_frame_data, list) assert len(cf_frame_data) == data_bytes_number @pytest.mark.parametrize("addressing_format, target_address, address_extension", [ ("some format", "TA", "SA"), ("another format", None, None), ]) @pytest.mark.parametrize("filler_byte, sequence_number", [ (0x66, "some sequence number"), (0x99, 0xF), ]) @pytest.mark.parametrize("dlc, payload, data_bytes_number, ai_data_bytes, sn_data_bytes", [ (CanDlcHandler.MIN_BASE_UDS_DLC - 1, range(60), 62, [0xFF], [0x00, 0xFA]), (CanDlcHandler.MIN_BASE_UDS_DLC, [0x20, 0x30, 0x44], 3, [], [0x03]), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler._CanConsecutiveFrameHandler__encode_sn") def test_create_any_frame_data__inconsistent_args(self, mock_encode_sn, addressing_format, target_address, address_extension, payload, sequence_number, dlc, filler_byte, data_bytes_number, ai_data_bytes, sn_data_bytes): self.mock_decode_dlc.return_value = data_bytes_number self.mock_encode_ai_data_bytes.return_value = ai_data_bytes mock_encode_sn.return_value = sn_data_bytes with pytest.raises(InconsistentArgumentsError): CanConsecutiveFrameHandler.create_any_frame_data(addressing_format=addressing_format, target_address=target_address, address_extension=address_extension, payload=payload, sequence_number=sequence_number, dlc=dlc, filler_byte=filler_byte) self.mock_validate_raw_byte.assert_called_once_with(filler_byte) self.mock_validate_raw_bytes.assert_called_once_with(payload, 
allow_empty=True) self.mock_decode_dlc.assert_called_once_with(dlc) self.mock_encode_ai_data_bytes.assert_called_once_with(addressing_format=addressing_format, target_address=target_address, address_extension=address_extension) mock_encode_sn.assert_called_once_with(sequence_number=sequence_number) # is_consecutive_frame @pytest.mark.parametrize("addressing_format", ["some addressing format", "another format"]) @pytest.mark.parametrize("ai_bytes_number, raw_frame_data", [ (0, (0x2F, 0xFE, 0xDC, 0xBA, 0x98, 0x76)), (1, [0x01, 0x20] + list(range(46))), ]) def test_is_consecutive_frame__true(self, addressing_format, raw_frame_data, ai_bytes_number): self.mock_get_ai_data_bytes_number.return_value = ai_bytes_number assert CanConsecutiveFrameHandler.is_consecutive_frame(addressing_format=addressing_format, raw_frame_data=raw_frame_data) is True self.mock_get_ai_data_bytes_number.assert_called_once_with(addressing_format) @pytest.mark.parametrize("addressing_format", ["some addressing format", "another format"]) @pytest.mark.parametrize("ai_bytes_number, raw_frame_data", [ (0, (0x0F, 0xFE, 0xDC, 0xBA, 0x98, 0x76)), (1, [0x01, 0x10] + list(range(46))), (0, [0x35] + list(range(47))), (1, (0x13, 0xFE, 0x21)), ]) def test_is_consecutive_frame__false(self, addressing_format, raw_frame_data, ai_bytes_number): self.mock_get_ai_data_bytes_number.return_value = ai_bytes_number assert CanConsecutiveFrameHandler.is_consecutive_frame(addressing_format=addressing_format, raw_frame_data=raw_frame_data) is False self.mock_get_ai_data_bytes_number.assert_called_once_with(addressing_format) # decode_payload @pytest.mark.parametrize("addressing_format", ["some addressing format", "another format"]) @pytest.mark.parametrize("ai_bytes_number, raw_frame_data", [ (0, [0x25] + list(range(47))), (1, (0x13, 0x2E, 0x21)), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler.is_consecutive_frame") def test_decode_payload(self, mock_is_consecutive_frame, addressing_format, raw_frame_data, ai_bytes_number): mock_is_consecutive_frame.return_value = True self.mock_get_ai_data_bytes_number.return_value = ai_bytes_number payload = CanConsecutiveFrameHandler.decode_payload(addressing_format=addressing_format, raw_frame_data=raw_frame_data) assert isinstance(payload, list) assert payload == list(raw_frame_data)[ai_bytes_number + CanConsecutiveFrameHandler.SN_BYTES_USED:] mock_is_consecutive_frame.assert_called_once_with(addressing_format=addressing_format, raw_frame_data=raw_frame_data) self.mock_get_ai_data_bytes_number.assert_called_once_with(addressing_format) @pytest.mark.parametrize("addressing_format", ["some addressing format", "another format"]) @pytest.mark.parametrize("raw_frame_data", [ (0x2F, 0xFE, 0xDC, 0xBA, 0x98, 0x76), [0x01, 0x20] + list(range(46)), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler.is_consecutive_frame") def test_decode_payload__value_error(self, mock_is_consecutive_frame, addressing_format, raw_frame_data): mock_is_consecutive_frame.return_value = False with pytest.raises(ValueError): CanConsecutiveFrameHandler.decode_payload(addressing_format=addressing_format, raw_frame_data=raw_frame_data) mock_is_consecutive_frame.assert_called_once_with(addressing_format=addressing_format, raw_frame_data=raw_frame_data) self.mock_get_ai_data_bytes_number.assert_not_called() # decode_sequence_number @pytest.mark.parametrize("addressing_format", ["some addressing format", "another format"]) @pytest.mark.parametrize("ai_bytes_number, raw_frame_data", [ (0, [0x25] + list(range(47))), (1, (0x13, 0x2E, 
0x21)), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler.is_consecutive_frame") def test_decode_sequence_number(self, mock_is_consecutive_frame, addressing_format, raw_frame_data, ai_bytes_number): mock_is_consecutive_frame.return_value = True self.mock_get_ai_data_bytes_number.return_value = ai_bytes_number sequence_number = CanConsecutiveFrameHandler.decode_sequence_number(addressing_format=addressing_format, raw_frame_data=raw_frame_data) assert isinstance(sequence_number, int) assert sequence_number == raw_frame_data[ai_bytes_number] & 0xF mock_is_consecutive_frame.assert_called_once_with(addressing_format=addressing_format, raw_frame_data=raw_frame_data) self.mock_get_ai_data_bytes_number.assert_called_once_with(addressing_format) @pytest.mark.parametrize("addressing_format", ["some addressing format", "another format"]) @pytest.mark.parametrize("raw_frame_data", [ (0x2F, 0xFE, 0xDC, 0xBA, 0x98, 0x76), [0x01, 0x20] + list(range(46)), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler.is_consecutive_frame") def test_decode_sequence_number__value_error(self, mock_is_consecutive_frame, addressing_format, raw_frame_data): mock_is_consecutive_frame.return_value = False with pytest.raises(ValueError): CanConsecutiveFrameHandler.decode_sequence_number(addressing_format=addressing_format, raw_frame_data=raw_frame_data) mock_is_consecutive_frame.assert_called_once_with(addressing_format=addressing_format, raw_frame_data=raw_frame_data) self.mock_get_ai_data_bytes_number.assert_not_called() # get_min_dlc @pytest.mark.parametrize("addressing_format", ["some addressing format", "something else"]) @pytest.mark.parametrize("ai_data_bytes, payload_length", [ (0, 1), (0, 62), ]) @pytest.mark.parametrize("decoded_dlc", [8, 0xF]) def test_get_min_dlc(self, addressing_format, payload_length, ai_data_bytes, decoded_dlc): self.mock_get_ai_data_bytes_number.return_value = ai_data_bytes self.mock_get_min_dlc.return_value = decoded_dlc assert CanConsecutiveFrameHandler.get_min_dlc( addressing_format=addressing_format, payload_length=payload_length) == self.mock_get_min_dlc.return_value self.mock_get_ai_data_bytes_number.assert_called_once_with(addressing_format) data_bytes_number = payload_length + CanConsecutiveFrameHandler.SN_BYTES_USED + ai_data_bytes self.mock_get_min_dlc.assert_called_once_with(data_bytes_number) @pytest.mark.parametrize("addressing_format", ["some addressing format", "something else"]) @pytest.mark.parametrize("payload_length", [None, "not a payload", 5.]) def test_get_min_dlc__type_error(self, addressing_format, payload_length): with pytest.raises(TypeError): CanConsecutiveFrameHandler.get_min_dlc(addressing_format=addressing_format, payload_length=payload_length) @pytest.mark.parametrize("addressing_format", ["some addressing format", "something else"]) @pytest.mark.parametrize("payload_length", [0, 64]) def test_get_min_dlc__value_error(self, addressing_format, payload_length): with pytest.raises(ValueError): CanConsecutiveFrameHandler.get_min_dlc(addressing_format=addressing_format, payload_length=payload_length) @pytest.mark.parametrize("addressing_format", ["some addressing format", "something else"]) @pytest.mark.parametrize("ai_data_bytes, payload_length", [ (1, 63), (2, 62), ]) def test_get_min_dlc__inconsistent_args(self, addressing_format, payload_length, ai_data_bytes): self.mock_get_ai_data_bytes_number.return_value = ai_data_bytes with pytest.raises(InconsistentArgumentsError): CanConsecutiveFrameHandler.get_min_dlc(addressing_format=addressing_format, 
payload_length=payload_length) self.mock_get_ai_data_bytes_number.assert_called_once_with(addressing_format) # get_max_payload_size @pytest.mark.parametrize("addressing_format", ["some addressing format", "something else"]) @pytest.mark.parametrize("dlc", ["some DLC", 8]) @pytest.mark.parametrize("frame_data_bytes_number, ai_data_bytes_number", [ (10, 1), (64, 1), ]) def test_get_max_payload_size__with_addressing_dlc(self, addressing_format, dlc, frame_data_bytes_number, ai_data_bytes_number): self.mock_decode_dlc.return_value = frame_data_bytes_number self.mock_get_ai_data_bytes_number.return_value = ai_data_bytes_number max_value = CanConsecutiveFrameHandler.get_max_payload_size(addressing_format=addressing_format, dlc=dlc) self.mock_decode_dlc.assert_called_once_with(dlc) self.mock_get_ai_data_bytes_number.assert_called_once_with(addressing_format) assert isinstance(max_value, int) assert max_value == frame_data_bytes_number - ai_data_bytes_number - CanConsecutiveFrameHandler.SN_BYTES_USED @pytest.mark.parametrize("addressing_format", ["some addressing format", "something else"]) @pytest.mark.parametrize("dlc", ["some DLC", 8]) @pytest.mark.parametrize("frame_data_bytes_number, ai_data_bytes_number", [ (1, 1), (0, 0), ]) def test_get_max_payload_size__too_short(self, addressing_format, dlc, frame_data_bytes_number, ai_data_bytes_number): self.mock_decode_dlc.return_value = frame_data_bytes_number self.mock_get_ai_data_bytes_number.return_value = ai_data_bytes_number with pytest.raises(InconsistentArgumentsError): CanConsecutiveFrameHandler.get_max_payload_size(addressing_format=addressing_format, dlc=dlc) self.mock_decode_dlc.assert_called_once_with(dlc) self.mock_get_ai_data_bytes_number.assert_called_once_with(addressing_format) def test_get_max_payload_size__without_args(self): max_value = CanConsecutiveFrameHandler.get_max_payload_size() self.mock_decode_dlc.assert_not_called() self.mock_get_ai_data_bytes_number.assert_not_called() assert isinstance(max_value, int) assert max_value == CanDlcHandler.MAX_DATA_BYTES_NUMBER - CanConsecutiveFrameHandler.SN_BYTES_USED # validate_frame_data @pytest.mark.parametrize("addressing_format, raw_frame_data", [ ("some addressing format", "some raw frame data"), ("another format", range(5)), ]) @pytest.mark.parametrize("min_dlc, decoded_dlc", [ (8, 8), (13, 15), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler.is_consecutive_frame") @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler.get_min_dlc") def test_validate_frame_data__valid(self, mock_get_min_dlc, mock_is_consecutive_frame, addressing_format, raw_frame_data, decoded_dlc, min_dlc): mock_is_consecutive_frame.return_value = True mock_get_min_dlc.return_value = min_dlc self.mock_encode_dlc.return_value = decoded_dlc CanConsecutiveFrameHandler.validate_frame_data(addressing_format=addressing_format, raw_frame_data=raw_frame_data) self.mock_validate_raw_bytes.assert_called_once_with(raw_frame_data) mock_is_consecutive_frame.assert_called_once_with(addressing_format=addressing_format, raw_frame_data=raw_frame_data) mock_get_min_dlc.assert_called_once_with(addressing_format=addressing_format) @pytest.mark.parametrize("addressing_format, raw_frame_data", [ ("some addressing format", "some raw frame data"), ("another format", range(5)), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler.is_consecutive_frame") def test_validate_frame_data__value_error(self, mock_is_consecutive_frame, addressing_format, raw_frame_data): mock_is_consecutive_frame.return_value = False with 
pytest.raises(ValueError): CanConsecutiveFrameHandler.validate_frame_data(addressing_format=addressing_format, raw_frame_data=raw_frame_data) mock_is_consecutive_frame.assert_called_once_with(addressing_format=addressing_format, raw_frame_data=raw_frame_data) @pytest.mark.parametrize("addressing_format, raw_frame_data", [ ("some addressing format", "some raw frame data"), ("another format", range(5)), ]) @pytest.mark.parametrize("min_dlc, decoded_dlc", [ (1, 0), (15, 8), ]) @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler.is_consecutive_frame") @patch(f"{SCRIPT_LOCATION}.CanConsecutiveFrameHandler.get_min_dlc") def test_validate_frame_data__inconsistent_error(self, mock_get_min_dlc, mock_is_consecutive_frame, addressing_format, raw_frame_data, decoded_dlc, min_dlc): mock_is_consecutive_frame.return_value = True mock_get_min_dlc.return_value = min_dlc self.mock_encode_dlc.return_value = decoded_dlc with pytest.raises(InconsistentArgumentsError): CanConsecutiveFrameHandler.validate_frame_data(addressing_format=addressing_format, raw_frame_data=raw_frame_data) self.mock_validate_raw_bytes.assert_called_once_with(raw_frame_data) mock_is_consecutive_frame.assert_called_once_with(addressing_format=addressing_format, raw_frame_data=raw_frame_data) mock_get_min_dlc.assert_called_once_with(addressing_format=addressing_format) # __encode_sn @pytest.mark.parametrize("sequence_number", [0, 0xF]) def test_encode_sn(self, sequence_number): assert CanConsecutiveFrameHandler._CanConsecutiveFrameHandler__encode_sn(sequence_number=sequence_number) \ == [(CanConsecutiveFrameHandler.CONSECUTIVE_FRAME_N_PCI << 4) + sequence_number] self.mock_validate_nibble.assert_called_once_with(sequence_number) @pytest.mark.integration class TestCanSingleFrameHandlerIntegration: """Integration tests for `CanSingleFrameHandler` class.""" # create_valid_frame_data @pytest.mark.parametrize("kwargs, expected_raw_frame_data", [ ({"addressing_format": CanAddressingFormat.NORMAL_11BIT_ADDRESSING, "payload": [0x9A], "sequence_number": 0}, [0x20, 0x9A]), ({"addressing_format": CanAddressingFormat.NORMAL_FIXED_ADDRESSING, "payload": tuple(range(48)), "sequence_number": 0xF}, [0x2F] + list(range(48)) + (15 * [DEFAULT_FILLER_BYTE])), ({"addressing_format": CanAddressingFormat.EXTENDED_ADDRESSING, "target_address": 0xF1, "dlc": 8, "payload": (0x92, 0xB8), "sequence_number": 0x5, "filler_byte": 0xD9}, [0xF1, 0x25, 0x92, 0xB8, 0xD9, 0xD9, 0xD9, 0xD9]), ({"addressing_format": CanAddressingFormat.MIXED_11BIT_ADDRESSING, "address_extension": 0xE8, "dlc": 9, "payload": list(range(10, 20)), "sequence_number": 0xB, "filler_byte": 0x99}, [0xE8, 0x2B] + list(range(10, 20))), ({"addressing_format": CanAddressingFormat.MIXED_29BIT_ADDRESSING, "target_address": 0xFE, "address_extension": 0xDC, "payload": tuple(range(50, 96)), "sequence_number": 0x1, "filler_byte": 0xD9}, [0xDC, 0x21] + list(range(50, 96))), ]) def test_create_valid_frame_data__valid(self, kwargs, expected_raw_frame_data): assert CanConsecutiveFrameHandler.create_valid_frame_data(**kwargs) == expected_raw_frame_data @pytest.mark.parametrize("kwargs", [ {"addressing_format": CanAddressingFormat.NORMAL_11BIT_ADDRESSING, "payload": [0x9A], "dlc": 1, "sequence_number": 0}, {"addressing_format": CanAddressingFormat.NORMAL_FIXED_ADDRESSING, "payload": [], "sequence_number": 0xF}, {"addressing_format": CanAddressingFormat.EXTENDED_ADDRESSING, "target_address": 0xF1, "payload": [0x9A], "dlc": 8, "sequence_number":
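# --- Illustrative sketch (not part of the uds package under test) ---
# The parametrized cases above all exercise the same byte layout: an optional
# addressing-information byte, then one byte whose high nibble is the
# consecutive-frame N_PCI and whose low nibble is the sequence number, then the
# payload, padded with a filler byte up to the DLC-implied size.  The 0x2 N_PCI
# value and the [0x20, 0x9A] expectation (SN=0, payload=[0x9A]) are taken from
# the integration parametrization; the filler value below is only an example.

CONSECUTIVE_FRAME_N_PCI = 0x2   # high nibble of the first CF data byte
SN_BYTES_USED = 1               # sequence number shares a single byte with the N_PCI
FILLER_BYTE = 0xCC              # hypothetical filler byte for this sketch

def build_consecutive_frame(payload, sequence_number, data_bytes_number):
    """Build CF data for normal addressing (no addressing-information byte)."""
    frame = [(CONSECUTIVE_FRAME_N_PCI << 4) | (sequence_number & 0xF)] + list(payload)
    frame += [FILLER_BYTE] * (data_bytes_number - len(frame))  # pad to the frame size
    return frame

def decode_consecutive_frame(raw_frame_data, ai_bytes_number=0):
    """Return (sequence_number, payload) the same way the handler's decoders do."""
    sequence_number = raw_frame_data[ai_bytes_number] & 0xF
    payload = list(raw_frame_data)[ai_bytes_number + SN_BYTES_USED:]
    return sequence_number, payload

frame = build_consecutive_frame(payload=[0x9A], sequence_number=0, data_bytes_number=2)
assert frame == [0x20, 0x9A]                      # matches the integration expectation
assert decode_consecutive_frame(frame) == (0, [0x9A])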
to method delete_conversations_email_messages_draft_attachment" % key ) params[key] = val del params['kwargs'] # verify the required parameter 'conversation_id' is set if ('conversation_id' not in params) or (params['conversation_id'] is None): raise ValueError("Missing the required parameter `conversation_id` when calling `delete_conversations_email_messages_draft_attachment`") # verify the required parameter 'attachment_id' is set if ('attachment_id' not in params) or (params['attachment_id'] is None): raise ValueError("Missing the required parameter `attachment_id` when calling `delete_conversations_email_messages_draft_attachment`") resource_path = '/api/v2/conversations/emails/{conversationId}/messages/draft/attachments/{attachmentId}'.replace('{format}', 'json') path_params = {} if 'conversation_id' in params: path_params['conversationId'] = params['conversation_id'] if 'attachment_id' in params: path_params['attachmentId'] = params['attachment_id'] query_params = {} header_params = {} form_params = [] local_var_files = {} body_params = None # HTTP header `Accept` header_params['Accept'] = self.api_client.\ select_header_accept(['application/json']) if not header_params['Accept']: del header_params['Accept'] # HTTP header `Content-Type` header_params['Content-Type'] = self.api_client.\ select_header_content_type(['application/json']) # Authentication setting auth_settings = ['PureCloud OAuth'] response = self.api_client.call_api(resource_path, 'DELETE', path_params, query_params, header_params, body=body_params, post_params=form_params, files=local_var_files, response_type=None, auth_settings=auth_settings, callback=params.get('callback')) return response def delete_conversations_messaging_integrations_facebook_integration_id(self, integration_id, **kwargs): """ Delete a Facebook messaging integration This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.delete_conversations_messaging_integrations_facebook_integration_id(integration_id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param str integration_id: Integration ID (required) :return: None If the method is called asynchronously, returns the request thread. 
""" all_params = ['integration_id'] all_params.append('callback') params = locals() for key, val in iteritems(params['kwargs']): if key not in all_params: raise TypeError( "Got an unexpected keyword argument '%s'" " to method delete_conversations_messaging_integrations_facebook_integration_id" % key ) params[key] = val del params['kwargs'] # verify the required parameter 'integration_id' is set if ('integration_id' not in params) or (params['integration_id'] is None): raise ValueError("Missing the required parameter `integration_id` when calling `delete_conversations_messaging_integrations_facebook_integration_id`") resource_path = '/api/v2/conversations/messaging/integrations/facebook/{integrationId}'.replace('{format}', 'json') path_params = {} if 'integration_id' in params: path_params['integrationId'] = params['integration_id'] query_params = {} header_params = {} form_params = [] local_var_files = {} body_params = None # HTTP header `Accept` header_params['Accept'] = self.api_client.\ select_header_accept(['application/json']) if not header_params['Accept']: del header_params['Accept'] # HTTP header `Content-Type` header_params['Content-Type'] = self.api_client.\ select_header_content_type(['application/json']) # Authentication setting auth_settings = ['PureCloud OAuth'] response = self.api_client.call_api(resource_path, 'DELETE', path_params, query_params, header_params, body=body_params, post_params=form_params, files=local_var_files, response_type=None, auth_settings=auth_settings, callback=params.get('callback')) return response def delete_conversations_messaging_integrations_line_integration_id(self, integration_id, **kwargs): """ Delete a LINE messenger integration This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.delete_conversations_messaging_integrations_line_integration_id(integration_id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param str integration_id: Integration ID (required) :return: None If the method is called asynchronously, returns the request thread. 
""" all_params = ['integration_id'] all_params.append('callback') params = locals() for key, val in iteritems(params['kwargs']): if key not in all_params: raise TypeError( "Got an unexpected keyword argument '%s'" " to method delete_conversations_messaging_integrations_line_integration_id" % key ) params[key] = val del params['kwargs'] # verify the required parameter 'integration_id' is set if ('integration_id' not in params) or (params['integration_id'] is None): raise ValueError("Missing the required parameter `integration_id` when calling `delete_conversations_messaging_integrations_line_integration_id`") resource_path = '/api/v2/conversations/messaging/integrations/line/{integrationId}'.replace('{format}', 'json') path_params = {} if 'integration_id' in params: path_params['integrationId'] = params['integration_id'] query_params = {} header_params = {} form_params = [] local_var_files = {} body_params = None # HTTP header `Accept` header_params['Accept'] = self.api_client.\ select_header_accept(['application/json']) if not header_params['Accept']: del header_params['Accept'] # HTTP header `Content-Type` header_params['Content-Type'] = self.api_client.\ select_header_content_type(['application/json']) # Authentication setting auth_settings = ['PureCloud OAuth'] response = self.api_client.call_api(resource_path, 'DELETE', path_params, query_params, header_params, body=body_params, post_params=form_params, files=local_var_files, response_type=None, auth_settings=auth_settings, callback=params.get('callback')) return response def delete_conversations_messaging_integrations_open_integration_id(self, integration_id, **kwargs): """ Delete an Open messaging integration See https://developer.genesys.cloud/api/digital/openmessaging/ for more information. This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.delete_conversations_messaging_integrations_open_integration_id(integration_id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param str integration_id: Integration ID (required) :return: None If the method is called asynchronously, returns the request thread. 
""" all_params = ['integration_id'] all_params.append('callback') params = locals() for key, val in iteritems(params['kwargs']): if key not in all_params: raise TypeError( "Got an unexpected keyword argument '%s'" " to method delete_conversations_messaging_integrations_open_integration_id" % key ) params[key] = val del params['kwargs'] # verify the required parameter 'integration_id' is set if ('integration_id' not in params) or (params['integration_id'] is None): raise ValueError("Missing the required parameter `integration_id` when calling `delete_conversations_messaging_integrations_open_integration_id`") resource_path = '/api/v2/conversations/messaging/integrations/open/{integrationId}'.replace('{format}', 'json') path_params = {} if 'integration_id' in params: path_params['integrationId'] = params['integration_id'] query_params = {} header_params = {} form_params = [] local_var_files = {} body_params = None # HTTP header `Accept` header_params['Accept'] = self.api_client.\ select_header_accept(['application/json']) if not header_params['Accept']: del header_params['Accept'] # HTTP header `Content-Type` header_params['Content-Type'] = self.api_client.\ select_header_content_type(['application/json']) # Authentication setting auth_settings = ['PureCloud OAuth'] response = self.api_client.call_api(resource_path, 'DELETE', path_params, query_params, header_params, body=body_params, post_params=form_params, files=local_var_files, response_type=None, auth_settings=auth_settings, callback=params.get('callback')) return response def delete_conversations_messaging_integrations_twitter_integration_id(self, integration_id, **kwargs): """ Delete a Twitter messaging integration This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.delete_conversations_messaging_integrations_twitter_integration_id(integration_id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param str integration_id: Integration ID (required) :return: None If the method is called asynchronously, returns the request thread. 
""" all_params = ['integration_id'] all_params.append('callback') params = locals() for key, val in iteritems(params['kwargs']): if key not in all_params: raise TypeError( "Got an unexpected keyword argument '%s'" " to method delete_conversations_messaging_integrations_twitter_integration_id" % key ) params[key] = val del params['kwargs'] # verify the required parameter 'integration_id' is set if ('integration_id' not in params) or (params['integration_id'] is None): raise ValueError("Missing the required parameter `integration_id` when calling `delete_conversations_messaging_integrations_twitter_integration_id`") resource_path = '/api/v2/conversations/messaging/integrations/twitter/{integrationId}'.replace('{format}', 'json') path_params = {} if 'integration_id' in params: path_params['integrationId'] = params['integration_id'] query_params = {} header_params = {} form_params = [] local_var_files = {} body_params = None # HTTP header `Accept` header_params['Accept'] = self.api_client.\ select_header_accept(['application/json']) if not header_params['Accept']: del header_params['Accept'] # HTTP header `Content-Type` header_params['Content-Type'] = self.api_client.\ select_header_content_type(['application/json']) # Authentication setting auth_settings = ['PureCloud OAuth'] response = self.api_client.call_api(resource_path, 'DELETE', path_params, query_params, header_params, body=body_params, post_params=form_params, files=local_var_files, response_type=None, auth_settings=auth_settings, callback=params.get('callback')) return response def delete_conversations_messaging_integrations_whatsapp_integration_id(self, integration_id, **kwargs): """ Delete a WhatsApp messaging integration This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.delete_conversations_messaging_integrations_whatsapp_integration_id(integration_id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param str integration_id: Integration ID (required) :return: WhatsAppIntegration If the method is called asynchronously, returns the request thread. 
""" all_params = ['integration_id'] all_params.append('callback') params = locals() for key, val in iteritems(params['kwargs']): if key not in all_params: raise TypeError( "Got an unexpected keyword argument '%s'" " to method delete_conversations_messaging_integrations_whatsapp_integration_id" % key ) params[key] = val del params['kwargs'] # verify the required parameter 'integration_id' is set if ('integration_id' not in params) or (params['integration_id'] is None): raise ValueError("Missing the required parameter `integration_id` when calling `delete_conversations_messaging_integrations_whatsapp_integration_id`") resource_path = '/api/v2/conversations/messaging/integrations/whatsapp/{integrationId}'.replace('{format}', 'json') path_params = {} if 'integration_id' in params: path_params['integrationId'] = params['integration_id'] query_params = {} header_params = {} form_params = [] local_var_files = {} body_params = None # HTTP header `Accept` header_params['Accept'] = self.api_client.\ select_header_accept(['application/json']) if not header_params['Accept']: del header_params['Accept'] # HTTP header `Content-Type` header_params['Content-Type'] = self.api_client.\ select_header_content_type(['application/json']) # Authentication setting auth_settings = ['PureCloud OAuth'] response = self.api_client.call_api(resource_path, 'DELETE', path_params, query_params, header_params, body=body_params, post_params=form_params, files=local_var_files, response_type='WhatsAppIntegration', auth_settings=auth_settings, callback=params.get('callback')) return response def get_analytics_conversation_details(self, conversation_id, **kwargs): """ Get a conversation by id This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.get_analytics_conversation_details(conversation_id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param str conversation_id: conversationId (required) :return: AnalyticsConversationWithoutAttributes If the method
import os import shutil import fnmatch import tarfile import subprocess from pathlib import Path from datetime import datetime from collections import namedtuple import time import logging import concurrent.futures from ..handler import add_handler from ..radiometry import BRDF, LinearAdjustments, RadTransforms, landsat_pixel_angles, sentinel_pixel_angles, QAMasker, DOS, SixS from ..radiometry.angles import estimate_cloud_shadows from ..core.properties import get_sensor_info from ..core import ndarray_to_xarray from ..backends.gdal_ import warp import geowombat as gw import numpy as np from osgeo import gdal import pandas as pd import geopandas as gpd import xarray as xr from shapely.geometry import Polygon try: import requests REQUESTS_INSTALLED = True except: REQUESTS_INSTALLED = False try: from s2cloudless import S2PixelCloudDetector S2CLOUDLESS_INSTALLED = True except: S2CLOUDLESS_INSTALLED = False logger = logging.getLogger(__name__) logger = add_handler(logger) RESAMPLING_DICT = dict(bilinear=gdal.GRA_Bilinear, cubic=gdal.GRA_Cubic, nearest=gdal.GRA_NearestNeighbour) OrbitDates = namedtuple('OrbitDates', 'start end') FileInfo = namedtuple('FileInfo', 'name key') GoogleFileInfo = namedtuple('GoogleFileInfo', 'url url_file meta angles') def _rmdir(pathdir): """ Removes a directory path """ if pathdir.is_dir(): for child in pathdir.iterdir(): if child.is_file(): try: child.unlink() except: pass try: pathdir.rmdir() except: try: shutil.rmtree(str(pathdir)) except: pass def _delayed_read(fn): attempt = 0 max_attempts = 10 while True: if Path(fn).is_file(): break else: time.sleep(2) attempt += 1 if attempt >= max_attempts: break with open(str(fn), mode='r') as tx: lines = tx.readlines() return lines def _update_status_file(fn, log_name): attempt = 0 max_attempts = 10 while True: wait_on_file = False # Check if the file is open by another process for proc in psutil.process_iter(): try: for item in proc.open_files(): if item.path == str(fn): wait_on_file = True break except Exception: pass if wait_on_file: break if wait_on_file: time.sleep(2) else: break attempt += 1 if attempt >= max_attempts: break with open(str(fn), mode='r') as tx: lines = tx.readlines() if lines: lines = list(set(lines)) if log_name + '\n' not in lines: lines.append(log_name + '\n') fn.unlink() with open(str(fn), mode='w') as tx: tx.writelines(lines) def _clean_and_update(outdir_angles, finfo_dict, meta_name, check_angles=True, check_downloads=True, load_bands_names=None): if check_angles: _rmdir(outdir_angles) if check_downloads: for k, v in finfo_dict.items(): if Path(v.name).is_file(): try: Path(v.name).unlink() except Warning: logger.warning(' Could not delete {}.'.format(v.name)) else: logger.warning(' The {} file does not exist to delete.'.format(v.name)) # if update_status: # _update_status_file(status, meta_name) if load_bands_names: for loaded_band in load_bands_names: if Path(loaded_band).is_file(): try: Path(loaded_band).unlink() except Warning: logger.warning(' Could not delete {}.'.format(loaded_band)) def _assign_attrs(data, attrs, bands_out): if bands_out: data = data.sel(band=bands_out) data = data.transpose('band', 'y', 'x') data.attrs = attrs return data def _parse_google_filename(filename, landsat_parts, sentinel_parts, public_url): file_info = GoogleFileInfo(url=None, url_file=None, meta=None, angles=None) f_base, f_ext = os.path.splitext(filename) fn_parts = f_base.split('_') if fn_parts[0].lower() in landsat_parts: # Collection 1 url_ = 
'{PUBLIC}-landsat/{SENSOR}/01/{PATH}/{ROW}/{FDIR}'.format(PUBLIC=public_url, SENSOR=fn_parts[0], PATH=fn_parts[2][:3], ROW=fn_parts[2][3:], FDIR='_'.join(fn_parts[:-1])) url_filename = '{URL}/{FN}'.format(URL=url_, FN=filename) url_meta = '{URL}/{FN}_MTL.txt'.format(URL=url_, FN='_'.join(fn_parts[:-1])) url_angles = '{URL}/{FN}_ANG.txt'.format(URL=url_, FN='_'.join(fn_parts[:-1])) file_info = GoogleFileInfo(url=url_, url_file=url_filename, meta=url_meta, angles=url_angles) return file_info def _download_workers(gcp_str, poutdir, outdir, fname, fn, null_items, verbose): # Renaming Sentinel data rename = False # Full path of GCP local download down_file = str(poutdir.joinpath(fname)) if down_file.endswith('_ANG.txt'): fbase = fname.replace('_ANG.txt', '') key = 'angle' elif down_file.endswith('_MTL.txt'): fbase = fname.replace('_MTL.txt', '') key = 'meta' elif down_file.endswith('MTD_TL.xml'): fbase = Path(fn).parent.name down_file = str(poutdir.joinpath(fbase + '_MTD_TL.xml')) key = 'meta' rename = True elif down_file.endswith('_BQA.TIF'): fbase = fname.replace('_BQA.TIF', '') key = 'qa' else: if fname.endswith('.jp2'): fbase = Path(fn).parent.parent.name key = Path(fn).name.split('.')[0].split('_')[-1] down_file = str(poutdir.joinpath(fbase + '_' + key + '.jp2')) rename = True else: fsplit = fname.split('_') fbase = '_'.join(fsplit[:-1]) key = fsplit[-1].split('.')[0] # TODO: QA60 continue_download = True if fbase in null_items: continue_download = False if continue_download: ################### # Download the file ################### if not Path(down_file).is_file(): if fn.lower().startswith('gs://gcp-public-data'): com = 'gsutil cp -r {} {}'.format(fn, outdir) else: com = 'gsutil cp -r {}/{} {}'.format(gcp_str, fn, outdir) if verbose > 0: logger.info(' Downloading {} ...'.format(fname)) subprocess.call(com, shell=True) if rename: os.rename(str(Path(outdir).joinpath(Path(fn).name)), down_file) # Store file information return key, FileInfo(name=down_file, key=key) else: return None, None class DownloadMixin(object): def download_gcp(self, sensor, downloads=None, outdir='.', outdir_brdf=None, search_wildcards=None, search_dict=None, n_jobs=1, verbose=0): """ Downloads a file from Google Cloud platform Args: sensor (str): The sensor to query. Choices are ['l5', 'l7', 'l8', 's2a', 's2c']. downloads (Optional[str or list]): The file or list of keys to download. If not given, keys will be taken from ``search_dict`` or ``self.search_dict``. outdir (Optional[str | Path]): The output directory. outdir_brdf (Optional[Path]): The output directory. search_wildcards (Optional[list]): A list of search wildcards. search_dict (Optional[dict]): A keyword search dictionary to override ``self.search_dict``. n_jobs (Optional[int]): The number of files to download in parallel. verbose (Optional[int]): The verbosity level. 
Returns: ``dict`` of ``dicts`` where sub-dictionaries contain a ``namedtuple`` of the downloaded file and tag """ if not search_dict: if not self.search_dict: logger.exception(' A keyword search dictionary must be provided, either from `self.list_gcp` or the `search_dict` argument.') else: search_dict = self.search_dict poutdir = Path(outdir) if outdir != '.': poutdir.mkdir(parents=True, exist_ok=True) if not downloads: downloads = list(search_dict.keys()) if not isinstance(downloads, list): downloads = [downloads] if sensor in ['s2', 's2a', 's2b', 's2c']: gcp_str = 'gsutil cp -r gs://gcp-public-data-sentinel-2' else: gcp_str = 'gsutil cp -r gs://gcp-public-data-landsat' downloaded = {} null_items = [] for search_key in downloads: download_list = self.search_dict[search_key] if search_wildcards: download_list_ = [] for swild in search_wildcards: download_list_ += fnmatch.filter(download_list, '*{}'.format(swild)) download_list = download_list_ download_list_names = [Path(dfn).name for dfn in download_list] logger.info(' The download contains {:d} items: {}'.format(len(download_list_names), ','.join(download_list_names))) # Separate each scene if sensor.lower() in ['l5', 'l7', 'l8']: # list of file ids id_list = ['_'.join(fn.split('_')[:-1]) for fn in download_list_names if fn.endswith('_MTL.txt')] # list of lists where each sub-list is unique download_list_unique = [[fn for fn in download_list if sid in Path(fn).name] for sid in id_list] else: id_list = list(set(['_'.join(fn.split('_')[:-1]) for fn in download_list_names])) download_list_unique = [download_list] for scene_id, sub_download_list in zip(id_list, download_list_unique): logger.info(' Checking scene {} ...'.format(scene_id)) downloaded_sub = {} # Check if the file has been downloaded if sensor.lower() in ['l5', 'l7', 'l8']: if not scene_id.lower().startswith(self.sensor_collections[sensor.lower()]): logger.exception(' The scene id {SCENE_ID} does not match the sensor {SENSOR}.'.format(SCENE_ID=scene_id, SENSOR=sensor)) raise NameError # Path of BRDF stack out_brdf = outdir_brdf.joinpath(scene_id + '.tif') else: fn = sub_download_list[0] fname = Path(fn).name if fname.lower().endswith('.jp2'): fbase = Path(fn).parent.parent.name key = Path(fn).name.split('.')[0].split('_')[-1] down_file = str(poutdir.joinpath(fbase + '_' + key + '.jp2')) brdfp = '_'.join(Path(down_file).name.split('_')[:-1]) out_brdf = outdir_brdf.joinpath(brdfp + '_MTD.tif') else: out_brdf = None if out_brdf: if out_brdf.is_file() or \ Path(str(out_brdf).replace('.tif', '.nc')).is_file() or \ Path(str(out_brdf).replace('.tif', '.nodata')).is_file(): logger.warning(f' The output BRDF file, {str(out_brdf)}, already exists.') _clean_and_update(None, None, None, check_angles=False, check_downloads=False) continue else: logger.warning(f' Continuing with the download for {str(out_brdf)}.') # Move the metadata file to the front of the # list to avoid unnecessary downloads. 
if sensor.lower() in ['l5', 'l7', 'l8']: meta_index = [i for i in range(0, len(sub_download_list)) if sub_download_list[i].endswith('_MTL.txt')][0] sub_download_list.insert(0, sub_download_list.pop(meta_index)) else: # The Sentinel 2 metadata files come in their own list pass download_list_names = [Path(dfn).name for dfn in sub_download_list] results = [] with concurrent.futures.ThreadPoolExecutor(max_workers=n_jobs) as executor: futures = [executor.submit(_download_workers, gcp_str, poutdir, outdir, fname, fn, null_items, verbose) for fname, fn in zip(download_list_names, sub_download_list)] for f in concurrent.futures.as_completed(futures): results.append(f.result()) for key, finfo_ in results: if finfo_: downloaded_sub[key] = finfo_ if downloaded_sub: if len(downloaded_sub) < len(sub_download_list): downloaded_names = [Path(v.name).name for v in list(downloaded_sub.values())] missing_items = ','.join(list(set(download_list_names).difference(downloaded_names))) logger.warning(' Only {:d} files out of {:d} were downloaded.'.format(len(downloaded_sub), len(sub_download_list))) logger.warning(' {} are missing.'.format(missing_items)) downloaded[search_key] = downloaded_sub return downloaded def download_aws(self, landsat_id, band_list, outdir='.'): """ Downloads Landsat 8 data from Amazon AWS Args: landsat_id (str): The Landsat id to download. band_list (list): The Landsat bands to download. outdir (Optional[str]): The output directory. Examples: >>> from geowombat.util import GeoDownloads >>> >>> dl = GeoDownloads() >>> dl.download_aws('LC08_L1TP_224077_20200518_20200518_01_RT', ['b2', 'b3', 'b4']) """ if not REQUESTS_INSTALLED: logger.exception('Requests must be installed.') if not isinstance(outdir, Path): outdir = Path(outdir) parts = landsat_id.split('_') path_row = parts[2] path = int(path_row[:3]) row = int(path_row[3:]) def _download_file(in_file, out_file): response = requests.get(in_file) with open(out_file, 'wb') as f: f.write(response.content) mtl_id = '{landsat_id}_MTL.txt'.format(landsat_id=landsat_id) url = '{aws_l8_public}/{path:03d}/{row:03d}/{landsat_id}/{mtl_id}'.format(aws_l8_public=self.aws_l8_public, path=path, row=row, landsat_id=landsat_id, mtl_id=mtl_id) mtl_out = outdir / mtl_id _download_file(url, str(mtl_out)) angle_id = '{landsat_id}_ANG.txt'.format(landsat_id=landsat_id) url = '{aws_l8_public}/{path:03d}/{row:03d}/{landsat_id}/{angle_id}'.format(aws_l8_public=self.aws_l8_public, path=path, row=row, landsat_id=landsat_id, angle_id=angle_id) angle_out = outdir / angle_id _download_file(url, str(angle_out)) for band in band_list: band_id = '{landsat_id}_{band}.TIF'.format(landsat_id=landsat_id, band=band.upper()) url = '{aws_l8_public}/{path:03d}/{row:03d}/{landsat_id}/{band_id}'.format(aws_l8_public=self.aws_l8_public, path=path, row=row, landsat_id=landsat_id, band_id=band_id) band_out = outdir / band_id _download_file(url, str(band_out)) # def download_landsat_range(self, sensors, bands, path_range, row_range, date_range, **kwargs): # # """ # Downloads Landsat data from iterables # # Args: # sensors (str): A list of sensors to download. # bands (str): A list of bands to download. # path_range (iterable): A list of paths. # row_range (iterable): A list
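# --- Usage sketch for the download helpers above ---
# The download_aws call mirrors the example in its own docstring.  For
# download_gcp, the keyword search dictionary would normally be populated by
# self.list_gcp (whose signature is not shown here), so the scene key and
# bucket object paths below are hypothetical and only illustrate the expected
# shape: each key maps to a list of paths including an _MTL.txt, an _ANG.txt
# and the band files.
from pathlib import Path
from geowombat.util import GeoDownloads

dl = GeoDownloads()

# Landsat 8 bands 2-4 from AWS for a single scene (docstring example).
dl.download_aws('LC08_L1TP_224077_20200518_20200518_01_RT', ['b2', 'b3', 'b4'], outdir='l8_aws')

# Hypothetical GCP listing, shaped as list_gcp would normally provide it.
dl.search_dict = {
    'example_scene': [
        'LC08/01/224/077/LC08_L1TP_224077_20200518_20200518_01_T1/LC08_L1TP_224077_20200518_20200518_01_T1_MTL.txt',
        'LC08/01/224/077/LC08_L1TP_224077_20200518_20200518_01_T1/LC08_L1TP_224077_20200518_20200518_01_T1_ANG.txt',
        'LC08/01/224/077/LC08_L1TP_224077_20200518_20200518_01_T1/LC08_L1TP_224077_20200518_20200518_01_T1_B4.TIF',
    ]
}
downloaded = dl.download_gcp(
    'l8',
    outdir='l8_gcp',
    outdir_brdf=Path('l8_brdf'),
    search_wildcards=['MTL.txt', 'ANG.txt', 'B4.TIF'],
    n_jobs=1,
    verbose=1,
)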
<filename>tail/analysis/likelihood.py from .container import Spectra, get_idx, get_idxs, FSky from .foreground import DUST_AMP, DUST_AMP_STD import sys import numpy as np import pandas as pd from numba import jit, float64, bool_, int32, complex128, prange from scipy.optimize import minimize from scipy.stats import chi2 import emcee import seaborn as sns SPECTRA_MATRIX_ORDER = 'TEB' DUST_POWER = -0.58 DUST_KNEE = 80. # conversion ################################################################### def to_spectra_matrix(array, spectra, cases=SPECTRA_MATRIX_ORDER): '''convert spectra into a matrix form :param array: starts with a dimension that holds different spectra :param spectra: list of spectrum names return ------ the last 2 dimensions become the indices for spectra in matrix form ''' shape_in = array.shape n = shape_in[0] m = len(cases) # the resultant matrix is a symmetric matrix assert m * (m + 1) == 2 * n shape_out = list(shape_in[1:]) + [m, m] res = np.empty(shape_out, dtype=array.dtype) for i in range(m): for j in range(i, m): spectrum = cases[i] + cases[j] idx = get_idx(spectra, spectrum) temp = array[idx] res[..., i, j] = temp if i != j: res[..., j, i] = temp return res def from_spectra_matrix(array, spectra, cases=SPECTRA_MATRIX_ORDER): '''inverse function of `to_spectra_matrix` ''' shape_in = array.shape n = len(spectra) m = shape_in[-1] assert shape_in[-2] == m assert m * (m + 1) == 2 * n shape_out = [n] + list(shape_in[:-2]) res = np.empty(shape_out, dtype=array.dtype) for i in range(m): for j in range(i, m): spectrum = cases[i] + cases[j] idx = get_idx(spectra, spectrum) temp = array[..., i, j] res[idx] = temp return res # Transformation ############################################################### def get_rel_dust_spectra(spectra: Spectra): w_bl = spectra.w_bl[get_idx(spectra.spectra, 'BB'), 0, 0] l_min = spectra.l_min l_max = spectra.l_max return w_bl @ (np.power(np.arange(l_min, l_max) / DUST_KNEE, DUST_POWER)) @jit(float64[:](float64, int32, int32, int32, float64), nopython=True, nogil=True, cache=True) def _beam_err(sigma_sq, l_min, bin_width, n_b, bin_width_inverse): delta_B_sq = np.zeros(n_b) for b in range(n_b): for dl in range(bin_width): l = l_min + b * bin_width + dl delta_B_sq[b] += np.exp(-l * (l + 1) * sigma_sq) delta_B_sq[b] *= bin_width_inverse return delta_B_sq @jit(float64[:](float64, int32, int32, int32), nopython=True, nogil=True, cache=True) def beam_err(sigma_sq, l_min, l_max, bin_width): '''calculate the binned beam error''' n = l_max - l_min n_b = n // bin_width assert n % bin_width == 0 bin_width_inverse = 1. 
/ bin_width return _beam_err(sigma_sq, l_min, bin_width, n_b, bin_width_inverse) @jit(float64[:, :, ::1](float64, float64[:, :, ::1], float64[::1]), nopython=True, nogil=True, cache=True) def scale_by_amplitude_parameter(theta, theory_spectra, leakage_spectra_BB): '''scale the theory BB spectra `theta`: A_BB `theory_spectra`: spectra in matrix form `leakage_spectra_BB`: BB leakage which won't scale with A_BB ''' res = theory_spectra.copy() res[:, 2, 2] -= leakage_spectra_BB res[:, 2, 2] *= theta res[:, 2, 2] += leakage_spectra_BB return res @jit(float64[:, :, ::1](float64[::1], float64[:, :, ::1], int32, int32, int32), nopython=True, nogil=True, cache=True) def transform(thetas, array, l_min, l_max, bin_width): '''rotate and scale spectra in matrix form `thetas`: first 2 are scaling factors, 3rd one is angle to rotate, 4th is sigma_sq `array`: spectra in matrix form calling 1/a, theta, -sigma_sq on inverse_transform should be inverse of this This function is not used anywhere as of writing, but for testing the inverse relationship ''' bin_width_inverse = 1. / bin_width n_b = array.shape[0] a = np.empty(3, dtype=thetas.dtype) a[0] = thetas[0] a[1] = thetas[1] a[2] = thetas[1] theta = thetas[2] sigma_sq = thetas[3] b_sq = _beam_err(sigma_sq, l_min, bin_width, n_b, bin_width_inverse) R = np.zeros((3, 3)) R[0, 0] = 1. cos_2theta = np.cos(2. * theta) sin_2theta = np.sin(2. * theta) R[1, 1] = cos_2theta R[1, 2] = sin_2theta R[2, 1] = -sin_2theta R[2, 2] = cos_2theta res = np.zeros_like(array) for b in range(n_b): for i in range(3): for j in range(3): for k in range(3): for l in range(3): res[b, i, j] += b_sq[b] * R[i, k] * R[j, l] * a[k] * a[l] * array[b, k, l] return res @jit( [ complex128[:, :, ::1](float64[::1], complex128[:, :, ::1], int32, int32, int32), float64[:, :, ::1](float64[::1], float64[:, :, ::1], int32, int32, int32), ], nopython=True, nogil=True, cache=True, ) def inverse_transform(thetas, array, l_min, l_max, bin_width): '''rotate and scale measured spectra in matrix form `thetas`: first 2 are scaling factors, 3rd one is angle to rotate, 4th is sigma_sq `array`: spectra in matrix form (global var) calling 1/a, theta, -sigma_sq on transform should be inverse of this ''' bin_width_inverse = 1. / bin_width n_b = array.shape[0] a = np.empty(3, dtype=thetas.dtype) a[0] = thetas[0] a[1] = thetas[1] a[2] = thetas[1] theta = thetas[2] sigma_sq = thetas[3] b_sq = _beam_err(sigma_sq, l_min, bin_width, n_b, bin_width_inverse) R = np.zeros((3, 3)) R[0, 0] = 1. cos_2theta = np.cos(2. * theta) sin_2theta = np.sin(2. * theta) R[1, 1] = cos_2theta R[1, 2] = -sin_2theta R[2, 1] = sin_2theta R[2, 2] = cos_2theta res = np.zeros_like(array) for b in range(n_b): for i in range(3): for j in range(3): for k in range(3): for l in range(3): res[b, i, j] += b_sq[b] * a[i] * a[j] * R[j, l] * R[i, k] * array[b, k, l] return res @jit( [ complex128[:, :, :, ::1](float64[::1], complex128[:, :, :, ::1], int32, int32, int32), float64[:, :, :, ::1](float64[::1], float64[:, :, :, ::1], int32, int32, int32), ], nopython=True, nogil=True, cache=True, parallel=True, ) def inverse_transform_batch(thetas, array, l_min, l_max, bin_width): '''batch running inverse_transform per first dimension of array''' n = array.shape[0] res = np.empty_like(array) for i in prange(n): res[i] = inverse_transform(thetas, array[i], l_min, l_max, bin_width) return res def run_test(array=None, l_min=None, l_max=None, bin_width=None): '''test get_transform and get_inverse_transform `array`: some sort of theory spectra s.t. TB, EB are 0. 
''' if array is None: # fake theory array = np.random.rand(24, 3, 3) # symmetric array += array.transpose(0, 2, 1) # TB, EB = 0 array[:, 0, 2] = 0. array[:, 2, 0] = 0. array[:, 1, 2] = 0. array[:, 2, 1] = 0. if l_min is None: l_min = 600 if l_max is None: l_max = 3000 if bin_width is None: bin_width = 50 # sanity check for transform np.testing.assert_array_equal(array, transform(np.array([1., 1., 0., 0.]), array, l_min, l_max, bin_width)) res = transform(np.array([1.1, 1.2, 0.1, 0.]), array, l_min, l_max, bin_width) np.testing.assert_array_equal(res[:, 1, 2], res[:, 2, 1]) np.testing.assert_array_equal(res[:, 0, 2], res[:, 2, 0]) np.testing.assert_array_equal(res[:, 0, 1], res[:, 1, 0]) # compare to "Self-calibrating" paper for theta in (np.random.rand(10) * (2. * np.pi)): theta_2 = theta * 2. mine = transform(np.array([1., 1., theta, 0.]), array, l_min, l_max, bin_width) np.testing.assert_array_equal(mine[:, 0, 1], np.cos(theta_2) * array[:, 0, 1]) np.testing.assert_array_equal(mine[:, 1, 1], np.sin(theta_2)**2 * array[:, 2, 2] + np.cos(theta_2)**2 * array[:, 1, 1]) np.testing.assert_allclose(mine[:, 1, 2], 0.5 * np.sin(4. * theta) * (array[:, 2, 2] - array[:, 1, 1])) np.testing.assert_array_equal(mine[:, 0, 2], -np.sin(theta_2) * array[:, 0, 1]) np.testing.assert_array_equal(mine[:, 2, 2], np.cos(theta_2)**2 * array[:, 2, 2] + np.sin(theta_2)**2 * array[:, 1, 1]) # sanity check for inverse_transform np.testing.assert_array_equal(array, inverse_transform(np.array([1., 1., 0., 0.]), array, l_min, l_max, bin_width)) # check transform and inverse becomes identity for thetas in (np.random.rand(2, 4) * (2. * np.pi)): thetas[3] *= 1.e-10 transformed = transform(thetas, array, l_min, l_max, bin_width) thetas_inv = np.empty_like(thetas) thetas_inv[0] = 1. / thetas[0] thetas_inv[1] = 1. / thetas[1] thetas_inv[2] = thetas[2] thetas_inv[3] = -thetas[3] np.testing.assert_allclose(array, inverse_transform(thetas_inv, transformed, l_min, l_max, bin_width), atol=1e-13) # likelihood ################################################################### # θ / thetas # thetas[0] = A_BB # thetas[1] = a_T # thetas[2] = a_E # thetas[3] = theta # thetas[4] = sigma_sq # thetas[5] = A_dust @jit(bool_(float64[::1]), nopython=True, nogil=True, cache=True) def prior(θ): '''global prior constraint need to double check the MCMC result isn't anywhere near these boundaries ''' # degree scale = 30. / 180. * np.pi return ( (0.5 < θ[1]) & (θ[1] < 2.) & (0.5 < θ[2]) & (θ[2] < 2.) & (-scale < θ[3]) & (θ[3] < scale) ) @jit(float64(float64[::1], complex128[:, :, ::1], float64[:, :, ::1], float64[::1], float64[::1], float64[::1], int32, int32, int32, int32), nopython=True, nogil=True, cache=True) def log_likelihood(thetas, obs_spectra, theory_spectra, leakage_spectra_BB, rel_dust_spectra, dof, l_min, l_max, bin_width, l_T_cutoff_idx): ''' thetas[0] = A_BB thetas[1] = a_T thetas[2] = a_E thetas[3] = theta thetas[4] = sigma_sq thetas[5] = A_dust :param int l_T_cutoff_idx: drop TT, TE, TB beyond to this threshold, in terms of the b-th bin index. Set this to -1 to disable this behavior ''' if not prior(thetas): return -np.inf # complex, real is signal, imag is noise C_total_est_complex = inverse_transform(thetas[1:5], obs_spectra, l_min, l_max, bin_width) C_total = scale_by_amplitude_parameter(thetas[0], theory_spectra, leakage_spectra_BB) Nb = C_total_est_complex.imag C_total_est = C_total_est_complex.real + Nb n
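# --- Round-trip sketch for the spectra-matrix packing above ---
# The spectrum ordering (TT, TE, TB, EE, EB, BB) is an assumption chosen so
# that m*(m+1) == 2*n holds for the 'TEB' matrix order, and get_idx is assumed
# to return the position of a spectrum name in `spectra`, matching how it is
# used above.  run_test() is the module's own check that transform and
# inverse_transform are inverses and reproduce the rotation relations.
import numpy as np
from tail.analysis.likelihood import to_spectra_matrix, from_spectra_matrix, run_test

spectra = ['TT', 'TE', 'TB', 'EE', 'EB', 'BB']
n_bins = 24
flat = np.random.rand(len(spectra), n_bins)      # one row per spectrum

mat = to_spectra_matrix(flat, spectra)           # (n_bins, 3, 3) symmetric matrices
assert mat.shape == (n_bins, 3, 3)
np.testing.assert_array_equal(mat, mat.transpose(0, 2, 1))

flat_back = from_spectra_matrix(mat, spectra)    # inverse packing
np.testing.assert_array_equal(flat_back, flat)

run_test()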
num_voters and num_candidates > obj.votes_allowed ) if obj.publish_state != obj.PublishStates.ELECTION_NOT_DECENTRALIZED: txt = '' icon = DoneIcon() else: txt = _('Choose the blockchain you want to deploy your election smart contract to') icon = TodoIcon() try: has_contract = obj.electioncontract is not None except Contest.electioncontract.RelatedObjectDoesNotExist: has_contract = False super().__init__( _('Add the election smart contract'), txt, icon, MDCButtonOutlined( _('add'), icon='add', tag='a', p=False, href=reverse('electioncontract_create', args=[obj.id]) ) if not has_contract else None, separator=separator ) class OnGoingElectionAction(ListAction): def __init__(self, contest, user, view): close_url = reverse('contest_close', args=[contest.id]) close_btn = MDCButtonOutlined(_('close'), False, tag='a', href=close_url) start_time = '<b>' + _date(contest.actual_start, 'd F, G\hi') + '</b>' sub_txt = None if contest.actual_end: end_time = '<b>' + _date(contest.actual_end, 'd F, G\hi') + '</b>' title = _('Voting closed') txt = _('The voting started on %(start)s and was open till %(end)s. ' 'Timezone: %(timezone)s.', start=start_time, end=end_time, timezone=str(contest.timezone) ) txt = mark_safe(txt) icon = SimpleCheckIcon() else: vote_link = reverse('otp_send') + f'?redirect=' + reverse('contest_vote', args=[contest.id]) vote_link = view.request.build_absolute_uri(vote_link) end_time = '<b>' + _date(contest.end, 'd F, G\hi') + '</b>' title = _('The voting process is currently ongoing') txt = _('The voting started on %(time_start)s and will be closed at %(time_end)s. ' 'Timezone: %(timezone)s', time_start=start_time, time_end=end_time, timezone=str(contest.timezone) ) if contest.mediator == user: sub_txt = _('Vote link: %(link)s', link=f'<a href={vote_link}>{vote_link}</a>' ) icon = OnGoingIcon() inner = Span( txt, CList(Br(), Br(), sub_txt) if sub_txt else None, cls='body-2 red-button-container' ) if contest.mediator == user and not contest.actual_end: inner.addchild(close_btn) separator = ( contest.actual_end or contest.mediator == user or contest.guardian_set.filter(user=user).count() ) super().__init__( title, inner, icon, None, separator=separator ) class UploadPrivateKeyAction(ListAction): def __init__(self, contest, user): guardian = contest.guardian_set.filter(user=user).first() title = _('Upload my private key') icon = TodoIcon() content = Div( _('All guardians need to upload their private keys so that' ' the ballot box can be opened to reveal the results.')) if contest.actual_end and not guardian.uploaded: action_url_ = reverse('guardian_upload', args=[guardian.id]) action_btn_ = MDCButtonOutlined( _('upload my private key'), False, tag='a', href=action_url_) content.addchild(action_btn_) elif guardian.uploaded: icon = DoneIcon() super().__init__( title, content, icon, None, separator=user == contest.mediator ) class UnlockBallotAction(ListAction): def __init__(self, contest, user): self.contest = contest self.user = user self.has_action = False guardian = contest.guardian_set.filter(user=user).first() n_guardian = contest.guardian_set.count() n_uploaded = contest.guardian_set.exclude(uploaded=None).count() if contest.actual_end: task_list = Ol() txt = _('All guardians upload their keys %(uploaded)s/%(guardian)s uploaded', n=n_uploaded, uploaded=n_uploaded, guardian=n_guardian ) cls='bold' if n_uploaded == n_guardian: cls = 'line' task_list.addchild(Li(txt, cls=cls)) cls = 'bold' if cls == 'line' else '' txt = _('Unlock the ballot box with encrypted ballots and reveal the 
results') task_list.addchild(Li(txt, cls=cls)) content = Span( P( _('All guardians need to upload their private keys so that the ballot box can be opened to reveal the results.') ), task_list, cls='body-2' ) else: content = Span( P(_('When the election is over the guardians use their keys to open the ballot box and count the results.')), cls='body-2' ) title = _('Unlocking the ballot box and revealing the results') if (contest.actual_end and not self.has_action and n_guardian == n_uploaded ): action_url_ = reverse('contest_decrypt', args=(contest.id,)) action_btn_ = MDCButton( _('reveal results'), True, tag='a', href=action_url_, disabled=n_guardian != n_uploaded) content.addchild(action_btn_) icon = TodoIcon() super().__init__( title, content, icon, None, separator=False, ) class WaitForEmailAction(ListAction): def __init__(self, contest, user): super().__init__( _('Once the ballots are counted you will be notified by email'), '', EmailIcon(), None, separator=False ) class ResultAction(ListAction): def __init__(self, contest, user): subtext = Div() if contest.decrypting: icon = OnGoingIcon() title = _('Tallying in progress') if contest.mediator == user: subtext.addchild( Div(_('An email will be sent when finished')) ) else: icon = DoneIcon() title = _('Results available') if contest.mediator == user: subtext.addchild( Div(_('Congratulations! You have been the mediator of a secure election.'))) url=reverse('contest_result', args=[contest.id]) result_btn = MDCButton(_('view result table'), tag='a', href=url) subtext.addchild(result_btn) super().__init__( title, subtext, icon, None, separator=False ) class ContestVotingCard(Div): def __init__(self, view, **context): contest = view.get_object() user = view.request.user list_content = [] actions = [] if contest.voter_set.filter(user=user).count(): actions.append('vote') if contest.mediator == user: actions.append('close') guardian = contest.guardian_set.filter(user=user).first() if guardian: actions.append('upload') if 'vote' in actions: list_content.append(CastVoteAction(contest, user)) list_content.append(OnGoingElectionAction(contest, user, view)) if 'upload' in actions: list_content.append(UploadPrivateKeyAction(contest, user)) if contest.mediator == user: list_content.append(UnlockBallotAction(contest, user)) elif guardian.uploaded: if contest.mediator != user: list_content.append(WaitForEmailAction(contest, user)) if not len(actions): list_content.append(WaitForEmailAction(contest, user)) about = mark_safe(escape(contest.about).replace('\n', '<br>')) super().__init__( H4(contest.name, style='word-break: break-all;'), Div( about, style='padding: 12px; word-break: break-all;', cls='subtitle-2' ), Ul( *list_content, cls='mdc-list action-list' ), cls='setting-section main-setting-section' ) class ContestSettingsCard(Div): def __init__(self, view, **context): contest = view.get_object() user = view.request.user list_content = [] if contest.mediator == view.request.user: list_content += [ BasicSettingsAction(contest), AddCandidateAction(contest), AddVoterAction(contest), ChooseBlockchainAction(contest, user), ] if ( contest.voter_set.count() and contest.candidate_set.count() and contest.candidate_set.count() > contest.number_elected ): if contest.publish_state != contest.PublishStates.ELECTION_NOT_DECENTRALIZED: list_content.append(SecureElectionAction(contest, user)) else: list_content.append(SecureElectionAction(contest, user)) about = mark_safe(escape(contest.about).replace('\n', '<br>')) super().__init__( H4(contest.name, 
style='word-break: break-all;'), Div( about, style='padding: 12px; word-break: break-all;', cls='subtitle-2' ), Ul( *list_content, cls='mdc-list action-list' ), cls='setting-section main-setting-section' ) class Section(Div): pass class TezosSecuredCard(Section): def __init__(self, contest, user): link = None blockchain = None if contest.publish_state != contest.PublishStates.ELECTION_NOT_DECENTRALIZED: try: contract = contest.electioncontract blockchain = contract.blockchain link = A( contract.contract_address, href=getattr(contract, 'explorer_link', ''), style='text-overflow: ellipsis; overflow: hidden; width: 100%;' ) except ObjectDoesNotExist: pass # no contract def step(s): return Span( Span(s, style='width: 100%'), link, style='display: flex; flex-flow: column wrap' ) super().__init__( Ul( ListAction( _('Secured and decentralised with Tezos'), Span( _('Your election data and results will be published' ' on Tezos’ %(blockchain)s blockchain.', blockchain=blockchain ), PublishProgressBar([ step(_('Election contract created')), step(_('Election opened')), step(_('Election closed')), step(_('Election Results available')), step(_('Election contract updated')), ], contest.publish_state - 1), ) if contest.publish_state else None, TezosIcon(), None, separator=False ), cls='mdc-list action-list', ), cls='setting-section', style='background-color: aliceblue;' ) class CheckedIcon(MDCIcon): def __init__(self): super().__init__('check_circle', cls='material-icons icon green2') class GuardianActionButton(CList): def __init__(self, guardian, action): url = reverse(f'guardian_{action}', args=[guardian.id]) if action == 'download': btn = DownloadBtn( _('Download'), 'file_download', tag='a', href=url, data_filename=f'guardian-{guardian.id}.pkl') elif action == 'verify': btn = MDCTextButton(_('Upload'), 'file_upload', tag='a', href=url) super().__init__(btn) class GuardianTable(Div): def __init__(self, view, **context): table_head_row = Tr(cls='mdc-data-table__header-row') for th in (_('email'), _('key downloaded'), _('key verified')): table_head_row.addchild( Th( th, role='columnheader', scope='col', cls='mdc-data-table__header-cell overline', style='width: 50%' if th == 'email' else 'text-align: center;' ) ) table_content = Tbody(cls='mdc-data-table__content') contest = view.get_object() cls = 'mdc-data-table__cell' for guardian in contest.guardian_set.all(): if guardian.user == view.request.user: if not guardian.downloaded: dl_elem = GuardianActionButton(guardian, 'download') ul_elem = '--' else: if not guardian.verified: dl_elem = GuardianActionButton(guardian, 'download') ul_elem = GuardianActionButton(guardian, 'verify') else: dl_elem = CheckedIcon() ul_elem = CheckedIcon() table_content.addchild(Tr( Td(guardian.user.email, cls=cls), Td( dl_elem, cls=cls + ' center'), Td( ul_elem, cls=cls + ' center'), cls='mdc-data-table__row' )) else: table_content.addchild(Tr( Td(guardian.user.email, cls=cls), Td( CheckedIcon() if guardian.downloaded else 'No', cls=cls + ' center'), Td( CheckedIcon() if guardian.verified else 'No', cls=cls + ' center'), cls='mdc-data-table__row' )) table = Table( Thead(table_head_row), table_content, **{ 'class': 'mdc-data-table__table', 'aria-label': 'Guardians' } ) super().__init__(table, cls='table-container guardian-table') class GuardiansSettingsCard(Div): def __init__(self, view, **context): contest = view.get_object() super().__init__( H5(_('Guardians')), GuardianTable(view, **context), cls='setting-section' ) class CandidatesSettingsCard(Div): def __init__(self, view, 
**context): contest = view.get_object() editable = (view.request.user == contest.mediator and not contest.actual_start) kwargs = dict(p=False, tag='a') if contest.candidate_set.count(): if editable: kwargs['href'] = reverse('contest_candidate_create', args=[contest.id]) btn = MDCButtonOutlined(_('view all/edit'), **kwargs) else: kwargs['href'] = reverse('contest_candidate_list', args=[contest.id]) btn = MDCButtonOutlined(_('view all'), **kwargs) else: if editable: kwargs['href'] = reverse('contest_candidate_create', args=[contest.id]) btn = MDCButtonOutlined(_('add'), icon='add', **kwargs) else: btn = None super().__init__( H5(_('Candidates')), CandidateListComp(contest, editable), btn, cls='setting-section' ) class VotersSettingsCard(Div): def __init__(self, view, **context): contest = view.get_object() num_emails = contest.voter_set.all().count() kwargs = dict( p=False, tag='a', href=reverse('contest_voters_detail', args=[contest.id])) if contest.actual_start: btn = MDCButtonOutlined(_('view all'), **kwargs) elif num_emails: btn = MDCButtonOutlined(_('view all/edit'), **kwargs) else: kwargs['href'] = reverse('contest_voters_update', args=[contest.id]) btn = MDCButtonOutlined(_('add'), icon='add', **kwargs) super().__init__( H5(_('Voters')), Span(_('%(voters)s voters added', n=num_emails, voters=num_emails), cls='voters_count'), btn, cls='setting-section' ) class ContestFinishedCard(Div): def __init__(self, view, **context): contest = view.get_object() is_voter = False if contest.voter_set.filter(user=view.request.user).count(): is_voter = True about = mark_safe(escape(contest.about).replace('\n', '<br>')) super().__init__( H4(contest.name, style='word-break: break-all'), Div( about, style='padding: 12px; word-break: break-all;', cls='subtitle-2' ), Ul( CastVoteAction(contest, view.request.user) if is_voter else None, ResultAction(contest, view.request.user), cls='mdc-list action-list' ), cls='setting-section main-setting-section' ) @template('djelectionguard/contest_detail.html', Document) class ContestCard(Div): def to_html(self, *content, view, **context): contest = view.get_object() if contest.plaintext_tally or contest.decrypting: main_section = ContestFinishedCard(view, **context) elif contest.actual_start: main_section = ContestVotingCard(view, **context) else: main_section = ContestSettingsCard(view, **context) action_section = Div( main_section, TezosSecuredCard(contest, view.request.user), cls='main-container') sub_section = Div( CandidatesSettingsCard(view, **context), cls='side-container') if ( contest.mediator == view.request.user or contest.guardian_set.filter(user=view.request.user).count() ): action_section.addchild(GuardiansSettingsCard(view, **context)) if contest.mediator == view.request.user: sub_section.addchild(VotersSettingsCard(view, **context)) return super().to_html( Div( Div(
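The cards above repeatedly render contest.about with mark_safe(escape(...).replace('\n', '<br>')). A minimal sketch of that pattern using the standard Django helpers, isolating why the text is escaped before being marked safe (only the inserted <br> tags are trusted, the user-supplied text is not):

from django.utils.html import escape
from django.utils.safestring import mark_safe

def about_as_html(text):
    # Escape user-supplied text first, then turn newlines into <br> tags,
    # so line breaks render while any markup typed by the user stays inert.
    return mark_safe(escape(text).replace('\n', '<br>'))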
# Copyright (c) 2019-2020 <NAME> # License: MIT License # Created 2019-03-06 from typing import TYPE_CHECKING, Iterable, Sequence import array import copy from itertools import chain from contextlib import contextmanager from ezdxf.math import Vector from ezdxf.lldxf.attributes import DXFAttr, DXFAttributes, DefSubclass, XType from ezdxf.lldxf.const import SUBCLASS_MARKER, DXF2000, DXFValueError from ezdxf.lldxf.packedtags import VertexArray from ezdxf.math.bspline import uniform_knot_vector, open_uniform_knot_vector from .dxfentity import base_class, SubclassProcessor from .dxfgfx import DXFGraphic, acdb_entity from .factory import register_entity if TYPE_CHECKING: from ezdxf.eztypes import TagWriter, DXFNamespace, Drawing, Vertex, Tags, UCS __all__ = ['Spline'] acdb_spline = DefSubclass('AcDbSpline', { # Spline flag (bit coded): # 1 = Closed spline # 2 = Periodic spline # 4 = Rational spline # 8 = Planar # 16 = Linear (planar bit is also set) 'flags': DXFAttr(70, default=0), 'degree': DXFAttr(71, default=3), 'n_knots': DXFAttr(72, xtype=XType.callback, getter='knot_count'), 'n_control_points': DXFAttr(73, xtype=XType.callback, getter='control_point_count'), 'n_fit_points': DXFAttr(74, xtype=XType.callback, getter='fit_point_count'), 'knot_tolerance': DXFAttr(42, default=1e-10, optional=True), 'control_point_tolerance': DXFAttr(43, default=1e-10, optional=True), 'fit_tolerance': DXFAttr(44, default=1e-10, optional=True), 'start_tangent': DXFAttr(12, xtype=XType.point3d, optional=True), 'end_tangent': DXFAttr(13, xtype=XType.point3d, optional=True), # extrusion is the normal vector (omitted if the spline is non-planar) 'extrusion': DXFAttr(210, xtype=XType.point3d, default=Vector(0, 0, 1), optional=True), # 10: Control points (in WCS); one entry per control point # 11: Fit points (in WCS); one entry per fit point # 40: Knot value (one entry per knot) # 41: Weight (if not 1); with multiple group pairs, they are present if all are not 1 }) class SplineData: def __init__(self, spline: 'Spline'): self.fit_points = spline.fit_points self.control_points = spline.control_points self.knots = spline.knots self.weights = spline.weights REMOVE_CODES = {10, 11, 40, 41, 72, 73, 74} @register_entity class Spline(DXFGraphic): """ DXF SPLINE entity """ DXFTYPE = 'SPLINE' DXFATTRIBS = DXFAttributes(base_class, acdb_entity, acdb_spline) MIN_DXF_VERSION_FOR_EXPORT = DXF2000 CLOSED = 1 # closed b-spline PERIODIC = 2 # uniform b-spline RATIONAL = 4 # rational b-spline PLANAR = 8 # all spline points in a plane, don't read or set this bit, just ignore like AutoCAD LINEAR = 16 # always set with PLANAR, don't read or set this bit, just ignore like AutoCAD def __init__(self, doc: 'Drawing' = None): super().__init__(doc) self.fit_points = VertexArray() # data stored as array.array('d') self.control_points = VertexArray() # data stored as array.array('d') self.knots = [] # data stored as array.array('d') self.weights = [] # data stored as array.array('d') def _copy_data(self, entity: 'Spline') -> None: """ Copy data: control_points, fit_points, weights, knot_values. 
""" entity._control_points = copy.deepcopy(self._control_points) entity._fit_points = copy.deepcopy(self._fit_points) entity._knots = copy.deepcopy(self._knots) entity._weights = copy.deepcopy(self._weights) def load_dxf_attribs(self, processor: SubclassProcessor = None) -> 'DXFNamespace': dxf = super().load_dxf_attribs(processor) if processor: tags = processor.find_subclass(acdb_spline.name) # load spline data (fit points, control points, weights, knots) and remove their tags from subclass self.load_spline_data(tags) # load remaining data into name space tags = processor.load_dxfattribs_into_namespace(dxf, acdb_spline) if len(tags): processor.log_unprocessed_tags(tags, subclass=acdb_spline.name) return dxf def load_spline_data(self, spline_tags: 'Tags') -> None: self.control_points = (value for code, value in spline_tags if code == 10) self.fit_points = (value for code, value in spline_tags if code == 11) self.knots = (value for code, value in spline_tags if code == 40) self.weights = (value for code, value in spline_tags if code == 41) spline_tags.remove_tags(codes=REMOVE_CODES) def export_entity(self, tagwriter: 'TagWriter') -> None: """ Export entity specific data as DXF tags. """ # base class export is done by parent class super().export_entity(tagwriter) # AcDbEntity export is done by parent class tagwriter.write_tag2(SUBCLASS_MARKER, acdb_spline.name) self.dxf.export_dxf_attribs(tagwriter, ['extrusion', 'flags', 'degree']) tagwriter.write_tag2(72, self.knot_count()) tagwriter.write_tag2(73, self.control_point_count()) tagwriter.write_tag2(74, self.fit_point_count()) self.dxf.export_dxf_attribs(tagwriter, [ 'knot_tolerance', 'control_point_tolerance', 'fit_tolerance', 'start_tangent', 'end_tangent', ]) self.export_spline_data(tagwriter) def export_spline_data(self, tagwriter: 'TagWriter'): for value in self._knots: tagwriter.write_tag2(40, value) if len(self._weights): for value in self._weights: tagwriter.write_tag2(41, value) self._control_points.export_dxf(tagwriter, code=10) self._fit_points.export_dxf(tagwriter, code=11) @property def closed(self) -> bool: """ ``True`` if spline is closed. A closed spline has a connection from the last control point to the first control point. (read/write) """ return self.get_flag_state(self.CLOSED, name='flags') @closed.setter def closed(self, status: bool) -> None: self.set_flag_state(self.CLOSED, state=status, name='flags') @property def knots(self) -> 'array.array': # group code 40 """ Knot values as :code:`array.array('d')`. """ return self._knots @knots.setter def knots(self, values: Iterable[float]) -> None: self._knots = array.array('d', values) def knot_count(self) -> int: # DXF callback attribute Spline.dxf.n_knots """ Count of knot values. """ return len(self._knots) @property def weights(self) -> 'array.array': # group code 41 """ Control point weights as :code:`array.array('d')`. """ return self._weights @weights.setter def weights(self, values: Iterable[float]) -> None: self._weights = array.array('d', values) @property def control_points(self) -> VertexArray: # group code 10 """ :class:`~ezdxf.lldxf.packedtags.VertexArray` of control points in :ref:`WCS`. """ return self._control_points @control_points.setter def control_points(self, points: Iterable['Vertex']) -> None: self._control_points = VertexArray(chain.from_iterable(points)) def control_point_count(self) -> int: # DXF callback attribute Spline.dxf.n_control_points """ Count of control points. 
""" return len(self.control_points) @property def fit_points(self) -> VertexArray: # group code 11 """ :class:`~ezdxf.lldxf.packedtags.VertexArray` of fit points in :ref:`WCS`. """ return self._fit_points @fit_points.setter def fit_points(self, points: Iterable['Vertex']) -> None: self._fit_points = VertexArray(chain.from_iterable(points)) def fit_point_count(self) -> int: # DXF callback attribute Spline.dxf.n_fit_points """ Count of fit points. """ return len(self.fit_points) def set_open_uniform(self, control_points: Sequence['Vertex'], degree: int = 3) -> None: """ Open B-spline with uniform knot vector, start and end at your first and last control points. """ self.dxf.flags = 0 # clear all flags self.dxf.degree = degree self.control_points = control_points self.knots = open_uniform_knot_vector(len(control_points), degree + 1) def set_uniform(self, control_points: Sequence['Vertex'], degree: int = 3) -> None: """ B-spline with uniform knot vector, does NOT start and end at your first and last control points. """ self.dxf.flags = 0 # clear all flags self.dxf.degree = degree self.control_points = control_points self.knots = uniform_knot_vector(len(control_points), degree + 1) def set_periodic(self, control_points: Sequence['Vertex'], degree=3) -> None: """ Closed B-spline with uniform knot vector, start and end at your first control point. """ self.dxf.flags = self.PERIODIC | self.CLOSED self.dxf.degree = degree self.control_points = control_points # AutoDesk Developer Docs: # If the spline is periodic, the length of knot vector will be greater than length of the control array by 1. self.knots = range(len(control_points) + 1) def set_open_rational(self, control_points: Sequence['Vertex'], weights: Sequence[float], degree: int = 3) -> None: """ Open rational B-spline with uniform knot vector, start and end at your first and last control points, and has additional control possibilities by weighting each control point. """ self.set_open_uniform(control_points, degree=degree) self.dxf.flags = self.dxf.flags | self.RATIONAL if len(weights) != len(control_points): raise DXFValueError('Control point count must be equal to weights count.') self.weights = weights def set_uniform_rational(self, control_points: Sequence['Vertex'], weights: Sequence[float], degree: int = 3) -> None: """ Rational B-spline with uniform knot vector, deos NOT start and end at your first and last control points, and has additional control possibilities by weighting each control point. """ self.set_uniform(control_points, degree=degree) self.dxf.flags = self.dxf.flags | self.RATIONAL if len(weights) != len(control_points): raise DXFValueError('Control point count must be equal to weights count.') self.weights = weights def set_periodic_rational(self, control_points: Sequence['Vertex'], weights: Sequence[float], degree: int = 3) -> None: """ Closed rational B-spline with uniform knot vector, start and end at your first control point, and has additional control possibilities by weighting each control point. """ self.set_periodic(control_points, degree=degree) self.dxf.flags = self.dxf.flags | self.RATIONAL if len(weights) != len(control_points): raise DXFValueError('Control point count must be equal to weights count.') self.weights = weights @contextmanager def edit_data(self) -> 'SplineData': """ .. versionchanged:: 0.10 This method only exist for backward compatibility, since v0.10 SPLINE attributes :attr:`fit_points`, :attr:`control_points`, :attr:`knots` and :attr:`weights` are read- and writeable list-like containers. 
Context manager for all spline data, returns :class:`SplineData`. Fit points, control points, knot values and weights can be manipulated as lists by using the general context manager :meth:`Spline.edit_data`:: with spline.edit_data() as spline_data: # spline_data contains list like objects: add, change or delete items as you want # fit_points and control_points have to be (x, y, z) tuples # knot_values and weights have to be numbers spline_data.fit_points.append((200, 300, 0)) # append a fit point # on exit the context manager sets spline data automatically and updates all counters """ data = SplineData(self) yield data if data.fit_points is not self.fit_points: self.fit_points = data.fit_points if data.control_points is not self.control_points: self.control_points = data.control_points if data.knots is not self.knots: self.knots = data.knots if data.weights is not self.weights: self.weights = data.weights def transform_to_wcs(self, ucs: 'UCS') -> 'Spline': """ Transform SPLINE entity
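A short usage sketch for the Spline entity above; ezdxf.new() and modelspace().add_spline() are the usual ezdxf factory calls and are assumed here, while set_open_uniform() and edit_data() are the methods defined in this excerpt:

import ezdxf  # assumes the ezdxf package is installed

doc = ezdxf.new('R2000')   # SPLINE needs DXF R2000 or newer (MIN_DXF_VERSION_FOR_EXPORT)
msp = doc.modelspace()
spline = msp.add_spline()
# open B-spline with uniform knot vector through four control points
spline.set_open_uniform([(0, 0, 0), (1, 2, 0), (3, 1, 0), (4, 4, 0)], degree=3)

# edit_data() exposes fit points, control points, knots and weights as list-like
# containers, as shown in the docstring above; counters are updated on exit.
with spline.edit_data() as data:
    data.fit_points.append((5, 3, 0))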
""" path = self.phrasesPath languages = [] for i in os.listdir(path): if lang in i: languages.append(i) return len(languages) def getstatus(self): # print the status of the test and ask for confirmation while True: print_square("LANGUAGE: %s\n" "RUNNING: %s\n" "STATUS: %s/%s" % (self.lang, self.status, self.current_test, len(self.database[self.lang])), margin=[5, 5, 1, 1], title="TEST STATUS") try: if self.status: input("Do you want to continue with this test? (ENTER to continue, CTRL+C to cancel and choose " "another one) ") else: input("Press ENTER to continue") break except KeyboardInterrupt: self.resume() # save and load functions def save(self): self.save_settings() self.save_conf() self.isSaved = True def save_conf(self, testfile=None): """ Writes the test attributes (including the current progress) into the config file, along with information regarding the .vrtl file used for the single test. Overwrites the last.vcfg file in the settings folder """ print_square("SAVING\n\nStatus: %s" % self.status) if testfile is None: testfile = self.testfile with open(testfile, "w", encoding="utf-16") as r: r.write("@YODA\n") r.write("@CONFIGURATION\n") r.write("WDIR=%s\n" % self.wPath) r.write("LISTFILE=%s\n" % self.listfile) r.write("LOG=%s\n" % self.logname) r.write("PHRASESPATH=%s\n" % self.phrasesPath) r.write("LANG=%s\n" % self.lang) r.write("NLU=%s\n" % self.isNluEnabled) r.write("MOUTH_CALIBRATED=%s\n" % self.isMouthCalibrated) r.write("MOUTH_CORRECTION=%s\n" % self.gain) r.write("MIC_CALIBRATED=%s\n" % self.recorder.calibrated) r.write("MIC_DBFSTODBSPL=%s\n" % self.recorder.correction) r.write("LOMBARD=%s\n" % self.isLombardEnabled) r.write("NOISE_RADIO_OFF=%s\n" % self.noise) r.write("NOISE_RADIO_ON=%s\n" % self.noise_radio) r.write("\n") # save progress r.write("@PROGRESS\n") r.write("TESTLIST=%s\n" % self.testlist) r.write("STARTED=%s\n" % self.status) r.write("STATUS=%s\n" % self.current_test) r.write("ISSUED_WW=%d\n" % self.issued_ww) r.write("RECOGNIZED_WW=%d\n" % self.recognized_ww) r.write("PASSED=%s\n" % self.passes) r.write("RESULTS=%s\n" % self.results) return def load_conf(self, testfile=None): """ Reads the configuration file for the selected test """ print("Loading test...") if testfile is None: testfile = self.testfile with open(testfile, "r", encoding="utf-16") as r: # CHECK INTEGRITY healthy = False for line in r.readlines(): if "@YODA" in line: healthy = True with open(testfile, "r", encoding="utf-16") as r: if healthy: for line in r.readlines(): # read configuration if "STARTED" in line: self.status = eval(line.split("=")[-1]) elif "STATUS" in line: self.current_test = int(line.split("=")[-1]) elif "RESULTS" in line: self.results = eval(line.split("=")[-1]) elif "LISTFILE" in line: self.listfile = str(line.split("=")[-1].replace("\n", "")) elif "PHRASESPATH" in line: self.phrasesPath = str(line.split("=")[-1].replace("\n", "")) elif "LANG" in line: self.lang = str(line.split("=")[-1]).replace("\n", "") elif "NLU" in line: self.isNluEnabled = str(line.split("=")[-1]).replace("\n", "") elif "ISSUED_WW" in line: self.issued_ww = float(line.split("=")[-1].replace("\n", "")) elif "RECOGNIZED_WW" in line: self.recognized_ww = float(line.split("=")[-1].replace("\n", "")) elif "WDIR" in line: self.wPath = str(line.split("=")[-1].replace("\n", "")) elif "LOG" in line: self.logname = str(line.split("=")[-1].replace("\n", "")) elif "PASSED" in line: self.passes = int(line.split("=")[-1].replace("\n", "")) elif "TESTLIST" in line: self.testlist = 
eval(line.split("=")[-1].replace("\n", "")) self.testName = self.wPath.split("/")[-1] self._configure_list() try: # if available, imports the array for the preconditions and expected behaviour self.expected = self.database["expected"] except KeyError: pass try: self.preconditions = self.database["preconditions"] except KeyError: pass print_square("Status: %s" % self.status) self.sequence = self.database[self.lang] else: print_square("!!! CONFIGURATION FILE CORRUPTED", centering="center") self.isSaved = True # settings functions def save_settings(self): """ Save the settings file of the program into a .vcfg file """ print("VoRTEx settings saved!") with open(self.settingsFile, "w", encoding="utf-16") as f: f.write("@YODA\n") f.write("@SETTINGS\n") f.write("LAST=%s\n" % self.wPath) f.write("MOUTH_CALIBRATED=%s\n" % self.isMouthCalibrated) f.write("MOUTH_CORRECTION=%s\n" % self.gain) f.write("MIC_CALIBRATED=%s\n" % self.recorder.calibrated) f.write("MIC_DBFSTODBSPL=%s\n" % self.recorder.correction) f.write("MIC_MODE=%s\n" % self.mic_mode) f.write("LOMBARD=%s\n" % self.isLombardEnabled) f.write("NOISE_RADIO_OFF=%s\n" % self.noise) f.write("NOISE_RADIO_ON=%s\n" % self.noise_radio) return def load_settings(self): """ Load saved settings """ try: with open(self.settingsFile, "r", encoding="utf-16") as f: for line in f.readlines(): if "MOUTH_CALIBRATED" in line: self.isMouthCalibrated = eval(line.split("=")[-1]) elif "MOUTH_CORRECTION" in line: self.gain = eval(line.split("=")[-1]) elif "MIC_CALIBRATED" in line: self.recorder.calibrated = eval(line.split("=")[-1]) elif "MIC_DBFSTODBSPL" in line: self.recorder.correction = eval(line.split("=")[-1]) elif "MIC_MODE" in line: self.mic_mode = eval(line.split("=")[-1]) elif "LOMBARD" in line: self.isLombardEnabled = eval(line.split("=")[-1]) elif "NOISE_RADIO_OFF" in line: self.noise = eval(line.split("=")[-1]) elif "NOISE_RADIO_ON" in line: self.noise_radio = eval(line.split("=")[-1]) elif "LAST" in line: self.wPath = str(line.split("=")[-1]).replace("\n", "") if os.path.exists(self.wPath): print("Working directory: %s" % self.wPath) else: raise CorruptedTestError("Test directory not found") except FileNotFoundError: self.isFirstStart = True raise FileNotFoundError("Settings file not found!") return def _check_completed(self): lengths = [] for i in list(self.results.keys()): lengths.append(len(self.results[i])) if len(self.results) == len(self.database[self.lang]): self.completed = min(lengths) if min(lengths) == max(lengths): self.status = False return self.status, self.completed # playback functions def play_command(self, cid): """ Plays the command based on the current test language and on the command ID. The gain is adjusted based on the mouth calibration (if made) and on the Lombard Effect (if a recording of the background noise has been performed). """ filename = self.phrasesPath + "/" + self.lang + "_" + str(cid) + ".wav" fs, data = read(filename) if self.isMouthCalibrated: while True: # wake word is pronounced with radio on if self.isLombardEnabled: if int(cid) == 0 or int(cid) == 999: total_gain = lombard(self.noise_radio) + self.gain else: total_gain = lombard(self.noise) + self.gain else: total_gain = self.gain print("Adjusting gain (%0.2fdB)" % total_gain) try: data = add_gain(data, total_gain) break except SaturationError: a = input( "Cannot increase the volume of the wave file. Do you want to increase the amplifier volume " "and redo the mouth calibration? 
(y/n to keep the max gain value possible).\n-->") if str(a) == "y": self.calibrate_mouth() else: break print("Playing %s" % filename) play_thread = Thread(target=play_data, args=(data, fs)) play_thread.start() # play_data(data, fs) return def activate_mic(self, mode=1): """ Function to activate the vehicle's microphone for the voice recognition. Modes: 1 - Manual 2 - Reproduce wake word (to be chosen among the audio files) 3 - Send PTT can message """ if mode == 1: input("Press PTT") elif mode == 2: try: print_square("Hey Maserati!", centering="center") self.play_command("000") except FileNotFoundError: print("Mode not implemented. Falling back to 1") pass else: input("Mode not implemented. Falling back to 1") return def cancel(self, mode=1): """ Function to cancel recognition prompt 1 - Reproduce "cancel" command 2 - Send PTT can message """ if mode == 1: try: print_square("Cancel", centering="center") self.play_command("999") except FileNotFoundError: input("'Cancel' command not found. Please place it under the command id '999'.") pass else: input("Mode not implemented. Falling back to 1") return # calibration functions def _make_calibration_file(self, duration=30): """ Randomly chooses several audio files from the phrases folder and join them until a unique file of a fixed duration is made. The file is suitable for the calibration of the artificial mouth. """ treshold = 100 files = [] for i in os.listdir(self.phrasesPath): if i.split(".")[-1] == "wav": files.append(self.phrasesPath + "/" + i) pdata = np.array([]) while True: file = files[random.randint(1, len(files))] fs, calib_data = read(file) # cut silence at the beginning and at the end for _ in range(len(calib_data)): if abs(calib_data[1]) > treshold and abs(calib_data[-1]) > treshold: break else: if abs(calib_data[1]) < treshold: calib_data = calib_data[1:] if abs(calib_data[-1]) < treshold: calib_data = calib_data[:-1] calib_data = np.concatenate((pdata, calib_data)) # if the obtained file is longer than 30s, break the loop length = len(calib_data) / fs if length > duration: break pdata = calib_data len(calib_data) write(self.phrasesPath + "/calibration.wav", fs, calib_data.astype(np.int16)) return fs, calib_data.astype(np.int16) def calibrate_mic(self): """ Calibrates the microphone so that it expresses values in dBSPL. For that a 94dBSPL calibrator is mandatory. """ self.recorder.calibrate(self.micChannel) self.recorder.save("%smic_calibration.wav" % self.settingsDir) self.recorder.save("%s/mic_calibration.wav" % self.wPath) self.save_settings() return def calibrate_ear(self): """ Calibrates Oscar's ear so that it expresses values in dBSPL. For that a 94dBSPL calibrator is mandatory. """ self.recorder.calibrate(channel=self.earChannel, reference=92.1) self.recorder.save("%sear_calibration.wav" % self.settingsDir) self.recorder.save("%s/ear_calibration.wav" % self.wPath) self.save_settings() return def calibrate_mouth(self, reference=94, max_attempts=6): """ Reproduces a calibration file from the mouth, records it, measures its RMS power and, if needed, adjusts the gain and records again the calibration file. This operation is repeated until the RMS power is as close as possible to the nominal value of 94dBSPL. The number of maximum attempts can be decided and specified among the function's arguments. After the last attempt the last gain value is kept, whatever the difference between the RMS level and the nominal one is. 
""" attempt = 1 try: if self.recorder.calibrated: # microphone has to be calibrated first print("Opening calibration file... ") try: c_fs, c_data = read(self.phrasesPath + "/calibration.wav") except FileNotFoundError: print("Calibration file not found! Creating a new one...", end='') c_fs, c_data = self._make_calibration_file() print("done!") c_data_gain = add_gain(c_data, self.gain) recorded = self.recorder.play_and_record(c_data_gain, c_fs)[:, self.micChannel] recorded_dbspl = get_rms(recorded) + self.recorder.correction[self.micChannel] delta = reference - recorded_dbspl print_square("Target = %0.2fdBSPL\n" "Mouth RMS = %0.2fdBSPL\n" "delta = %0.2fdB" % (reference, recorded_dbspl, -delta), title="ATTEMPT %d of %d" %
of Health Sciences"), ("Mercy College of Ohio","Mercy College of Ohio"), ("Mercy College","Mercy College"), ("Mercy Hospital School of Nursing","Mercy Hospital School of Nursing"), ("Mercy Hospital School of Practical Nursing-Plantation General Hospital","Mercy Hospital School of Practical Nursing-Plantation General Hospital"), ("Mercy Hospital Springfield-School of Radiologic Technology","Mercy Hospital Springfield-School of Radiologic Technology"), ("Mercy School of Nursing","Mercy School of Nursing"), ("Mercy-St Luke's School of Radiologic Technology","Mercy-St Luke's School of Radiologic Technology"), ("Mercyhurst University","Mercyhurst University"), ("Mercyhurst University-North East Campus","Mercyhurst University-North East Campus"), ("Meredith College","Meredith College"), ("Meredith Manor International Equestrian Center","Meredith Manor International Equestrian Center"), ("Meridian College","Meridian College"), ("Meridian Community College","Meridian Community College"), ("Meridian Institute of Surgical Assisting","Meridian Institute of Surgical Assisting"), ("Meridian Technology Center","Meridian Technology Center"), ("Merkaz Bnos-Business School","Merkaz Bnos-Business School"), ("Merrell University of Beauty Arts and Science","Merrell University of Beauty Arts and Science"), ("Merrillville Beauty College","Merrillville Beauty College"), ("Merrimack College","Merrimack College"), ("Merritt College","Merritt College"), ("Mesa Community College","Mesa Community College"), ("Mesabi Range Community and Technical College","Mesabi Range Community and Technical College"), ("Mesalands Community College","Mesalands Community College"), ("<NAME>","<NAME>"), ("<NAME>odaath Rabbinical Seminary","<NAME> Vodaath Rabbinical Seminary"), ("Mesivta of Eastern Parkway-Yeshiva Zichron Meilech","Mesivta of Eastern Parkway-Yeshiva Zichron Meilech"), ("<NAME>usalem of America","<NAME> Jerusalem of America"), ("Messenger College","Messenger College"), ("Messiah College","Messiah College"), ("Methodist College","Methodist College"), ("Methodist Theological School in Ohio","Methodist Theological School in Ohio"), ("Methodist University","Methodist University"), ("Metro Beauty Academy","Metro Beauty Academy"), ("Metro Business College-Arnold","Metro Business College-Arnold"), ("Metro Business College-Cape Girardeau","Metro Business College-Cape Girardeau"), ("Metro Business College-Jefferson City","Metro Business College-Jefferson City"), ("Metro Business College-Rolla","Metro Business College-Rolla"), ("Metro Technology Centers","Metro Technology Centers"), ("Metroplex Beauty School","Metroplex Beauty School"), ("Metropolitan Career Center Computer Technology Institute","Metropolitan Career Center Computer Technology Institute"), ("Metropolitan College of New York","Metropolitan College of New York"), ("Metropolitan Community College Area","Metropolitan Community College Area"), ("Metropolitan Community College-Kansas City","Metropolitan Community College-Kansas City"), ("Metropolitan Community College-Kansas City","Metropolitan Community College-Kansas City"), ("Metropolitan Learning Institute","Metropolitan Learning Institute"), ("Metropolitan State University of Denver","Metropolitan State University of Denver"), ("Metropolitan State University","Metropolitan State University"), ("Miami Ad School-Miami Beach","Miami Ad School-Miami Beach"), ("Miami Ad School-Minneapolis","Miami Ad School-Minneapolis"), ("Miami Ad School-New York","Miami Ad School-New York"), ("Miami Ad School-San Francisco","Miami Ad 
School-San Francisco"), ("Miami Dade College","Miami Dade College"), ("Miami Lakes Educational Center","Miami Lakes Educational Center"), ("Miami University-Hamilton","Miami University-Hamilton"), ("Miami University-Middletown","Miami University-Middletown"), ("Miami University-Oxford","Miami University-Oxford"), ("Miami Valley Career Technology Center","Miami Valley Career Technology Center"), ("Miami-Jacobs Career College-Columbus","Miami-Jacobs Career College-Columbus"), ("Miami-Jacobs Career College-Dayton","Miami-Jacobs Career College-Dayton"), ("Miami-Jacobs Career College-Independence","Miami-Jacobs Career College-Independence"), ("Miami-Jacobs Career College-Sharonville","Miami-Jacobs Career College-Sharonville"), ("Miami-Jacobs Career College-Springboro","Miami-Jacobs Career College-Springboro"), ("Miami-Jacobs Career College-Troy","Miami-Jacobs Career College-Troy"), ("Michael's School of Beauty","Michael's School of Beauty"), ("Michaels School of Hair Design and Esthetics-Paul Mitchell Partner School","Michaels School of Hair Design and Esthetics-Paul Mitchell Partner School"), ("Michigan Barber School Inc","Michigan Barber School Inc"), ("Michigan Career and Technical Institute","Michigan Career and Technical Institute"), ("Michigan College of Beauty-Monroe","Michigan College of Beauty-Monroe"), ("Michigan College of Beauty-Troy","Michigan College of Beauty-Troy"), ("Michigan Jewish Institute","Michigan Jewish Institute"), ("Michigan School of Professional Psychology","Michigan School of Professional Psychology"), ("Michigan State University","Michigan State University"), ("Michigan State University-College of Law","Michigan State University-College of Law"), ("Michigan Technological University","Michigan Technological University"), ("Micropower Career Institute","Micropower Career Institute"), ("Mid Florida Tech","Mid Florida Tech"), ("Mid Michigan Community College","Mid Michigan Community College"), ("Mid-America Baptist Theological Seminary","Mid-America Baptist Theological Seminary"), ("Mid-America Christian University","Mid-America Christian University"), ("Mid-America College of Funeral Service","Mid-America College of Funeral Service"), ("Mid-America Technology Center","Mid-America Technology Center"), ("Mid-Atlantic Christian University","Mid-Atlantic Christian University"), ("Mid-Continent University","Mid-Continent University"), ("Mid-Del Technology Center","Mid-Del Technology Center"), ("Mid-EastCTC-Adult Education","Mid-EastCTC-Adult Education"), ("Mid-Plains Community College","Mid-Plains Community College"), ("Mid-South Christian College","Mid-South Christian College"), ("Mid-South Community College","Mid-South Community College"), ("Mid-State Technical College","Mid-State Technical College"), ("MidAmerica Nazarene University","MidAmerica Nazarene University"), ("Middle Georgia College","Middle Georgia College"), ("Middle Georgia State College","Middle Georgia State College"), ("Middle Georgia Technical College","Middle Georgia Technical College"), ("Middle Tennessee School of Anesthesia Inc","Middle Tennessee School of Anesthesia Inc"), ("Middle Tennessee State University","Middle Tennessee State University"), ("Middlebury College","Middlebury College"), ("Middlesex Community College","Middlesex Community College"), ("Middlesex Community College","Middlesex Community College"), ("Middlesex County College","Middlesex County College"), ("Midland College","Midland College"), ("Midland University","Midland University"), ("Midlands Technical College","Midlands 
Technical College"), ("Midstate College","Midstate College"), ("Midway College","Midway College"), ("Midway Paris Beauty School","Midway Paris Beauty School"), ("Midwest College of Oriental Medicine-Chicago","Midwest College of Oriental Medicine-Chicago"), ("Midwest College of Oriental Medicine-Racine","Midwest College of Oriental Medicine-Racine"), ("Midwest Institute","Midwest Institute"), ("Midwest Technical Institute-East Peoria","Midwest Technical Institute-East Peoria"), ("Midwest Technical Institute-Moline","Midwest Technical Institute-Moline"), ("Midwest Technical Institute-Ridgeland","Midwest Technical Institute-Ridgeland"), ("Midwest Technical Institute-Springfield","Midwest Technical Institute-Springfield"), ("Midwest Technical Institute-Springfield","Midwest Technical Institute-Springfield"), ("Midwestern Baptist Theological Seminary","Midwestern Baptist Theological Seminary"), ("Midwestern Career College","Midwestern Career College"), ("Midwestern State University","Midwestern State University"), ("Midwestern University-Downers Grove","Midwestern University-Downers Grove"), ("Midwestern University-Glendale","Midwestern University-Glendale"), ("Midwives College of Utah","Midwives College of Utah"), ("Mifflin-Juniata Career and Technology Center","Mifflin-Juniata Career and Technology Center"), ("Milan Institute of Cosmetology-Amarillo","Milan Institute of Cosmetology-Amarillo"), ("Milan Institute of Cosmetology-Concord","Milan Institute of Cosmetology-Concord"), ("Milan Institute of Cosmetology-El Paso","Milan Institute of Cosmetology-El Paso"), ("Milan Institute of Cosmetology-Fairfield","Milan Institute of Cosmetology-Fairfield"), ("Milan Institute of Cosmetology-La Quinta","Milan Institute of Cosmetology-La Quinta"), ("Milan Institute of Cosmetology-Nampa","Milan Institute of Cosmetology-Nampa"), ("Milan Institute of Cosmetology-Reno","Milan Institute of Cosmetology-Reno"), ("Milan Institute of Cosmetology-San Antonio Military","Milan Institute of Cosmetology-San Antonio Military"), ("Milan Institute of Cosmetology-San Antonio Walzem","Milan Institute of Cosmetology-San Antonio Walzem"), ("Milan Institute of Cosmetology-Visalia","Milan Institute of Cosmetology-Visalia"), ("Milan Institute-Amarillo","Milan Institute-Amarillo"), ("Milan Institute-Bakersfield","Milan Institute-Bakersfield"), ("Milan Institute-Boise","Milan Institute-Boise"), ("Milan Institute-Clovis","Milan Institute-Clovis"), ("Milan Institute-Las Vegas","Milan Institute-Las Vegas"), ("Milan Institute-Merced","Milan Institute-Merced"), ("Milan Institute-Nampa","Milan Institute-Nampa"), ("Milan Institute-Palm Desert","Milan Institute-Palm Desert"), ("Milan Institute-San Antonio Ingram","Milan Institute-San Antonio Ingram"), ("Milan Institute-Sparks","Milan Institute-Sparks"), ("Milan Institute-Visalia","Milan Institute-Visalia"), ("Mildred Elley School-Albany Campus","Mildred Elley School-Albany Campus"), ("Mildred Elley-New York Campus","Mildred Elley-New York Campus"), ("Mildred Elley-Pittsfield Campus","Mildred Elley-Pittsfield Campus"), ("Miles College","Miles College"), ("Miles Community College","Miles Community College"), ("Millennia Atlantic University","Millennia Atlantic University"), ("Millennium Training Institute","Millennium Training Institute"), ("Miller-Motte College-Cary","Miller-Motte College-Cary"), ("Miller-Motte College-Fayetteville","Miller-Motte College-Fayetteville"), ("Miller-Motte College-Greenville","Miller-Motte College-Greenville"), ("Miller-Motte College-Jacksonville","Miller-Motte 
College-Jacksonville"), ("Miller-Motte College-Raleigh","Miller-Motte College-Raleigh"), ("Miller-Motte College-Wilmington","Miller-Motte College-Wilmington"), ("Miller-Motte Technical College-Augusta","Miller-Motte Technical College-Augusta"), ("Miller-Motte Technical College-Chattanooga","Miller-Motte Technical College-Chattanooga"), ("Miller-Motte Technical College-Clarksville","Miller-Motte Technical College-Clarksville"), ("Miller-Motte Technical College-Columbus","Miller-Motte Technical College-Columbus"), ("Miller-Motte Technical College-Conway","Miller-Motte Technical College-Conway"), ("Miller-Motte Technical College-Gulfport","Miller-Motte Technical College-Gulfport"), ("Miller-Motte Technical College-Lynchburg","Miller-Motte Technical College-Lynchburg"), ("Miller-Motte Technical College-Macon","Miller-Motte Technical College-Macon"), ("Miller-Motte Technical College-Madison","Miller-Motte Technical College-Madison"), ("Miller-Motte Technical College-North Charleston","Miller-Motte Technical College-North Charleston"), ("Miller-Motte Technical College-Roanoke","Miller-Motte Technical College-Roanoke"), ("Millersville University of Pennsylvania","Millersville University of Pennsylvania"), ("Milligan College","Milligan College"), ("Millikin University","Millikin University"), ("Mills College","Mills College"), ("Millsaps College","Millsaps College"), ("Milwaukee Area Technical College","Milwaukee Area Technical College"), ("Milwaukee Career College","Milwaukee Career College"), ("Milwaukee Institute of Art & Design","Milwaukee Institute of Art & Design"), ("Milwaukee School of Engineering","Milwaukee School of Engineering"), ("Mims Classic Beauty College","Mims Classic Beauty College"), ("Mineral Area College","Mineral Area College"), ("Mineral County Vocational Technical Center","Mineral County Vocational Technical Center"), ("Minneapolis Business College","Minneapolis Business College"), ("Minneapolis College of Art and Design","Minneapolis College of Art and Design"), ("Minneapolis Community and Technical College","Minneapolis Community and Technical College"), ("Minneapolis Media Institute","Minneapolis Media Institute"), ("Minnesota School of Business-Blaine","Minnesota School of Business-Blaine"), ("Minnesota School of Business-Brooklyn Center","Minnesota School of Business-Brooklyn Center"), ("Minnesota School of Business-Elk River","Minnesota School of Business-Elk River"), ("Minnesota School of Business-Lakeville","Minnesota School of Business-Lakeville"), ("Minnesota School of Business-Moorhead","Minnesota School of Business-Moorhead"), ("Minnesota School of Business-Plymouth","Minnesota School of Business-Plymouth"), ("Minnesota School of Business-Richfield","Minnesota School of Business-Richfield"), ("Minnesota School of Business-Rochester","Minnesota School of Business-Rochester"), ("Minnesota School of Business-Shakopee","Minnesota School of Business-Shakopee"), ("Minnesota School of Business-Waite Park","Minnesota School of Business-Waite Park"), ("Minnesota School of Cosmetology-Plymouth Campus","Minnesota School of Cosmetology-Plymouth Campus"), ("Minnesota School of Cosmetology-Woodbury Campus","Minnesota School of Cosmetology-Woodbury Campus"), ("Minnesota State College-Southeast Technical","Minnesota State College-Southeast Technical"), ("Minnesota State Community and Technical College","Minnesota State Community and Technical College"), ("Minnesota State University-Mankato","Minnesota State University-Mankato"), ("Minnesota State 
University-Moorhead","Minnesota State University-Moorhead"), ("Minnesota West Community and Technical College","Minnesota West Community and Technical College"), ("Minot State University","Minot State University"), ("MiraCosta College","MiraCosta College"), ("Mirrer Yeshiva Cent Institute","Mirrer Yeshiva Cent Institute"), ("Misericordia University","Misericordia University"), ("Mission College","Mission College"), ("Mississippi College of Beauty Culture","Mississippi College of Beauty Culture"), ("Mississippi College","Mississippi College"), ("Mississippi Community College Board","Mississippi Community College Board"), ("Mississippi Delta Community College","Mississippi Delta Community College"), ("Mississippi Gulf Coast Community College","Mississippi Gulf Coast Community College"), ("Mississippi Institute of Aesthetics Nails & Cosmetology","Mississippi Institute of Aesthetics Nails & Cosmetology"), ("Mississippi State University","Mississippi State University"), ("Mississippi University for Women","Mississippi University for Women"), ("Mississippi Valley State University","Mississippi Valley State University"), ("Missouri Baptist University","Missouri Baptist University"), ("Missouri College of Cosmetology North","Missouri College of Cosmetology North"), ("Missouri College","Missouri College"), ("Missouri School of Barbering & Hairstyling","Missouri School of Barbering & Hairstyling"), ("Missouri Southern State University","Missouri Southern State University"), ("Missouri State University-Springfield","Missouri State University-Springfield"), ("Missouri State University-West Plains","Missouri State University-West Plains"), ("Missouri Tech","Missouri Tech"), ("Missouri University of Science and Technology","Missouri University of Science and Technology"), ("Missouri Valley College","Missouri Valley College"), ("Missouri Western State University","Missouri Western State University"), ("Mitchell College","Mitchell College"), ("Mitchell Community College","Mitchell Community College"), ("Mitchell Technical Institute","Mitchell Technical Institute"), ("Mitchell's Hair Styling Academy-Raleigh","Mitchell's Hair Styling Academy-Raleigh"), ("Mitchells Hairstyling Academy-Goldsboro","Mitchells Hairstyling Academy-Goldsboro"), ("Mitchells Hairstyling Academy-Greenville","Mitchells Hairstyling Academy-Greenville"), ("Mitchells Hairstyling Academy-Wilson","Mitchells Hairstyling Academy-Wilson"), ("Mitsu Sato Hair Academy","Mitsu Sato Hair Academy"), ("Moberly Area Community College","Moberly Area Community College"), ("Model College of Hair Design","Model College of Hair Design"), ("Modern Beauty Academy","Modern Beauty Academy"), ("Modern Beauty School Inc","Modern Beauty School Inc"), ("Modern Hairstyling Institute-Arecibo","Modern Hairstyling Institute-Arecibo"), ("Modern Hairstyling Institute-Bayamon","Modern Hairstyling Institute-Bayamon"), ("Modern Hairstyling Institute-Carolina","Modern Hairstyling Institute-Carolina"), ("Modern Technology School","Modern Technology School"), ("Modern Welding School","Modern Welding School"), ("Modesto Junior College","Modesto Junior College"), ("Mohave Community College","Mohave Community College"), ("Mohawk Valley Community College","Mohawk Valley Community College"), ("Mojave Barber College","Mojave Barber College"), ("Moler Barber College","Moler Barber College"), ("Moler Barber College","Moler Barber College"), ("Moler Hollywood Beauty Academy","Moler Hollywood Beauty Academy"), ("Moler-Pickens Beauty Academy","Moler-Pickens Beauty Academy"), ("Molloy 
College","Molloy College"), ("Monmouth College","Monmouth College"), ("Monmouth County Vocational School District","Monmouth County Vocational School District"), ("Monmouth University","Monmouth University"), ("Monongalia County Technical Education Center","Monongalia County Technical Education Center"), ("Monroe 2 Orleans BOCES-Center for Workforce Development","Monroe 2
<reponame>starius/gohere #!/usr/bin/env python """ Install Go into a local directory. """ import argparse import errno import hashlib import logging import os import platform import re import shutil import subprocess import sys import tarfile import tempfile import textwrap try: import urllib2 except: # Python 3 import urllib.request as urllib2 VERSIONS = { '1.4-bootstrap-20171003': 'f4ff5b5eb3a3cae1c993723f3eab519c5bae18866b5e5f96fe1102f0cb5c3e52', '1.5': 'be81abec996d5126c05f2d36facc8e58a94d9183a56f026fc9441401d80062db', '1.5.1': 'a<PASSWORD>3e98d<PASSWORD>ae396a9b7dd597c29dcd<PASSWORD>cafa9097d9c4ba04cff0ec<PASSWORD>', '1.5.2': 'f3ddd624c00461641ce3d3a8d8e3c622392384ca7699e901b370a4eac5987a74', '1.5.3': '754e06dab1c31ab168fc9db9e32596734015ea9e24bc44cae7f237f417ce4efe', '1.5.4': '002acabce7ddc140d0d55891f9d4fcfbdd806b9332fb8b110c91bc91afb0bc93', '1.6': 'a96cce8ce43a9bf9b2a4c7d470bc7ee0cb00410da815980681c8353218dcf146', '1.6.1': '1d4b53cdee51b2298afcf50926a7fa44b286f0bf24ff8323ce690a66daa7193f', '1.6.2': '787b0b750d037016a30c6ed05a8a70a91b2e9db4bd9b1a2453aa502a63f1bccc', '1.6.3': '6326aeed5f86cf18f16d6dc831405614f855e2d416a91fd3fdc334f772345b00', '1.6.4': '<KEY>', '1.7': '<KEY>', '1.7.1': '2b843f133b81b7995f26d0cb64bbdbb9d0704b90c44df45f844d28881ad442d3', '1.7.3': '79430a0027a09b0b3ad57e214c4c1acfdd7af290961dd08d322818895af1ef44', '1.7.4': '4c189111e9ba651a2bb3ee868aa881fab36b2f2da3409e80885ca758a6b614cc', '1.7.5': '4e834513a2079f8cbbd357502cccaac9507fd00a1efe672375798858ff291815', '1.7.6': '1a67a4e688673fdff7ba41e73482b0e59ac5bd0f7acf703bc6d50cc775c5baba', '1.8': '<KEY>', '1.8.1': '<KEY>', '1.8.2': '<KEY>', '1.8.3': '<KEY>', '1.8.4': '<KEY>', '1.8.5': '4949fd1a5a4954eb54dd208f2f412e720e23f32c91203116bed0387cf5d0ff2d', '1.8.6': 'efc1221d3ae033c69e149801eff1d9872e214832a89f089fc5beb7a9fd98d9fb', '1.8.7': '5911e751807eebbc1980dad4305ef5492b96d6cd720bf93cbcefa86e1c195f9e', '1.9': 'a4ab229028ed167ba1986825751463605264e44868362ca8e7accc8be057e993', '1.9.1': 'a84afc9dc7d64fe0fa84d4d735e2ece23831a22117b50dafc75c1484f1cb550e', '1.9.2': '665f184bf8ac89986cfd5a4460736976f60b57df6b320ad71ad4cef53bb143dc', '1.9.3': '4e3d0ad6e91e02efa77d54e86c8b9e34fbe1cbc2935b6d38784dca93331c47ae', '1.9.4': '0573a8df33168977185aa44173305e5a0450f55213600e94541604b75d46dc06', '1.9.5': 'f1c2bb7f32bbd8fa7a19cc1608e0d06582df32ff5f0340967d83fb0017c49fbc', '1.9.6': '36f4059be658f7f07091e27fe04bb9e97a0c4836eb446e4c5bac3c90ff9e5828', '1.9.7': '582814fa45e8ecb0859a208e517b48aa0ad951e3b36c7fff203d834e0ef27722', '1.10': 'f3de49289405fda5fd1483a8fe6bd2fa5469e005fd567df64485c4fa000c7f24', '1.10.1': '<KEY>', '1.10.2': '6264609c6b9cd8ed8e02ca84605d727ce1898d74efa79841660b2e3e985a98bd', '1.10.3': '567b1cc66c9704d1c019c50bef946272e911ec6baf244310f87f4e678be155f2', '1.10.4': '6fe44965ed453cd968a81988523e9b0e794d3a478f91fd7983c28763d52d5781', '1.10.5': 'f0a3ed5c775f39a970d4937c64b3b178992e23a5df57ae56a59a1c99253094f4', '1.10.6': '0f6bd961f6d2d6fa6319b7dc9548e2ae22d0698e7432d4cabf737542913f8c14', '1.10.7': 'b84a0d7c90789f3a2ec5349dbe7419efb81f1fac9289b6f60df86bd919bd4447', '1.10.8': '6faf74046b5e24c2c0b46e78571cca4d65e1b89819da1089e53ea57539c63491', '1.11': 'afc1e12f5fe49a471e3aae7d906c73e9d5b1fdd36d52d72652dde8f6250152fb', '1.11.1': '558f8c169ae215e25b81421596e8de7572bd3ba824b79add22fba6e284db1117', '1.11.2': '042fba357210816160341f1002440550e952eb12678f7c9e7e9d389437942550', '1.11.3': '7ec5140f384db2bd42b396c93c231dfba342ee137ad8a4b33120016951eb1231', '1.11.4': '4cfd42720a6b1e79a8024895fa6607b69972e8e32446df76d6ce79801bbadb15', 
'1.11.5': 'bc1ef02bb1668835db1390a2e478dcbccb5dd16911691af9d75184bbe5aa943e', '1.11.6': 'a96da1425dcbec094736033a8a416316547f8100ab4b72c31d4824d761d3e133', '1.11.7': 'cfbe1078fbb1e34e77ad09de40d96d47ef280ba15791da9f02fc05486a4b16bd', '1.11.8': 'ba18bf8daf89218f9f2d853b3a9bc117d0ac24d3c98dac3474ed9ff9bf8efead', '1.11.9': 'ee80684b352f8d6b49d804d4e615f015ae92da41c4096cfee89ed4783b2498e3', '1.11.10': 'df27e96a9d1d362c46ecd975f1faa56b8c300f5c529074e9ea79bdd885493c1b', '1.11.11': '1fff7c33ef2522e6dfaf6ab96ec4c2a8b76d018aae6fc88ce2bd40f2202d0f8c', '1.11.12': '6d7a5ba05476609a7614af3292f29c3be06327503c1f1fdc02ef417870fd6926', '1.11.13': '5032095fd3f641cafcce164f551e5ae873785ce7b07ca7c143aecd18f7ba4076', '1.12': '09c43d3336743866f2985f566db0520b36f4992aea2b4b2fd9f52f17049e88f2', '1.12.1': '0be127684df4b842a64e58093154f9d15422f1405f1fcff4b2c36ffc6a15818a', '1.12.2': 'af992580a4a609309c734d46fd4374fe3095961263e609d9b017e2dffc3b7b58', '1.12.3': '5c507abe8818429d74ebb650a4155d36bc3f9a725e59e76f5d6aca9690be2373', '1.12.4': '4affc3e610cd8182c47abbc5b0c0e4e3c6a2b945b55aaa2ba952964ad9df1467', '1.12.5': '2aa5f088cbb332e73fc3def546800616b38d3bfe6b8713b8a6404060f22503e8', '1.12.6': 'c96c5ccc7455638ae1a8b7498a030fe653731c8391c5f8e79590bce72f92b4ca', '1.12.7': '95e8447d6f04b8d6a62de1726defbb20ab203208ee167ed15f83d7978ce43b13', '1.12.8': '11ad2e2e31ff63fcf8a2bdffbe9bfa2e1845653358daed593c8c2d03453c9898', '1.12.9': 'ab0e56ed9c4732a653ed22e232652709afbf573e710f56a07f7fdeca578d62fc', '1.12.10': 'f56e48fce80646d3c94dcf36d3e3f490f6d541a92070ad409b87b6bbb9da3954', '1.12.11': 'fcf58935236802929f5726e96cd1d900853b377bec2c51b2e37219c658a4950f', '1.12.12': 'fcb33b5290fa9bcc52be3211501540df7483d7276b031fc77528672a3c705b99', '1.12.13': '5383d3b8db4baa48284ffcb14606d9cad6f03e9db843fa6d835b94d63cccf5a7', '1.12.14': '39dbf05f7e2ffcb19b08f07d53dcc96feadeb1987fef9e279e7ff0c598213064', '1.12.15': '8aba74417e527524ad5724e6e6c21016795d1017692db76d1b0851c6bdec84c3', '1.12.16': 'ce6e5ed85b4a54ffc01d55a5838ee313acb4a7f109a745440fecbead1d081df8', '1.12.17': 'de878218c43aa3c3bad54c1c52d95e3b0e5d336e1285c647383e775541a28b25', '1.13': '3fc0b8b6101d42efd7da1da3029c0a13f22079c0c37ef9730209d8ec665bf122', '1.13.1': '81f154e69544b9fa92b1475ff5f11e64270260d46e7e36c34aafc8bc96209358', '1.13.2': '1ea68e01472e4276526902b8817abd65cf84ed921977266f0c11968d5e915f44', '1.13.3': '4f7123044375d5c404280737fbd2d0b17064b66182a65919ffe20ffe8620e3df', '1.13.4': '95dbeab442ee2746b9acf0934c8e2fc26414a0565c008631b04addb8c02e7624', '1.13.5': '27d356e2a0b30d9983b60a788cf225da5f914066b37a6b4f69d457ba55a626ff', '1.13.6': 'aae5be954bdc40bcf8006eb77e8d8a5dde412722bc8effcdaf9772620d06420c', '1.13.7': 'e4ad42cc5f5c19521fbbbde3680995f2546110b5c6aa2b48c3754ff7af9b41f4', '1.13.8': 'b13bf04633d4d8cf53226ebeaace8d4d2fd07ae6fa676d0844a688339debec34', '1.13.9': '34bb19d806e0bc4ad8f508ae24bade5e9fedfa53d09be63b488a9314d2d4f31d', '1.13.10': 'eb9ccc8bf59ed068e7eff73e154e4f5ee7eec0a47a610fb864e3332a2fdc8b8c', '1.13.11': '89ed1abce25ad003521c125d6583c93c1280de200ad221f961085200a6c00679', '1.13.12': '17ba2c4de4d78793a21cc659d9907f4356cd9c8de8b7d0899cdedcef712eba34', '1.13.13': 'ab7e44461e734ce1fd5f4f82c74c6d236e947194d868514d48a2b1ea73d25137', '1.13.14': '197333e97290e9ea8796f738d61019dcba1c377c2f3961fd6a114918ecc7ab06', '1.13.15': '5fb43171046cf8784325e67913d55f88a683435071eef8e9da1aa8a1588fcf5d', '1.14': '6d643e46ad565058c7a39dac01144172ef9bd476521f42148be59249e4b74389', '1.14.1': '2ad2572115b0d1b4cb4c138e6b3a31cee6294cb48af75ee86bec3dca04507676', '1.14.2': 
'98de84e69726a66da7b4e58eac41b99cbe274d7e8906eeb8a5b7eb0aadee7f7c', '1.14.3': '93023778d4d1797b7bc6a53e86c3a9b150c923953225f8a48a2d5fabc971af56', '1.14.4': '7011af3bbc2ac108d1b82ea8abb87b2e63f78844f0259be20cde4d42c5c40584', '1.14.5': 'ca4c080c90735e56152ac52cd77ae57fe573d1debb1a58e03da9cc362440315c', '1.14.6': '73fc9d781815d411928eccb92bf20d5b4264797be69410eac854babe44c94c09', '1.14.7': '064392433563660c73186991c0a315787688e7c38a561e26647686f89b6c30e3', '1.14.8': 'd9a613fb55f508cf84e753456a7c6a113c8265839d5b7fe060da335c93d6e36a', '1.14.9': 'c687c848cc09bcabf2b5e534c3fc4259abebbfc9014dd05a1a2dc6106f404554', '1.14.10': 'b37699a7e3eab0f90412b3969a90fd072023ecf61e0b86369da532810a95d665', '1.14.11': '1871f3d29b0cf1ebb739c9b134c9bc559282bd3625856e5f12f5aea29ab70f16', '1.14.12': 'b34f4b7ad799eab4c1a52bdef253602ce957125a512f5a1b28dce43c6841b971', '1.14.13': 'ba1d244c6b5c0ed04aa0d7856d06aceb89ed31b895de6ff783efb1cc8ab6b177', '1.14.14': '6204bf32f58fae0853f47f1bd0c51d9e0ac11f1ffb406bed07a0a8b016c8a76f', '1.14.15': '7ed13b2209e54a451835997f78035530b331c5b6943cdcd68a3d815fdc009149', '1.15': '69438f7ed4f532154ffaf878f3dfd83747e7a00b70b3556eddabf7aaee28ac3a', '1.15.1': 'd3743752a421881b5cc007c76b4b68becc3ad053e61275567edab1c99e154d30', '1.15.2': '28bf9d0bcde251011caae230a4a05d917b172ea203f2a62f2c2f9533589d4b4d', '1.15.3': '<KEY>', '1.15.4': '063da6a9a4186b8118a0e584532c8c94e65582e2cd951ed078bfd595d27d2367', '1.15.5': 'c1076b90cf94b73ebed62a81d802cd84d43d02dea8c07abdc922c57a071c84f1', '1.15.6': '890bba73c5e2b19ffb1180e385ea225059eb008eb91b694875dd86ea48675817', '1.15.7': '8631b3aafd8ecb9244ec2ffb8a2a8b4983cf4ad15572b9801f7c5b167c1a2abc', '1.15.8': '540c0ab7781084d124991321ed1458e479982de94454a98afab6acadf38497c2', '1.15.9': '90983b9c84a92417337dc1942ff066fc8b3a69733b8b5493fd0b9b9db1ead60f', '1.15.10': 'c1dbca6e0910b41d61a95bf9878f6d6e93d15d884c226b91d9d4b1113c10dd65', '1.15.11': 'f25b2441d4c76cf63cde94d59bab237cc33e8a2a139040d904c8630f46d061e5', '1.15.12': '1c6911937df4a277fa74e7b7efc3d08594498c4c4adc0b6c4ae3566137528091', '1.15.13': '99069e7223479cce4553f84f874b9345f6f4045f27cf5089489b546da619a244', '1.15.14': '60a4a5c48d63d0a13eca8849009b624629ff429c8bc5d1a6a8c3c4da9f34e70a', '1.15.15': '0662ae3813330280d5f1a97a2ee23bbdbe3a5a7cfa6001b24a9873a19a0dc7ec', '1.16': '7688063d55656105898f323d90a79a39c378d86fe89ae192eb3b7fc46347c95a', '1.16.1': '680a500cd8048750121677dd4dc055fdfd680ae83edc7ed60a4b927e466228eb', '1.16.2': '37ca14287a23cb8ba2ac3f5c3dd8adbc1f7a54b9701a57824bf19a0b271f83ea', '1.16.3': 'b298d29de9236ca47a023e382313bcc2d2eed31dfa706b60a04103ce83a71a25', '1.16.4': 'ae4f6b6e2a1677d31817984655a762074b5356da50fb58722b99104870d43503', '1.16.5': '7bfa7e5908c7cc9e75da5ddf3066d7cbcf3fd9fa51945851325eebc17f50ba80', '1.16.6': 'a3a5d4bc401b51db065e4f93b523347a4d343ae0c0b08a65c3423b05a138037d', '1.16.7': '1a9f2894d3d878729f7045072f30becebe243524cf2fce4e0a7b248b1e0654ac', '1.16.8': '8f2a8c24b793375b3243df82fdb0c8387486dcc8a892ca1c991aa99ace086b98', '1.16.9': '0a1cc7fd7bd20448f71ebed64d846138850d5099b18cf5cc10a4fc45160d8c3d', '1.16.10': 'a905472011585e403d00d2a41de7ced29b8884309d73482a307f689fd0f320b5', '1.16.11': '<KEY>', '1.16.12': '2afd839dcb76d2bb082c502c01a0a5cdbfc09fd630757835363c4fde8e2fbfe8', '1.17': '3a70e5055509f347c0fb831ca07a2bf3b531068f349b14a3c652e9b5b67beb5d', '1.17.1': '49dc08339770acd5613312db8c141eaf61779995577b89d93b541ef83067e5b1', '1.17.2': '2255eb3e4e824dd7d5fcdc2e7f84534371c186312e546fb1086a34c17752f431', '1.17.3': '705c64251e5b25d5d55ede1039c6aa22bea40a7a931d14c370339853643c3df0', 
'1.17.4': '4bef3699381ef09e075628504187416565d710660fec65b057edf1ceb187fc4b', '1.17.5': '3defb9a09bed042403195e872dcbc8c6fae1485963332279668ec52e80a95a2d', } BOOTSTRAP_VERSION = '1.4-bootstrap-20171003' MIN_VERSION_BUILT_WITH_GO = '1.5' RELOCATION_TYPE_42_VERSIONS = ('1.4.1', '1.4.2', '1.4.3') MIN_VERSION_WITHOUT_INCLUDE = '1.5' # cmd/link: support new 386/amd64 relocations # It is needed to fix build on Debian 8 Stretch. # See https://github.com/golang/go/issues/13896 # Backport https://github.com/golang/go/commit/914db9f060b1fd3eb1f74d48f3b RELOCATION_TYPE_42_PATCH = r''' --- src/cmd/6l/asm.c +++ src/cmd/6l/asm.c @@ -117,6 +117,8 @@ adddynrel(LSym *s, Reloc *r) } return; + case 256 + R_X86_64_REX_GOTPCRELX: + case 256 + R_X86_64_GOTPCRELX: case 256 + R_X86_64_GOTPCREL: if(targ->type != SDYNIMPORT) { // have symbol --- src/cmd/8l/asm.c +++ src/cmd/8l/asm.c @@ -115,6 +115,7 @@ adddynrel(LSym *s, Reloc *r) return; case 256 + R_386_GOT32: + case 256 + R_386_GOT32X: if(targ->type != SDYNIMPORT) { // have symbol if(r->off >= 2 && s->p[r->off-2] == 0x8b) { --- src/cmd/ld/elf.h +++ src/cmd/ld/elf.h @@ -502,8 +502,23 @@ typedef struct { #define R_X86_64_DTPOFF32 21 /* Offset in TLS block */ #define R_X86_64_GOTTPOFF 22 /* PC relative offset to IE GOT entry */ #define R_X86_64_TPOFF32 23 /* Offset in static TLS block */ - -#define R_X86_64_COUNT 24 /* Count of defined relocation types. */ +#define R_X86_64_PC64 24 +#define R_X86_64_GOTOFF64 25 +#define R_X86_64_GOTPC32 26 +#define R_X86_64_GOT64 27 +#define R_X86_64_GOTPCREL64 28 +#define R_X86_64_GOTPC64 29 +#define R_X86_64_GOTPLT64 30 +#define R_X86_64_PLTOFF64 31 +#define R_X86_64_SIZE32 32 +#define R_X86_64_SIZE64 33 +#define R_X86_64_GOTPC32_TLSDEC 34 +#define R_X86_64_TLSDESC_CALL 35 +#define R_X86_64_TLSDESC 36 +#define R_X86_64_IRELATIVE 37 +#define R_X86_64_PC32_BND 40 +#define R_X86_64_GOTPCRELX 41 +#define R_X86_64_REX_GOTPCRELX 42 #define R_ALPHA_NONE 0 /* No reloc */ @@ -535,8 +550,6 @@ typedef struct { #define R_ALPHA_JMP_SLOT 26 /* Create PLT entry */ #define R_ALPHA_RELATIVE 27 /* Adjust by program base */ -#define R_ALPHA_COUNT 28 - #define R_ARM_NONE 0 /* No relocation. */ #define R_ARM_PC24 1 @@ -578,8 +591,6 @@ typedef struct { #define R_ARM_RPC24 254 #define R_ARM_RBASE 255 -#define R_ARM_COUNT 38 /* Count of defined relocation types. */ - #define R_386_NONE 0 /* No relocation. */ #define R_386_32 1 /* Add symbol value. */ @@ -612,8 +623,42 @@ typedef struct { #define R_386_TLS_DTPMOD32 35 /* GOT entry containing TLS index */ #define R_386_TLS_DTPOFF32 36 /* GOT entry containing TLS offset */ #define R_386_TLS_TPOFF32 37 /* GOT entry of -ve static TLS offset */ - -#define R_386_COUNT 38 /* Count of defined relocation types. 
*/ +#define R_386_NONE 0 +#define R_386_32 1 +#define R_386_PC32 2 +#define R_386_GOT32 3 +#define R_386_PLT32 4 +#define R_386_COPY 5 +#define R_386_GLOB_DAT 6 +#define R_386_JMP_SLOT 7 +#define R_386_RELATIVE 8 +#define R_386_GOTOFF 9 +#define R_386_GOTPC 10 +#define R_386_TLS_TPOFF 14 +#define R_386_TLS_IE 15 +#define R_386_TLS_GOTIE 16 +#define R_386_TLS_LE 17 +#define R_386_TLS_GD 18 +#define R_386_TLS_LDM 19 +#define R_386_TLS_GD_32 24 +#define R_386_TLS_GD_PUSH 25 +#define R_386_TLS_GD_CALL 26 +#define R_386_TLS_GD_POP 27 +#define R_386_TLS_LDM_32 28 +#define R_386_TLS_LDM_PUSH 29 +#define R_386_TLS_LDM_CALL 30 +#define R_386_TLS_LDM_POP 31 +#define R_386_TLS_LDO_32 32 +#define R_386_TLS_IE_32 33 +#define R_386_TLS_LE_32 34 +#define R_386_TLS_DTPMOD32 35 +#define R_386_TLS_DTPOFF32 36 +#define R_386_TLS_TPOFF32 37 +#define R_386_TLS_GOTDESC 39 +#define R_386_TLS_DESC_CALL 40 +#define R_386_TLS_DESC 41 +#define R_386_IRELATIVE 42 +#define R_386_GOT32X 43 #define R_PPC_NONE 0 /* No relocation. */ #define R_PPC_ADDR32 1 @@ -653,8 +698,6 @@ typedef struct { #define R_PPC_SECTOFF_HI 35 #define R_PPC_SECTOFF_HA 36 -#define R_PPC_COUNT 37 /* Count of defined relocation types. */ - #define R_PPC_TLS 67 #define R_PPC_DTPMOD32 68 #define R_PPC_TPREL16 69 @@ -697,9 +740,6 @@ typedef struct { #define R_PPC_EMB_BIT_FLD 115 #define R_PPC_EMB_RELSDA 116 - /* Count of defined relocation types. */ -#define R_PPC_EMB_COUNT (R_PPC_EMB_RELSDA - R_PPC_EMB_NADDR32 + 1) - #define R_SPARC_NONE 0 #define R_SPARC_8 1 --- src/cmd/ld/ldelf.c +++ src/cmd/ld/ldelf.c @@ -888,12 +888,15 @@ reltype(char *pn, int elftype, uchar *siz) case R('6', R_X86_64_PC32): case R('6', R_X86_64_PLT32): case R('6', R_X86_64_GOTPCREL): + case R('6', R_X86_64_GOTPCRELX): + case R('6', R_X86_64_REX_GOTPCRELX): case R('8', R_386_32): case R('8', R_386_PC32): case R('8', R_386_GOT32): case R('8', R_386_PLT32): case R('8', R_386_GOTOFF): case R('8', R_386_GOTPC): + case R('8', R_386_GOT32X): *siz = 4; break; case R('6', R_X86_64_64): ''' # Code for patching was copied from hererocks.py commit <PASSWORD> # https://github.com/mpeterv/hererocks/ # Copyright (c) 2015 - 2016 <NAME> class PatchError(Exception): pass class LineScanner(object): def __init__(self, lines): self.lines = lines self.line_number = 1 def consume_line(self): if self.line_number > len(self.lines): raise PatchError("source is too short") else: self.line_number += 1 return self.lines[self.line_number - 2] class Hunk(object): def __init__(self, start_line, lines): self.start_line = start_line self.lines = lines def add_new_lines(self, old_lines_scanner, new_lines): while old_lines_scanner.line_number < self.start_line: new_lines.append(old_lines_scanner.consume_line()) for line in self.lines: first_char, rest = line[0], line[1:] if first_char in " -": # Deleting or copying a line: it must match what's in the diff. old_line = old_lines_scanner.consume_line() if rest.strip() != old_line.strip(): raise PatchError("source is different: %s, %s" % ( rest, old_line, )) if first_char in " +": # Adding or copying a line: add it to the line list. 
new_lines.append(rest) class FilePatch(object): def __init__(self, file_name, lines): self.file_name = file_name self.hunks = [] self.new_lines = [] hunk_lines = None start_line = None for line in lines: first_char = line[0] if first_char == "@": if start_line is not None: self.hunks.append(Hunk(start_line, hunk_lines)) match = re.match(r"^@@ \-(\d+)", line) start_line = int(match.group(1)) hunk_lines = [] else: hunk_lines.append(line) if start_line is not None: self.hunks.append(Hunk(start_line, hunk_lines)) def prepare_application(self): if not os.path.exists(self.file_name): raise PatchError("{} doesn't exist".format(self.file_name)) with open(self.file_name, "r") as handler: source = handler.read() old_lines = source.splitlines() old_lines_scanner = LineScanner(old_lines) for hunk in self.hunks: hunk.add_new_lines(old_lines_scanner, self.new_lines) while old_lines_scanner.line_number <= len(old_lines): self.new_lines.append(old_lines_scanner.consume_line()) self.new_lines.append("") def apply(self): with open(self.file_name, "wt") as handler: handler.write("\n".join(self.new_lines)) class Patch(object): def __init__(self, src, root_dir): # The first and the last lines are empty. lines = textwrap.dedent(src[1:-1]).splitlines() lines = [line if line else " " for line in lines] self.file_patches = [] file_lines = None file_name = None for line in lines: if line.startswith('--- '): continue if line.startswith('+++ '): if file_name is not None: self.file_patches.append(FilePatch(file_name, file_lines)) file_name = os.path.join(root_dir, line[4:]) file_lines = [] else: file_lines.append(line) if file_name is not None: self.file_patches.append(FilePatch(file_name, file_lines)) def apply(self): try: for file_patch in self.file_patches: file_patch.prepare_application() except PatchError as e: return e.args[0] for file_patch in self.file_patches: file_patch.apply() class TempDir(object): n = 0 def __init__(self, echo=None, goroot=None): TempDir.n += 1 self.echoname = 'T%d' % TempDir.n self.echodir = '%s/T%d' % (goroot, TempDir.n) self.echo = echo def __enter__(self): if self.echo: self.echo('%s="%s"' % (self.echoname, self.echodir)) self.echo('mkdir -p "$%s"' % self.echoname) return '${%s}' % self.echoname else: self.name = tempfile.mkdtemp() return self.name def __exit__(self, type, value, traceback): if self.echo: self.echo('rm -rf "$%s"' % self.echoname) else: shutil.rmtree(self.name) def version_tuple(version): if version == BOOTSTRAP_VERSION: version = "1.4.9" return tuple(map(int, (version.split('.')))) def is_build_with_go(version): return version_tuple(version) >= version_tuple(MIN_VERSION_BUILT_WITH_GO) def get_default_cache(): # based on hererocks.py if os.name == 'nt': cache_root = os.getenv('LOCALAPPDATA') if cache_root is None: cache_root = os.getenv('USERPROFILE') if cache_root is None: return None cache_root = os.path.join(cache_root, 'Local Settings', 'Application Data') return os.path.join(cache_root, 'GoHere', 'Cache') else: home = os.path.expanduser('~') if home == '~': return None else: return os.path.join(home, '.cache', 'gohere') def get_filename(version): if 'bootstrap' not in version: version += '.src' return 'go%s.tar.gz' % version def get_url(version): return 'https://storage.googleapis.com/golang/%s' % get_filename(version) def download_file(destination, url, echo=None): if echo: echo('wget -O "%s" "%s"' % (destination, url)) else: with open(destination, 'wb') as d: request = urllib2.urlopen(url) shutil.copyfileobj(request, d) request.close() logging.info('File %s was 
downloaded from %s', destination, url) def checksum_of_file(fileobj): hasher = hashlib.sha256() for chunk in iter(lambda: fileobj.read(1024 ** 2), b''): hasher.update(chunk) return hasher.hexdigest() def make_checksum(filepath): with open(filepath, 'rb') as f: value = checksum_of_file(f) logging.info('sha256(%s) = %s', filepath, value) return value def
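# --- illustrative sketch, not part of the original gohere script ---
# The helpers above (get_url, get_filename, download_file, make_checksum)
# follow a download-then-verify pattern against the VERSIONS table. A minimal
# sketch of how they could fit together; the function name fetch_and_verify
# is hypothetical:
def fetch_and_verify(version, directory):
    expected = VERSIONS[version]
    archive = os.path.join(directory, get_filename(version))
    download_file(archive, get_url(version))
    actual = make_checksum(archive)
    if actual != expected:
        raise ValueError('checksum mismatch for %s: expected %s, got %s'
                         % (archive, expected, actual))
    return archive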
# This is a port of the ruby zabbix api found here: # http://trac.red-tux.net/browser/ruby/api/zbx_api.rb # #LGPL 2.1 http://www.gnu.org/licenses/old-licenses/lgpl-2.1.html #Zabbix API Python Library. #Original Ruby Library is Copyright (C) 2009 <NAME> nelsonab(at)pobox(removethisword)(dot)com #Python Library is Copyright (C) 2009 <NAME> brett.lentz(at)gmail(dot)com # #This library is free software; you can redistribute it and/or #modify it under the terms of the GNU Lesser General Public #License as published by the Free Software Foundation; either #version 2.1 of the License, or (at your option) any later version. # #This library is distributed in the hope that it will be useful, #but WITHOUT ANY WARRANTY; without even the implied warranty of #MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU #Lesser General Public License for more details. # #You should have received a copy of the GNU Lesser General Public #License along with this library; if not, write to the Free Software #Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA # NOTES: # The API requires zabbix 1.8 or later. # Currently, not all of the API is implemented, and some functionality is # broken. This is a work in progress. import httplib import urllib2 import base64 import string import sys import logging #logging.config.fileConfig('/tmp/logging.conf') logger = logging.getLogger("zabbix_api") log=logger.log log(10,"Starting logging") try: # Python 2.5+ import json log(15,"Using native json library") except ImportError: # Python 2.4 import simplejson as json log(15,"Using simplejson library") class ZabbixAPIException(Exception): """ generic zabbix api exception """ pass class InvalidProtoError(ZabbixAPIException): """ Recived an invalid proto """ pass class ZabbixAPI(object): __username__ = '' __password__ = '' auth = '' id = 0 url = '/zabbix/api_jsonrpc.php' params = None method = None # HTTP or HTTPS proto = 'http' # HTTP authentication httpuser = None httppasswd = None # sub-class instances. user = None usergroup = None host = None item = None hostgroup = None application = None trigger = None sysmap = None template = None def __init__( self, # Server to connect to server='localhost', # Path leading to the zabbix install path='/zabbix/', # Protocol to use. http or https proto='http', # HTTP auth username and password user=None, passwd=None, # Data to pass to each api module **kwargs ): """ Create an API object. 
""" self._setuplogging() self.server=server self.url=path+'/api_jsonrpc.php' self.proto=proto self.httpuser=user self.httppasswd=<PASSWORD> self.user = ZabbixAPIUser(self,**kwargs) self.usergroup = ZabbixAPIUserGroup(self,**kwargs) self.host = ZabbixAPIHost(self,**kwargs) self.item = ZabbixAPIItem(self,**kwargs) self.hostgroup = ZabbixAPIHostGroup(self,**kwargs) self.application = ZabbixAPIApplication(self,**kwargs) self.trigger = ZabbixAPITrigger(self,**kwargs) self.sysmap = ZabbixAPISysMap(self,**kwargs) self.template = ZabbixAPITemplate(self,**kwargs) self.action = ZabbixAPIAction(self,**kwargs) self.alert = ZabbixAPIAlert(self,**kwargs) self.info = ZabbixAPIInfo(self,**kwargs) self.event = ZabbixAPIEvent(self,**kwargs) self.graph = ZabbixAPIGraph(self,**kwargs) self.graphitem = ZabbixAPIGraphItem(self,**kwargs) self.map = ZabbixAPIMap(self,**kwargs) self.screen = ZabbixAPIScreen(self,**kwargs) self.script = ZabbixAPIScript(self,**kwargs) self.usermacro = ZabbixAPIUserMacro(self,**kwargs) self.map = ZabbixAPIMap(self,**kwargs) self.map = ZabbixAPIMap(self,**kwargs) self.id = 0 self.debug(6, "url: " + proto+"://" + self.server + self.url) def _setuplogging(self): self.logger=logging.getLogger("zabbix_api.%s"%self.__class__.__name__) def debug(self, level, var="", msg=None): strval = "" if msg: strval = strval + str(msg) if var != "": strval = strval + ": " + str(var) self.logger.log(level,strval) def json_obj(self, method, params={}): obj = { 'jsonrpc' : '2.0', 'method' : method, 'params' : params, 'auth' : self.auth, 'id' : self.id } self.debug(10, "json_obj: " + str(obj)) return json.dumps(obj) def login(self, user='', password='', save=True): if user != '': l_user = user l_password = password if save: self.__username__ = user self.__password__ = password elif self.__username__ != '': l_user = self.__username__ l_password = self.__password__ else: raise ZabbixAPIException("No authentication information available.") self.debug(10,"Trying to login with %s:%s"%(repr(l_user),repr(l_password))) obj = self.json_obj('user.authenticate', { 'user' : l_user, 'password' : <PASSWORD> }) result = self.do_request(obj) self.auth = result['result'] def test_login(self): if self.auth != '': obj = self.json_obj('user.checkauth', {'sessionid' : self.auth}) result = self.do_request(obj) if not result['result']: self.auth = '' return False # auth hash bad return True # auth hash good else: return False def do_request(self, json_obj): headers = { 'Content-Type' : 'application/json-rpc', 'User-Agent' : 'python/zabbix_api' } if self.httpuser: self.debug(10,"HTTP Auth enabled") auth='Basic ' + string.strip(base64.encodestring(self.httpuser + ':' + self.httppasswd)) headers['Authorization']=auth if self.proto=='http': conn = httplib.HTTPConnection(self.server) elif self.proto=='https': conn = httplib.HTTPSConnection(self.server) else: raise InvalidProtoError, "%s is not a valid protocol"%repr(self.proto) self.debug(10,"Connection object %s"%repr(conn)) self.debug(8, "Sending: " + str(json_obj)) self.debug(10, "Sending headers: " + str(headers)) conn.request("POST", self.url, json_obj, headers) response = conn.getresponse() self.debug(8, "Response Code: " + str(response.status) + " " + \ response.reason) # NOTE: Getting a 412 response code means the headers are not in the # list of allowed headers. 
if response.status != 200: raise ZabbixAPIException("HTTP ERROR %s: %s" % (response.status, response.reason)) jobj = json.loads(response.read()) self.debug(10, "Response Body: " + str(jobj)) self.id += 1 if 'error' in jobj: msg = "Error %s: %s, %s" % (jobj['error']['code'], jobj['error']['message'], jobj['error']['data']) raise ZabbixAPIException(msg) return jobj def logged_in(self): if self.auth != '': return True return False def api_version(self, **options): self.__checkauth__() obj = self.do_request(self.json_obj('APIInfo.version', options)) return obj['result'] def __checkauth__(self): if not self.logged_in(): raise ZabbixAPIException("Not logged in.") class ZabbixAPISubClass(ZabbixAPI): """ wrapper class to ensure all calls go through the parent object """ parent = None def __init__(self, parent, **kwargs): self._setuplogging() self.debug(10,"Creating %s"%self.__class__.__name__) self.parent = parent # Save any extra info passed in for key,val in kwargs.items(): setattr(self,key,val) self.debug(5,"Set %s:%s"%(repr(key),repr(val))) def __checkauth__(self): self.parent.__checkauth__() def do_request(self, req): return self.parent.do_request(req) def json_obj(self, method, param): return self.parent.json_obj(method, param) def checkauth(fn): """ Decorator to check authentication of the decorated method """ def ret(self,*args,**kwargs): self.__checkauth__() return fn(self,kwargs) return ret def dojson(name): def decorator(fn): def wrapper(self,opts): log(5,"Going to do_request for %s with opts %s"%(repr(fn),repr(opts))) return self.do_request(self.json_obj(name,opts))['result'] return wrapper return decorator class ZabbixAPIUser(ZabbixAPISubClass): @checkauth @dojson('user.get') def get(self,**opts): """ * Get Users data * * First part of parameters are filters which limits the output result set, these filters are set only if appropriate parameter is set. * For example if "type" is set, then method returns only users of given type. * Second part of parameters extends result data, adding data about others objects that are related to objects we get. * For example if "select_usrgrps" parameter is set, resulting objects will have additional property 'usrgrps' containing object with * data about User UserGroups. * Third part of parameters affect output. For example "sortfield" will be set to 'alias', result will be sorted by User alias. * All Parameters are optional! * * {@source} * @access public * @static * @since 1.8 * @version 1 * * @param _array $options * @param array $options['nodeids'] filter by Node IDs * @param array $options['usrgrpids'] filter by UserGroup IDs * @param array $options['userids'] filter by User IDs * @param boolean $options['type'] filter by User type [ USER_TYPE_ZABBIX_USER: 1, USER_TYPE_ZABBIX_ADMIN: 2, USER_TYPE_SUPER_ADMIN: 3 ] * @param boolean $options['select_usrgrps'] extend with UserGroups data for each User * @param boolean $options['get_access'] extend with access data for each User * @param boolean $options['extendoutput'] output only User IDs if not set. * @param boolean $options['count'] output only count of objects in result. 
( result returned in property 'rowscount' ) * @param string $options['pattern'] filter by Host name containing only give pattern * @param int $options['limit'] output will be limited to given number * @param string $options['sortfield'] output will be sorted by given property [ 'userid', 'alias' ] * @param string $options['sortorder'] output will be sorted in given order [ 'ASC', 'DESC' ] * @return array """ return opts @checkauth @dojson('user.checkAuthentication') def checkAuthentication(self,**opts): """ * Check if session ID is authenticated * * {@source} * @access public * @static * @since 1.8 * @version 1 * * @param _array $session * @param array $session['sessionid'] Session ID * @return boolean """ return opts @checkauth @dojson('user.getObjects') def getObjects(self,**opts): """ * Get User ID by User alias * * {@source} * @access public * @static * @since 1.8 * @version 1 * * @param _array $user_data * @param array $user_data['alias'] User alias * @return string|boolean """ return opts @checkauth @dojson('user.add') def add(self,**opts): """ * Add Users * * {@source} * @access public * @static * @since 1.8 * @version 1 * * @param _array $users multidimensional array with Users data * @param string $users['name'] * @param string $users['surname'] * @param array $users['alias'] * @param string $users['passwd'] * @param string $users['url'] * @param int $users['autologin'] * @param
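# --- illustrative usage sketch, not part of the original zabbix_api module ---
# The intended call pattern is visible from __init__, login and do_request
# above; a minimal Python 2 usage example (the server address and credentials
# below are placeholders) would be:
zapi = ZabbixAPI(server='zabbix.example.com', path='/zabbix', proto='http')
zapi.login('apiuser', 'apipassword')       # obtains and stores the auth token
print zapi.api_version()                   # authenticated APIInfo.version call
users = zapi.user.get(extendoutput=1)      # user.get via the @dojson wrapper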
"""formatting.py unit tests.""" from unittest.mock import Mock import pytest from pypyr.formatting import RecursionSpec, RecursiveFormatter # region recursion_spec def test_recursion_spec_empty(): """Empty recursion spec initializes defaults.""" r = RecursionSpec('') r.has_recursed = False r.is_flat = False r.is_recursive = False r.is_set = False r.format_spec = '' def test_recursion_spec_no_match(): """Input without special recursion indicators.""" r = RecursionSpec('ABCD') r.has_recursed = False r.is_flat = False r.is_recursive = False r.is_set = False r.format_spec = 'ABCD' def test_recursion_spec_no_match_less_than_two(): """Short input without special recursion indicators.""" r = RecursionSpec('a') r.has_recursed = False r.is_flat = False r.is_recursive = False r.is_set = False r.format_spec = 'a' def test_recursion_spec_flat(): """Parse flat recursion spec.""" r = RecursionSpec('ff') r.has_recursed = False r.is_flat = True r.is_recursive = False r.is_set = True r.format_spec = '' def test_recursion_spec_flat_with_extra(): """Parse flat recursion spec with extra spec following.""" r = RecursionSpec('ffabcd') r.has_recursed = False r.is_flat = True r.is_recursive = False r.is_set = True r.format_spec = 'abcd' def test_recursion_spec_recursive(): """Parse recursive recursion spec.""" r = RecursionSpec('rf') r.has_recursed = False r.is_flat = False r.is_recursive = True r.is_set = True r.format_spec = '' def test_recursion_spec_recursive_with_extra(): """Parse recursive recursion spec with extra spec following.""" r = RecursionSpec('rfabcd') r.has_recursed = False r.is_flat = False r.is_recursive = True r.is_set = True r.format_spec = 'abcd' # endregion recursion_spec # region RecursiveFormatter # region RecursiveFormatter.format def test_recursive_formatter_format_none(): """Format a None.""" formatter = RecursiveFormatter() assert formatter.format(None, 'a', 'b', 'c') is None def test_recursive_formatter_format_empty(): """Format an empty string.""" formatter = RecursiveFormatter() assert formatter.format('', 'a', 'b', 'c') == '' def test_recursive_formatter_format_no_expression(): """Format a string sans formatting expression.""" formatter = RecursiveFormatter() assert formatter.format('arb string', 'a', 'b', 'c') == 'arb string' def test_recursive_formatter_format_args(): """Format a string with positional args.""" formatter = RecursiveFormatter() assert (formatter.format('{0} arb {1} string', 'a', 'b', 'c') == 'a arb b string') def test_recursive_formatter_format_args_no_index(): """Format a string with positional args and auto index.""" formatter = RecursiveFormatter() assert (formatter.format('{} arb {} string', 'a', 'b', 'c') == 'a arb b string') def test_recursive_formatter_format_kwargs(): """Format a string with kwargs.""" formatter = RecursiveFormatter() assert (formatter.format('{b} arb {c} string', a='a', b='b', c='c') == 'b arb c string') def test_recursive_formatter_format_args_and_kwargs(): """Format a string with positional args and kwargs.""" formatter = RecursiveFormatter() assert (formatter.format('{} and {b} arb {c} string', 'a', b='b', c='c') == 'a and b arb c string') def test_recursive_formatter_manual_to_auto(): """Can't switch from manual to auto field numbering.""" with pytest.raises(ValueError) as err: RecursiveFormatter().format('{0} {}', 'a', 'b') assert (str(err.value) == ('cannot switch from manual field specification to automatic ' 'field numbering')) def test_recursive_formatter_auto_to_manual(): """Can't switch from manual to auto field numbering.""" 
with pytest.raises(ValueError) as err: RecursiveFormatter().format('{} {1}', 'a', 'b') assert (str(err.value) == ('cannot switch from manual field specification to automatic ' 'field numbering')) # endregion RecursiveFormatter.format # region RecursiveFormatter.vformat def test_recursive_formatter_vformat_none(): """None is None.""" assert RecursiveFormatter().vformat(None, (1, 2), {'a': 'b'}) is None def test_recursive_formatter_vformat_literal(): """Literal only.""" assert RecursiveFormatter().vformat('literal here', (1, 2), {'a': 'b'} ) == 'literal here' def test_recursive_formatter_vformat_literal_end(): """Literal at end.""" assert RecursiveFormatter().vformat('{} literal here', (1, 2), {'a': 'b'} ) == '1 literal here' def test_recursive_formatter_vformat_literal_start(): """Literal at start.""" assert RecursiveFormatter().vformat('literal here {a}', (1, 2), {'a': 'b'} ) == 'literal here b' def test_recursive_formatter_vformat_literal_middle(): """Literal in middle.""" assert RecursiveFormatter().vformat('{} literal here {a}', (1, 2), {'a': 'b'} ) == '1 literal here b' def test_recursive_formatter_vformat_no_literal(): """No literal, only formatting expressions.""" assert RecursiveFormatter().vformat('{0}{1}{a}', (1, 2), {'a': 'b'} ) == '12b' def test_recursive_formatter_vformat_default(): """Default formatting is standard flat.""" d = { 'one': '1', 'two': '2 {one} 2', 'three': '3 {two} 3' } assert RecursiveFormatter().vformat('start {three} end', None, d ) == 'start 3 {two} 3 end' def test_recursive_formatter_vformat_default_with_formatting(): """Default formatting is flat & standard formatting expression works.""" d = { 'one': '1', 'two': '2 {one} 2', 'three': '3 {two} 3' } assert RecursiveFormatter().vformat('start {three:+>11} end', None, d ) == 'start ++3 {two} 3 end' def test_recursive_formatter_vformat_default_with_formatting_conversion(): """Flat & standard formatting expression works with conversion.""" d = { 'one': '1', 'two': '2 {one} 2', 'three': [0, 1, '{two}'] } assert RecursiveFormatter().vformat('start {three!s:+>17} end', None, d ) == "start ++[0, 1, '{two}'] end" def test_recursive_formatter_vformat_flat(): """Explicit flat formatting.""" d = { 'one': '1', 'two': '2 {one} 2', 'three': '3 {two} 3' } assert RecursiveFormatter().vformat('start {three:ff} end', None, d ) == 'start 3 {two} 3 end' def test_recursive_formatter_vformat_recursive(): """Explicit recursive formatting.""" d = { 'one': '1', 'two': '2 {one} 2', 'three': '3 {two} 3' } assert RecursiveFormatter().vformat('start {three:rf} end', None, d ) == 'start 3 2 1 2 3 end' def test_recursive_formatter_vformat_rf_to_ff(): """Explicit recursive formatting until ff, where it stops.""" d = { 'one': '1', 'two': '2 {one} 2', 'three': '3 {two:ff} 3' } assert RecursiveFormatter().vformat('start {three:rf} end', None, d ) == 'start 3 2 {one} 2 3 end' def test_recursive_formatter_vformat_single_default(): """Default formatting is for single expression is recursive.""" d = { 'one': '1', 'two': '{one}', 'three': '{two}' } assert RecursiveFormatter().vformat('{three}', None, d) == '1' def test_recursive_formatter_vformat_single_flat(): """Explicit flat formatting on single expression.""" d = { 'one': '1', 'two': '{one}', 'three': '{two}' } assert RecursiveFormatter().vformat('{three:ff}', None, d) == '{two}' def test_recursive_formatter_vformat_single_recursive(): """Explicit recursive formatting on single formatting.""" d = { 'one': '1', 'two': '{one}', 'three': '{two}' } assert RecursiveFormatter().vformat('{three:rf}', 
None, d) == '1' def test_recursive_formatter_vformat_single_rf_to_ff(): """Explicit recursive formatting on single formatting stops at ff.""" d = { 'one': '1', 'two': '{one}', 'three': '{two:ff}' } assert RecursiveFormatter().vformat('{three:rf}', None, d) == '{one}' def test_recursive_formatter_vformat_single_default_to_ff(): """Default recursive formatting on single formatting stops at ff.""" d = { 'one': '1', 'two': '{one}', 'three': '{two:ff}' } assert RecursiveFormatter().vformat('{three}', None, d) == '{one}' def test_recursive_formatter_vformat_single_default_keep_type(): """Default formatting for single expression is recursive & keeps type.""" d = { 'one': 1, 'two': '{one}', 'three': '{two}' } assert RecursiveFormatter().vformat('{three}', None, d) == 1 def test_recursive_formatter_vformat_single_default_with_conversion(): """Single expression is recursive & conversion works.""" d = { 'one': 1, 'two': '{one}', 'three': '{two}' } assert RecursiveFormatter().vformat('{three!r:rf}', None, d) == '1' def test_recursive_formatter_vformat_single_default_with_formatting(): """Single expression is recursive & extra standard formatting works.""" d = { 'one': 1, 'two': '{one}', 'three': '{two}' } assert RecursiveFormatter().vformat('{three:rf+>3}', None, d) == '++1' def test_recursive_formatter_vformat_with_passthrough(): """Passthrough types do not format.""" d = { 'one': 1, 'two': '{one}', 'three': '{two}' } assert RecursiveFormatter(passthrough_types=str).vformat('{three}', None, d ) == '{three}' def test_recursive_formatter_vformat_with_passthrough_tuple(): """Multiple passthrough types do not format.""" d = { 'one': 1, 'two': '{one}', 'three': {'{one}': '{two}'}, 'four': ['{one}', 'arb'] } formatter = RecursiveFormatter(passthrough_types=(dict, list)) assert formatter.vformat('{three}', None, d) == {'{one}': '{two}'} assert formatter.vformat(d['four'], None, d) == ['{one}', 'arb'] def test_recursive_formatter_vformat_with_special_type(): """Special types call get_value.""" special = Mock() special.get_value = Mock(return_value=123) d = { 'one': 1, 'two': special, 'three': '{two}' } assert RecursiveFormatter(special_types=Mock).vformat('{three}', None, d ) == 123 special.get_value.assert_called_once_with(d) class MyString(str): """Arbitrary test class.""" def __new__(cls, p_string): """Create new class so works like string.""" return str.__new__(cls, p_string) def get_value(self, kwargs): """Arbitrary kwargs.""" return f'123 {kwargs["arbkey"]}' class MyStringDerived(str): """Arbitrary class derived from MyString.""" def get_value(self, kwargs): """Arbitrary kwargs.""" return f'XXX {kwargs["arbkey"]}' def test_recursive_formatter_vformat_with_special_tuple(): """Multiple special types call get_value and supersedes string.""" special = Mock() special.get_value = Mock(return_value=123) my_string = MyString('blah') d = { 'one': 1, 'two': special, 'three': '{two}', 'arbkey': 456, 'my_string': my_string } format_me = { 'k1': '{three}', 'k2': '{my_string}'} formatter = RecursiveFormatter(special_types=(Mock, MyString)) assert formatter.vformat(format_me, None, d ) == {'k1': 123, 'k2': '123 456'} special.get_value.assert_called_once_with(d) assert my_string == 'blah' def test_recursive_formatter_iterate_list(): """Recurse through a list.""" special = Mock() special.get_value = Mock(return_value=123) my_string = MyString('blah') my_string_derived = MyStringDerived('blah derived') d = { 'one': 1, 'two': special, 'three': '{two}', 'arbkey': 456, 'my_string': my_string, 'my_string_derived': 
my_string_derived } repeating_item = Mock() repeating_item.get_value = Mock(return_value=999) passthrough_derived_obj = ValueError('arb') input_obj = [ repeating_item, 'literal string', 'string {one} expression', repeating_item, passthrough_derived_obj, special, '{my_string}', MyStringDerived, set([1, 2, 3, 4, '{arbkey}']), [5, 6, 7], {'a': 'b', '{one}': MyStringDerived}, b'\x00\x10', 890, 1.13 ] formatter = RecursiveFormatter(passthrough_types=(Exception, MyStringDerived), special_types=(Mock, MyString)) out = formatter.vformat(input_obj, None, d) assert out == [ 999, 'literal string', 'string 1 expression', 999, passthrough_derived_obj, 123, '123 456', MyStringDerived, {1, 2, 3, 4, 456}, [5, 6, 7], {'a': 'b', 1: MyStringDerived}, b'\x00\x10', 890, 1.13 ] # endregion RecursiveFormatter.vformat # region format_spec nesting def test_recursive_formatter_recurse_format_spec(): """Recurse on format_spec works.""" d = { 'k1': 'x', 'k2': 123 } assert RecursiveFormatter().vformat("a {k2:{k1}} b", None, d) == 'a 7b b' def test_recursive_formatter_recurse_format_spec_double(): """Recurse on format_spec exceeding max double replace.""" d = { 'align': '>', 'fill': '+', 'k3': 123 } assert RecursiveFormatter().vformat( "a {k3:{fill}{align}5} b", None, d) == 'a ++123 b' def test_recursive_formatter_recurse_format_spec_max_exceeded(): """Recurse on format_spec exceeding max raises.""" d = {
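# --- illustrative sketch, not pypyr's actual implementation ---
# The RecursionSpec tests above pin down a small parsing rule: a format_spec
# beginning with 'ff' means flat (stop recursing), 'rf' means recursive, and
# anything else passes through untouched. A self-contained sketch of that rule
# (the class name here is made up):
class RecursionSpecSketch(object):
    def __init__(self, format_spec):
        self.has_recursed = False
        self.is_flat = format_spec[:2] == 'ff'
        self.is_recursive = format_spec[:2] == 'rf'
        self.is_set = self.is_flat or self.is_recursive
        # whatever follows the two-letter marker is an ordinary format spec
        self.format_spec = format_spec[2:] if self.is_set else format_spec

assert RecursionSpecSketch('rf+>3').is_recursive
assert RecursionSpecSketch('ffabcd').format_spec == 'abcd'
assert RecursionSpecSketch('ABCD').format_spec == 'ABCD'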
GraphicLayerSequence = int('00700060', 16) GraphicLayerOrder = int('00700062', 16) GraphicLayerRecommendedDisplayGrayscaleValue = int('00700066', 16) GraphicLayerRecommendedDisplayRGBValue = int('00700067', 16) GraphicLayerDescription = int('00700068', 16) ContentLabel = int('00700080', 16) ContentDescription = int('00700081', 16) PresentationCreationDate = int('00700082', 16) PresentationCreationTime = int('00700083', 16) ContentCreatorsName = int('00700084', 16) ContentCreatorsIdentificationCodeSequence = int('00700086', 16) AlternateContentDescriptionSequence = int('00700087', 16) PresentationSizeMode = int('00700100', 16) PresentationPixelSpacing = int('00700101', 16) PresentationPixelAspectRatio = int('00700102', 16) PresentationPixelMagnificationRatio = int('00700103', 16) GraphicGroupLabel = int('00700207', 16) GraphicGroupDescription = int('00700208', 16) CompoundGraphicSequence = int('00700209', 16) CompoundGraphicInstanceID = int('00700226', 16) FontName = int('00700227', 16) FontNameType = int('00700228', 16) CSSFontName = int('00700229', 16) RotationAngle = int('00700230', 16) TextStyleSequence = int('00700231', 16) LineStyleSequence = int('00700232', 16) FillStyleSequence = int('00700233', 16) GraphicGroupSequence = int('00700234', 16) TextColorCIELabValue = int('00700241', 16) HorizontalAlignment = int('00700242', 16) VerticalAlignment = int('00700243', 16) ShadowStyle = int('00700244', 16) ShadowOffsetX = int('00700245', 16) ShadowOffsetY = int('00700246', 16) ShadowColorCIELabValue = int('00700247', 16) Underlined = int('00700248', 16) Bold = int('00700249', 16) Italic = int('00700250', 16) PatternOnColorCIELabValue = int('00700251', 16) PatternOffColorCIELabValue = int('00700252', 16) LineThickness = int('00700253', 16) LineDashingStyle = int('00700254', 16) LinePattern = int('00700255', 16) FillPattern = int('00700256', 16) FillMode = int('00700257', 16) ShadowOpacity = int('00700258', 16) GapLength = int('00700261', 16) DiameterofVisibility = int('00700262', 16) RotationPoint = int('00700273', 16) TickAlignment = int('00700274', 16) ShowTickLabel = int('00700278', 16) TickLabelAlignment = int('00700279', 16) CompoundGraphicUnits = int('00700282', 16) PatternOnOpacity = int('00700284', 16) PatternOffOpacity = int('00700285', 16) MajorTicksSequence = int('00700287', 16) TickPosition = int('00700288', 16) TickLabel = int('00700289', 16) CompoundGraphicType = int('00700294', 16) GraphicGroupID = int('00700295', 16) ShapeType = int('00700306', 16) RegistrationSequence = int('00700308', 16) MatrixRegistrationSequence = int('00700309', 16) MatrixSequence = int('0070030A', 16) FrameofReferencetoDisplayedCoordinateSystemTransformationMatrix = int( '0070030B', 16) FrameofReferenceTransformationMatrixType = int('0070030C', 16) RegistrationTypeCodeSequence = int('0070030D', 16) FiducialDescription = int('0070030F', 16) FiducialIdentifier = int('00700310', 16) FiducialIdentifierCodeSequence = int('00700311', 16) ContourUncertaintyRadius = int('00700312', 16) UsedFiducialsSequence = int('00700314', 16) GraphicCoordinatesDataSequence = int('00700318', 16) FiducialUID = int('0070031A', 16) FiducialSetSequence = int('0070031C', 16) FiducialSequence = int('0070031E', 16) GraphicLayerRecommendedDisplayCIELabValue = int('00700401', 16) BlendingSequence = int('00700402', 16) RelativeOpacity = int('00700403', 16) ReferencedSpatialRegistrationSequence = int('00700404', 16) BlendingPosition = int('00700405', 16) PresentationDisplayCollectionUID = int('00701101', 16) 
PresentationSequenceCollectionUID = int('00701102', 16) PresentationSequencePositionIndex = int('00701103', 16) RenderedImageReferenceSequence = int('00701104', 16) VolumetricPresentationStateInputSequence = int('00701201', 16) PresentationInputType = int('00701202', 16) InputSequencePositionIndex = int('00701203', 16) Crop = int('00701204', 16) CroppingSpecificationIndex = int('00701205', 16) CompositingMethod = int('00701206', 16) VolumetricPresentationInputNumber = int('00701207', 16) ImageVolumeGeometry = int('00701208', 16) VolumeCroppingSequence = int('00701301', 16) VolumeCroppingMethod = int('00701302', 16) BoundingBoxCrop = int('00701303', 16) ObliqueCroppingPlaneSequence = int('00701304', 16) Plane = int('00701305', 16) PlaneNormal = int('00701306', 16) CroppingSpecificationNumber = int('00701309', 16) MultiPlanarReconstructionStyle = int('00701501', 16) MPRThicknessType = int('00701502', 16) MPRSlabThickness = int('00701503', 16) MPRTopLeftHandCorner = int('00701505', 16) MPRViewWidthDirection = int('00701507', 16) MPRViewWidth = int('00701508', 16) NumberofVolumetricCurvePoints = int('0070150C', 16) VolumetricCurvePoints = int('0070150D', 16) MPRViewHeightDirection = int('00701511', 16) MPRViewHeight = int('00701512', 16) PresentationStateClassificationComponentSequence = int('00701801', 16) ComponentType = int('00701802', 16) ComponentInputSequence = int('00701803', 16) VolumetricPresentationInputIndex = int('00701804', 16) PresentationStateCompositorComponentSequence = int('00701805', 16) WeightingTransferFunctionSequence = int('00701806', 16) WeightingLookupTableDescriptor = int('00701807', 16) WeightingLookupTableData = int('00701808', 16) VolumetricAnnotationSequence = int('00701901', 16) ReferencedStructuredContextSequence = int('00701903', 16) ReferencedContentItem = int('00701904', 16) VolumetricPresentationInputAnnotationSequence = int('00701905', 16) AnnotationClipping = int('00701907', 16) PresentationAnimationStyle = int('00701A01', 16) RecommendedAnimationRate = int('00701A03', 16) AnimationCurveSequence = int('00701A04', 16) AnimationStepSize = int('00701A05', 16) HangingProtocolName = int('00720002', 16) HangingProtocolDescription = int('00720004', 16) HangingProtocolLevel = int('00720006', 16) HangingProtocolCreator = int('00720008', 16) HangingProtocolCreationDateTime = int('0072000A', 16) HangingProtocolDefinitionSequence = int('0072000C', 16) HangingProtocolUserIdentificationCodeSequence = int('0072000E', 16) HangingProtocolUserGroupName = int('00720010', 16) SourceHangingProtocolSequence = int('00720012', 16) NumberofPriorsReferenced = int('00720014', 16) ImageSetsSequence = int('00720020', 16) ImageSetSelectorSequence = int('00720022', 16) ImageSetSelectorUsageFlag = int('00720024', 16) SelectorAttribute = int('00720026', 16) SelectorValueNumber = int('00720028', 16) TimeBasedImageSetsSequence = int('00720030', 16) ImageSetNumber = int('00720032', 16) ImageSetSelectorCategory = int('00720034', 16) RelativeTime = int('00720038', 16) RelativeTimeUnits = int('0072003A', 16) AbstractPriorValue = int('0072003C', 16) AbstractPriorCodeSequence = int('0072003E', 16) ImageSetLabel = int('00720040', 16) SelectorAttributeVR = int('00720050', 16) SelectorSequencePointer = int('00720052', 16) SelectorSequencePointerPrivateCreator = int('00720054', 16) SelectorAttributePrivateCreator = int('00720056', 16) SelectorAEValue = int('0072005E', 16) SelectorASValue = int('0072005F', 16) SelectorATValue = int('00720060', 16) SelectorDAValue = int('00720061', 16) SelectorCSValue 
= int('00720062', 16) SelectorDTValue = int('00720063', 16) SelectorISValue = int('00720064', 16) SelectorOBValue = int('00720065', 16) SelectorLOValue = int('00720066', 16) SelectorOFValue = int('00720067', 16) SelectorLTValue = int('00720068', 16) SelectorOWValue = int('00720069', 16) SelectorPNValue = int('0072006A', 16) SelectorTMValue = int('0072006B', 16) SelectorSHValue = int('0072006C', 16) SelectorUNValue = int('0072006D', 16) SelectorSTValue = int('0072006E', 16) SelectorUCValue = int('0072006F', 16) SelectorUTValue = int('00720070', 16) SelectorURValue = int('00720071', 16) SelectorDSValue = int('00720072', 16) SelectorODValue = int('00720073', 16) SelectorFDValue = int('00720074', 16) SelectorOLValue = int('00720075', 16) SelectorFLValue = int('00720076', 16) SelectorULValue = int('00720078', 16) SelectorUSValue = int('0072007A', 16) SelectorSLValue = int('0072007C', 16) SelectorSSValue = int('0072007E', 16) SelectorUIValue = int('0072007F', 16) SelectorCodeSequenceValue = int('00720080', 16) NumberofScreens = int('00720100', 16) NominalScreenDefinitionSequence = int('00720102', 16) NumberofVerticalPixels = int('00720104', 16) NumberofHorizontalPixels = int('00720106', 16) DisplayEnvironmentSpatialPosition = int('00720108', 16) ScreenMinimumGrayscaleBitDepth = int('0072010A', 16) ScreenMinimumColorBitDepth = int('0072010C', 16) ApplicationMaximumRepaintTime = int('0072010E', 16) DisplaySetsSequence = int('00720200', 16) DisplaySetNumber = int('00720202', 16) DisplaySetLabel = int('00720203', 16) DisplaySetPresentationGroup = int('00720204', 16) DisplaySetPresentationGroupDescription = int('00720206', 16) PartialDataDisplayHandling = int('00720208', 16) SynchronizedScrollingSequence = int('00720210', 16) DisplaySetScrollingGroup = int('00720212', 16) NavigationIndicatorSequence = int('00720214', 16) NavigationDisplaySet = int('00720216', 16) ReferenceDisplaySets = int('00720218', 16) ImageBoxesSequence = int('00720300', 16) ImageBoxNumber = int('00720302', 16) ImageBoxLayoutType = int('00720304', 16) ImageBoxTileHorizontalDimension = int('00720306', 16) ImageBoxTileVerticalDimension = int('00720308', 16) ImageBoxScrollDirection = int('00720310', 16) ImageBoxSmallScrollType = int('00720312', 16) ImageBoxSmallScrollAmount = int('00720314', 16) ImageBoxLargeScrollType = int('00720316', 16) ImageBoxLargeScrollAmount = int('00720318', 16) ImageBoxOverlapPriority = int('00720320', 16) CineRelativetoRealTime = int('00720330', 16) FilterOperationsSequence = int('00720400', 16) FilterbyCategory = int('00720402', 16) FilterbyAttributePresence = int('00720404', 16) FilterbyOperator = int('00720406', 16) StructuredDisplayBackgroundCIELabValue = int('00720420', 16) EmptyImageBoxCIELabValue = int('00720421', 16) StructuredDisplayImageBoxSequence = int('00720422', 16) StructuredDisplayTextBoxSequence = int('00720424', 16) ReferencedFirstFrameSequence = int('00720427', 16) ImageBoxSynchronizationSequence = int('00720430', 16) SynchronizedImageBoxList = int('00720432', 16) TypeofSynchronization = int('00720434', 16) BlendingOperationType = int('00720500', 16) ReformattingOperationType = int('00720510', 16) ReformattingThickness = int('00720512', 16) ReformattingInterval = int('00720514', 16) ReformattingOperationInitialViewDirection = int('00720516', 16) ThreeDRenderingType = int('00720520', 16) SortingOperationsSequence = int('00720600', 16) SortbyCategory = int('00720602', 16) SortingDirection = int('00720604', 16) DisplaySetPatientOrientation = int('00720700', 16) VOIType = int('00720702', 
16) PseudoColorType = int('00720704', 16) PseudoColorPaletteInstanceReferenceSequence = int('00720705', 16) ShowGrayscaleInverted = int('00720706', 16) ShowImageTrueSizeFlag = int('00720710', 16) ShowGraphicAnnotationFlag = int('00720712', 16) ShowPatientDemographicsFlag = int('00720714', 16) ShowAcquisitionTechniquesFlag = int('00720716', 16) DisplaySetHorizontalJustification = int('00720717', 16) DisplaySetVerticalJustification = int('00720718', 16) ContinuationStartMeterset = int('00740120', 16) ContinuationEndMeterset = int('00740121', 16) ProcedureStepState = int('00741000', 16) ProcedureStepProgressInformationSequence = int('00741002', 16) ProcedureStepProgress = int('00741004', 16) ProcedureStepProgressDescription = int('00741006', 16) ProcedureStepCommunicationsURISequence = int('00741008', 16) ContactURI = int('0074100A', 16) ContactDisplayName = int('0074100C', 16) ProcedureStepDiscontinuationReasonCodeSequence = int('0074100E', 16) BeamTaskSequence = int('00741020', 16) BeamTaskType = int('00741022', 16) BeamOrderIndexTrial = int('00741024', 16) AutosequenceFlag = int('00741025', 16) TableTopVerticalAdjustedPosition = int('00741026', 16) TableTopLongitudinalAdjustedPosition = int('00741027', 16) TableTopLateralAdjustedPosition = int('00741028', 16) PatientSupportAdjustedAngle = int('0074102A', 16) TableTopEccentricAdjustedAngle = int('0074102B', 16) TableTopPitchAdjustedAngle = int('0074102C', 16) TableTopRollAdjustedAngle = int('0074102D', 16) DeliveryVerificationImageSequence = int('00741030', 16) VerificationImageTiming = int('00741032', 16) DoubleExposureFlag = int('00741034', 16) DoubleExposureOrdering = int('00741036', 16) DoubleExposureMetersetTrial = int('00741038', 16) DoubleExposureFieldDeltaTrial = int('0074103A', 16) RelatedReferenceRTImageSequence = int('00741040', 16) GeneralMachineVerificationSequence = int('00741042', 16) ConventionalMachineVerificationSequence = int('00741044', 16) IonMachineVerificationSequence = int('00741046', 16) FailedAttributesSequence = int('00741048', 16) OverriddenAttributesSequence = int('0074104A', 16) ConventionalControlPointVerificationSequence = int('0074104C', 16) IonControlPointVerificationSequence = int('0074104E', 16) AttributeOccurrenceSequence = int('00741050', 16) AttributeOccurrencePointer = int('00741052', 16) AttributeItemSelector = int('00741054', 16) AttributeOccurrencePrivateCreator = int('00741056', 16) SelectorSequencePointerItems = int('00741057', 16) ScheduledProcedureStepPriority = int('00741200', 16) WorklistLabel = int('00741202', 16) ProcedureStepLabel = int('00741204', 16) ScheduledProcessingParametersSequence = int('00741210', 16) PerformedProcessingParametersSequence = int('00741212', 16) UnifiedProcedureStepPerformedProcedureSequence = int('00741216', 16) RelatedProcedureStepSequence = int('00741220', 16) ProcedureStepRelationshipType = int('00741222', 16) ReplacedProcedureStepSequence = int('00741224', 16) DeletionLock = int('00741230', 16) ReceivingAE = int('00741234', 16) RequestingAE = int('00741236', 16) ReasonforCancellation = int('00741238', 16) SCPStatus = int('00741242', 16) SubscriptionListStatus = int('00741244', 16) UnifiedProcedureStepListStatus = int('00741246', 16) BeamOrderIndex = int('00741324', 16) DoubleExposureMeterset = int('00741338', 16) DoubleExposureFieldDelta = int('0074133A', 16) BrachyTaskSequence = int('00741401', 16) ContinuationStartTotalReferenceAirKerma = int('00741402', 16) ContinuationEndTotalReferenceAirKerma = int('00741403', 16) ContinuationPulseNumber = int('00741404', 
16) ChannelDeliveryOrderSequence = int('00741405', 16) ReferencedChannelNumber = int('00741406', 16) StartCumulativeTimeWeight = int('00741407', 16) EndCumulativeTimeWeight = int('00741408', 16) OmittedChannelSequence = int('00741409', 16) ReasonforChannelOmission = int('0074140A', 16) ReasonforChannelOmissionDescription = int('0074140B', 16) ChannelDeliveryOrderIndex = int('0074140C', 16) ChannelDeliveryContinuationSequence = int('0074140D',
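# --- illustrative sketch; the helper names below are not part of the module ---
# Each constant above packs a DICOM (group, element) tag pair into one integer
# parsed from an eight-digit hex string, so the two halves can be recovered
# with simple bit operations:
def tag_group(tag):
    return tag >> 16

def tag_element(tag):
    return tag & 0xFFFF

assert tag_group(GraphicLayerSequence) == 0x0070
assert tag_element(GraphicLayerSequence) == 0x0060
# e.g. ContactURI prints as the familiar "(0074,100A)" notation:
print('(%04X,%04X)' % (tag_group(ContactURI), tag_element(ContactURI)))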
import numpy as np from scipy import ndimage import tifffile as tiff import matplotlib.pyplot as plt import pandas as pd from enum import Enum from skimage.transform import resize # Worldview-3 - Panchromatic (3349, 3338): 400nm - 800nm # Worldview-3 RGB (3350, 3338) # Worldview-3 - 8 Multispectral bands (838, 835): # Coastal: 400 - 450 nm (0, QGIS: 1, WV-3-Band-no:2) Red: 630 - 690 nm (4, QGIS: 5, WV-3-Band-no:6) # Blue: 450 - 510 nm (1, QGIS: 2, WV-3-Band-no:3) Red Edge: 705 - 745 nm (5, QGIS: 6, WV-3-Band-no:7) # Green: 510 - 580 nm (2, QGIS: 3, WV-3-Band-no:4) Near-IR1: 770 - 895 nm (6, QGIS: 7, WV-3-Band-no:8) # Yellow: 585 - 625 nm (3, QGIS: 4, WV-3-Band-no:5) Near-IR2: 860 - 1040 nm (7, QGIS: 8, WV-3-Band-no:9) # NIR - Near Infra Red: 750nm - 1400nm # MIR - Mid Infra Red: 3000nm - 8000nm # Worldview-3 - 8 SWIR bands (134, 133): # SWIR-1: 1195 - 1225 nm SWIR-5: 2145 - 2185 nm # SWIR-2: 1550 - 1590 nm SWIR-6: 2185 - 2225 nm # SWIR-3: 1640 - 1680 nm SWIR-7: 2235 - 2285 nm # SWIR-4: 1710 - 1750 nm SWIR-8: 2295 - 2365 nm class WV3ms(Enum): COASTAL = 0 BLUE = 1 GREEN = 2 YELLOW = 3 RED = 4 REDEDGE = 5 NEARIR1 = 6 NEARIR2 = 7 class WV3swir(Enum): SWIR_1 = 0 SWIR_2 = 1 SWIR_3 = 2 SWIR_4 = 3 SWIR_5 = 4 SWIR_6 = 5 SWIR_7 = 6 SWIR_8 = 7 CCCI_THRESHOLD_U = 0.5 CCCI_THRESHOLD_L = -4 FAUX_CCCI_THRESHOLD = 0.11 # CCCI_SWIR_THRESHOLD = 1.03 CCCI_SWIR_THRESHOLD = .94 NDWI_THRESHOLD = 0.07 NDVI_THRESHOLD = 0.07 def stretch_8bit(bands, lower_percent=2, higher_percent=98, depth=3): # contrast enhancement as per QGIS Stretch to MinMax # note that input image range is 0 .. 1 out = np.zeros_like(bands).astype(np.float32) for i in range(depth): a = 0 b = 1 if depth == 1: c = np.percentile(bands[:, :], lower_percent) d = np.percentile(bands[:, :], higher_percent) t = a + (bands[:, :] - c) * (b - a) / (d - c) else: c = np.percentile(bands[:, :, i], lower_percent) d = np.percentile(bands[:, :, i], higher_percent) t = a + (bands[:, :, i] - c) * (b - a) / (d - c) t[t < a] = a t[t > b] = b if depth == 1: out[:, :] = t else: out[:, :, i] = t return out.astype(np.float32) def EVI_index(msdata): # Enhanced Vegetation Index NIR2 = msdata[WV3ms.NEARIR2.value, :, :].astype(np.float32) R = msdata[WV3ms.RED.value, :, :].astype(np.float32) CB = msdata[WV3ms.COASTAL.value, :, :].astype(np.float32) # EVI = 2.5 * (NIR2 - R)/(NIR2 + 6.0*R - 7.5*CB + 1.0) a = 2.5 * (NIR2 - R) b = NIR2 + 6.0*R - 7.5*CB + 1.0 with np.errstate(divide='ignore', invalid='ignore'): EVI = np.true_divide(a, b) EVI[EVI == np.inf] = 0 EVI = np.nan_to_num(EVI) return EVI def SAVI_index(msdata): # Soil Adjusted Vegetation Index NIR1 = msdata[WV3ms.NEARIR1.value, :, :].astype(np.float32) R = msdata[WV3ms.RED.value, :, :].astype(np.float32) # The value of L varies by the amount or cover of green vegetation: in very high vegetation regions, # L=0; and in areas with no green vegetation, L=1. Generally, an L=0.5 works well in most situations # and is the default value used. When L=0, then SAVI = NDVI. 
L = 0.5 # SAVI = (1 + L) * (NIR1 - R)/(NIR1 + R + L) a = (1 + L) * (NIR1 - R) b = NIR1 + R + L with np.errstate(divide='ignore', invalid='ignore'): SAVI = np.true_divide(a, b) SAVI[SAVI == np.inf] = 0 SAVI = np.nan_to_num(SAVI) return SAVI def faux_CCCI_index(msdata, rgbdata): RE = resize(msdata[WV3ms.REDEDGE.value, :, :], (rgbdata.shape[0], rgbdata.shape[1]), mode='constant', preserve_range=False) NIR2 = resize(msdata[WV3ms.NEARIR2.value, :, :], (rgbdata.shape[0], rgbdata.shape[1]), mode='constant', preserve_range=False) R = rgbdata[:, :, 0] # resize: note that with the default preserve_range=False the input image is # converted according to the conventions of img_as_float (values in [0, 1]) # from the original 11 bits range [0, 2047]. preserve_range=True should be used. # faux_CCCI_index only works preserve_range=False - reason unknown # Canopy Chlorophyll Content Index # CCCI = ((NIR2 - RE) / (NIR2 + RE)) / ((NIR2 - R) / (NIR2 + R)) a = NIR2 - RE b = NIR2 + RE # c = NIR2 - R # d = NIR2 + R c = R * (-1) d = R with np.errstate(divide='ignore', invalid='ignore'): e = np.true_divide(a, b) e[e == np.inf] = 0 e = np.nan_to_num(e) f = np.true_divide(c, d) f[f == np.inf] = 0 f = np.nan_to_num(f) CCCI = np.true_divide(e, f) CCCI[CCCI == np.inf] = 0 CCCI = np.nan_to_num(CCCI) return CCCI def CCCI_NIR2_index(msdata): # Canopy Chlorophyll Content Index # uses NIR2 rather than SWIR_1 RE = msdata[WV3ms.REDEDGE.value, :, :].astype(np.float32) NIR2 = msdata[WV3ms.NEARIR2.value, :, :].astype(np.float32) R = msdata[WV3ms.RED.value, :, :].astype(np.float32) # CCCI = ((NIR2 - RE)/ NIR2 + RE)) / ((NIR2 - R)/(NIR2 + R)) a = NIR2 - RE b = NIR2 + RE c = NIR2 - R d = NIR2 + R with np.errstate(divide='ignore', invalid='ignore'): e = np.true_divide(a, b) e[e == np.inf] = 0 e = np.nan_to_num(e) f = np.true_divide(c, d) f[f == np.inf] = 0 f = np.nan_to_num(f) CCCI = np.true_divide(e, f) CCCI[CCCI == np.inf] = 0 CCCI = np.nan_to_num(CCCI) return CCCI def CCCI_SWIR_index(msdata, swirdata): # Canopy Chlorophyll Content Index # uses SWIR_1 RE = msdata[WV3ms.REDEDGE.value, :, :].astype(np.float32) SWIR1 = resize(swirdata[WV3swir.SWIR_1.value, :, :], (msdata.shape[1], msdata.shape[2]), mode='constant', preserve_range=True).astype(np.float32) R = msdata[WV3ms.RED.value, :, :].astype(np.float32) # CCCI = ((SWIR1 - RE)/ SWIR1 + RE)) / ((SWIR1 - R)/(SWIR1 + R)) a = SWIR1 - RE b = SWIR1 + RE c = SWIR1 - R d = SWIR1 + R with np.errstate(divide='ignore', invalid='ignore'): e = np.true_divide(a, b) e[e == np.inf] = 0 e = np.nan_to_num(e) f = np.true_divide(c, d) f[f == np.inf] = 0 f = np.nan_to_num(f) CCCI = np.true_divide(e, f) CCCI[CCCI == np.inf] = 0 CCCI = np.nan_to_num(CCCI) return CCCI def NDWI_index(msdata): # Normalized Difference Water Index # Uses McFeeter's NDWI based on MODIS band 2 and band 4 G = msdata[WV3ms.GREEN.value, :, :].astype(np.float32) NIR1 = msdata[WV3ms.NEARIR1.value, :, :].astype(np.float32) # NDWI = (G - NIR1)/(G + NIR1) a = G - NIR1 b = G + NIR1 with np.errstate(divide='ignore', invalid='ignore'): NDWI = np.true_divide(a, b) NDWI[NDWI == np.inf] = 0 NDWI = np.nan_to_num(NDWI) return NDWI def NDVI_index(msdata): # Normalized Difference Vegetation Index R = msdata[WV3ms.RED.value, :, :].astype(np.float32) NIR1 = msdata[WV3ms.NEARIR1.value, :, :].astype(np.float32) # NDVI = (NIR1 - R)/(NIR1 + R ) a = NIR1 - R b = NIR1 + R with np.errstate(divide='ignore', invalid='ignore'): NDVI = np.true_divide(a, b) NDVI[NDVI == np.inf] = 0 NDVI = np.nan_to_num(NDVI) return NDVI def display(IM_ID): # read rgb and m 
bands # tifffile RGB = ndarray shape (3, 3350, 3338) i.e. (colour, row, col) # [0] = red, [1] = green, [2] = blue, 16 bit depth rgb = tiff.imread('three_band/{}.tif'.format(IM_ID)) # change shape to regular (3350, 3338, 3) i.e. (row, col, colour) rgb = np.rollaxis(rgb, 0, 3) # tifffile M = ndarray shape (8, 838, 835) i.e. (spectrum, row, col) m = tiff.imread('sixteen_band/{}_M.tif'.format(IM_ID)) # tiffile panchrom = ndarray shape (3349, 3338) i.e. (row, col) panchrom = tiff.imread('sixteen_band/{}_P.tif'.format(IM_ID)) # tiffile SWIR = ndarray shape (8, 134, 133) i.e. (spectrum, row, col) swir = tiff.imread('sixteen_band/{}_A.tif'.format(IM_ID)) # get our indices myFauxCCCI = faux_CCCI_index(m, rgb) myCCCI = CCCI_NIR2_index(m) mySwirCCCI = CCCI_SWIR_index(m, swir) myNDWI = NDWI_index(m) myNDVI = NDVI_index(m) myEVI = EVI_index(m) mySAVI = SAVI_index(m) # you can look on histogram and pick your favorite threshold value # ccci_binary = (myCCCI < CCCI_THRESHOLD).astype(np.float32) ccci_binary_1 = (myCCCI < CCCI_THRESHOLD_U) ccci_binary_2 = (myCCCI > CCCI_THRESHOLD_L) ccci_binary_3 = np.logical_and(ccci_binary_1, ccci_binary_2) ccci_binary_4 = np.logical_not(ccci_binary_3) ccci_binary_5 = ndimage.binary_opening(ccci_binary_4) ccci_binary = ndimage.binary_closing(ccci_binary_5).astype(np.float32) ndwi_binary = (myNDWI > NDWI_THRESHOLD).astype(np.float32) ndvi_binary = (myNDWI > NDVI_THRESHOLD).astype(np.float32) faux_ccci_binary = (myFauxCCCI > FAUX_CCCI_THRESHOLD).astype(np.float32) ccci_swir_binary = (mySwirCCCI > CCCI_SWIR_THRESHOLD).astype(np.float32) fig, axes = plt.subplots(ncols=5, nrows=2, figsize=(18, 9)) ax = axes.ravel() ax[0].imshow(ccci_binary, cmap='binary_r') ax[0].set_title('CCCI NIR 2 Mask') ax[0].axis('off') ax[1].imshow(ndwi_binary, cmap='binary_r') ax[1].set_title('NDWI Mask') ax[1].axis('off') ax[2].imshow(ndvi_binary, cmap='binary_r') ax[2].set_title('NDVI Mask') ax[2].axis('off') ax[3].imshow(faux_ccci_binary, cmap='binary_r') ax[3].set_title('Faux CCCI Mask') ax[3].axis('off') ax[4].imshow(ccci_swir_binary, cmap='binary_r') ax[4].set_title('CCCI SWIR 1 Mask') ax[4].axis('off') hist, bins = np.histogram(myCCCI, range=(-2, 2), bins=50) width = 0.7 * (bins[1] - bins[0]) center = (bins[:-1] + bins[1:])
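# --- illustrative sketch, not part of the original script ---
# Every index function above repeats the same guarded division for a
# normalized difference (A - B) / (A + B); that shared pattern can be captured
# in one helper (the function name is ours):
import numpy as np

def normalized_difference(a, b):
    a = a.astype(np.float32)
    b = b.astype(np.float32)
    with np.errstate(divide='ignore', invalid='ignore'):
        nd = np.true_divide(a - b, a + b)
    nd[nd == np.inf] = 0
    return np.nan_to_num(nd)

# e.g. NDVI_index(msdata) computes the same result as
# normalized_difference(msdata[WV3ms.NEARIR1.value], msdata[WV3ms.RED.value])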
# designing footer and header def draw_canvas(self, page_count): page = "Page %s of %s" % (self._pageNumber, page_count) self.saveState() self.setStrokeColorRGB(0, 0, 0) self.setLineWidth(0.5) self.setFont('Times-Roman', 10) self.drawString(10, 10, page) self.drawString(55, 720, "DATE") self.drawString(200, 720, "INFO") self.drawString(318, 720, "PRICE") self.drawString(358, 720, "RECEIVED") self.drawString(421, 720, "USED") self.drawString(468, 720, "FINAL") self.drawString(527, 720, "SIGN") self.line(27.5,710,27.5,735) self.line(107.5,710,107.5,735) self.line(307.5,710,307.5,735) self.line(357.5,710,357.5,735) self.line(407.5,710,407.5,735) self.line(457.5,710,457.5,735) self.line(507.5,710,507.5,735) self.line(567.5,710,567.5,735) self.line(27.5,735,567.5,735) self.setFont('Times-Roman', 13) self.drawString(28, 745, "MEDICINE: "+name) self.drawString(260, 770, "USAGE REGISTER") self.restoreState() # Building PDF save_name = os.path.join(os.path.expanduser("~"), "Desktop/", PDFName+".pdf") doc = SimpleDocTemplate(save_name, pagesize=portrait(A4), topMargin=125, leftMargin=10, rightMargin=10) doc.multiBuild(elements, canvasmaker=FooterCanvas) messagebox.showinfo(title='Success!', message='Your PDF Has Been Created Successfully on desktop with name "'+PDFName+'" ..!') pageFrame.destroy() HomePage() except: messagebox.showerror(title='Error', message='An error occurred in creating PDF') pageFrame.destroy() HomePage() def mistakeCorrectorPage(): # Creating frame for DataEntry Page pageFrame = GradientFrame(MainScreen, "black", "grey", borderwidth=0.05, relief="sunken", width=1000, height= 640) pageFrame.pack() # Title Label for the app (Receive) pageFrame.create_text(340, 50, text="Mistake Correction", fill="blue", anchor='nw', font=('Calibri', 30, 'underline', 'bold')) # RFCMSCorrection Button RFCMSCorrection = Button(pageFrame,text='''RFCMS Correction''', width=11, height=5, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), RFCMSCorrectorPage()]) RFCMSCorrection.place(x=250, y=150) # LP/PMJAK Correction Button LPPMJAKCorrectionButton = Button(pageFrame, text='''LP/PMJAK Correction''', width=11, height=5, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), LPPMJAKCorrectorPage()]) LPPMJAKCorrectionButton.place(x=450, y=150) # RFOD Correction Button RFODCorrectionButton = Button(pageFrame, text='''RFOD Correction''', width=11, height=5, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), RFODCorrectorPage()]) RFODCorrectionButton.place(x=650, y=150) # OB Correction Button OBCorrectionButton = Button(pageFrame, text='''Opening Balance Correction''', width=11, height=5, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), OBCorrectorPage()]) OBCorrectionButton.place(x=250, y=300) # consumptionCorrection Button consumptionCorrectionButton = Button(pageFrame, text='''Consumption Correction''', width=11, height=5, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), consumptionCorrectorPage()]) consumptionCorrectionButton.place(x=450, y=300) # transfer Correction Button transferCorrectionButton = Button(pageFrame, text='''Transfer Correction''', width=11, height=5, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), 
command=lambda: [pageFrame.destroy(), transferCorrectorPage()]) transferCorrectionButton.place(x=650, y=300) # other Correction Button otherCorrectionButton = Button(pageFrame, text='''Other Correction''', width=11, height=5, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), otherCorrectorPage()]) otherCorrectionButton.place(x=350, y=450) # delete medicine Button deleteMedicineButton = Button(pageFrame, text='''Delete Medicine''', width=11, height=5, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), deleteMedicinePage()]) deleteMedicineButton.place(x=550, y=450) # Back button backButton = Button(pageFrame, text='Back', width=10, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), othersPage()]) backButton.place(x=50, y=20) def nextDatabasePage(): # Creating Frame for newDatabasePage pageFrame = GradientFrame(MainScreen, "black", "grey", borderwidth=0.05, relief="sunken", width=1000, height= 640) pageFrame.pack() pageFrame.create_text(400, 50, text="Data-Base Creation", fill="blue", anchor='nw', font=('Calibri', 20, 'underline')) # Adding calendar for start of the year pageFrame.create_text(325, 150, text="Select the start date for Data :", fill="white", anchor='nw', font=('Calibri', 16), justify='center') startYear = Calendar(pageFrame,state="normal",firstweekday="monday",showweeknumbers=False,date_pattern="y-mm-dd",foreground="black",font="Arial 16 bold") startYear.place(x=325,y=180) def DatabaseCreator(): if(nextDataBaseCreator(int(startYear.get_date()[:4]),int(startYear.get_date()[:4])+1)): messagebox.showinfo(title='Done',message='Database Created and added as your current Database.') pageFrame.destroy() HomePage() else: messagebox.showerror(title='Error', message='Database for selected year already exists!!') # Submit button submit = Button(pageFrame, text="Submit", width=15, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda:[DatabaseCreator()] ) submit.place(x=450, y=480) # Back button back_button = Button(pageFrame, text='Back', width=10, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), HomePage()]) back_button.place(x=10, y=10) def consumptionCorrectorPage(): suggestion = RFODSuggestions() # Creating frame for DataEntry Page pageFrame = GradientFrame(MainScreen, "black", "grey", borderwidth=0.05, relief="sunken", width=1000, height= 640) pageFrame.pack() # Title Label for the app (Consumption & Transfer) pageFrame.create_text(50, 75, text="Consumption Correction", fill="blue", anchor='nw', font=('Calibri', 20, 'underline')) # Drop down for deciding increase or decrease changeType = ['Increase', 'Decrease'] var = StringVar() var.set(changeType[0]) pageFrame.create_text(50, 396, text="Type Of Change :", fill="white", anchor='nw', font=('Calibri', 14), justify='center') typeOfChange = OptionMenu(pageFrame, var, *changeType) typeOfChange.place(x=220, y=395, width=300) # Entry Field for Name. 
pageFrame.create_text(50, 146, text="Name :", fill="white", anchor='nw', font=('Calibri', 14), justify='center') name = AutocompleteEntry(suggestion, pageFrame) name.place(x=220, y=150, width=500) # Entry Field Quantity pageFrame.create_text(50, 446, text="Quantity Of Change :", fill="white", anchor='nw', font=('Calibri', 14), justify='center') quantity = Entry(pageFrame) quantity.place(x=220, y=450) # Entry field for date pageFrame.create_text(50, 196, text="Entry Date :", fill="white", anchor='nw', font=('Calibri', 14), justify='center') entryDate = Calendar(pageFrame,state="normal",firstweekday="monday",showweeknumbers=False,date_pattern="y-mm-dd",foreground="black") entryDate.place(x=220, y=200) # indent Maker for validation and submition def consumptionCorrectorValidationAndSubmition(): if(len(name.get())==0): messagebox.showerror(title='Error', message='Missing Name!!') name.delete(0, 'end') name.focus() return elif(len(quantity.get())==0): messagebox.showerror(title='Error', message='Missing Qunatity Change !!') quantity.delete(0, 'end') quantity.focus() return currentQuantity = getConsumptionQuantity(name.get(), entryDate.get_date()) if(currentQuantity < int(quantity.get()) and var.get() == "Decrease"): messagebox.showerror(title='Error', message='You have only '+str(currentQuantity)+' medicines used on the selected date..!') quantity.delete(0, 'end') quantity.focus() return elif(int(currentBalanceGet(name.get())) < int(quantity.get()) and var.get() == "Increase"): messagebox.showerror(title='Error', message='You have only '+str(currentBalanceGet(name.get()))+' medicines left in balance!') quantity.delete(0, 'end') quantity.focus() return if(var.get() == "Increase"): confirmation = consumptionIncreaseDB(name.get(), int(quantity.get()), entryDate.get_date(), dt.datetime.now().strftime("%Y-%m-%d")) else: confirmation = consumptionDecreaseDB(name.get(), int(quantity.get()), entryDate.get_date(), dt.datetime.now().strftime("%Y-%m-%d")) if(confirmation): messagebox.showinfo(title='Done', message='Changes done successfully your current balance is '+currentBalanceGet(name.get())+'!') pageFrame.destroy() mistakeCorrectorPage() else: messagebox.showerror(title='Error', message='An error occurred in correction..!') # submit button from form submition submitButton = Button(pageFrame, text='Submit', width=10, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [consumptionCorrectorValidationAndSubmition()]) submitButton.place(x=220, y=500) # back button backButton = Button(pageFrame, text=' Back ',width=10, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), mistakeCorrectorPage()]) backButton.place(x=50, y=20) def RFODCorrectorPage(): suggestion = RFODSuggestions() # Creating frame for DataEntry Page pageFrame = GradientFrame(MainScreen, "black", "grey", borderwidth=0.05, relief="sunken", width=1000, height= 640) pageFrame.pack() # Title Label for the app (Consumption & Transfer) pageFrame.create_text(50, 75, text="RFOD Correction", fill="blue", anchor='nw', font=('Calibri', 20, 'underline')) # Drop down for deciding increase or decrease changeType = ['Increase', 'Decrease'] var = StringVar() var.set(changeType[0]) pageFrame.create_text(50, 396, text="Type Of Change :", fill="white", anchor='nw', font=('Calibri', 14), justify='center') typeOfChange = OptionMenu(pageFrame, var, *changeType) typeOfChange.place(x=220, y=395, width=300) # Entry Field for Name. 
pageFrame.create_text(50, 146, text="Name :", fill="white", anchor='nw', font=('Calibri', 14), justify='center') name = AutocompleteEntry(suggestion, pageFrame) name.place(x=220, y=150, width=500) # Entry Field Quantity pageFrame.create_text(50, 446, text="Batch No :", fill="white", anchor='nw', font=('Calibri', 14), justify='center') batch = Entry(pageFrame) batch.place(x=220, y=450) # Entry Field Batch No. pageFrame.create_text(50, 496, text="Quantity Of Change :", fill="white", anchor='nw', font=('Calibri', 14), justify='center') quantity = Entry(pageFrame) quantity.place(x=220, y=500) # Entry field for date pageFrame.create_text(50, 196, text="Entry Date :", fill="white", anchor='nw', font=('Calibri', 14), justify='center') entryDate = Calendar(pageFrame,state="normal",firstweekday="monday",showweeknumbers=False,date_pattern="y-mm-dd",foreground="black") entryDate.place(x=220, y=200) # indent Maker for validation and submition def RFODCorrectorValidationAndSubmition(): if(len(name.get())==0): messagebox.showerror(title='Error', message='Missing Name!!') name.delete(0, 'end') name.focus() return elif(len(batch.get())==0): messagebox.showerror(title='Error', message='Missing Batch No !!') batch.delete(0, 'end') batch.focus() return elif(len(quantity.get())==0): messagebox.showerror(title='Error', message='Missing Quantity of change!!') quantity.delete(0, 'end') quantity.focus() return currentQuantity = getCurrentQuantity(name.get(), entryDate.get_date(), batch.get()) receivedQuantity = getReceivedQuantity(name.get(), entryDate.get_date(), batch.get()) currentBalance = int(currentBalanceGet(name.get())) if(receivedQuantity <= 0): messagebox.showerror(title='Error', message='Enter details properly..!') quantity.delete(0, 'end') batch.delete(0, 'end') quantity.focus() return if(receivedQuantity < int(quantity.get()) and var.get() == 'Decrease'): messagebox.showerror(title='Error', message='You have only received '+str(receivedQuantity)+' medicines..!') quantity.delete(0, 'end') quantity.focus() return elif(currentBalance < int(quantity.get()) and var.get() == 'Decrease'): messagebox.showerror(title='Error', message='Can not make change due to low balance..!') quantity.delete(0, 'end') quantity.focus() return if(var.get() == "Increase"): confirmation = RFODIncreaseDB(name.get(), int(quantity.get()), entryDate.get_date(), dt.datetime.now().strftime("%Y-%m-%d"), batch.get()) else: confirmation = RFODDecreaseDB(name.get(), int(quantity.get()), entryDate.get_date(), dt.datetime.now().strftime("%Y-%m-%d"), batch.get(), currentQuantity) if(confirmation): messagebox.showinfo(title='Done', message='Changes done successfully your current balance is '+currentBalanceGet(name.get())+'!') pageFrame.destroy() mistakeCorrectorPage() else: messagebox.showerror(title='Error', message='An error occurred in correction..!') # submit button from form submition submitButton = Button(pageFrame, text='Submit', width=10, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [RFODCorrectorValidationAndSubmition()]) submitButton.place(x=220, y=550) # back button backButton = Button(pageFrame, text=' Back ',width=10, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), mistakeCorrectorPage()]) backButton.place(x=50, y=20) def othersPage(): # Creating frame for DataEntry Page pageFrame = GradientFrame(MainScreen, "black", "grey", borderwidth=0.05, relief="sunken", width=1000, height= 640) pageFrame.pack() # 
Title Label for the app (Receive) pageFrame.create_text(380, 50, text="Other Options", fill="blue", anchor='nw', font=('Calibri', 30, 'underline', 'bold')) # Button for Register usageRegisterButton = Button(pageFrame, text='''Usage Register''', width=14, height=7, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda:[pageFrame.destroy(), usageRegisterPage()]) usageRegisterButton.place(x=290, y=150) # Button for Mistake mistakeCorrectionButton = Button(pageFrame, text='''Mistake Corrections''', width=14, height=7, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda:[pageFrame.destroy(), mistakeCorrectorPage()]) mistakeCorrectionButton.place(x=590, y=150) # Button for details detailsButton = Button(pageFrame, text='''Get Details''', width=14, height=7, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda:[pageFrame.destroy(), getDetailsPage()]) detailsButton.place(x=290, y=350) # Button for currentBalance currentBalanceButton = Button(pageFrame, text='''Check Current Balance''', width=14, height=7, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda:[pageFrame.destroy(), checkBalancePage()]) currentBalanceButton.place(x=590, y=350) # Back button backButton = Button(pageFrame, text='Back', width=10, bg='grey', fg='black', justify='center', font=('Calibri', 12, 'bold', 'underline'), command=lambda: [pageFrame.destroy(), HomePage()]) backButton.place(x=50, y=20) def RFCMSCorrectorPage(): suggestion = RFODSuggestions() # Creating frame for DataEntry Page pageFrame = GradientFrame(MainScreen, "black", "grey", borderwidth=0.05, relief="sunken", width=1000, height= 640) pageFrame.pack() # Title Label for the app (Consumption & Transfer) pageFrame.create_text(50, 75, text="RFCMS Correction", fill="blue", anchor='nw', font=('Calibri', 20, 'underline')) # Drop down for deciding increase or decrease changeType = ['Increase', 'Decrease'] var =
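# The PDF builder earlier in this file calls
#   doc.multiBuild(elements, canvasmaker=FooterCanvas)
# but the FooterCanvas class itself is not included in this excerpt. The
# sketch below shows the usual reportlab "Page X of Y" recipe that such a
# canvasmaker follows; it is an assumption rather than the original class,
# and the real implementation would call the fuller draw_canvas defined above.
from reportlab.pdfgen import canvas

class FooterCanvas(canvas.Canvas):
    def __init__(self, *args, **kwargs):
        canvas.Canvas.__init__(self, *args, **kwargs)
        self.pages = []

    def showPage(self):
        # Buffer the finished page instead of emitting it, so the total
        # page count is known when footers are drawn in save().
        self.pages.append(dict(self.__dict__))
        self._startPage()

    def save(self):
        page_count = len(self.pages)
        for page_state in self.pages:
            self.__dict__.update(page_state)
            self.draw_canvas(page_count)   # header/footer drawn on every page
            canvas.Canvas.showPage(self)
        canvas.Canvas.save(self)

    def draw_canvas(self, page_count):
        # Simplified stand-in for the draw_canvas method shown earlier.
        self.saveState()
        self.setFont('Times-Roman', 10)
        self.drawString(10, 10, "Page %s of %s" % (self._pageNumber, page_count))
        self.restoreState()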
<reponame>anilyil/funtofem #!/usr/bin/env python # This file is part of the package FUNtoFEM for coupled aeroelastic simulation # and design optimization. # Copyright (C) 2015 Georgia Tech Research Corporation. # Additional copyright (C) 2015 <NAME>, <NAME> and <NAME>. # All rights reserved. # FUNtoFEM is licensed under the Apache License, Version 2.0 (the "License"); # you may not use this software except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import print_function import numpy as np from mpi4py import MPI from funtofem import TransferScheme try: from .hermes_transfer import HermesTransfer except: pass np.set_printoptions(precision=15) class FUNtoFEMDriver(object): """ The FUNtoFEM driver base class has all of the driver except for the coupling algorithms """ def __init__(self, solvers, comm, struct_comm, struct_master, aero_comm, aero_master, transfer_options=None, model=None): """ Parameters ---------- solvers: dict the various disciplinary solvers comm: MPI.comm MPI communicator transfer_options: dict options of the load and displacement transfer scheme model: :class:`~funtofem_model.FUNtoFEMmodel` The model containing the design data """ # communicator self.comm = comm self.aero_comm = aero_comm self.aero_master = aero_master self.struct_comm = struct_comm self.struct_master = struct_master # Solver classes self.solvers = solvers # Make a fake model if not given one if model is not None: self.fakemodel = False else: print("FUNtoFEM driver: generating fake model") from pyfuntofem.model import FUNtoFEMmodel,Body,Scenario,Function model = FUNtoFEMmodel('fakemodel') fakebody = Body('fakebody') model.add_body(fakebody) fakescenario = Scenario('fakescenario') function = Function('cl',analysis_type='aerodynamic') fakescenario.add_function(function) model.add_scenario(fakescenario) self.fakemodel = True self.model = model # Initialize transfer scheme self._initialize_transfer(transfer_options) # Initialize the shape parameterization for body in self.model.bodies: if body.shape: body.initialize_shape_parameterization() def update_model(self, model): """ Update the model object that the driver sees Parameters ---------- model: FUNtoFEM model type """ self.model = model def _initialize_transfer(self, transfer_options): """ Initialize the transfer scheme Parameters ---------- transfer_options: dictionary or list of dictionaries options for the load and displacement transfer scheme for the bodies """ # If the user did not specify a transfer scheme default to MELD if transfer_options is None: transfer_options = [] for body in self.model.bodies: transfer_options.append({'scheme': 'meld', 'isym': -1, 'beta': 0.5, 'npts': 200}) # if the user gave a dictionary instead of a list of dictionaries, assume all bodies use the same settings if type(transfer_options) is dict: transfer_options = len(self.model.bodies) * [ transfer_options ] for ibody, body in enumerate(self.model.bodies): body.transfer = None body_analysis_type = 'aeroelastic' if 'analysis_type' in transfer_options[ibody]: body_analysis_type = transfer_options[ibody]['analysis_type'].lower() # Set up the transfer schemes based on the type 
of analysis set for this body if body_analysis_type == 'aeroelastic' or body_analysis_type == 'aerothermoelastic': # Set up the load and displacement transfer schemes if transfer_options[ibody]['scheme'].lower() == 'hermes': body.transfer = HermesTransfer(self.comm,self.struct_comm,self.aero_comm) elif transfer_options[ibody]['scheme'].lower() == 'rbf': basis=TransferScheme.PY_THIN_PLATE_SPLINE if 'basis function' in transfer_options[ibody]: if transfer_options[ibody]['basis function'].lower()=='thin plate spline': basis=TransferScheme.PY_THIN_PLATE_SPLINE elif transfer_options[ibody]['basis function'].lower()=='gaussian': basis=TransferScheme.PY_GAUSSIAN elif transfer_options[ibody]['basis function'].lower()=='multiquadric': basis=TransferScheme.PY_MULTIQUADRIC elif transfer_options[ibody]['basis function'].lower()=='inverse multiquadric': basis=TransferScheme.PY_INVERSE_MULTIQUADRIC else: print('Unknown RBF basis function for body number', ibody) quit() body.transfer = TransferScheme.pyRBF(self.comm, self.comm, 0, self.comm, 0, basis, 1) elif transfer_options[ibody]['scheme'].lower()== 'meld': # defaults isym = -1 beta = 0.5 num_nearest = 200 if 'isym' in transfer_options[ibody]: isym = transfer_options[ibody]['isym'] if 'beta' in transfer_options[ibody]: beta = transfer_options[ibody]['beta'] if 'npts' in transfer_options[ibody]: num_nearest = transfer_options[ibody]['npts'] body.transfer = TransferScheme.pyMELD(self.comm, self.struct_comm, self.struct_master, self.aero_comm, self.aero_master, isym, num_nearest, beta) elif transfer_options[ibody]['scheme'].lower()== 'linearized meld': # defaults isym = -1 beta = 0.5 num_nearest = 200 if 'beta' in transfer_options[ibody]: beta = transfer_options[ibody]['beta'] if 'npts' in transfer_options[ibody]: num_nearest = transfer_options[ibody]['npts'] body.transfer = TransferScheme.pyLinearizedMELD(self.comm, self.comm, 0, self.comm, 0, num_nearest, beta) elif transfer_options[ibody]['scheme'].lower()== 'beam': conn = transfer_options[ibody]['conn'] nelems = transfer_options[ibody]['nelems'] order = transfer_options[ibody]['order'] ndof = transfer_options[ibody]['ndof'] body.xfer_ndof = ndof body.transfer = TransferScheme.pyBeamTransfer(self.comm, self.struct_comm, self.struct_master, self.aero_comm, self.aero_master, conn, nelems, order, ndof) else: print("Error: Unknown transfer scheme for body", ibody) quit() # Set up the transfer schemes based on the type of analysis set for this body if body_analysis_type == 'aerothermal' or body_analysis_type == 'aerothermoelastic': # Set up the load and displacement transfer schemes if transfer_options[ibody]['thermal_scheme'].lower()== 'meld': # defaults isym = -1 beta = 0.5 num_nearest = 200 if 'isym' in transfer_options[ibody]: isym = transfer_options[ibody]['isym'] if 'beta' in transfer_options[ibody]: beta = transfer_options[ibody]['beta'] if 'npts' in transfer_options[ibody]: num_nearest = transfer_options[ibody]['npts'] body.thermal_transfer = TransferScheme.pyMELDThermal(self.comm, self.struct_comm, self.struct_master, self.aero_comm, self.aero_master, isym, num_nearest, beta) else: print("Error: Unknown thermal transfer scheme for body", ibody) quit() # Load structural and aerodynamic meshes into FUNtoFEM # Only want real part for the initialization if body.transfer is not None: if TransferScheme.dtype == np.complex128 or TransferScheme.dtype == complex: if self.struct_comm != MPI.COMM_NULL: body.transfer.setStructNodes(body.struct_X.real + 0.0j) else: body.struct_nnodes = 0 if self.aero_comm != 
MPI.COMM_NULL: body.transfer.setAeroNodes(body.aero_X.real + 0.0j) else: body.aero_nnodes = 0 else: if self.struct_comm != MPI.COMM_NULL: body.transfer.setStructNodes(body.struct_X) body.transfer.setStructNodes(body.struct_X) else: body.struct_nnodes = 0 if self.aero_comm != MPI.COMM_NULL: body.transfer.setAeroNodes(body.aero_X) else: body.aero_nnodes = 0 # Initialize FUNtoFEM body.transfer.initialize() # Load structural and aerodynamic meshes into FUNtoFEM if TransferScheme.dtype == np.complex128 or TransferScheme.dtype == complex: if self.struct_comm != MPI.COMM_NULL: body.transfer.setStructNodes(body.struct_X) else: body.struct_nnodes = 0 if self.aero_comm != MPI.COMM_NULL: body.transfer.setAeroNodes(body.aero_X) else: body.aero_nnodes = 0 # Initialize the thermal problem if body.thermal_transfer is not None: if TransferScheme.dtype == np.complex128 or TransferScheme.dtype == complex: if self.struct_comm != MPI.COMM_NULL: body.thermal_transfer.setStructNodes(body.struct_X.real + 0.0j) else: body.struct_nnodes = 0 if self.aero_comm != MPI.COMM_NULL: body.thermal_transfer.setAeroNodes(body.aero_X.real + 0.0j) else: body.aero_nnodes = 0 else: if self.struct_comm != MPI.COMM_NULL: body.thermal_transfer.setStructNodes(body.struct_X) body.thermal_transfer.setStructNodes(body.struct_X) else: body.struct_nnodes = 0 if self.aero_comm != MPI.COMM_NULL: body.thermal_transfer.setAeroNodes(body.aero_X) else: body.aero_nnodes = 0 # Initialize FUNtoFEM body.thermal_transfer.initialize() # Load structural and aerodynamic meshes into FUNtoFEM if TransferScheme.dtype == np.complex128 or TransferScheme.dtype == complex: if self.struct_comm != MPI.COMM_NULL: body.thermal_transfer.setStructNodes(body.struct_X) else: body.struct_nnodes = 0 if self.aero_comm != MPI.COMM_NULL: body.thermal_transfer.setAeroNodes(body.aero_X) else: body.aero_nnodes = 0 def _update_transfer(self): """ Update the positions of the nodes in transfer schemes """ self.struct_disps = [] self.struct_temps = [] for body in self.model.bodies: if body.transfer is not None: if self.struct_comm != MPI.COMM_NULL: body.transfer.setStructNodes(body.struct_X) else: body.struct_nnodes = 0 if self.aero_comm != MPI.COMM_NULL: body.transfer.setAeroNodes(body.aero_X) else: body.aero_nnodes = 0 if body.thermal_transfer is not None: if self.struct_comm != MPI.COMM_NULL: body.thermal_transfer.setStructNodes(body.struct_X) else: body.struct_nnodes = 0 if self.aero_comm != MPI.COMM_NULL: body.thermal_transfer.setAeroNodes(body.aero_X) else: body.aero_nnodes = 0 def solve_forward(self, steps=None): """ Solves the coupled forward problem Parameters ---------- steps: int number of coupled solver steps. 
Only for use if a FUNtoFEM model is not defined """ fail = 0 # update the shapes first for body in self.model.bodies: if body.shape: complex_run = True if (TransferScheme.dtype==np.complex128 or TransferScheme.dtype == complex) else False body.update_shape(complex_run) # loop over the forward problem for the different scenarios for scenario in self.model.scenarios: # tell the solvers what the variable values and functions are for this scenario if not self.fakemodel: self._distribute_variables(scenario,self.model.bodies) self._distribute_functions(scenario,self.model.bodies) # Set the new meshes Initialize the forward solvers fail = self._initialize_forward(scenario,self.model.bodies) if fail != 0: if self.comm.Get_rank() == 0: print("Fail flag return during initialization") # Update transfer postions to the initial conditions self._update_transfer() if scenario.steady: fail = self._solve_steady_forward(scenario,steps) if fail != 0: if self.comm.Get_rank() == 0: print("Fail flag return during forward solve") else: fail = self._solve_unsteady_forward(scenario,steps) if fail != 0: if self.comm.Get_rank() == 0: print("Fail flag return during forward solve") # Perform any operations after the forward solve self._post_forward(scenario,self.model.bodies) if fail == 0: self._eval_functions(scenario,self.model.bodies) return fail def solve_adjoint(self): """ Solves the coupled adjoint problem and computes gradients """ fail = 0 # Make sure we have functions defined before we start the adjoint if self.fakemodel: print("Aborting: attempting to run FUNtoFEM adjoint with no model defined") quit() elif not self.model.get_functions(): print("Aborting: attempting to run FUNtoFEM adjoint with no functions defined") quit() # Set the functions into the solvers for scenario in self.model.scenarios: # tell the solvers what the variable values and functions are for this scenario self._distribute_variables(scenario,self.model.bodies) self._distribute_functions(scenario,self.model.bodies) # Initialize the adjoint solvers self._initialize_adjoint_variables(scenario,self.model.bodies) self._initialize_adjoint(scenario,self.model.bodies) if scenario.steady: fail = self._solve_steady_adjoint(scenario) if fail != 0: return fail else: fail = self._solve_unsteady_adjoint(scenario) if fail != 0: return fail # Perform any operations after the adjoint solve self._post_adjoint(scenario,self.model.bodies) self._eval_function_grads(scenario) self.model.enforce_coupling_derivatives() return
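# A usage sketch, not part of the driver module: it only exercises interfaces
# visible above (the transfer_options keys, the constructor arguments, and
# solve_forward / solve_adjoint). In practice a concrete subclass that supplies
# the coupling algorithm would be instantiated rather than FUNtoFEMDriver
# itself, and the 'solvers' dict and 'model' object must be built separately.
from mpi4py import MPI

transfer_options = [{'scheme': 'meld',            # load/displacement transfer
                     'analysis_type': 'aeroelastic',
                     'isym': -1,                  # no symmetry plane
                     'beta': 0.5,                 # MELD weighting parameter
                     'npts': 200}]                # nearest structural nodes per aero node

comm = MPI.COMM_WORLD
driver = FUNtoFEMDriver(solvers, comm,
                        struct_comm=comm, struct_master=0,
                        aero_comm=comm, aero_master=0,
                        transfer_options=transfer_options,
                        model=model)

fail = driver.solve_forward()       # coupled aeroelastic analysis
if fail == 0:
    fail = driver.solve_adjoint()   # coupled adjoint and design gradients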