r""" Tabular representation of datasets. .. sidebar:: Contents .. contents:: :local: :depth: 1 While spectroscopic data are usually presented graphically (see the :mod:`aspecd.plotting` module for details), there are cases where a tabular representation is useful or even necessary. One prime example of a situation where you would want to have a tabular representation of a (calculated) dataset is the result of an :class:`aspecd.analysis.AggregatedAnalysisStep`. Here, you perform a :class:`aspecd.analysis.SingleAnalysisStep` on a series of datasets and collect the results in a :class:`aspecd.dataset.CalculatedDataset`. Of course, there are situations where you can simply plot this dataset, but while graphical representations are often helpful for obtaining trends, if the exact numbers are relevant, a table is often much more useful. .. versionadded:: 0.5 Why this module? ================ While there are several Python packages available capable of formatting tables (PrettyTable, Tablib, pandas, to name but a few), all these do much more than only formatting tables, but are designed to work with tables as well, *i.e.* modifying and filtering the table contents. This is, however, not needed in the given context, hence the attempt to create a rather lightweight implementation. The implementation focuses on the following aspects: * Tabulation of 1D and 2D datasets * Primarily textual output * Control over formatting (necessary for different output formats) * Control over formatting of numbers * Automatic column headers and row indices if present in the dataset axes. Currently, the module consists of two types of classes: * :class:`Table` The actual class for tabulating data of a dataset * :class:`Format` A general class controlling the format of the tables created by :class:`Table` There is a list of formatters for different purposes: * :class:`TextFormat` Grid layout for text output * :class:`RstFormat` Simple layout for reStructuredText (rst) * :class:`DokuwikiFormat` DokuWiki table syntax * :class:`LatexFormat` LaTeX table syntax For simple output, you can use the basic formatter, :class:`Format`, as well. As this is the default in the :class:`Table` class, nothing needs to be done in this case. Basic usage =========== To give you an idea how working with this module may look like, have a look at the following examples: .. code-block:: import numpy as np from aspecd import dataset, table ds = dataset.Dataset() ds.data.data = np.random.random([5,3]) tab = table.Table() tab = ds.tabulate(tab) print(tab.table) The last line will produce an output similar to the following -- of course the numbers will be different in your case, as we are using random numbers: .. code-block:: 0.6457921026722823 0.5634217835847304 0.16339715303360636 0.1946206354990324 0.7901047968358327 0.16098166185006968 0.9898725675813765 0.8892801098024301 0.9657653854952412 0.38858973357936466 0.5818405189808569 0.03264142581790075 0.9391951330574149 0.5412481787012977 0.9171357572017617 Note that even though the number of digits of the individual cells are not always identical, the columns are nicely aligned. If we would want to reduce the number of digits shown, we could use the :attr:`Table.column_format` attribute, like so: .. code-block:: tab.column_format = ['8.6f'] tab = ds.tabulate(tab) print(tab.table) .. note:: Two things are relevant here: :attr:`Table.column_format` is a *list*, and you can provide fewer format strings than columns in your table. 
In this case, the last format will be used for all remaining columns. The last line will in this case produce an output similar to the following -- again with different numbers in your case: .. code-block:: 0.645792 0.563422 0.163397 0.194621 0.790105 0.160982 0.989873 0.889280 0.965765 0.388590 0.581841 0.032641 0.939195 0.541248 0.917136 So far, you could get pretty much the same using an ASCII exporter for your dataset. So what is special with :class:`Table`? A few things: You have much more control on the output, and you can have column headers and row indices included automatically if these are present in your dataset. Let's look at a dataset with information on the different columns set in the second axis. A full example could look like this: .. code-block:: ds = dataset.Dataset() ds.data.data = np.random.random([5,3]) ds.data.axes[1].index = ['foo', 'bar', 'baz'] tab = table.Table() tab.column_format = ['8.6f'] tab = ds.tabulate(tab) print(tab.table) And the result of the print statement would show you the column headers added: .. code-block:: foo bar baz 0.645792 0.563422 0.163397 0.194621 0.790105 0.160982 0.989873 0.889280 0.965765 0.388590 0.581841 0.032641 0.939195 0.541248 0.917136 Of course, the same would work if you would have row indices provided, and it even works if for both axes, indices are provided. To demonstrate the latter (admittedly again in an artificial example): .. code-block:: ds = dataset.Dataset() ds.data.data = np.random.random([5,3]) ds.data.axes[0].index = ['a', 'b', 'c', 'd', 'e'] ds.data.axes[1].index = ['foo', 'bar', 'baz'] tab = table.Table() tab.column_format = ['8.6f'] tab = ds.tabulate(tab) print(tab.table) And the result of the print statement would show you both, the column headers and the row indices added: .. code-block:: foo bar baz a 0.645792 0.563422 0.163397 b 0.194621 0.790105 0.160982 c 0.989873 0.889280 0.965765 d 0.388590 0.581841 0.032641 e 0.939195 0.541248 0.917136 Output formats ============== Tables can be output using different formats, and if you need a special format, you can of course implement one on your own, by subclassing :class:`Format`. However, out of the box there are already a number of formats, from plain (default, shown above) to text to reStructuredText (rst), DokuWiki, and LaTeX. To give you a quick overview, we will create a dataset with both, row indices and column headers, and show the different formats. .. code-block:: ds = dataset.Dataset() ds.data.data = np.random.random([5,3]) ds.data.axes[0].index = ['a', 'b', 'c', 'd', 'e'] ds.data.axes[1].index = ['foo', 'bar', 'baz'] tab = table.Table() tab.column_format = ['8.6f'] tab = ds.tabulate(tab) print(tab.table) The result is the same as already shown above, just a plain table, though already quite useful: .. code-block:: foo bar baz a 0.689140 0.775321 0.657159 b 0.315142 0.412736 0.580745 c 0.116352 0.807541 0.410055 d 0.226994 0.715985 0.967606 e 0.532774 0.620670 0.745630 Now, let's see how the text format looks like: .. code-block:: # Same as above tab.format = 'text' tab = ds.tabulate(tab) print(tab.table) And here you go: .. code-block:: +---+----------+----------+----------+ | | foo | bar | baz | +---+----------+----------+----------+ | a | 0.689140 | 0.775321 | 0.657159 | | b | 0.315142 | 0.412736 | 0.580745 | | c | 0.116352 | 0.807541 | 0.410055 | | d | 0.226994 | 0.715985 | 0.967606 | | e | 0.532774 | 0.620670 | 0.745630 | +---+----------+----------+----------+ Next is reStructuredText: .. 
code-block:: # Same as above tab.format = 'rst' tab = ds.tabulate(tab) print(tab.table) As you can see, this format outputs the "simple" rst style, that can be used as well for an easy-to-read text-only output: .. code-block:: = ======== ======== ======== foo bar baz = ======== ======== ======== a 0.689140 0.775321 0.657159 b 0.315142 0.412736 0.580745 c 0.116352 0.807541 0.410055 d 0.226994 0.715985 0.967606 e 0.532774 0.620670 0.745630 = ======== ======== ======== Another format that may be useful is DokuWiki, as this kind of lightweight wiki can be used as an electronic lab notebook (ELN): .. code-block:: # Same as above tab.format = 'dokuwiki' tab = ds.tabulate(tab) print(tab.table) This will even correctly highlight the column headers and row indices as "headers": .. code-block:: | ^ foo ^ bar ^ baz ^ ^ a | 0.689140 | 0.775321 | 0.657159 | ^ b | 0.315142 | 0.412736 | 0.580745 | ^ c | 0.116352 | 0.807541 | 0.410055 | ^ d | 0.226994 | 0.715985 | 0.967606 | ^ e | 0.532774 | 0.620670 | 0.745630 | And finally, LaTeX, as this is of great use in the scientific world, and honestly, manually formatting LaTeX tables can be quite tedious. .. code-block:: # Same as above tab.format = 'latex' tab = ds.tabulate(tab) print(tab.table) As you can see, the details of the formatting are left to you, but at least, you get valid LaTeX code and a table layout according to typesetting standards, *i.e.* only horizontal lines. Note that the horizontal lines ( "rules") are typeset using the booktabs package that should always be used: .. code-block:: latex \begin{tabular}{llll} \toprule & foo & bar & baz \\ \midrule a & 0.689140 & 0.775321 & 0.657159 \\ b & 0.315142 & 0.412736 & 0.580745 \\ c & 0.116352 & 0.807541 & 0.410055 \\ d & 0.226994 & 0.715985 & 0.967606 \\ e & 0.532774 & 0.620670 & 0.745630 \\ \bottomrule \end{tabular} Captions ======== Tables can and should have captions that describe the content, as rarely the numbers (and row indices and column headers) stand on their own. Hence, you can add a table caption to a table. As writing a caption is necessarily a manual task, it would only be fair if the table output would include this caption. For formats such as DokuWiki and LaTeX, it is fairly obvious how to add the table caption, and for the other formats, the caption is added as plain text on top of the actual table, wrapped to not have endless lines. .. code-block:: ds = dataset.Dataset() ds.data.data = np.random.random([5,5]) ds.data.axes[0].index = ['a', 'b', 'c', 'd', 'e'] ds.data.axes[1].index = ['foo', 'bar', 'baz', 'foobar', 'frob'] caption = table.Caption() caption.title = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.' caption.text = 'Quisque varius tortor ac faucibus posuere. In hac ' \ 'habitasse platea dictumst. Morbi rutrum felis vitae '\ 'tristique accumsan. Sed est nisl, auctor a metus a, ' \ 'elementum cursus velit. Proin et rutrum erat. ' \ 'Praesent id urna odio. Duis quis augue ac nunc commodo' \ ' euismod quis id orci.' tab = table.Table() tab.caption = caption tab.column_format = ['8.6f'] tab = ds.tabulate(tab) print(tab.table) The result of the print statement above would output something like this: .. code-block:: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque varius tortor ac faucibus posuere. In hac habitasse platea dictumst. Morbi rutrum felis vitae tristique accumsan. Sed est nisl, auctor a metus a, elementum cursus velit. Proin et rutrum erat. Praesent id urna odio. Duis quis augue ac nunc commodo euismod quis id orci. 
foo bar baz foobar frob a 0.162747 0.620320 0.677983 0.719360 0.426734 b 0.342259 0.907828 0.252471 0.987115 0.563511 c 0.253853 0.752020 0.277696 0.479128 0.929410 d 0.768840 0.220356 0.247271 0.379556 0.231765 e 0.113655 0.725631 0.098438 0.753049 0.363572 Note that the caption has been wrapped for better readability, and that an empty line is inserted between caption and table. Of course, you can combine the caption with the other textual formats ("text", "rst") as well, and it will be output in the same way. The formats "dokuwiki" and "latex" are special, see the respective format class definitions ( :class:`DokuwikiFormat`, :class:`LatexFormat`) for details. Module documentation ==================== """ import logging import textwrap import aspecd.exceptions from aspecd import history, utils logger = logging.getLogger(__name__) logger.addHandler(logging.NullHandler()) class Table: """ Tabular representation of datasets. Formatting of a table can be controlled by the formatter class defined by :attr:`format`. See the documentation of the :class:`Format` class and its subclasses for details. Furthermore, the individual columns containing numerical data can be formatted as well, specifying formats for the individual columns in :attr:`column_format`. In case the axes of a dataset contain values in their :attr:`aspecd.dataset.Axis.index` attribute, these values will be used as first column and column headers, respectively. In case of indices present in either the first or second axis or both, they will be used as row indices and column headers, respectively. One particular use case is in combination with the results of an :class:`aspecd.analysis.AggregatedAnalysisStep` operating on a series of datasets and combining the result in a :class:`aspecd.dataset.CalculatedDataset` with the row indices being the labels of the corresponding datasets. .. note:: For obvious reasons, only 1D and 2D datasets can be tabulated. Therefore, if you try to tabulate a ND dataset with N>2, this will raise an exception. Attributes ---------- dataset : :class:`aspecd.dataset.Dataset` Dataset containing numerical data to tabulate table : :class:`str` Final table ready to be output format : :class:`str` Identifier for output format. Valid identifiers are either the empty string or any first part of a subclass of :class:`Format`, *i.e.* the part before "Format". Examples for currently valid identifiers: ``text``, ``rst``, ``dokuwiki``, ``latex`` See :class:`Format` and the respective subclasses for details on the formats available and what kind of output they create. column_format: :class:`list` (Optional) formats for the data The format strings are used by :meth:`str.format`, see there for details. If the list is shorter than the number of columns, the last element will be used for the remaining columns. filename : :class:`str` Name of the file to save the table to. If calling :meth:`save`, the table contained in :attr:`table` will be saved to this file .. versionadded:: 0.5 """ def __init__(self): self.name = aspecd.utils.full_class_name(self) self.dataset = None self.caption = Caption() self.table = None self.format = '' self.column_format = [] self.filename = '' self._format = Format() self._columns = [] self._rows = [] self._column_widths = [] def tabulate(self, dataset=None, from_dataset=False): """ Create tabular representation of the numerical data of a dataset. The result is stored in :attr:`table`. In case of an empty dataset, a warning is logged and no further action taken. 
Parameters ---------- dataset : class:`aspecd.dataset.Dataset` Dataset to create the tabular representation for from_dataset : `boolean` whether we are called from within a dataset Defaults to "False" and shall never be set manually. """ if dataset: self.dataset = dataset if not self.dataset: raise aspecd.exceptions.MissingDatasetError if self.dataset.data.data.ndim > 2: raise aspecd.exceptions.NotApplicableToDatasetError( message='Tables work only with 1D and 2D data') if self.dataset.data.data.size == 0: logger.warning('Dataset contains no data, hence nothing to ' 'tabulate.') return if from_dataset: self._set_format() self._format_columns() self._format_rows() self._add_rules() self._add_opening_and_closing() self.table = '\n'.join(self._rows) else: self.dataset.tabulate(table=self) def save(self): """ Save table to file. The filename is set in :attr:`filename`. If no table exists, *i.e.* :meth:`tabulate` has not yet been called, the method will silently return. """ if not self.table: return with open(self.filename, 'w') as file: file.write(self.table) def create_history_record(self): """ Create history record to be added to the dataset. Usually, this method gets called from within the :meth:`aspecd.dataset.Dataset.tabulate` method of the :class:`aspecd.dataset.Dataset` class and ensures the history of each tabulating step to get written properly. Returns ------- history_record : :class:`aspecd.history.TableHistoryRecord` history record for tabulating step """ history_record = \ history.TableHistoryRecord(package=self.dataset.package_name) history_record.table = history.TableRecord(table=self) return history_record def _set_format(self): format_class = self.format.lower().capitalize() + 'Format' format_class = '.'.join(['aspecd.table', format_class]) self._format = utils.object_from_class_name(format_class) def _format_columns(self): self._columns = [] if any(self.dataset.data.axes[0].index): row_indices = [] if any(self.dataset.data.axes[1].index): row_indices.append('') row_indices.extend(self.dataset.data.axes[0].index) self._columns.append(self._adjust_column_width(row_indices)) if self.dataset.data.data.ndim == 2: for column in range(self.dataset.data.data.shape[1]): current_column = [] if any(self.dataset.data.axes[1].index): current_column.append( self.dataset.data.axes[1].index[column]) for row in self.dataset.data.data[:, column]: current_column.append(self._format_number(row, column=column)) current_column = self._adjust_column_width(current_column) self._columns.append(current_column) else: current_column = [] for row in self.dataset.data.data: current_column.append(self._format_number(row)) current_column = self._adjust_column_width(current_column) self._columns.append(current_column) self._column_widths = [len(x[0]) for x in self._columns] @staticmethod def _adjust_column_width(current_column): width = max([len(x) for x in current_column]) return [x.ljust(width) for x in current_column] def _format_number(self, number, column=0): if self.column_format: try: string_format = self.column_format[column] except IndexError: string_format = self.column_format[-1] formatted_number = \ '{:{format}}'.format(number, format=string_format) else: formatted_number = '{}'.format(number) return formatted_number def _format_rows(self): self._rows = [] for row in range(len(self._columns[0])): current_row = [] padding = self._format.padding * ' ' for column in self._columns: current_row.append(column[row]) if any(self.dataset.data.axes[1].index) and row == 0: separator = 
'{padding}{separator}{padding}'.format( padding=padding, separator=self._format.header_separator ) if any(self.dataset.data.axes[0].index): prefix = self._format.column_prefix else: prefix = self._format.header_prefix formatted_row = \ '{prefix}{padding}{row}{padding}{postfix}'.format( prefix=prefix, padding=padding, row=separator.join(current_row), postfix=self._format.header_postfix ) else: separator = '{padding}{separator}{padding}'.format( padding=padding, separator=self._format.column_separator ) if any(self.dataset.data.axes[0].index): prefix = self._format.header_prefix else: prefix = self._format.column_prefix formatted_row = \ '{prefix}{padding}{row}{padding}{postfix}'.format( prefix=prefix, padding=padding, row=separator.join(current_row), postfix=self._format.column_postfix ) self._rows.append(formatted_row) def _add_rules(self): top_rule = self._format.top_rule(column_widths=self._column_widths) if top_rule: self._rows.insert(0, top_rule) if any(self.dataset.data.axes[1].index): middle_rule = \ self._format.middle_rule(column_widths=self._column_widths) if middle_rule: self._rows.insert(2, middle_rule) bottom_rule = \ self._format.bottom_rule(column_widths=self._column_widths) if bottom_rule: self._rows.append(bottom_rule) def _add_opening_and_closing(self): opening = self._format.opening(columns=len(self._columns), caption=self.caption) if opening: self._rows.insert(0, opening) closing = self._format.closing(caption=self.caption) if closing: self._rows.append(closing) class Format: """ Base class for settings for formatting tables. The formatter is used by :class:`Table` to control the output. Different formats can be implemented by subclassing this class. Currently, the following subclasses are available: * :class:`TextFormat` Grid layout for text output * :class:`RstFormat` Simple layout for reStructuredText (rst) * :class:`DokuwikiFormat` DokuWiki table syntax * :class:`LatexFormat` LaTeX table syntax For simple output, you can use the basic formatter, :class:`Format`, as well. As this is the default in the :class:`Table` class, nothing needs to be done in this case. Attributes ---------- padding : :class:`int` Number of spaces left and right of a field column_separator : :class:`str` String used to separate columns in a row column_prefix : :class:`str` String used to prefix the first column in a row column_postfix : :class:`str` String used to postfix the last column in a row header_separator : :class:`str` String used to separate columns in the header (if present) header_prefix : :class:`str` String used to prefix the first column in the header (if present) header_postfix : :class:`str` String used to postfix the last column in the header (if present) .. versionadded:: 0.5 """ def __init__(self): self.padding = 0 self.column_separator = ' ' self.column_prefix = '' self.column_postfix = '' self.header_separator = ' ' self.header_prefix = '' self.header_postfix = '' # noinspection PyUnusedLocal,PyMethodMayBeStatic # pylint: disable=no-self-use,unused-argument def top_rule(self, column_widths=None): """ Create top rule for table. Tables usually have three types of rules: top rule, middle rule, and bottom rule. The middle rule gets used to separate column headers from the actual tabular data. If your format in a class inheriting from :class:`Format` does not need this rule, don't override this method, as it will by default return the empty string, and hence no rule gets added to the table. 
Parameters ---------- column_widths : :class:`list` (optional) list of column widths Returns ------- rule : class:`str` Actual rule that gets added to the table output Default: '' """ return '' # noinspection PyUnusedLocal,PyMethodMayBeStatic def middle_rule(self, column_widths=None): """ Create middle rule for table. Tables usually have three types of rules: top rule, middle rule, and bottom rule. The middle rule gets used to separate column headers from the actual tabular data. If your format in a class inheriting from :class:`Format` does not need this rule, don't override this method, as it will by default return the empty string, and hence no rule gets added to the table. Parameters ---------- column_widths : :class:`list` (optional) list of column widths Returns ------- rule : class:`str` Actual rule that gets added to the table output Default: '' """ return '' # noinspection PyUnusedLocal,PyMethodMayBeStatic def bottom_rule(self, column_widths=None): """ Create bottom rule for table. Tables usually have three types of rules: top rule, middle rule, and bottom rule. The middle rule gets used to separate column headers from the actual tabular data. If your format in a class inheriting from :class:`Format` does not need this rule, don't override this method, as it will by default return the empty string, and hence no rule gets added to the table. Parameters ---------- column_widths : :class:`list` (optional) list of column widths Returns ------- rule : class:`str` Actual rule that gets added to the table output Default: '' """ return '' # noinspection PyUnusedLocal,PyMethodMayBeStatic def opening(self, columns=None, caption=None): """ Create opening code. Some formats have opening (and closing, see :meth:`closing`) parts, *e.g.* opening and closing tags in XML and related languages, but in LaTeX as well. Furthermore, table captions are usually set above the table, and if your table has a caption with content, this caption will be output as well. In its simplest form, as implemented here, caption title and caption text will be concatenated and wrapped using :func:`textwrap.wrap`, and an empty line added after the caption to separate it from the actual table. Thus, your table captions are output together with your table in simple text format. Override this method according to your needs for your particular format. Parameters ---------- columns : :class:`int` (optional) number of columns of the table caption : :class:`Caption` (optional) table caption For details, see the :class:`Caption` class documentation. Only if one of the properties of :class:`Caption` contains content, the caption will be considered. Returns ------- opening : :class:`str` Code for opening the environment Default: '' """ opening = '' if caption and (caption.title or caption.text): opening = self._caption_content(caption) return opening @staticmethod def _caption_content(caption=None): caption_text = ' '.join([caption.title, caption.text]).rstrip() return '\n'.join(textwrap.wrap(caption_text)) + '\n' # noinspection PyMethodMayBeStatic def closing(self, caption=None): """ Create closing code. Some formats have opening (see :meth:`opening`) and closing parts, *e.g.* opening and closing tags in XML and related languages, but in LaTeX as well. If your format in a class inheriting from :class:`Format` does not need this code, don't override this method, as it will by default return the empty string, and hence no code gets added to the table. 
Parameters ---------- caption : :class:`Caption` (optional) table caption For details, see the :class:`Caption` class documentation. Only if one of the properties of :class:`Caption` contains content, the caption will be considered. Having a caption requires some formats to create an additional container surrounding the actual table. Returns ------- closing : :class:`str` Code for closing the environment Default: '' """ return '' class TextFormat(Format): """ Table formatter for textual output. With its default settings, the table would be surrounded by a grid, such as: .. code-block:: +-----+-----+-----+ | foo | bar | baz | +-----+-----+-----+ | 1.0 | 1.1 | 1.2 | | 2.0 | 2.1 | 2.2 | +-----+-----+-----+ Attributes ---------- rule_character : :class:`str` Character used for drawing horizontal lines (rules) rule_edge_character : :class:`str` Character used for the edges of horizontal lines (rules) rule_separator_character : :class:`str` Character used for the column separators of horizontal lines (rules) .. versionadded:: 0.5 """ def __init__(self): super().__init__() self.padding = 1 self.rule_character = '-' self.rule_edge_character = '+' self.rule_separator_character = '+' self.column_separator = '|' self.column_prefix = '|' self.column_postfix = '|' self.header_separator = '|' self.header_prefix = '|' self.header_postfix = '|' def top_rule(self, column_widths=None): """ Create top rule for table. Tables usually have three types of rules: top rule, middle rule, and bottom rule. The middle rule gets used to separate column headers from the actual tabular data. The rule gets constructed according to this overall scheme: * Use the :attr:`rule_character` for the rule * Use the :attr:`rule_separator_character` for the gaps between columns * Use the :attr:`rule_edge_character` for beginning and end of the rule * Use the :attr:`padding` information to add horizontal space in a cell Parameters ---------- column_widths : :class:`list` List of column widths Returns ------- rule : class:`str` Actual rule that gets added to the table output """ segments = [] for width in column_widths: segments.append((width + 2 * self.padding) * self.rule_character) rule = self.rule_separator_character.join(segments) return '{edge}{rule}{edge}'.format(edge=self.rule_edge_character, rule=rule) def middle_rule(self, column_widths=None): """ Create middle rule for table. Here, the middle rule is identical to the :meth:`top_rule`. See there for details how the rule is constructed. Parameters ---------- column_widths : :class:`list` List of column widths Returns ------- rule : class:`str` Actual rule that gets added to the table output """ return self.top_rule(column_widths=column_widths) def bottom_rule(self, column_widths=None): """ Create bottom rule for table. Here, the middle rule is identical to the :meth:`top_rule`. See there for details how the rule is constructed. Parameters ---------- column_widths : :class:`list` List of column widths Returns ------- rule : class:`str` Actual rule that gets added to the table output """ return self.top_rule(column_widths=column_widths) class RstFormat(TextFormat): """ Table formatter for reStructuredText (rst) output. This formatter actually uses the simple format for rst tables, such as: .. code-block:: rst === === === foo bar baz === === === 1.0 1.1 1.2 2.0 2.1 2.2 === === === The above code would result in: === === === foo bar baz === === === 1.0 1.1 1.2 2.0 2.1 2.2 === === === .. 
versionadded:: 0.5 """ def __init__(self): super().__init__() self.padding = 0 self.rule_character = '=' self.rule_edge_character = '' self.rule_separator_character = ' ' self.column_separator = ' ' self.column_prefix = '' self.column_postfix = '' self.header_separator = ' ' self.header_prefix = '' self.header_postfix = '' class DokuwikiFormat(Format): """ Table formatter for DokuWiki output. For details about the syntax, see the `DokuWiki syntax <https://www.dokuwiki.org/wiki:syntax #tables>`_ documentation. An example of a table in DokuWiki syntax could look like this: .. code-block:: ^ foo ^ bar ^ baz ^ | 1.0 | 1.1 | 1.2 | | 2.0 | 2.1 | 2.2 | And in case of both, column headers and row indices, this would even convert to: .. code-block:: | ^ foo ^ bar ^ baz ^ ^ foo | 1.0 | 1.0 | 1.0 | ^ bar | 1.0 | 1.0 | 1.0 | ^ baz | 1.0 | 1.0 | 1.0 | .. versionadded:: 0.5 """ def __init__(self): super().__init__() self.padding = 1 self.column_separator = '|' self.column_prefix = '|' self.column_postfix = '|' self.header_separator = '^' self.header_prefix = '^' self.header_postfix = '^' def opening(self, columns=None, caption=None): """ Create opening code. In case of DokuWiki, this is usually empty, except in cases where you have added a caption. In the latter case, code consistent with the DokuWiki caption plugin will be output, like so: .. code-block:: xml <table> <caption>*Caption title* Caption text</caption> To make this work in your DokuWiki, make sure to have the caption plugin installed. Parameters ---------- columns : :class:`int` Number of columns of the table caption : :class:`Caption` (optional) table caption For details, see the :class:`Caption` class documentation. Only if one of the properties of :class:`Caption` contains content, the caption will be considered. Having a caption requires DokuWiki to create an additional table environment surrounding the actual table. As this needs to be closed, the closing needs to have the information regarding the caption. Returns ------- opening : :class:`str` Code for opening the environment """ opening = '' if caption and (caption.title or caption.text): opening = '\n'.join([ '<table>', '<caption>{}</caption>'.format(self._caption_string(caption)) ]) return opening @staticmethod def _caption_string(caption=None): caption_string = [] if caption.title: caption_string.append('*{}*'.format(caption.title)) caption_string.append(caption.text) return ' '.join(caption_string).rstrip() def closing(self, caption=None): """ Create closing code. In case of DokuWiki, this is usually empty, except in cases where you have added a caption. In the latter case, code consistent with the DokuWiki caption plugin will be output, like so: .. code-block:: xml </table> To make this work in your DokuWiki, make sure to have the caption plugin installed. Parameters ---------- caption : :class:`Caption` (optional) table caption For details, see the :class:`Caption` class documentation. Only if one of the properties of :class:`Caption` contains content, the caption will be considered. Having a caption requires DokuWiki to create an additional table environment surrounding the actual table. As this needs to be closed, the closing needs to have the information regarding the caption. Returns ------- closing : :class:`str` Code for closing the environment """ closing = '' if caption and (caption.title or caption.text): closing = '</table>' return closing class LatexFormat(Format): r""" Table formatter for LaTeX output. 
Results in a rather generic LaTeX table, and the goal of this formatter is to provide valid LaTeX code without trying to go into too many details of all the possibilities of LaTeX table formatting. .. note:: The format requires the package "booktabs" to be loaded, as the horizontal rules defined by this package are automatically added to the LaTeX output. An example of the LaTeX code of a table may look as follows: .. code-block:: \begin{tabular}{lll} \toprule foo & bar & baz \\ \midrule 1.0 & 1.1 & 1.2 \\ 2.0 & 2.1 & 2.2 \\ \bottomrule \end{tabular} .. versionadded:: 0.5 """ def __init__(self): super().__init__() self.padding = 0 self.column_separator = ' & ' self.column_prefix = '' self.column_postfix = r' \\' self.header_separator = ' & ' self.header_prefix = '' self.header_postfix = r' \\' def top_rule(self, column_widths=None): """ Create top rule for table. Tables usually have three types of rules: top rule, middle rule, and bottom rule. The middle rule gets used to separate column headers from the actual tabular data. Parameters ---------- column_widths : :class:`list` Ignored in this particular case Returns ------- rule : class:`str` Actual rule that gets added to the table output """ return r'\toprule' def middle_rule(self, column_widths=None): """ Create middle rule for table. Tables usually have three types of rules: top rule, middle rule, and bottom rule. The middle rule gets used to separate column headers from the actual tabular data. Parameters ---------- column_widths : :class:`list` Ignored in this particular case Returns ------- rule : class:`str` Actual rule that gets added to the table output """ return r'\midrule' def bottom_rule(self, column_widths=None): """ Create bottom rule for table. Tables usually have three types of rules: top rule, middle rule, and bottom rule. The middle rule gets used to separate column headers from the actual tabular data. Parameters ---------- column_widths : :class:`list` Ignored in this particular case Returns ------- rule : class:`str` Actual rule that gets added to the table output """ return r'\bottomrule' def opening(self, columns=None, caption=None): r""" Create opening code. In case of LaTeX, this is usually: .. code-block:: \begin{tabular}{<column-specification>} As this class strives for a rather generic, though valid LaTeX code, the column specification is simply 'l' times the number of columns (for exclusively left-aligned columns). Parameters ---------- columns : :class:`int` Number of columns of the table caption : :class:`Caption` (optional) table caption For details, see the :class:`Caption` class documentation. Only if one of the properties of :class:`Caption` contains content, the caption will be considered. Having a caption requires LaTeX to create an additional table environment surrounding the actual table. As this needs to be closed, the closing needs to have the information regarding the caption. Returns ------- opening : :class:`str` Code for opening the environment """ opening = r'\begin{tabular}{' + columns * 'l' + r'}' if caption and (caption.title or caption.text): opening = '\n'.join([ r'\begin{table}', r'\caption{' + self._caption_string(caption=caption) + r'}', opening]) return opening @staticmethod def _caption_string(caption): caption_string = [] if caption.title: caption_string.append(r'\textbf{' + caption.title + r'}') caption_string.append(caption.text) return ' '.join(caption_string).rstrip() def closing(self, caption=None): r""" Create closing code. In case of LaTeX, this is usually: .. 
code-block:: \end{tabular} Parameters ---------- caption : :class:`Caption` (optional) table caption For details, see the :class:`Caption` class documentation. Only if one of the properties of :class:`Caption` contains content, the caption will be considered. Having a caption requires LaTeX to create an additional table environment surrounding the actual table. As this needs to be closed, the closing needs to have the information regarding the caption. Returns ------- closing : :class:`str` Code for closing the environment """ closing = r'\end{tabular}' if caption and (caption.title or caption.text): closing = '\n'.join([closing, r'\end{table}']) return closing class Caption(utils.Properties): """ Caption for tables. Attributes ---------- title: :class:`str` usually one sentence describing the intent of the table Often plotted bold-face in a table caption. text: :class:`str` additional text directly following the title Contains more information about the table. Ideally, a table caption is self-contained such that it explains the table sufficiently to understand its intent and content without needing to read all the surrounding text. .. versionadded:: 0.5 """ def __init__(self): super().__init__() self.title = '' self.text = ''
{"hexsha": "b2219045fe1211add4209f6ac22423e974338540", "size": 44273, "ext": "py", "lang": "Python", "max_stars_repo_path": "aspecd/table.py", "max_stars_repo_name": "tillbiskup/aspecd", "max_stars_repo_head_hexsha": "5c7d7ceb9ec3eb97d01348c0495adc999c7af78a", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-16T05:26:12.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-27T18:08:22.000Z", "max_issues_repo_path": "aspecd/table.py", "max_issues_repo_name": "tillbiskup/aspecd", "max_issues_repo_head_hexsha": "5c7d7ceb9ec3eb97d01348c0495adc999c7af78a", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "aspecd/table.py", "max_forks_repo_name": "tillbiskup/aspecd", "max_forks_repo_head_hexsha": "5c7d7ceb9ec3eb97d01348c0495adc999c7af78a", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.9749492214, "max_line_length": 80, "alphanum_fraction": 0.6101687259, "include": true, "reason": "import numpy", "num_tokens": 10248}
import cirq
import numpy as np
import pytest

from zquantum.core.circuits import GateOperation, import_from_cirq
from zquantum.core.decompositions import (
    PowerGateToPhaseAndRotation,
    decompose_cirq_circuit,
)


class TestDecompositionOfPowerGates:
    @pytest.mark.parametrize("target_qubit", [cirq.LineQubit(0), cirq.LineQubit(2)])
    @pytest.mark.parametrize(
        "gate_to_be_decomposed, decomposition_rules",
        [
            (
                cirq.XPowGate(exponent=0.1),
                [PowerGateToPhaseAndRotation(cirq.XPowGate)],
            ),
            (
                cirq.XPowGate(exponent=0.1, global_shift=0.2),
                [PowerGateToPhaseAndRotation(cirq.XPowGate)],
            ),
            (
                cirq.XPowGate(exponent=0.1),
                [PowerGateToPhaseAndRotation(cirq.XPowGate, cirq.YPowGate)],
            ),
            (
                cirq.YPowGate(exponent=0.1),
                [PowerGateToPhaseAndRotation(cirq.YPowGate)],
            ),
            (
                cirq.YPowGate(exponent=0.1, global_shift=0.2),
                [PowerGateToPhaseAndRotation(cirq.YPowGate)],
            ),
        ],
    )
    def test_gives_the_same_unitary_as_original_gate(
        self, gate_to_be_decomposed, decomposition_rules, target_qubit
    ):
        circuit = cirq.Circuit([gate_to_be_decomposed.on(target_qubit)])

        decomposed_circuit = decompose_cirq_circuit(circuit, decomposition_rules)

        np.testing.assert_almost_equal(
            cirq.unitary(circuit), cirq.unitary(decomposed_circuit)
        )

    @pytest.mark.parametrize(
        "decomposition_rule, operation",
        [
            (
                PowerGateToPhaseAndRotation(cirq.XPowGate),
                cirq.rx(0.5).on(cirq.LineQubit(0)),
            ),
            (
                PowerGateToPhaseAndRotation(cirq.YPowGate),
                cirq.ry(0.5).on(cirq.LineQubit(4)),
            ),
            (
                PowerGateToPhaseAndRotation(cirq.XPowGate, cirq.YPowGate),
                cirq.rx(0.5).on(cirq.LineQubit(0)),
            ),
            (
                PowerGateToPhaseAndRotation(cirq.XPowGate, cirq.YPowGate),
                cirq.ry(0.5).on(cirq.LineQubit(4)),
            ),
        ],
    )
    def test_does_not_decompose_usual_rotation_gates(
        self, decomposition_rule, operation
    ):
        circuit = cirq.Circuit([operation])

        decomposed_circuit = decompose_cirq_circuit(circuit, [decomposition_rule])

        assert list(circuit.all_operations()) == list(
            decomposed_circuit.all_operations()
        )

    @pytest.mark.parametrize(
        "decomposition_rule, operations",
        [
            (
                PowerGateToPhaseAndRotation(cirq.XPowGate),
                [cirq.YPowGate(exponent=0.1).on(cirq.LineQubit(1))],
            ),
            (
                PowerGateToPhaseAndRotation(cirq.XPowGate),
                [
                    cirq.X.on(cirq.LineQubit(3)),
                    cirq.Y.on(cirq.LineQubit(1)),
                    cirq.Z.on(cirq.LineQubit(0)),
                ],
            ),
            (
                PowerGateToPhaseAndRotation(cirq.XPowGate),
                [
                    (cirq.X ** 1).on(cirq.LineQubit(3)),
                    (cirq.Y ** 2).on(cirq.LineQubit(1)),
                    (cirq.Z ** 1).on(cirq.LineQubit(0)),
                ],
            ),
            (
                PowerGateToPhaseAndRotation(cirq.XPowGate, cirq.YPowGate),
                [cirq.CNOT.on(cirq.LineQubit(3), cirq.LineQubit(11))],
            ),
        ],
    )
    def test_leaves_gates_not_matching_predicate_unaffected(
        self, decomposition_rule, operations
    ):
        circuit = cirq.Circuit(operations)

        decomposed_circuit = decompose_cirq_circuit(circuit, [decomposition_rule])

        assert list(circuit.all_operations()) == list(
            decomposed_circuit.all_operations()
        )

    @pytest.mark.parametrize(
        "gate_to_be_decomposed",
        [
            cirq.XPowGate(exponent=0.1),
            cirq.XPowGate(exponent=0.1, global_shift=0.2),
            cirq.XPowGate(exponent=0.1),
            cirq.YPowGate(exponent=0.1),
            cirq.YPowGate(exponent=0.1, global_shift=0.2),
        ],
    )
    @pytest.mark.parametrize("target_qubit", [cirq.LineQubit(0), cirq.LineQubit(2)])
    def test_comprises_only_phase_pauli_and_rotations(
        self, gate_to_be_decomposed, target_qubit
    ):
        cirq_circuit = cirq.Circuit([gate_to_be_decomposed.on(target_qubit)])

        zquantum_circuit = import_from_cirq(
            decompose_cirq_circuit(
                cirq_circuit,
                [PowerGateToPhaseAndRotation(cirq.XPowGate, cirq.YPowGate)],
            )
        )

        assert all(
            isinstance(op, GateOperation)
            and op.gate.name in ("X", "PHASE", "RX", "RY")
            for op in zquantum_circuit.operations
        )

    def test_accepts_only_xpowgate_or_ypowgate_in_initializer_argument(self):
        with pytest.raises(ValueError):
            PowerGateToPhaseAndRotation(cirq.ZPowGate, cirq.XPowGate)

    def test_requires_at_least_one_gate_class_in_initializer_argument(self):
        with pytest.raises(ValueError):
            PowerGateToPhaseAndRotation()
{"hexsha": "db7a7f5617734716cc219948fcf80bd085a04158", "size": 5326, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/zquantum/core/_cirq_decomposition_test.py", "max_stars_repo_name": "yukiizm/z-quantum-core", "max_stars_repo_head_hexsha": "c96804d9f0a35e1dde150db21b9e0e91a54f449f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tests/zquantum/core/_cirq_decomposition_test.py", "max_issues_repo_name": "yukiizm/z-quantum-core", "max_issues_repo_head_hexsha": "c96804d9f0a35e1dde150db21b9e0e91a54f449f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tests/zquantum/core/_cirq_decomposition_test.py", "max_forks_repo_name": "yukiizm/z-quantum-core", "max_forks_repo_head_hexsha": "c96804d9f0a35e1dde150db21b9e0e91a54f449f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.0394736842, "max_line_length": 88, "alphanum_fraction": 0.576229816, "include": true, "reason": "import numpy", "num_tokens": 1290}
import numpy as np
import matplotlib.pyplot as plt

from autograd import Tensor, Module
from autograd.optim import SGD, Adam
from autograd.module import Linear
from autograd.activation import Sigmoid, Tanh


def xor_gate(a, b):
    assert isinstance(a, int) and isinstance(b, int)
    if a != b:
        return 1
    else:
        return 0


class model(Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = Linear(in_features, 5)
        self.linear2 = Linear(5, out_features)
        self.act = Tanh()
        self.act2 = Tanh()

    def forward(self, x):
        x = self.linear(x)
        x = self.act(x)
        x = self.linear2(x)
        x = self.act2(x)
        return x


def mse(output, label):
    return (label - output) ** 2


if __name__ == '__main__':
    pairs = [[np.random.randint(0, 2) for _ in range(2)] for i in range(1000)]
    labels = [[xor_gate(a, b)] for a, b in pairs]
    x_train = Tensor(pairs)
    y_train = Tensor(labels)

    net = model(2, 1)
    #print(list(net.parameters()))
    optimizer = Adam(net.parameters(), lr=0.01)
    batch_size = 32
    print(x_train.shape, y_train.shape)

    history = []
    starts = np.arange(0, x_train.shape[0], batch_size)
    for epoch in range(200):
        epoch_loss = 0.0
        np.random.shuffle(starts)
        for start in starts:
            end = start + batch_size
            optimizer.zero_grad()
            inputs = x_train[start:end]
            predicted = net(inputs)
            actual = y_train[start:end]
            errors = predicted - actual
            loss = (errors * errors).sum()
            loss.backward()
            epoch_loss += loss.data
            optimizer.step()
        history.append(epoch_loss)
        #print(epoch, epoch_loss)

    plt.plot(history[10:])
    plt.title('loss')
    plt.show()
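    # A minimal follow-up sketch, assuming the same `net` and `Tensor` objects
    # defined in the training block above: the script plots the training loss
    # but never inspects the learned predictions. Reading values via `.data`
    # is an assumption based on how the loss is accumulated in the loop.
    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        prediction = net(Tensor([[a, b]]))
        print(a, b, '->', prediction.data)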
{"hexsha": "2f21407fc259f60cb5b4e66ccb8b5b46f3074322", "size": 1878, "ext": "py", "lang": "Python", "max_stars_repo_path": "simple_neural_net.py", "max_stars_repo_name": "ly49nkallo/AdvancedTopics2021", "max_stars_repo_head_hexsha": "f729b16c3f7a5a6de64131039943977db97fc57c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "simple_neural_net.py", "max_issues_repo_name": "ly49nkallo/AdvancedTopics2021", "max_issues_repo_head_hexsha": "f729b16c3f7a5a6de64131039943977db97fc57c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "simple_neural_net.py", "max_forks_repo_name": "ly49nkallo/AdvancedTopics2021", "max_forks_repo_head_hexsha": "f729b16c3f7a5a6de64131039943977db97fc57c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.04, "max_line_length": 77, "alphanum_fraction": 0.5841320554, "include": true, "reason": "import numpy", "num_tokens": 466}
export mean_shift

using mlpack._Internal.io

import mlpack_jll
const mean_shiftLibrary = mlpack_jll.libmlpack_julia_mean_shift

# Call the C binding of the mlpack mean_shift binding.
function mean_shift_mlpackMain()
  success = ccall((:mean_shift, mean_shiftLibrary), Bool, ())
  if !success
    # Throw an exception---false means there was a C++ exception.
    throw(ErrorException("mlpack binding error; see output"))
  end
end

" Internal module to hold utility functions. "
module mean_shift_internal
  import ..mean_shiftLibrary
end # module

"""
    mean_shift(input; [force_convergence, in_place, labels_only, max_iterations, radius, verbose])

This program performs mean shift clustering on the given dataset, storing the
learned cluster assignments either as a column of labels in the input dataset
or separately.

The input dataset should be specified with the `input` parameter, and the
radius used for search can be specified with the `radius` parameter.  The
maximum number of iterations before algorithm termination is controlled with
the `max_iterations` parameter.

The output labels may be saved with the `output` output parameter and the
centroids of each cluster may be saved with the `centroid` output parameter.

For example, to run mean shift clustering on the dataset `data` and store the
centroids to `centroids`, the following command may be used:

```julia
julia> using CSV
julia> data = CSV.read("data.csv")
julia> centroids, _ = mean_shift(data)
```

# Arguments

 - `input::Array{Float64, 2}`: Input dataset to perform clustering on.
 - `force_convergence::Bool`: If specified, the mean shift algorithm will
      continue running regardless of max_iterations until the clusters
      converge.  Default value `false`.
 - `in_place::Bool`: If specified, a column containing the learned cluster
      assignments will be added to the input dataset file.  In this case,
      --output_file is overridden.  (Do not use with Python.)  Default value
      `false`.
 - `labels_only::Bool`: If specified, only the output labels will be written
      to the file specified by --output_file.  Default value `false`.
 - `max_iterations::Int`: Maximum number of iterations before mean shift
      terminates.  Default value `1000`.
 - `radius::Float64`: If the distance between two centroids is less than the
      given radius, one will be removed.  A radius of 0 or less means an
      estimate will be calculated and used for the radius.  Default value `0`.
 - `verbose::Bool`: Display informational messages and the full list of
      parameters and timers at the end of execution.  Default value `false`.

# Return values

 - `centroid::Array{Float64, 2}`: If specified, the centroids of each cluster
      will be written to the given matrix.
 - `output::Array{Float64, 2}`: Matrix to write output labels or labeled data
      to.

"""
function mean_shift(input;
                    force_convergence::Union{Bool, Missing} = missing,
                    in_place::Union{Bool, Missing} = missing,
                    labels_only::Union{Bool, Missing} = missing,
                    max_iterations::Union{Int, Missing} = missing,
                    radius::Union{Float64, Missing} = missing,
                    verbose::Union{Bool, Missing} = missing,
                    points_are_rows::Bool = true)
  # Force the symbols to load.
  ccall((:loadSymbols, mean_shiftLibrary), Nothing, ());

  IORestoreSettings("Mean Shift Clustering")

  # Process each input argument before calling mlpackMain().
  IOSetParamMat("input", input, points_are_rows)
  if !ismissing(force_convergence)
    IOSetParam("force_convergence", convert(Bool, force_convergence))
  end
  if !ismissing(in_place)
    IOSetParam("in_place", convert(Bool, in_place))
  end
  if !ismissing(labels_only)
    IOSetParam("labels_only", convert(Bool, labels_only))
  end
  if !ismissing(max_iterations)
    IOSetParam("max_iterations", convert(Int, max_iterations))
  end
  if !ismissing(radius)
    IOSetParam("radius", convert(Float64, radius))
  end
  if verbose !== nothing && verbose === true
    IOEnableVerbose()
  else
    IODisableVerbose()
  end

  IOSetPassed("centroid")
  IOSetPassed("output")
  # Call the program.
  mean_shift_mlpackMain()

  return IOGetParamMat("centroid", points_are_rows),
         IOGetParamMat("output", points_are_rows)
end
{"hexsha": "be5fb433c3f24d7870634962a6e0dd0f2bfc009c", "size": 4397, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/mean_shift.jl", "max_stars_repo_name": "mlpack/mlpack.jl", "max_stars_repo_head_hexsha": "7c499f056dc46ee54a2be6da1beb3066f40cbf09", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-04-24T18:12:57.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-10T03:18:11.000Z", "max_issues_repo_path": "src/mean_shift.jl", "max_issues_repo_name": "mlpack/mlpack.jl", "max_issues_repo_head_hexsha": "7c499f056dc46ee54a2be6da1beb3066f40cbf09", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-02-20T16:35:48.000Z", "max_issues_repo_issues_event_max_datetime": "2020-02-28T20:20:08.000Z", "max_forks_repo_path": "src/mean_shift.jl", "max_forks_repo_name": "mlpack/mlpack.jl", "max_forks_repo_head_hexsha": "7c499f056dc46ee54a2be6da1beb3066f40cbf09", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-02-20T16:32:46.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-01T05:19:22.000Z", "avg_line_length": 34.8968253968, "max_line_length": 98, "alphanum_fraction": 0.7127586991, "num_tokens": 1031}
##############################################################################
# Copyright 2020 IBM Corp. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##############################################################################

import pandas as pd
from . import DFPBase
from sklearn.preprocessing import LabelEncoder
import copy
import numpy as np
import onnx
from onnx import helper
from onnx import AttributeProto, TensorProto, GraphProto


class ComplementLabelEncoder(DFPBase):
    """
    Encode categorical (string) values into numerical values.

    Parameters
    ----------
    inputs : List of strings
        Each string is an input column label.

    outputs : List of strings
        Each string is an output column label.
    """
    def __init__(
            self,
            inputs=DFPBase._PARM_ALL,
            outputs=DFPBase._PARM_ALL,
    ):
        self.inputs = inputs
        self.outputs = outputs
        self.maps = []
        self.vals = []

    def __fit(self, X, encoder):
        # Treat missing values as an explicit extra category before fitting.
        label_col = X.map(lambda x: 'extra_category_' if str(x) == 'nan' else x).astype('str')
        encoder.fit(label_col)
        if 'extra_category_' not in encoder.classes_:
            encoder.classes_ = list(encoder.classes_) + ['extra_category_']
        m = {encoder.classes_[i]: i for i in range(len(encoder.classes_))}
        return m, m['extra_category_']

    def fit(self, df):
        self.maps.clear()
        self.vals.clear()
        self.inputs = DFPBase.replace_PARM_ALL(df, self.inputs)
        self.outputs = DFPBase.replace_PARM_ALL(df, self.outputs)
        for input in self.inputs:
            m, v = self.__fit(df[input], LabelEncoder())
            self.maps.append(m)
            self.vals.append(v)
        return self

    def __transform(self, X, m, v):
        if str(X.dtype) == 'category':
            X = X.cat.add_categories(['extra_category_'])
        # Unseen values map to NaN and then fall back to the extra category index.
        return X.fillna('extra_category_').map(m).fillna(v).astype('int32')

    def transform(self, df):
        self.inputs = DFPBase.replace_PARM_ALL(df, self.inputs)
        self.outputs = DFPBase.replace_PARM_ALL(df, self.outputs)
        for input, output, m, v in zip(self.inputs, self.outputs, self.maps, self.vals):
            df[output] = self.__transform(df[input], m, v)
        return df

    def to_onnx_operator(self, graph):
        for input_column, output_column, m, v in zip(self.inputs, self.outputs, self.maps, self.vals):
            input_tensor = graph.get_current_tensor(input_column)
            output_tensor = graph.get_next_tensor(output_column, TensorProto.INT64)
            assert input_tensor.type == TensorProto.STRING
            kwargs = {}
            keys = []
            for key in m.keys():
                key = key.replace('nan', 'NaN')
                keys.append(key)
            vals = list(m.values())
            kwargs['keys_strings'] = keys
            kwargs['values_int64s'] = vals
            kwargs['default_int64'] = v
            graph.add([input_tensor], [output_tensor],
                      [helper.make_node('LabelEncoder',
                                        [input_tensor.name],
                                        [output_tensor.name],
                                        graph.get_node_name('LabelEncoder'),
                                        domain='ai.onnx.ml',
                                        **kwargs)])
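if __name__ == '__main__':
    # A minimal usage sketch for the encoder above. The column names are made
    # up for illustration, and it is assumed that list-valued `inputs`/`outputs`
    # pass through DFPBase.replace_PARM_ALL unchanged. Values unseen at fit time
    # ('green') and missing values both fall back to the reserved
    # 'extra_category_' index.
    train = pd.DataFrame({'color': ['red', 'blue', np.nan, 'red']})
    test = pd.DataFrame({'color': ['blue', 'green', np.nan]})

    enc = ComplementLabelEncoder(inputs=['color'], outputs=['color_encoded'])
    enc.fit(train)
    print(enc.transform(test)['color_encoded'].tolist())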
{"hexsha": "e2423ccfa452a4209573e6892a30c2ad9663f388", "size": 3711, "ext": "py", "lang": "Python", "max_stars_repo_path": "dfpipeline/ComplementLabelEncoder.py", "max_stars_repo_name": "IBM/dataframe-pipeline", "max_stars_repo_head_hexsha": "44bb4efc77ca36022ef2d54cba4d77825111841f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-02-27T02:39:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-13T15:52:08.000Z", "max_issues_repo_path": "dfpipeline/ComplementLabelEncoder.py", "max_issues_repo_name": "IBM/dataframe-pipeline", "max_issues_repo_head_hexsha": "44bb4efc77ca36022ef2d54cba4d77825111841f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-02-26T02:40:27.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-26T03:24:30.000Z", "max_forks_repo_path": "dfpipeline/ComplementLabelEncoder.py", "max_forks_repo_name": "IBM/dataframe-pipeline", "max_forks_repo_head_hexsha": "44bb4efc77ca36022ef2d54cba4d77825111841f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.4848484848, "max_line_length": 201, "alphanum_fraction": 0.6106170843, "include": true, "reason": "import numpy", "num_tokens": 824}
# -------------------------------------------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the MIT License. # -------------------------------------------------------------------------------------------- import os import unittest import numpy as np import pandas as pd from nimbusml import Pipeline, FileDataStream from nimbusml.datasets import get_dataset from nimbusml.feature_extraction.categorical import OneHotVectorizer from nimbusml.linear_model import LogisticRegressionBinaryClassifier, OnlineGradientDescentRegressor from nimbusml.preprocessing.filter import RangeFilter seed = 0 train_data = {'c0': ['a', 'b', 'a', 'b'], 'c1': [1, 2, 3, 4], 'c2': [2, 3, 4, 5]} train_df = pd.DataFrame(train_data).astype({'c1': np.float64, 'c2': np.float64}) test_data = {'c0': ['a', 'b', 'b'], 'c1': [1.5, 2.3, 3.7], 'c2': [2.2, 4.9, 2.7]} test_df = pd.DataFrame(test_data).astype({'c1': np.float64, 'c2': np.float64}) class TestPipelineCombining(unittest.TestCase): def test_two_pipelines_created_using_dataframes_can_not_be_combined_when_the_schemas_are_different(self): """ This test verifies that two models created using DataFrames can not be combined if the output schema of the first is different then the input schema of the second. NOTE: This issue only happens with Pipelines created and fit using dataframes. Pipelines created and fit using IDV binary streams do not have this issue (see the tests below). """ # Create and fit a OneHotVectorizer transform using the # training data and use it to transform the training data. transform_pipeline = Pipeline([OneHotVectorizer() << 'c0'], random_state=seed) transform_pipeline.fit(train_df) df = transform_pipeline.transform(train_df) # Create and fit an OnlineGradientDescentRegressor using # the transformed training data from the previous step. predictor_pipeline = Pipeline([OnlineGradientDescentRegressor(label='c2')], random_state=seed) predictor_pipeline.fit(df) # Perform a prediction given the test data using # the transform and predictor defined previously. df = transform_pipeline.transform(test_df) result_1 = predictor_pipeline.predict(df) try: # This does not work because the output schema of the combined_pipeline = Pipeline.combine_models(transform_pipeline, predictor_pipeline) except Exception as e: pass else: self.fail() def test_two_pipelines_created_using_dataframes_can_be_combined_when_the_schemas_are_the_same(self): """ This test verifies that two models created using DataFrames can be combined if the output schema of the first is the same as the input schema of the second. """ df = train_df.drop(['c0'], axis=1) # Create and fit a RangeFilter transform using the training # data and use it to transform the training data. transform_pipeline = Pipeline([RangeFilter(min=0.0, max=4.5) << 'c2'], random_state=seed) transform_pipeline.fit(df) df = transform_pipeline.transform(df) # Create and fit an OnlineGradientDescentRegressor using # the transformed training data from the previous step. predictor_pipeline = Pipeline([OnlineGradientDescentRegressor(label='c2')], random_state=seed) predictor_pipeline.fit(df) # Perform a prediction given the test data using # the transform and predictor defined previously. df = transform_pipeline.transform(test_df) result_1 = predictor_pipeline.predict(df) df = test_df.drop(['c0'], axis=1) # Combine the above Pipelines in to one Pipeline and use # the new Pipeline to get predictions given the test data. 
combined_pipeline = Pipeline.combine_models(transform_pipeline, predictor_pipeline) result_2 = combined_pipeline.predict(df) # Verify that the prediction from the combined Pipeline # matches the prediction from the original two Pipelines. self.assertEqual(result_1.loc[0, 'Score'], result_2.loc[0, 'Score']) self.assertEqual(result_1.loc[1, 'Score'], result_2.loc[1, 'Score']) def test_two_pipelines_created_using_idv_binary_data_can_be_combined_in_to_one_model(self): """ This test verifies that two models can be combined even if the transform increases the number of columns. """ # Create and fit a OneHotVectorizer transform using the # training data and use it to transform the training data. transform_pipeline = Pipeline([OneHotVectorizer() << 'c0'], random_state=seed) transform_pipeline.fit(train_df) df = transform_pipeline.transform(train_df, as_binary_data_stream=True) # Create and fit an OnlineGradientDescentRegressor using # the transformed training data from the previous step. predictor_pipeline = Pipeline([OnlineGradientDescentRegressor(label='c2', feature=['c0', 'c1'])], random_state=seed) predictor_pipeline.fit(df) # Perform a prediction given the test data using # the transform and predictor defined previously. df = transform_pipeline.transform(test_df, as_binary_data_stream=True) result_1 = predictor_pipeline.predict(df) # Combine the above Pipelines in to one Pipeline and use # the new Pipeline to get predictions given the test data. combined_pipeline = Pipeline.combine_models(transform_pipeline, predictor_pipeline) result_2 = combined_pipeline.predict(test_df) # Verify that the prediction from the combined Pipeline # matches the prediction from the original two Pipelines. self.assertEqual(result_1.loc[0, 'Score'], result_2.loc[0, 'Score']) self.assertEqual(result_1.loc[1, 'Score'], result_2.loc[1, 'Score']) def test_three_pipelines_created_using_idv_binary_data_can_be_combined_in_to_one_model(self): """ This test verifies that three models can be combined even if the transform increases the number of columns. """ # Create and fit a RangeFilter transform using the training # data and use it to transform the training data. transform_pipeline_1 = Pipeline([RangeFilter(min=0.0, max=4.5) << 'c2']) df = transform_pipeline_1.fit_transform(train_df, as_binary_data_stream=True) # Create and fit a OneHotVectorizer transform using # the transformed data from the previous step and use it # to transform the data from the previous step. transform_pipeline_2 = Pipeline([OneHotVectorizer() << 'c0'], random_state=seed) transform_pipeline_2.fit(df) df = transform_pipeline_2.transform(df, as_binary_data_stream=True) # Create and fit an OnlineGradientDescentRegressor using # the transformed training data from the previous step. predictor_pipeline = Pipeline([OnlineGradientDescentRegressor(label='c2', feature=['c0', 'c1'])], random_state=seed) predictor_pipeline.fit(df) # Perform a prediction given the test data using # the transforms and predictor defined previously. df = transform_pipeline_1.transform(test_df, as_binary_data_stream=True) df = transform_pipeline_2.transform(df, as_binary_data_stream=True) result_1 = predictor_pipeline.predict(df) # Combine the above Pipelines in to one Pipeline and use # the new Pipeline to get predictions given the test data. 
combined_pipeline = Pipeline.combine_models(transform_pipeline_1, transform_pipeline_2, predictor_pipeline) result_2 = combined_pipeline.predict(test_df) # Verify that the prediction from the combined Pipeline # matches the prediction from the original two Pipelines. self.assertEqual(result_1.loc[0, 'Score'], result_2.loc[0, 'Score']) self.assertEqual(result_1.loc[1, 'Score'], result_2.loc[1, 'Score']) def test_combine_two_pipelines_created_from_model_files(self): """ This test verifies that two models can be combined after they are loaded from disk in to new Pipelines. """ # Create and fit a OneHotVectorizer transform using the # training data and use it to transform the training data. transform_pipeline_1 = Pipeline([OneHotVectorizer() << 'c0'], random_state=seed) transform_pipeline_1.fit(train_df) df = transform_pipeline_1.transform(train_df, as_binary_data_stream=True) # Create and fit an OnlineGradientDescentRegressor using # the transformed training data from the previous step. predictor_pipeline_1 = Pipeline([OnlineGradientDescentRegressor(label='c2', feature=['c0', 'c1'])], random_state=seed) predictor_pipeline_1.fit(df) # Perform a prediction given the test data using # the transform and predictor defined previously. df = transform_pipeline_1.transform(test_df, as_binary_data_stream=True) result_1 = predictor_pipeline_1.predict(df) # Use the model files stored in the Pipelines # to create new Pipelines (aka. create new Pipelines # using the model files stored on disk). transform_pipeline_2 = Pipeline() transform_pipeline_2.load_model(transform_pipeline_1.model) predictor_pipeline_2 = Pipeline() predictor_pipeline_2.load_model(predictor_pipeline_1.model) # Combine the newly created Pipelines in to one Pipeline # and use it to get predictions given the test data. combined_pipeline = Pipeline.combine_models(transform_pipeline_2, predictor_pipeline_2) result_2 = combined_pipeline.predict(test_df) # Verify that the prediction from the combined Pipeline # matches the prediction from the original two Pipelines. 
self.assertEqual(result_1.loc[0, 'Score'], result_2.loc[0, 'Score']) self.assertEqual(result_1.loc[1, 'Score'], result_2.loc[1, 'Score']) def test_passing_in_a_single_transform_returns_new_pipeline(self): transform = OneHotVectorizer() << 'c0' transform.fit(train_df) combined_pipeline = Pipeline.combine_models(transform, contains_predictor=False) result = combined_pipeline.transform(test_df) self.assertEqual(len(result), 3) self.assertEqual(len(result.columns), 4) self.assertTrue(result.columns[0].startswith('c0.')) self.assertTrue(result.columns[1].startswith('c0.')) self.assertTrue(isinstance(combined_pipeline, Pipeline)) def test_passing_in_a_single_predictor_returns_new_pipeline(self): train_dropped_df = train_df.drop(['c0'], axis=1) test_dropped_df = test_df.drop(['c0'], axis=1) predictor = OnlineGradientDescentRegressor(label='c2', feature=['c1']) predictor.fit(train_dropped_df) result_1 = predictor.predict(test_dropped_df) combined_pipeline = Pipeline.combine_models(predictor) result_2 = combined_pipeline.predict(test_dropped_df) self.assertEqual(result_1[0], result_2.loc[0, 'Score']) self.assertEqual(result_1[1], result_2.loc[1, 'Score']) self.assertTrue(isinstance(combined_pipeline, Pipeline)) def test_passing_in_a_single_pipeline_returns_new_pipeline(self): pipeline = Pipeline([ OneHotVectorizer() << 'c0', OnlineGradientDescentRegressor(label='c2', feature=['c0', 'c1']) ]) pipeline.fit(train_df) result_1 = pipeline.predict(test_df) combined_pipeline = Pipeline.combine_models(pipeline) result_2 = combined_pipeline.predict(test_df) self.assertEqual(result_1.loc[0, 'Score'], result_2.loc[0, 'Score']) self.assertEqual(result_1.loc[1, 'Score'], result_2.loc[1, 'Score']) self.assertTrue(isinstance(combined_pipeline, Pipeline)) def test_combine_transform_and_transform(self): transform_1 = RangeFilter(min=0.0, max=4.5) << 'c2' df = transform_1.fit_transform(train_df) transform_2 = OneHotVectorizer() << 'c0' transform_2.fit(df) df = transform_1.transform(test_df) result_1 = transform_2.transform(df) combined_pipeline = Pipeline.combine_models(transform_1, transform_2, contains_predictor=False) result_2 = combined_pipeline.transform(test_df) self.assertTrue(result_1.equals(result_2)) def test_combine_transform_and_predictor(self): transform = OneHotVectorizer() << 'c0' df = transform.fit_transform(train_df, as_binary_data_stream=True) predictor = OnlineGradientDescentRegressor(label='c2', feature=['c0', 'c1']) predictor.fit(df) df = transform.transform(test_df, as_binary_data_stream=True) result_1 = predictor.predict(df) combined_pipeline = Pipeline.combine_models(transform, predictor) result_2 = combined_pipeline.predict(test_df) self.assertEqual(result_1[0], result_2.loc[0, 'Score']) self.assertEqual(result_1[1], result_2.loc[1, 'Score']) def test_combine_transform_and_pipeline(self): transform = RangeFilter(min=0.0, max=4.5) << 'c2' df = transform.fit_transform(train_df, as_binary_data_stream=True) pipeline = Pipeline([ OneHotVectorizer() << 'c0', OnlineGradientDescentRegressor(label='c2', feature=['c0', 'c1']) ]) pipeline.fit(df) df = transform.transform(test_df, as_binary_data_stream=True) result_1 = pipeline.predict(df) combined_pipeline = Pipeline.combine_models(transform, pipeline) result_2 = combined_pipeline.predict(test_df) self.assertTrue(result_1.equals(result_2)) def test_combine_with_classifier_trained_with_y_arg(self): """ Tests a sequence where the initial transform is computed using both X and y input args. 
Note, any steps after the initial transform will be operating on data where the X and y have been combined in to one dataset. """ np.random.seed(0) df = get_dataset("infert").as_df() X = df.loc[:, df.columns != 'case'] y = df['case'] transform = OneHotVectorizer() << 'education_str' # Passing in both X and y df = transform.fit_transform(X, y, as_binary_data_stream=True) # NOTE: need to specify the label column here because the # feature and label data was joined in the last step. predictor = LogisticRegressionBinaryClassifier(label='case', feature=list(X.columns)) predictor.fit(df) df = transform.transform(X, as_binary_data_stream=True) result_1 = predictor.predict(df) # Combine the models and perform a prediction combined_pipeline = Pipeline.combine_models(transform, predictor) result_2 = combined_pipeline.predict(X) result_2 = result_2['PredictedLabel'].astype(np.float64) self.assertTrue(result_1.equals(result_2)) def test_combine_with_classifier_trained_with_joined_X_and_y(self): np.random.seed(0) infert_df = get_dataset("infert").as_df() feature_cols = [c for c in infert_df.columns if c != 'case'] transform = OneHotVectorizer() << 'education_str' df = transform.fit_transform(infert_df, as_binary_data_stream=True) predictor = LogisticRegressionBinaryClassifier(label='case', feature=feature_cols) predictor.fit(df) df = transform.transform(infert_df, as_binary_data_stream=True) result_1 = predictor.predict(df) # Combine the models and perform a prediction combined_pipeline = Pipeline.combine_models(transform, predictor) result_2 = combined_pipeline.predict(infert_df) result_2 = result_2['PredictedLabel'].astype(np.float64) self.assertTrue(result_1.equals(result_2)) def test_combine_with_classifier_trained_with_filedatastream(self): path = get_dataset('infert').as_filepath() data = FileDataStream.read_csv(path) transform = OneHotVectorizer(columns={'edu': 'education'}) df = transform.fit_transform(data, as_binary_data_stream=True) feature_cols = ['parity', 'edu', 'age', 'induced', 'spontaneous', 'stratum', 'pooled.stratum'] predictor = LogisticRegressionBinaryClassifier(feature=feature_cols, label='case') predictor.fit(df) data = FileDataStream.read_csv(path) df = transform.transform(data, as_binary_data_stream=True) result_1 = predictor.predict(df) data = FileDataStream.read_csv(path) combined_pipeline = Pipeline.combine_models(transform, predictor) result_2 = combined_pipeline.predict(data) result_1 = result_1.astype(np.int32) result_2 = result_2['PredictedLabel'].astype(np.int32) self.assertTrue(result_1.equals(result_2)) if __name__ == '__main__': unittest.main()
{"hexsha": "f16e43aa5756f2a55b7ee9d58ba32e5407daac85", "size": 18002, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/python/nimbusml/tests/pipeline/test_pipeline_combining.py", "max_stars_repo_name": "michaelgsharp/NimbusML", "max_stars_repo_head_hexsha": "50031157265f49eec85d27fe67582d9ddaf01ef9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/python/nimbusml/tests/pipeline/test_pipeline_combining.py", "max_issues_repo_name": "michaelgsharp/NimbusML", "max_issues_repo_head_hexsha": "50031157265f49eec85d27fe67582d9ddaf01ef9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/python/nimbusml/tests/pipeline/test_pipeline_combining.py", "max_forks_repo_name": "michaelgsharp/NimbusML", "max_forks_repo_head_hexsha": "50031157265f49eec85d27fe67582d9ddaf01ef9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.6941747573, "max_line_length": 109, "alphanum_fraction": 0.659371181, "include": true, "reason": "import numpy", "num_tokens": 3712}
[STATEMENT] lemma rm_vars_ground_supports: assumes "ground (subst_range \<theta>)" shows "rm_vars X \<theta> supports \<theta>" [PROOF STATE] proof (prove) goal (1 subgoal): 1. rm_vars X \<theta> supports \<theta> [PROOF STEP] proof [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>x. rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x [PROOF STEP] fix x [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>x. rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x [PROOF STEP] have *: "ground (subst_range (rm_vars X \<theta>))" [PROOF STATE] proof (prove) goal (1 subgoal): 1. ground (subst_range (rm_vars X \<theta>)) [PROOF STEP] using rm_vars_img_subset[of X \<theta>] assms [PROOF STATE] proof (prove) using this: subst_range (rm_vars X \<theta>) \<subseteq> subst_range \<theta> ground (subst_range \<theta>) goal (1 subgoal): 1. ground (subst_range (rm_vars X \<theta>)) [PROOF STEP] by (auto simp add: subst_domain_def) [PROOF STATE] proof (state) this: ground (subst_range (rm_vars X \<theta>)) goal (1 subgoal): 1. \<And>x. rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x [PROOF STEP] show "rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x " [PROOF STATE] proof (prove) goal (1 subgoal): 1. rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x [PROOF STEP] proof (cases "x \<in> subst_domain (rm_vars X \<theta>)") [PROOF STATE] proof (state) goal (2 subgoals): 1. x \<in> subst_domain (rm_vars X \<theta>) \<Longrightarrow> rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x 2. x \<notin> subst_domain (rm_vars X \<theta>) \<Longrightarrow> rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x [PROOF STEP] case True [PROOF STATE] proof (state) this: x \<in> subst_domain (rm_vars X \<theta>) goal (2 subgoals): 1. x \<in> subst_domain (rm_vars X \<theta>) \<Longrightarrow> rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x 2. x \<notin> subst_domain (rm_vars X \<theta>) \<Longrightarrow> rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x [PROOF STEP] hence "fv (rm_vars X \<theta> x) = {}" [PROOF STATE] proof (prove) using this: x \<in> subst_domain (rm_vars X \<theta>) goal (1 subgoal): 1. fv (rm_vars X \<theta> x) = {} [PROOF STEP] using * [PROOF STATE] proof (prove) using this: x \<in> subst_domain (rm_vars X \<theta>) ground (subst_range (rm_vars X \<theta>)) goal (1 subgoal): 1. fv (rm_vars X \<theta> x) = {} [PROOF STEP] by auto [PROOF STATE] proof (state) this: fv (rm_vars X \<theta> x) = {} goal (2 subgoals): 1. x \<in> subst_domain (rm_vars X \<theta>) \<Longrightarrow> rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x 2. x \<notin> subst_domain (rm_vars X \<theta>) \<Longrightarrow> rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x [PROOF STEP] thus ?thesis [PROOF STATE] proof (prove) using this: fv (rm_vars X \<theta> x) = {} goal (1 subgoal): 1. rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x [PROOF STEP] using True [PROOF STATE] proof (prove) using this: fv (rm_vars X \<theta> x) = {} x \<in> subst_domain (rm_vars X \<theta>) goal (1 subgoal): 1. rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x [PROOF STEP] by auto [PROOF STATE] proof (state) this: rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x goal (1 subgoal): 1. x \<notin> subst_domain (rm_vars X \<theta>) \<Longrightarrow> rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x [PROOF STEP] qed (simp add: subst_domain_def) [PROOF STATE] proof (state) this: rm_vars X \<theta> x \<cdot> \<theta> = \<theta> x goal: No subgoals! [PROOF STEP] qed
{"llama_tokens": 1424, "file": "Stateful_Protocol_Composition_and_Typing_More_Unification", "length": 16}
[STATEMENT] lemma of_rat_less_1_iff [simp]: "(of_rat r :: 'a::linordered_field) < 1 \<longleftrightarrow> r < 1" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (of_rat r < (1::'a)) = (r < 1) [PROOF STEP] using of_rat_less [of r 1] [PROOF STATE] proof (prove) using this: (of_rat r < of_rat 1) = (r < 1) goal (1 subgoal): 1. (of_rat r < (1::'a)) = (r < 1) [PROOF STEP] by simp
{"llama_tokens": 188, "file": null, "length": 2}
########################################## # This code impliment fuzzy controller # ########################################## import pendulum import const import numpy as np import abc_py import pso_e #import pso_v2 as pso_e # Optimizer algorithm from matplotlib import pyplot as plt ############################# # GLOBAL # ############################# # coefficent for sliding mode surface, # c0 * theta + c1 * theta' + (c2 * x + c3 * x')(approach/depart) # c4 => s coeficient feeding into fuzzy table # c5 => s' coeficient feeding into fuzzy table # [c0, c1, c2, c3, c4, c5] C = [1, 1, 1, 1, 1, 1] # Weight for calculating fitness function W = [88, 10, 2] # Dimension of the particle dim = 6 # Desire position desire_x = 0 ''' Calculate fitness function fitness value = W_0 * abs(theta) + W_1 * (shift from x desire position) + W_2 * time best solution is we have stable system, which theta is at desire angle, and x is at desire position, and use minimum time to became stable ''' def fitness(theta, x, t): """ @param theta - the angle between desire angle at the end @param x - shiftment from x desire at the end @param t - time @retval fitness value """ angle = abs(theta) shift = abs(x) time = t / const.TUNE_LIMIT return W[0] * angle + W[1] * shift + W[2] * time ''' Fuzzification, use to catagorize ''' def fuzzification(v): shift_v = v + const.ZERO if shift_v >= 8: return 8 elif shift_v <= 0: return 0 else: return shift_v # return int(round(shift_v)) ''' Fuzzy simulation process for the certain coeficient c ''' def fuzzy_sim(c, plot_en=False): """ @param plot_en - plot record data or not #note plot is blocking, you should close the figure manually and the program will continue """ s = 0 # Present s value last_s = 0 # last s value, use to calculate s' force = 0 # Force last calculate ad_mode = 1 # Departure or approaching mode, Depart = 1, approach = -1 sys.initial(const.theta_init, const.pos_init) # Initial pendulum system t = 0 record_theta = [] desire_theta = [] record_x = [] desire_x = [] record_x_e = [] for t in range(const.TUNE_LIMIT): ############################# # Calculate Force # ############################# # Judge approach or departure mode x_d = sys.signal(t) error_x = sys.pos[0] - x_d # e_x sign_e_x = np.sign(error_x) # sgn(e_x) sign_x_prom = np.sign(sys.pos[1]) # sgn(x') sign_theta = np.sign(sys.theta[0]) # sgn(theta) relation_x = sign_e_x * sign_x_prom # relation between e_x and x' relation_x_theta = sign_e_x * sign_theta # Relation between e_x and theta if relation_x == 1 or relation_x_theta == 1: # Departure mode ad_mode = 1 else: # Approaching mode ad_mode = -1 # Calculate sliding surface value last_s = s s = const.theta_scale * (c[0] * sys.theta[0] + c[1] * sys.theta[1]) + ad_mode * (c[2] * const.x_scale * error_x + c[3] * sys.pos[1]) # Judge if reach the force threshold if t % const.REACT_T == 0: s_prom = s - last_s # s' weighted_s = c[4] * s # c4 * s weighted_s_prom = c[5] * s_prom # c5 * s' # Catagory this status falled into s_cata = fuzzification(weighted_s) # s value catagorize s_prom_cata = fuzzification(weighted_s_prom) # s' catagorize cata = const.FUZZY_TABLE[int(round(s_cata))][int(round(s_prom_cata))] norm_force = cata + (s_cata - round(s_cata) + s_prom_cata - round(s_prom_cata)) * 0.5 - const.ZERO # calculate the force to send, shift back to zero = 0 force = norm_force / (const.SCALE - const.ZERO) * const.force_limit # Terminate condition, if we are training, no ploting if not plot_en: # theta out of limit if abs(sys.theta[0]) >= const.theta_limit: t = 
const.TUNE_LIMIT break # x out of limit if abs(error_x) >= const.x_limit: t = const.TUNE_LIMIT break # Theta is stable in a range of time len_record = len(record_theta) avg_theta = 0 if len_record > const.window: sample_data = record_theta[len_record - int(const.window / 2):len_record].copy() sample_data = np.abs(sample_data) avg_theta = np.average(sample_data) if avg_theta < const.stable_theta: break ############################# # Run pendulum model # ############################# # print(force) sys.add_force(force) # Record variable for further use record_theta.append(sys.theta[0]) record_x.append(sys.pos[0]) desire_x.append(x_d) record_x_e.append(error_x) len_record = len(record_theta) sample_data = 0 sample_x_e = 0 if len_record >= const.window: sample_data = record_theta[len_record - const.window:len_record].copy() sample_x_e = record_x_e[len_record - const.window:len_record].copy() else: sample_data = record_theta[:len_record].copy() sample_x_e = record_x_e[:len_record].copy() # Process data being record sample_data = np.abs(sample_data) sample_x_e = np.abs(sample_x_e) avg_theta = np.average(sample_data) avg_x_e = np.average(sample_x_e) # Calculate fitness value fit = fitness(avg_theta, avg_x_e, t) # Plot the whole process when the pendulum became stable if plot_en: fig, axs = plt.subplots(3, 1, constrained_layout=True) axs[0].plot(record_x, '--', desire_x, '-') axs[0].set_title('x') axs[1].plot(record_theta, '--') axs[1].set_title('theta') plt.show() return fit ''' Simulation for the pendulum system ''' def simulate(C): ''' @param C - coefficent of sliding table ''' return fuzzy_sim(C, True) ''' Define square signal input ''' triger = False def signal_sqrt(t): global triger if ((t+1) % (10/const.PERIOD_T)) == 0: triger = not triger if triger: return -0.5 return 0.5 ''' Set constant position ''' def signal_const(t): return -4 if __name__ == "__main__": # Create inverted pendulum system, signal set to origin point # sys = pendulum.pendulum(M=const.M, m=const.m, L=const.L, mu_c=const.mu_c, mu_p=const.mu_p) # Create inverted pendulum system, signal set to constant desire position # sys = pendulum.pendulum(M=const.M, m=const.m, L=const.L, mu_c=const.mu_c, mu_p=const.mu_p,signal=signal_const) # Create inverted pendulum system, signal set square function sys = pendulum.pendulum(M=const.M, m=const.m, L=const.L, mu_c=const.mu_c, mu_p=const.mu_p,signal=signal_sqrt) sys.initial(const.theta_init, const.pos_init) # fit = simulate([11.88827976, 1.97008765, 14.13742648, 15.72209416, 1.38708104, 0.12893926]) # Test results # Initial ABC algorithm algo = abc_py.ABC (dim=dim, num=const.num, max_iter=const.max_iter, u_bound=const.p_range[1], l_bound=const.p_range[0], func=fuzzy_sim, end_thres=const.end_thres, end_sample=const.end_sample, fit_max=const.fitness_max) # Initial particles algo.abc_init() # Run iteration algo.abc_iterator() # Extract best solution C = algo.bestx.copy() # # Initial PSO algorithm # algo = pso_e.PSO (dim=dim, num=const.num, max_iter=const.max_iter, u_bound=const.p_range[1], l_bound=const.p_range[0], func=fuzzy_sim, end_thres=const.end_thres, end_sample=const.end_sample, fit_max=const.fitness_max) # # Initial particles # algo.pso_init() # # Run iteration # algo.pso_iterator() # # Extract best solution # C = algo.gbest.copy() # Simulate the result fit = simulate(C) plt.plot(algo.best_results) plt.show()
{"hexsha": "a6d3f8b749a075706a2b4dd0709d286f0c16da2a", "size": 8515, "ext": "py", "lang": "Python", "max_stars_repo_path": "main.py", "max_stars_repo_name": "Ernie-Wang/IC_HW2", "max_stars_repo_head_hexsha": "77e792f9afcbc0a0bffeefbf79fb4fe8b933c01d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "main.py", "max_issues_repo_name": "Ernie-Wang/IC_HW2", "max_issues_repo_head_hexsha": "77e792f9afcbc0a0bffeefbf79fb4fe8b933c01d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "main.py", "max_forks_repo_name": "Ernie-Wang/IC_HW2", "max_forks_repo_head_hexsha": "77e792f9afcbc0a0bffeefbf79fb4fe8b933c01d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.26171875, "max_line_length": 223, "alphanum_fraction": 0.5713446858, "include": true, "reason": "import numpy", "num_tokens": 2195}
from collections import OrderedDict
import torch
import os
import numpy as np


def get_predicteds(output, topk=(5,)):
    """
    :param output: model's output tensor
    :param topk: a tuple for topk (top_start, top_end)
    :return: preds_list, scores_list
    """
    maxk = max(topk)
    scores, preds = output.topk(maxk, 1, True, True)
    return preds.cpu().detach().numpy(), scores.cpu().detach().numpy()


def multi_get_predicteds(output, score):
    """
    Multi-label variant: keep every prediction whose score exceeds the threshold.

    :param output: model's output scores (numpy array)
    :param score: score threshold above which a prediction is kept
    :return: preds (indices of kept predictions), scores (their values)
    """
    preds = np.where(output > score)
    scores = output[preds]
    return preds, scores


def make_predicted_txt(predicted_dict, txt_dir):
    """
    Write one predicted txt file per test image, for later mAP calculation.

    :param predicted_dict: your predicted dict (maybe saved as a json file)
    :param txt_dir: directory in which the txt files are written
    :return: None
    """
    human_bbox_dict = OrderedDict()
    object_bbox_dict = OrderedDict()
    action_dict = OrderedDict()
    scores_dict = OrderedDict()
    olabels_dict = OrderedDict()
    filenames = []
    for index, row in predicted_dict.items():
        if row['img_path'] not in filenames:
            filenames.append(row['img_path'])
        if row['img_path'] in human_bbox_dict:
            human_bbox_dict[row['img_path']].append(row['hbbox'])
            object_bbox_dict[row['img_path']].append(row['obbox'])
            action_dict[row['img_path']].append(row['predicteds'])
            scores_dict[row['img_path']].append(row['scores'])
            olabels_dict[row['img_path']].append(row['olabel'])
        else:
            human_bbox_dict[row['img_path']] = [row['hbbox']]
            object_bbox_dict[row['img_path']] = [row['obbox']]
            action_dict[row['img_path']] = [row['predicteds']]
            scores_dict[row['img_path']] = [row['scores']]
            olabels_dict[row['img_path']] = [row['olabel']]
    print("data load done")
    for file in filenames:
        obboxs = object_bbox_dict[file]
        actions = action_dict[file]
        humans = human_bbox_dict[file]
        scores = scores_dict[file]
        olabels = olabels_dict[file]
        txtname = file.split('.')[0] + '.txt'
        save_lists = []
        for i, hbbox in enumerate(humans):
            # Human and object boxes are stored as [x1, y1, x2, y2]
            h_x1, h_y1, h_x2, h_y2 = hbbox[0], hbbox[1], hbbox[2], hbbox[3]
            o_x1, o_y1, o_x2, o_y2 = obboxs[i][0], obboxs[i][1], obboxs[i][2], obboxs[i][3]
            for ii, label in enumerate(actions[i]):
                score = scores[i][ii]
                olabel = olabels[i]
                save_lists.append(str(label) + ' ' + str(score) + ' '
                                  + str(h_x1) + ' ' + str(h_y1) + ' ' + str(h_x2) + ' ' + str(h_y2) + ' '
                                  + str(o_x1) + ' ' + str(o_y1) + ' ' + str(o_x2) + ' ' + str(o_y2)
                                  + ' ' + str(olabel) + '\n')
        if not os.path.exists(txt_dir):
            os.makedirs(txt_dir)
        with open(txt_dir + txtname, 'w') as f:
            for save_list in save_lists:
                f.write(save_list)
        print("save into txt")


def make_pkl():
    pass
{"hexsha": "bccb3913b717146c7067260e3f6c11450675e709", "size": 3188, "ext": "py", "lang": "Python", "max_stars_repo_path": "vcoco_det/model/test_generator.py", "max_stars_repo_name": "ZHUXUHAN/HOI", "max_stars_repo_head_hexsha": "c642e0edeabf47c396e359b6e7059e664644d5aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-12-09T06:48:58.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-17T11:15:13.000Z", "max_issues_repo_path": "vcoco_det/model/test_generator.py", "max_issues_repo_name": "ZHUXUHAN/HOI", "max_issues_repo_head_hexsha": "c642e0edeabf47c396e359b6e7059e664644d5aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-12-17T11:15:55.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-17T11:15:55.000Z", "max_forks_repo_path": "vcoco_det/model/test_generator.py", "max_forks_repo_name": "ZHUXUHAN/HOI", "max_forks_repo_head_hexsha": "c642e0edeabf47c396e359b6e7059e664644d5aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.914893617, "max_line_length": 194, "alphanum_fraction": 0.5862609787, "include": true, "reason": "import numpy", "num_tokens": 808}
"""Module to run a basic decision tree model Author(s): Mike Skarlinski (michael.skarlinski@weightwatchers.com) """ import pandas as pd import numpy as np import logging from sklearn import preprocessing from primrose.base.transformer import AbstractTransformer class ExplicitCategoricalTransform(AbstractTransformer): DEFAULT_NUMERIC = -9999 def __init__(self, categoricals): """initialize the ExplicitCategoricalTransform Args: categoricals: dictionary containing for each column to be transformed: - transformations: list of strings to be executed on the data ('x' represents the current categorical variable) - rename: if present, rename the current categorical variable to that name - to_numeric: if true, attempt to apply to_numeric after previous transformations """ self.categoricals = categoricals def fit(self, data): pass @staticmethod def _process_transformations(data, input_data, categorical, x): """transform a column Args: data (dataframe): dataframe input configuration (JSON): JSON categorical config for this variable categorical (str): varible name x (str): transformation string Returns: data (dataframe) """ if "transformations" in input_data.keys(): logging.info( "Applying key {} to variable {}".format("transformations", categorical) ) for transformation in input_data["transformations"]: exec(transformation.format(x=x)) @staticmethod def _process_rename(data, input_data, categorical): """rename a field Args: data (dataframe): dataframe input configuration (JSON): JSON categorical config for this variable categorical (str): varible name Returns: (tuple): tuple containing: data (dataframe): dataframe name (str): original name (if not "to_numeric": True), new_name otherwise """ if "rename" in input_data.keys(): logging.info("Applying key {} to variable {}".format("rename", categorical)) data = data.rename({categorical: input_data["rename"]}, axis="columns") return data, input_data["rename"] return data, categorical @staticmethod def _process_numeric(data, input_data, name): """convert column to numeric Args: data (dataframe): dataframe input configuration (JSON): JSON categorical config for this variable name (str): field name Returns: data with the colun converted to numeric """ if input_data.get("to_numeric", False): logging.info("Applying key {} to variable {}".format("to_numeric", name)) # if there are errors converting to numerical values, we need to sub in a reasonable value if sum(pd.to_numeric(data[name], errors="coerce").isnull()) > 0: logging.info( "Can't convert these entries in {}. 
Replacing with {}: {}".format( name, ExplicitCategoricalTransform.DEFAULT_NUMERIC, np.unique( data[name][ pd.to_numeric(data[name], errors="coerce").isnull() ].astype(str) ), ) ) data[name][ pd.to_numeric(data[name], errors="coerce").isnull() ] = ExplicitCategoricalTransform.DEFAULT_NUMERIC try: data[name] = pd.to_numeric(data[name]) return data except: raise TypeError("Failed to convert feature {} to numeric".format(name)) else: return data def transform(self, data): """Transform categorical variables into one or more numeric ones, no need to separate testing & training data Args: data: dictionary containing dataframe with all categorical columns present Returns: data with all categorical columns recoded and/or deleted """ for categorical in self.categoricals.keys(): x = "data['{}']".format(categorical) input_data = self.categoricals[categorical] ExplicitCategoricalTransform._process_transformations( data, input_data, categorical, x ) data, new_name = ExplicitCategoricalTransform._process_rename( data, input_data, categorical ) data = ExplicitCategoricalTransform._process_numeric( data, input_data, new_name ) return data class ImplicitCategoricalTransform(AbstractTransformer): """Class which implicitly transforms all string columns of a dataframe with sklearn LabelEncoder""" def __init__(self, target_variable): """initialize this ImplicitCategoricalTransform Args: target_variable (str): target variable name """ self.target_variable = target_variable self._encoder = {} self.target_encoder = None def fit(self, data): """encode the data as categorical labels Args: data (dataframe) Returns: dataframe (dataframe) """ logging.info("Fitting LabelEncoders on all string-based dataframe columns...") data.is_copy = False for column_name in data.columns: if data[column_name].dtype == object: logging.info("Fitting LabelEncoder for column {}".format(column_name)) self._encoder[column_name] = preprocessing.LabelEncoder() self._encoder[column_name].fit(data[column_name]) if column_name == self.target_variable: self.target_encoder = self._encoder[column_name] else: pass return data def transform(self, data): """Transform data into categorical variables using pre-trained label encoder Args: data (dataframe) Returns: dataframe (dataframe) """ data.is_copy = False for column_name in data.columns: if column_name in self._encoder: logging.info("LabelEncoding column {}".format(column_name)) data[column_name] = self._encoder[column_name].transform( data[column_name] ) return data
{"hexsha": "4a1c0600062b6b24e5b7bc8a9584297e3f557cac", "size": 6766, "ext": "py", "lang": "Python", "max_stars_repo_path": "primrose/transformers/categoricals.py", "max_stars_repo_name": "astro313/primrose", "max_stars_repo_head_hexsha": "891f001e4e198096edb74eea951d27c9ae2a278f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 38, "max_stars_repo_stars_event_min_datetime": "2019-09-04T17:39:31.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-09T21:20:24.000Z", "max_issues_repo_path": "primrose/transformers/categoricals.py", "max_issues_repo_name": "astro313/primrose", "max_issues_repo_head_hexsha": "891f001e4e198096edb74eea951d27c9ae2a278f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 66, "max_issues_repo_issues_event_min_datetime": "2019-09-05T15:55:19.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-21T05:36:54.000Z", "max_forks_repo_path": "primrose/transformers/categoricals.py", "max_forks_repo_name": "astro313/primrose", "max_forks_repo_head_hexsha": "891f001e4e198096edb74eea951d27c9ae2a278f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2019-12-02T09:05:30.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-09T16:12:36.000Z", "avg_line_length": 29.8061674009, "max_line_length": 123, "alphanum_fraction": 0.5869051138, "include": true, "reason": "import numpy", "num_tokens": 1244}
#include <memo/silo/Silo.hh> #include <boost/algorithm/string/case_conv.hpp> #include <elle/factory.hh> #include <elle/find.hh> #include <elle/log.hh> #include <memo/silo/Key.hh> #include <boost/algorithm/string/classification.hpp> #include <boost/algorithm/string/split.hpp> ELLE_LOG_COMPONENT("memo.silo.Silo"); namespace { int const step = 100 * 1024 * 1024; // 100 MiB } namespace memo { namespace silo { Silo::Silo(boost::optional<int64_t> capacity) : _capacity(std::move(capacity)) , _usage(0) // recovered in the child ctor. , _base_usage(0) , _step(this->capacity() ? (this->capacity().get() / 10) : step) , _block_count{0} // recovered in the child ctor. { // _size_cache too has to be recovered in the child ctor. // There is no point in notifying about the metrics now, even if // ctors of subclasses may update _usage and _block_count, as // this piece of code would be executed first anyway. So let // these ctors notify themselves. } Silo::~Silo() {} void Silo::_notify_metrics() { try { this->_on_storage_size_change(); } catch (elle::Error const& e) { ELLE_WARN("Error notifying storage size change: %s", e); } } elle::Buffer Silo::get(Key key) const { ELLE_TRACE_SCOPE("%s: get %x", this, key); // FIXME: use _size_cache to check block existance? return this->_get(key); } int Silo::set(Key key, elle::Buffer const& value, bool insert, bool update) { ELLE_ASSERT(insert || update); ELLE_TRACE_SCOPE("%s: %s at %x", this, insert ? update ? "upsert" : "insert" : "update", key); int delta = this->_set(key, value, insert, update); this->_usage += delta; if (std::abs(this->_base_usage - this->_usage) >= this->_step) { ELLE_DUMP("%s: _base_usage - _usage = %s (_step = %s)", this, this->_base_usage - this->_usage, this->_step); ELLE_DEBUG("%s: update Beyond (if --push provided) with usage = %s", this, this->_usage); _notify_metrics(); this->_base_usage = this->_usage; } ELLE_DEBUG("%s: usage/capacity = %s/%s", this, this->_usage, this->_capacity); _notify_metrics(); return delta; } int Silo::erase(Key key) { ELLE_TRACE_SCOPE("%s: erase %x", this, key); int delta = this->_erase(key); ELLE_DEBUG("usage %s and delta %s", this->_usage, delta); this->_usage += delta; this->_size_cache.erase(key); _notify_metrics(); return delta; } std::vector<Key> Silo::list() { ELLE_TRACE_SCOPE("%s: list", this); return this->_list(); } BlockStatus Silo::status(Key k) { ELLE_TRACE_SCOPE("%s: status %x", this, k); return this->_status(k); } BlockStatus Silo::_status(Key k) { return BlockStatus::unknown; } void Silo::register_notifier(std::function<void ()> f) { this->_on_storage_size_change.connect(f); } namespace { std::vector<std::string> split_arguments(std::string const& args) { auto res = std::vector<std::string>{}; auto const space = args.find(" "); const char* sep = (space == args.npos) ? ":" : " "; boost::algorithm::split(res, args, boost::algorithm::is_any_of(sep), boost::algorithm::token_compress_on); return res; } } std::unique_ptr<Silo> instantiate(std::string const& name, std::string const& args) { ELLE_TRACE_SCOPE("Processing backend %s '%s'", name, args); return elle::Factory<Silo>::instantiate(name, split_arguments(args)); } /*--------------. | Silo Config. 
| `--------------*/ SiloConfig::SiloConfig(std::string name, boost::optional<int64_t> capacity, boost::optional<std::string> description) : descriptor::TemplatedBaseDescriptor<SiloConfig>( std::move(name), std::move(description)) , capacity(std::move(capacity)) {} SiloConfig::SiloConfig(elle::serialization::SerializerIn& s) : descriptor::TemplatedBaseDescriptor<SiloConfig>(s) , capacity(s.deserialize<boost::optional<int64_t>>("capacity")) {} void SiloConfig::serialize(elle::serialization::Serializer& s) { descriptor::TemplatedBaseDescriptor<SiloConfig>::serialize(s); s.serialize("capacity", this->capacity); } std::string SiloConfig::name_regex() { return "^[-a-zA-Z0-9._]{0,127}$"; } } }
{"hexsha": "c9356a154e577282541fe4b9995a690020f44302", "size": 4860, "ext": "cc", "lang": "C++", "max_stars_repo_path": "src/memo/silo/Silo.cc", "max_stars_repo_name": "infinit/memo", "max_stars_repo_head_hexsha": "3a8394d0f647efe03ccb8bfe885a7279cb8be8a6", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 124.0, "max_stars_repo_stars_event_min_datetime": "2017-06-22T19:20:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-23T21:36:37.000Z", "max_issues_repo_path": "src/memo/silo/Silo.cc", "max_issues_repo_name": "infinit/memo", "max_issues_repo_head_hexsha": "3a8394d0f647efe03ccb8bfe885a7279cb8be8a6", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 4.0, "max_issues_repo_issues_event_min_datetime": "2017-08-21T15:57:29.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-10T02:52:35.000Z", "max_forks_repo_path": "src/memo/silo/Silo.cc", "max_forks_repo_name": "infinit/memo", "max_forks_repo_head_hexsha": "3a8394d0f647efe03ccb8bfe885a7279cb8be8a6", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 12.0, "max_forks_repo_forks_event_min_datetime": "2017-06-29T09:15:35.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-31T12:39:52.000Z", "avg_line_length": 26.7032967033, "max_line_length": 78, "alphanum_fraction": 0.566255144, "num_tokens": 1228}
import os import numpy as np import pandas as pd import pickle import re def read_joules(f,device): ''' Reads a joules trace csv generated by run_experiment.py''' joules_df = pd.read_csv(f) jcols = joules_df.columns regex = re.compile('package_.') package_cols = [string for string in jcols if re.match(regex, string)] regex = re.compile('dram_.') dram_cols = [string for string in jcols if re.match(regex, string)] joules_df['package_total'] = joules_df[package_cols].sum(axis=1) joules_df['dram_total'] = joules_df[dram_cols].sum(axis=1) if device == 'cuda': joules_df['nvidia_total'] = joules_df['nvidia_gpu_0'] else: joules_df['nvidia_total'] = 0 joules_df['process_total'] = joules_df['package_total'] + joules_df['dram_total'] + joules_df['nvidia_total'] joules_df['device'] = device return joules_df def convert_str_to_param_value(df, param_col): param_df = pd.DataFrame() param_df[[param_col,'unit']] = df[param_col].str.split(' ',expand=True) param_df.loc[param_df['unit'].isin(['k','kMac']),param_col] = param_df[param_df['unit'].isin(['k','kMac'])][param_col].astype(float).values*1e3 param_df.loc[param_df['unit'].isin(['M','MMac']),param_col] = param_df[param_df['unit'].isin(['M','MMac'])][param_col].astype(float).values*1e6 param_df.loc[param_df['unit'].isin(['G','GMac']),param_col] = param_df[param_df['unit'].isin(['G','GMac'])][param_col].astype(float).values*1e9 df[param_col] = param_df[param_col] return df
{"hexsha": "12f1faed903b985141a21c633608bab297ea14df", "size": 1542, "ext": "py", "lang": "Python", "max_stars_repo_path": "utils/io_utils.py", "max_stars_repo_name": "neurodatascience/watts_up_compute", "max_stars_repo_head_hexsha": "1ed41e62690f99f699b44180208689cc19616bb7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "utils/io_utils.py", "max_issues_repo_name": "neurodatascience/watts_up_compute", "max_issues_repo_head_hexsha": "1ed41e62690f99f699b44180208689cc19616bb7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "utils/io_utils.py", "max_forks_repo_name": "neurodatascience/watts_up_compute", "max_forks_repo_head_hexsha": "1ed41e62690f99f699b44180208689cc19616bb7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.5384615385, "max_line_length": 147, "alphanum_fraction": 0.682230869, "include": true, "reason": "import numpy", "num_tokens": 447}
from pathlib import Path from typing import List, Tuple import numpy as np import cv2 import matplotlib.pyplot as plt GRID_X = 9 GRID_Y = 6 class Undistorter: def __init__(self, img_shape: Tuple[int, int]): self.img_shape = img_shape self.mtx = None self.dist = None self._calibrated = False def calibrate(self, image_paths: List[Path], draw: bool = False): objpoints, imgpoints = self._determine_objpoints_imgpoints( image_paths, draw) ret, self.mtx, self.dist, _, _, = cv2\ .calibrateCamera(objpoints, imgpoints, self.img_shape, None, None) self._calibrated = True def apply(self, img): if not self._calibrated: raise ValueError("Calibrate first!") return cv2.undistort(img, self.mtx, self.dist, None, self.mtx) @staticmethod def _determine_objpoints_imgpoints(image_paths: List[Path], draw: bool) -> \ Tuple[np.ndarray, np.ndarray]: # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0) objp = np.zeros((GRID_Y * GRID_X, 3), np.float32) objp[:, :2] = np.mgrid[0:GRID_X, 0:GRID_Y].T.reshape(-1, 2) # Arrays to store object points and image points from all the images. objpoints = [] # 3d points in real world space imgpoints = [] # 2d points in image plane. # Step through the list and search for chessboard corners for fname in image_paths: img = cv2.imread(str(fname)) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Find the chessboard corners ret, corners = cv2.findChessboardCorners(gray, (GRID_X, GRID_Y), None) # If found, add object points, image points if ret: objpoints.append(objp) imgpoints.append(corners) if draw: img = cv2.drawChessboardCorners(img, (GRID_X, GRID_Y), corners, ret) plt.title(fname.name) plt.imshow(img) plt.show() return objpoints, imgpoints
{"hexsha": "b2c609f5acd20159413a1401e93443901cedd9f4", "size": 2160, "ext": "py", "lang": "Python", "max_stars_repo_path": "Project2_Advanced_Lane_Finding/lib/camera_calib.py", "max_stars_repo_name": "jvanlier/self-driving-car-engineer", "max_stars_repo_head_hexsha": "08300245fbfa50858ac77a167d6ae8ceb054c0d4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Project2_Advanced_Lane_Finding/lib/camera_calib.py", "max_issues_repo_name": "jvanlier/self-driving-car-engineer", "max_issues_repo_head_hexsha": "08300245fbfa50858ac77a167d6ae8ceb054c0d4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Project2_Advanced_Lane_Finding/lib/camera_calib.py", "max_forks_repo_name": "jvanlier/self-driving-car-engineer", "max_forks_repo_head_hexsha": "08300245fbfa50858ac77a167d6ae8ceb054c0d4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.2388059701, "max_line_length": 88, "alphanum_fraction": 0.5847222222, "include": true, "reason": "import numpy", "num_tokens": 542}
# Copyright 2017 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================ # Copyright 2021 Huawei Technologies Co., Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import division from npu_bridge.npu_init import * import os import time from glob import glob import tensorflow as tf import numpy as np from six.moves import xrange from ops import * from utils import * def npu_tf_optimizer(opt): npu_opt = NPUDistributedOptimizer(opt) return npu_opt class pix2pix(object): def __init__(self, sess, image_size=256, batch_size=1, sample_size=1, output_size=256, gf_dim=64, df_dim=64, L1_lambda=100, input_c_dim=3, output_c_dim=3, dataset_name='facades', checkpoint_dir=None, sample_dir=None): '\n\n Args:\n sess: TensorFlow session\n batch_size: The size of batch. Should be specified before training.\n output_size: (optional) The resolution in pixels of the images. [256]\n gf_dim: (optional) Dimension of gen filters in first conv layer. [64]\n df_dim: (optional) Dimension of discrim filters in first conv layer. [64]\n input_c_dim: (optional) Dimension of input image color. For grayscale input, set to 1. [3]\n output_c_dim: (optional) Dimension of output image color. For grayscale input, set to 1. 
[3]\n ' self.sess = sess self.is_grayscale = (input_c_dim == 1) self.batch_size = batch_size self.image_size = image_size self.sample_size = sample_size self.output_size = output_size self.gf_dim = gf_dim self.df_dim = df_dim self.input_c_dim = input_c_dim self.output_c_dim = output_c_dim self.L1_lambda = L1_lambda self.d_bn1 = batch_norm(name='d_bn1') self.d_bn2 = batch_norm(name='d_bn2') self.d_bn3 = batch_norm(name='d_bn3') self.g_bn_e2 = batch_norm(name='g_bn_e2') self.g_bn_e3 = batch_norm(name='g_bn_e3') self.g_bn_e4 = batch_norm(name='g_bn_e4') self.g_bn_e5 = batch_norm(name='g_bn_e5') self.g_bn_e6 = batch_norm(name='g_bn_e6') self.g_bn_e7 = batch_norm(name='g_bn_e7') self.g_bn_e8 = batch_norm(name='g_bn_e8') self.g_bn_d1 = batch_norm(name='g_bn_d1') self.g_bn_d2 = batch_norm(name='g_bn_d2') self.g_bn_d3 = batch_norm(name='g_bn_d3') self.g_bn_d4 = batch_norm(name='g_bn_d4') self.g_bn_d5 = batch_norm(name='g_bn_d5') self.g_bn_d6 = batch_norm(name='g_bn_d6') self.g_bn_d7 = batch_norm(name='g_bn_d7') self.dataset_name = dataset_name self.checkpoint_dir = checkpoint_dir self.build_model() def build_model(self): self.real_data = tf.placeholder(tf.float32, [self.batch_size, self.image_size, self.image_size, (self.input_c_dim + self.output_c_dim)], name='real_A_and_B_images') self.real_B = self.real_data[:, :, :, :self.input_c_dim] self.real_A = self.real_data[:, :, :, self.input_c_dim:(self.input_c_dim + self.output_c_dim)] self.fake_B = self.generator(self.real_A) self.real_AB = tf.concat([self.real_A, self.real_B], 3) self.fake_AB = tf.concat([self.real_A, self.fake_B], 3) (self.D, self.D_logits) = self.discriminator(self.real_AB, reuse=False) (self.D_, self.D_logits_) = self.discriminator(self.fake_AB, reuse=True) self.fake_B_sample = self.sampler(self.real_A) self.d_sum = tf.summary.histogram('d', self.D) self.d__sum = tf.summary.histogram('d_', self.D_) #self.fake_B_sum = tf.summary.image('fake_B', self.fake_B) self.d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.D_logits, labels=tf.ones_like(self.D))) self.d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.D_logits_, labels=tf.zeros_like(self.D_))) self.g_loss = (tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.D_logits_, labels=tf.ones_like(self.D_))) + (self.L1_lambda * tf.reduce_mean(tf.abs((self.real_B - self.fake_B))))) self.d_loss_real_sum = tf.summary.scalar('d_loss_real', self.d_loss_real) self.d_loss_fake_sum = tf.summary.scalar('d_loss_fake', self.d_loss_fake) self.d_loss = (self.d_loss_real + self.d_loss_fake) self.g_loss_sum = tf.summary.scalar('g_loss', self.g_loss) self.d_loss_sum = tf.summary.scalar('d_loss', self.d_loss) t_vars = tf.trainable_variables() self.d_vars = [var for var in t_vars if ('d_' in var.name)] self.g_vars = [var for var in t_vars if ('g_' in var.name)] self.saver = tf.train.Saver() def load_random_samples(self, data_path): datapath = ('%s/val/*.jpg' %(data_path)) data = np.random.choice(glob(datapath.format(self.dataset_name)), self.batch_size) sample = [load_data(sample_file) for sample_file in data] if self.is_grayscale: sample_images = np.array(sample).astype(np.float32)[:, :, :, None] else: sample_images = np.array(sample).astype(np.float32) return sample_images def sample_model(self, data_path, sample_dir, epoch, idx): sample_images = self.load_random_samples(data_path) (samples, d_loss, g_loss) = self.sess.run([self.fake_B_sample, self.d_loss, self.g_loss], feed_dict={self.real_data: sample_images}) 
save_images(samples, [self.batch_size, 1], './{}/train_{:02d}_{:04d}.png'.format(sample_dir, epoch, idx)) print('[Sample] d_loss: {:.8f}, g_loss: {:.8f}'.format(d_loss, g_loss)) def train(self, args): 'Train pix2pix' # d_optim = npu_tf_optimizer(tf.train.AdamOptimizer(args.lr, beta1=args.beta1)).minimize(self.d_loss, var_list=self.d_vars) ######################NPU modify start########################## d_optimizer = tf.train.AdamOptimizer(learning_rate=args.lr, beta1=args.beta1) if args.precision_mode == 'allow_mix_precision': loss_scale_manager = ExponentialUpdateLossScaleManager(init_loss_scale=2**32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.8) if int(os.getenv('RANK_SIZE')) == 1: d_optimizer = NPULossScaleOptimizer(d_optimizer, loss_scale_manager) else: d_optimizer = NPULossScaleOptimizer(d_optimizer, loss_scale_manager, is_distributed=True) d_optim = npu_tf_optimizer(d_optimizer).minimize(self.d_loss, var_list=self.d_vars) #######################NPU modify end ############################## # g_optim = npu_tf_optimizer(tf.train.AdamOptimizer(args.lr, beta1=args.beta1)).minimize(self.g_loss, var_list=self.g_vars) ######################NPU modify start########################## g_optimizer = tf.train.AdamOptimizer(learning_rate=args.lr, beta1=args.beta1) if args.precision_mode == 'allow_mix_precision': loss_scale_manager = ExponentialUpdateLossScaleManager(init_loss_scale=2**32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.8) if int(os.getenv('RANK_SIZE')) == 1: g_optimizer = NPULossScaleOptimizer(g_optimizer, loss_scale_manager) else: g_optimizer = NPULossScaleOptimizer(g_optimizer, loss_scale_manager, is_distributed=True) g_optim = npu_tf_optimizer(g_optimizer).minimize(self.g_loss, var_list=self.g_vars) #######################NPU modify end ############################## init_op = tf.global_variables_initializer() self.sess.run(init_op) #self.g_sum = tf.summary.merge([self.d__sum, self.fake_B_sum, self.d_loss_fake_sum, self.g_loss_sum]) self.g_sum = tf.summary.merge([self.d__sum, self.d_loss_fake_sum, self.g_loss_sum]) self.d_sum = tf.summary.merge([self.d_sum, self.d_loss_real_sum, self.d_loss_sum]) self.writer = tf.summary.FileWriter('./logs', self.sess.graph) counter = 1 start_time = time.time() if self.load(self.checkpoint_dir): print(' [*] Load SUCCESS') else: print(' [!] 
Load failed...') for epoch in xrange(args.epoch): datapath = ('%s/train/*.jpg' %(args.data_path)) data = glob(datapath.format(self.dataset_name)) batch_idxs = (min(len(data), args.train_size) // self.batch_size) for idx in xrange(0, batch_idxs): batch_files = data[(idx * self.batch_size):((idx + 1) * self.batch_size)] batch = [load_data(batch_file) for batch_file in batch_files] if self.is_grayscale: batch_images = np.array(batch).astype(np.float32)[:, :, :, None] else: batch_images = np.array(batch).astype(np.float32) (_, summary_str) = self.sess.run([d_optim, self.d_sum], feed_dict={self.real_data: batch_images}) self.writer.add_summary(summary_str, counter) (_, summary_str) = self.sess.run([g_optim, self.g_sum], feed_dict={self.real_data: batch_images}) self.writer.add_summary(summary_str, counter) (_, summary_str) = self.sess.run([g_optim, self.g_sum], feed_dict={self.real_data: batch_images}) self.writer.add_summary(summary_str, counter) errD_fake = self.d_loss_fake.eval({self.real_data: batch_images}) errD_real = self.d_loss_real.eval({self.real_data: batch_images}) errG = self.g_loss.eval({self.real_data: batch_images}) counter += 1 print(('Epoch: [%2d] [%4d/%4d] time: %4.4f, d_loss: %.8f, g_loss: %.8f' % (epoch, idx, batch_idxs, (time.time() - start_time), (errD_fake + errD_real), errG))) if (np.mod(counter, 100) == 1): self.sample_model(args.data_path, args.sample_dir, epoch, idx) if (np.mod(counter, 500) == 2): self.save(args.checkpoint_dir, counter) def discriminator(self, image, y=None, reuse=False): with tf.variable_scope('discriminator') as scope: if reuse: tf.get_variable_scope().reuse_variables() else: assert (tf.get_variable_scope().reuse == False) h0 = lrelu(conv2d(image, self.df_dim, name='d_h0_conv')) h1 = lrelu(self.d_bn1(conv2d(h0, (self.df_dim * 2), name='d_h1_conv'))) h2 = lrelu(self.d_bn2(conv2d(h1, (self.df_dim * 4), name='d_h2_conv'))) h3 = lrelu(self.d_bn3(conv2d(h2, (self.df_dim * 8), d_h=1, d_w=1, name='d_h3_conv'))) h4 = linear(tf.reshape(h3, [self.batch_size, (- 1)]), 1, 'd_h3_lin') return (tf.nn.sigmoid(h4), h4) def generator(self, image, y=None): with tf.variable_scope('generator') as scope: s = self.output_size (s2, s4, s8, s16, s32, s64, s128) = (int((s / 2)), int((s / 4)), int((s / 8)), int((s / 16)), int((s / 32)), int((s / 64)), int((s / 128))) e1 = conv2d(image, self.gf_dim, name='g_e1_conv') e2 = self.g_bn_e2(conv2d(lrelu(e1), (self.gf_dim * 2), name='g_e2_conv')) e3 = self.g_bn_e3(conv2d(lrelu(e2), (self.gf_dim * 4), name='g_e3_conv')) e4 = self.g_bn_e4(conv2d(lrelu(e3), (self.gf_dim * 8), name='g_e4_conv')) e5 = self.g_bn_e5(conv2d(lrelu(e4), (self.gf_dim * 8), name='g_e5_conv')) e6 = self.g_bn_e6(conv2d(lrelu(e5), (self.gf_dim * 8), name='g_e6_conv')) e7 = self.g_bn_e7(conv2d(lrelu(e6), (self.gf_dim * 8), name='g_e7_conv')) e8 = self.g_bn_e8(conv2d(lrelu(e7), (self.gf_dim * 8), name='g_e8_conv')) (self.d1, self.d1_w, self.d1_b) = deconv2d(tf.nn.relu(e8), [self.batch_size, s128, s128, (self.gf_dim * 8)], name='g_d1', with_w=True) d1 = npu_ops.dropout(self.g_bn_d1(self.d1), 0.5) d1 = tf.concat([d1, e7], 3) (self.d2, self.d2_w, self.d2_b) = deconv2d(tf.nn.relu(d1), [self.batch_size, s64, s64, (self.gf_dim * 8)], name='g_d2', with_w=True) d2 = npu_ops.dropout(self.g_bn_d2(self.d2), 0.5) d2 = tf.concat([d2, e6], 3) (self.d3, self.d3_w, self.d3_b) = deconv2d(tf.nn.relu(d2), [self.batch_size, s32, s32, (self.gf_dim * 8)], name='g_d3', with_w=True) d3 = npu_ops.dropout(self.g_bn_d3(self.d3), 0.5) d3 = tf.concat([d3, e5], 3) (self.d4, self.d4_w, 
self.d4_b) = deconv2d(tf.nn.relu(d3), [self.batch_size, s16, s16, (self.gf_dim * 8)], name='g_d4', with_w=True) d4 = self.g_bn_d4(self.d4) d4 = tf.concat([d4, e4], 3) (self.d5, self.d5_w, self.d5_b) = deconv2d(tf.nn.relu(d4), [self.batch_size, s8, s8, (self.gf_dim * 4)], name='g_d5', with_w=True) d5 = self.g_bn_d5(self.d5) d5 = tf.concat([d5, e3], 3) (self.d6, self.d6_w, self.d6_b) = deconv2d(tf.nn.relu(d5), [self.batch_size, s4, s4, (self.gf_dim * 2)], name='g_d6', with_w=True) d6 = self.g_bn_d6(self.d6) d6 = tf.concat([d6, e2], 3) (self.d7, self.d7_w, self.d7_b) = deconv2d(tf.nn.relu(d6), [self.batch_size, s2, s2, self.gf_dim], name='g_d7', with_w=True) d7 = self.g_bn_d7(self.d7) d7 = tf.concat([d7, e1], 3) (self.d8, self.d8_w, self.d8_b) = deconv2d(tf.nn.relu(d7), [self.batch_size, s, s, self.output_c_dim], name='g_d8', with_w=True) return tf.nn.tanh(self.d8) def sampler(self, image, y=None): with tf.variable_scope('generator') as scope: scope.reuse_variables() s = self.output_size (s2, s4, s8, s16, s32, s64, s128) = (int((s / 2)), int((s / 4)), int((s / 8)), int((s / 16)), int((s / 32)), int((s / 64)), int((s / 128))) e1 = conv2d(image, self.gf_dim, name='g_e1_conv') e2 = self.g_bn_e2(conv2d(lrelu(e1), (self.gf_dim * 2), name='g_e2_conv')) e3 = self.g_bn_e3(conv2d(lrelu(e2), (self.gf_dim * 4), name='g_e3_conv')) e4 = self.g_bn_e4(conv2d(lrelu(e3), (self.gf_dim * 8), name='g_e4_conv')) e5 = self.g_bn_e5(conv2d(lrelu(e4), (self.gf_dim * 8), name='g_e5_conv')) e6 = self.g_bn_e6(conv2d(lrelu(e5), (self.gf_dim * 8), name='g_e6_conv')) e7 = self.g_bn_e7(conv2d(lrelu(e6), (self.gf_dim * 8), name='g_e7_conv')) e8 = self.g_bn_e8(conv2d(lrelu(e7), (self.gf_dim * 8), name='g_e8_conv')) (self.d1, self.d1_w, self.d1_b) = deconv2d(tf.nn.relu(e8), [self.batch_size, s128, s128, (self.gf_dim * 8)], name='g_d1', with_w=True) d1 = npu_ops.dropout(self.g_bn_d1(self.d1), 0.5) d1 = tf.concat([d1, e7], 3) (self.d2, self.d2_w, self.d2_b) = deconv2d(tf.nn.relu(d1), [self.batch_size, s64, s64, (self.gf_dim * 8)], name='g_d2', with_w=True) d2 = npu_ops.dropout(self.g_bn_d2(self.d2), 0.5) d2 = tf.concat([d2, e6], 3) (self.d3, self.d3_w, self.d3_b) = deconv2d(tf.nn.relu(d2), [self.batch_size, s32, s32, (self.gf_dim * 8)], name='g_d3', with_w=True) d3 = npu_ops.dropout(self.g_bn_d3(self.d3), 0.5) d3 = tf.concat([d3, e5], 3) (self.d4, self.d4_w, self.d4_b) = deconv2d(tf.nn.relu(d3), [self.batch_size, s16, s16, (self.gf_dim * 8)], name='g_d4', with_w=True) d4 = self.g_bn_d4(self.d4) d4 = tf.concat([d4, e4], 3) (self.d5, self.d5_w, self.d5_b) = deconv2d(tf.nn.relu(d4), [self.batch_size, s8, s8, (self.gf_dim * 4)], name='g_d5', with_w=True) d5 = self.g_bn_d5(self.d5) d5 = tf.concat([d5, e3], 3) (self.d6, self.d6_w, self.d6_b) = deconv2d(tf.nn.relu(d5), [self.batch_size, s4, s4, (self.gf_dim * 2)], name='g_d6', with_w=True) d6 = self.g_bn_d6(self.d6) d6 = tf.concat([d6, e2], 3) (self.d7, self.d7_w, self.d7_b) = deconv2d(tf.nn.relu(d6), [self.batch_size, s2, s2, self.gf_dim], name='g_d7', with_w=True) d7 = self.g_bn_d7(self.d7) d7 = tf.concat([d7, e1], 3) (self.d8, self.d8_w, self.d8_b) = deconv2d(tf.nn.relu(d7), [self.batch_size, s, s, self.output_c_dim], name='g_d8', with_w=True) return tf.nn.tanh(self.d8) def save(self, checkpoint_dir, step): model_name = 'pix2pix.model' model_dir = ('%s_%s_%s' % (self.dataset_name, self.batch_size, self.output_size)) checkpoint_dir = os.path.join(checkpoint_dir, model_dir) if (not os.path.exists(checkpoint_dir)): os.makedirs(checkpoint_dir) self.saver.save(self.sess, 
os.path.join(checkpoint_dir, model_name), global_step=step) def load(self, checkpoint_dir): print(' [*] Reading checkpoint...') model_dir = ('%s_%s_%s' % (self.dataset_name, self.batch_size, self.output_size)) checkpoint_dir = os.path.join(checkpoint_dir, model_dir) ckpt = tf.train.get_checkpoint_state(checkpoint_dir) if (ckpt and ckpt.model_checkpoint_path): ckpt_name = os.path.basename(ckpt.model_checkpoint_path) self.saver.restore(self.sess, os.path.join(checkpoint_dir, ckpt_name)) return True else: return False def test(self, args): 'Test pix2pix' init_op = tf.global_variables_initializer() self.sess.run(init_op) datapath = ('%s/val/*.jpg' %(args.data_path)) sample_files = glob(datapath.format(self.dataset_name)) n = [int(i) for i in map((lambda x: x.split('/')[(- 1)].split('.jpg')[0]), sample_files)] sample_files = [x for (y, x) in sorted(zip(n, sample_files))] print('Loading testing images ...') sample = [load_data(sample_file, is_test=True) for sample_file in sample_files] if self.is_grayscale: sample_images = np.array(sample).astype(np.float32)[:, :, :, None] else: sample_images = np.array(sample).astype(np.float32) sample_images = [sample_images[i:(i + self.batch_size)] for i in xrange(0, len(sample_images), self.batch_size)] sample_images = np.array(sample_images) print(sample_images.shape) start_time = time.time() if self.load(self.checkpoint_dir): print(' [*] Load SUCCESS') else: print(' [!] Load failed...') for (i, sample_image) in enumerate(sample_images): idx = (i + 1) print('sampling image ', idx) samples = self.sess.run(self.fake_B_sample, feed_dict={self.real_data: sample_image}) save_images(samples, [self.batch_size, 1], './{}/test_{:04d}.png'.format(args.test_dir, idx))
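# --- Illustrative sketch; not part of the original model.py ------------------
# The generator/sampler above follow the U-Net pattern: the encoder halves the
# spatial size seven times (s2 ... s128), and each decoder output dN is
# concatenated with the mirror encoder map e(8-N) along the channel axis, which
# doubles the channels fed to the next deconvolution.  A minimal NumPy sketch
# of that bookkeeping, assuming output_size = 256 and gf_dim = 64 purely for
# illustration:
import numpy as np

batch, size, gf_dim = 1, 256, 64
resolutions = [size // (2 ** k) for k in range(1, 8)]    # s2 ... s128
print(resolutions)                                       # [128, 64, 32, 16, 8, 4, 2]

d1 = np.zeros((batch, 2, 2, gf_dim * 8))                 # decoder output at s128
e7 = np.zeros((batch, 2, 2, gf_dim * 8))                 # matching encoder feature map
skip = np.concatenate([d1, e7], axis=3)                  # same as tf.concat([d1, e7], 3)
print(skip.shape)                                        # (1, 2, 2, 1024)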
{"hexsha": "d048e7b81a23bad565f94e4d50284732c2009525", "size": 20081, "ext": "py", "lang": "Python", "max_stars_repo_path": "built-in/TensorFlow/Official/cv/Image_translation/Pix2Pix_ID0359_for_TensorFlow/model.py", "max_stars_repo_name": "Ascend/modelzoo", "max_stars_repo_head_hexsha": "f018cfed33dbb1cc2110b9ea2e233333f71cc509", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2020-12-13T08:34:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T15:17:17.000Z", "max_issues_repo_path": "built-in/TensorFlow/Official/cv/Image_translation/Pix2Pix_ID0359_for_TensorFlow/model.py", "max_issues_repo_name": "Ascend/modelzoo", "max_issues_repo_head_hexsha": "f018cfed33dbb1cc2110b9ea2e233333f71cc509", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-20T03:11:05.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-20T06:53:39.000Z", "max_forks_repo_path": "built-in/TensorFlow/Official/cv/Image_translation/Pix2Pix_ID0359_for_TensorFlow/model.py", "max_forks_repo_name": "Ascend/modelzoo", "max_forks_repo_head_hexsha": "f018cfed33dbb1cc2110b9ea2e233333f71cc509", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-10T12:40:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-17T07:55:15.000Z", "avg_line_length": 62.3633540373, "max_line_length": 619, "alphanum_fraction": 0.6126686918, "include": true, "reason": "import numpy", "num_tokens": 5547}
[STATEMENT]
lemma coclop_coextensive: "coclop f \<Longrightarrow> f \<le> id"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. coclop f \<Longrightarrow> f \<le> id
[PROOF STEP]
by (simp add: coclop_def)
{"llama_tokens": 84, "file": "Order_Lattice_Props_Closure_Operators", "length": 1}
import tetris import random # import math import numpy as np # import pickle # from tqdm import tqdm # from collections import deque # from keras.models import Sequential # from keras.layers import Dense # from keras.optimizers import Adam class Tetris: def __init__(self): print("init") self.score = 0 self.previous_score = 0 self.move_columns = 4 self.board_height = 20 self.board_width = 10 self.state_shape = (self.board_height, self.board_width) self.boardArray = np.zeros(self.state_shape) self.randomMove = np.zeros((1, 4)) self.board = tetris.Board() self.current_state = self.getBoardArray(self.board.rend(0, 0, 0, 0)) self.next_state = self.getBoardArray(self.board.rend(0, 0, 0, 0)) self.numberOfMoves = 0 self.movesArray = np.zeros((0, 0)) def getMovesArray(self, moves): print("Get moves array") self.numberOfMoves = self.board.getNumberOfMoves() self.movesArray = np.zeros((self.numberOfMoves, self.move_columns)) for i in range(self.numberOfMoves): for j in range(self.move_columns): self.movesArray[i, j] = self.board.getValueOfVectorInts( moves, i, j) return self.movesArray def getBoardArray(self, rend): print("get board array") for i in range(self.board_height): for j in range(self.board_width): self.boardArray[i, j] = self.board.getValueOfVectorBools( rend, i, j) return self.boardArray def getReward(self, score, previous_score): print("get reward") reward = score - previous_score return reward def best_action(self, state): print("best_action") # Return index of movesArray that gives the max reward action = 0 return action def reset(self): print("reset") self.board.reset() self.boardArray = np.zeros(self.state_shape) self.current_state = self.getBoardArray(self.board.rend(0, 0, 0, 0)) self.next_state = self.getBoardArray(self.board.rend(0, 0, 0, 0)) self.previous_score = 0 self.score = 0 # class DQN: # def __init__(self, state_shape, experience_size=200, discount=0.95, epsilon=1, epsilon_min=0, epsilon_stop_episode=50): # self.state_shape = state_shape # self.experiences = [] # self.experience_size = experience_size # self.discount = discount # self.epsilon = epsilon # self.epsilon_min = epsilon_min # self.epsilon_decay = (self.epsilon - self.epsilon_min) / (epsilon_stop_episode) # self.model = self.build_model() # # def build_model(self): # model = Sequential() # model.add(Dense(32, input_shape=self.state_shape, # activation="relu", batch_size=None)) # model.add(Dense(32, activation="relu", batch_size=None)) # model.add(Dense(10, activation="linear", batch_size=None)) # model.compile(loss="mse", optimizer="adam") # return model # # def add_experience(self, current_state, next_state, action, reward): # self.experiences.append((current_state, next_state, action, reward)) # # def train(self, batch_size=32, epochs=3): # batch = random.sample(self.experiences, batch_size) # # for current_state, next_state, action, reward in batch: # next_state = np.expand_dims(next_state, axis=0) # target = reward + self.discount * np.amax(self.model.predict(next_state)[0]) # # print("target: ", target) # current_state = np.expand_dims(current_state, axis=0) # target_f = self.model.predict(current_state) # # print("target_f: ", target_f) # target_f[0][0] = target # # target_f[0][action] = target # self.model.fit(current_state, target_f, epochs=epochs, verbose=0) # # if self.epsilon > self.epsilon_min: # self.epsilon -= self.epsilon_decay # # def writeExperiencesToFile(self, filename): # with open(f"{filename}", "ab") as file: # pickle.dump(self.experiences, file) # # def readExperiencesFromFile(self, 
filename): # with open(f"{filename}", "rb") as file: # self.experiences = pickle.load(file) def collect_experiences(tetris): # Collect experiences where each experience consists of a tuple: # (current_state, next_state, action, reward) # These experiences are then used to train the DQN. for i in range(1000): print("current to next") tetris.current_state = tetris.next_state print("post current to next") state_str = "" for r in range(tetris.current_state.shape[0]): for c in range(tetris.current_state.shape[1]): if tetris.current_state[r][c] == 0: state_str += "-" else: state_str += "X" state_str += "\n" print(state_str) print("getting moves array") tetris.movesArray = tetris.getMovesArray(tetris.board.getMoves()) print("post getting moves array") if tetris.board.getNumberOfMoves() > 0: rowIndex = random.randint(0, tetris.board.getNumberOfMoves() - 1) else: tetris.reset() continue print("selecting action") action = tetris.movesArray[rowIndex] pieceIndex, row, col, rot = int(action[0]), int( action[1]), int(action[2]), int(action[3]) # print(f"pieceIndex: {pieceIndex}, row: {row}, col: {col}, rot: {rot}") print("checking is valid") notGameOver = tetris.board.isValid(pieceIndex, row, col, rot) print("post is valid") # print("notGameOver: ", notGameOver) if notGameOver: print("Performing: ", pieceIndex, row, col, rot, "(pieceIndex, row, col, rot)") tetris.board.place(pieceIndex, row, col, rot) print("Rendering: ", pieceIndex, row, col, rot, "(pieceIndex, row, col, rot)") render = tetris.board.rend(pieceIndex, row, col, rot) print("Getting board array") tetris.boardArray = tetris.getBoardArray(render) print("Post getting board array") tetris.next_state = tetris.boardArray tetris.previous_score = tetris.score print("get score") tetris.score = tetris.board.getScore() print("score is:") print(tetris.score) print("prev score is:") print(tetris.previous_score) reward = tetris.getReward(tetris.score, tetris.previous_score) print("post get reward") # print("next_state =\n", tetris.next_state) # print("action = ", action) # print("previous_score = ", tetris.previous_score) # print("reward = ", reward) # print("score = ", tetris.score) # dqn.add_experience(tetris.current_state, # tetris.next_state, action, reward) else: tetris.reset() print() # def train_model(tetris, dqn, batch_size, epochs, episodes, train_every): # scores = [] # # for episode in tqdm(range(episodes)): # tetris.current_state = tetris.reset() # notGameOver = True # while notGameOver: # tetris.current_state = tetris.next_state # # print("current_state =\n", tetris.current_state) # tetris.movesArray = tetris.getMovesArray(tetris.board.getMoves()) # # if tetris.board.getNumberOfMoves() > 0: # best_action = tetris.best_action(tetris.current_state) # else: # tetris.reset() # continue # # action = tetris.movesArray[best_action] # pieceIndex, row, col, rot = int(action[0]), int( # action[1]), int(action[2]), int(action[3]) # # print(f"pieceIndex: {pieceIndex}, row: {row}, col: {col}, rot: {rot}") # # notGameOver = tetris.board.isValid(pieceIndex, row, col, rot) # # print("notGameOver: ", notGameOver) # # tetris.board.place(pieceIndex, row, col, rot) # render = tetris.board.rend(pieceIndex, row, col, rot) # tetris.boardArray = tetris.getBoardArray(render) # tetris.next_state = tetris.boardArray # # tetris.previous_score = tetris.score # tetris.score = tetris.board.getScore() # reward = tetris.getReward(tetris.score, tetris.previous_score) # # print("next_state =\n", tetris.next_state) # # print("action = ", action) # # print("previous_score = ", 
tetris.previous_score) # # print("reward = ", reward) # # print("score = ", tetris.score) # dqn.add_experience(tetris.current_state, tetris.next_state, action, reward) # # scores.append(tetris.score) # # if episode % train_every == 0: # dqn.train(batch_size=batch_size, epochs=epochs) # # print("scores:\n", scores) def main(): tetris = Tetris() collect_experiences(tetris) # train_model(tetris, dqn, batch_size=32, epochs=3, episodes=20, train_every=5) if __name__ == "__main__": main()
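# --- Illustrative sketch; not part of the original script --------------------
# The commented-out DQN.train() above fits the network toward the standard
# Q-learning target, target = reward + discount * max_a' Q(next_state, a').
# A tiny NumPy version of that bookkeeping, with made-up numbers and a lookup
# array standing in for the Keras model:
import numpy as np

discount = 0.95
q_next = np.array([0.2, 1.5, -0.3])      # Q(next_state, a') estimates
reward = 10.0
target = reward + discount * np.max(q_next)

q_current = np.array([0.0, 0.5, 0.1])    # current predictions for current_state
action_index = 1
q_current[action_index] = target         # the regression target handed to model.fit
print(target, q_current)                 # 11.425 [ 0.     11.425  0.1  ]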
{"hexsha": "e955848b34e44bcada3b808ce371fa08b9e84521", "size": 9452, "ext": "py", "lang": "Python", "max_stars_repo_path": "boost/test-boost-tetris.py", "max_stars_repo_name": "TylerWasniowski/tetris", "max_stars_repo_head_hexsha": "8be9fdbf46d134c89e2e5d450c5148cfa6ad76de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "boost/test-boost-tetris.py", "max_issues_repo_name": "TylerWasniowski/tetris", "max_issues_repo_head_hexsha": "8be9fdbf46d134c89e2e5d450c5148cfa6ad76de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2020-01-28T22:21:00.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-10T00:36:48.000Z", "max_forks_repo_path": "boost/test-boost-tetris.py", "max_forks_repo_name": "TylerWasniowski/tetris", "max_forks_repo_head_hexsha": "8be9fdbf46d134c89e2e5d450c5148cfa6ad76de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.1129032258, "max_line_length": 125, "alphanum_fraction": 0.593101989, "include": true, "reason": "import numpy", "num_tokens": 2238}
import os import numpy as np import tensorflow as tf import cv2 import math import time import shutil import cfg from lpdr_net import LpdrNet from utils import data_reader, dataset from net.resnet import load_weights os.environ["CUDA_VISIBLE_DEVICES"] = '1' def train(): # define dataset configs = cfg.Config() configs.set_k(1) heads={'hm':1, 'wh':2, 'offset':2, 'hm_hp':4, 'hp_kp':8, 'hp_offset':2} img_dir = '/home/qinshuxin/datasets/ccpd/ccpd_mix' data_source = data_reader.DataReader(img_dir, config=configs) datasets = dataset.Dataset(data_source, batch_size=configs.BATCH_SIZE) in_imgs = tf.placeholder(dtype=tf.float32, shape=[None, None, None, 3], name='in_imgs') batch_hm = tf.placeholder(dtype=tf.float32, shape=[None, None, None, None], name='batch_hm') batch_wh = tf.placeholder(dtype=tf.float32, shape=[None, None, 2], name='batch_wh') batch_reg = tf.placeholder(dtype=tf.float32, shape=[None, None, 2], name='batch_reg') batch_reg_mask = tf.placeholder(dtype=tf.float32, shape=[None, None], name='batch_reg_mask') batch_ind = tf.placeholder(dtype=tf.float32, shape=[None, None], name='batch_ind') batch_hm_hp = tf.placeholder(dtype=tf.float32, shape=[None, None, None, 4], name='batch_hm_hp') batch_hp_off = tf.placeholder(dtype=tf.float32, shape=[None, None, 2], name='batch_hp_off') batch_hp_ind = tf.placeholder(dtype=tf.float32, shape=[None, None], name='batch_hp_ind') batch_hp_mask = tf.placeholder(dtype=tf.float32, shape=[None, None], name='batch_hp_mask') batch_kps = tf.placeholder(dtype=tf.float32, shape=[None, None, 8], name='batch_kps') batch_kps_mask = tf.placeholder(dtype=tf.float32, shape=[None, None, 8], name='batch_kps_mask') batch_labels = tf.placeholder(dtype=tf.float32, shape=[None, None, 13], name='batch_labels') targets = tf.sparse_placeholder(dtype=tf.int32, name='targets') # define model and loss model = LpdrNet(in_imgs, heads, is_training=True, cfgs=configs, labels=batch_labels) with tf.variable_scope('loss'): hm_loss, wh_loss, reg_loss, hm_hp_loss, kpt_loss, hm_off_loss = \ model.detect_loss(batch_hm, batch_wh, batch_reg, batch_reg_mask, batch_ind, batch_hm_hp, batch_kps, batch_kps_mask, batch_hp_off, batch_hp_mask, batch_hp_ind) det_loss = hm_loss + wh_loss + reg_loss + hm_hp_loss + kpt_loss + hm_off_loss rec_loss = model.recog_loss(targets) total_loss = det_loss + 10*rec_loss global_step = tf.train.create_global_step() training_variables = tf.trainable_variables() learning_rate = tf.train.exponential_decay(1e-3, global_step, decay_steps=1000, decay_rate=0.95, staircase=True) optimizer = tf.train.AdamOptimizer(learning_rate) grads_and_vars = optimizer.compute_gradients(total_loss, var_list=training_variables) clip_grad_var = [(g, v) if g is None else (tf.clip_by_norm(g, 10.), v) for g, v in grads_and_vars] update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) with tf.control_dependencies(update_ops): train_op = optimizer.apply_gradients(clip_grad_var, global_step=global_step, name='train_op') saver = tf.train.Saver(max_to_keep=1) saver_best = tf.train.Saver(max_to_keep=1) print(len(data_source)) # calculate edit distance between two sequences #seq_len = tf.constant(np.ones(configs.BATCH_SIZE, dtype=np.int32) * 24) #decoded, _ = tf.nn.ctc_beam_search_decoder(model.logit(), seq_len, beam_width=10, merge_repeated=False) #dis = tf.reduce_mean(tf.edit_distance(tf.cast(decoded[0], tf.int32), targets)) config = tf.ConfigProto() config.gpu_options.allow_growth = True with tf.Session(config=config) as sess: with tf.name_scope('summary'): tf.summary.scalar("learning_rate", 
learning_rate) tf.summary.scalar("det_loss", det_loss) tf.summary.scalar("rec_loss", rec_loss) tf.summary.scalar("total_loss", total_loss) logdir = "./log/" if os.path.exists(logdir): shutil.rmtree(logdir) os.mkdir(logdir) write_op = tf.summary.merge_all() summary_writer = tf.summary.FileWriter(logdir, graph=sess.graph) # train sess.run(tf.global_variables_initializer()) #load_weights(sess,'./pretrained_weights/resnet50.npy') #print('load pretrained weights resnet50!') saver.restore(sess, './weights18/lpdr-125000') print('load pretraned weights!') print('Global Variables: ', np.sum([np.prod(v.get_shape().as_list()) for v in tf.global_variables()])) print('Trainable Variables: ', np.sum([np.prod(v.get_shape().as_list()) for v in tf.trainable_variables()])) print('\n----------- start to train -----------\n') best_loss = 1 for epoch in range(1, 1+configs.epochs): epoch_loss = [] start = time.time() step_start = time.time() for data in datasets: imgs, hms, whs, regs, reg_masks, inds, hm_hps, hp_offsets, \ hp_inds, hp_masks, kpss, kps_masks, sparse_labels, labels = data feed_dict = {in_imgs:imgs, batch_hm:hms, batch_wh:whs, batch_reg:regs, batch_reg_mask:reg_masks, batch_ind:inds, batch_hm_hp:hm_hps, batch_hp_off:hp_offsets, batch_hp_ind:hp_inds, batch_hp_mask:hp_masks, batch_kps:kpss, batch_kps_mask:kps_masks, batch_labels:labels, targets:sparse_labels} _, summary, step_loss, det_los, rec_los, hm_los, wh_los, reg_los, hm_hp_los, kpt_los, hm_off_los, step, lr = \ sess.run([train_op, write_op, total_loss, det_loss, rec_loss, hm_loss, wh_loss, reg_loss, hm_hp_loss, \ kpt_loss, hm_off_loss, global_step, learning_rate], feed_dict=feed_dict) epoch_loss.append(step_loss) if step % 10 == 0 and step > 0: summary_writer.add_summary(summary, step) step_time = time.time() - step_start step_start = time.time() print(('Epoch:{}, Step:{}, loss:{:.3f}, lr:{:.6f}, det:{:.3f}, rec:{:.5f}, hm:{:2f}, wh:{:.2f}, ' + \ 'reg:{:.2f}, hp:{:.2f}, kpt:{:.2f}, hm_off:{:.2f}, time:{:.2f}').format(epoch, step, step_loss, lr, det_los, rec_los, hm_los, wh_los, reg_los, hm_hp_los, kpt_los, hm_off_los, step_time)) epoch_loss = np.mean(epoch_loss) print('Epoch:{}, average loss:{:.3f}, time:{:.2f}'.format(epoch, epoch_loss, time.time()-start)) saver.save(sess, "weights18/lpdr-mix", global_step=global_step) if epoch_loss < best_loss: saver_best.save(sess, 'weights18/lpdr-best-mix') best_loss = epoch_loss if __name__ == '__main__': train()
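# --- Illustrative sketch; not part of the original train.py ------------------
# The commented-out evaluation block above would report tf.edit_distance
# between the CTC beam-search decoding and the target label sequence.  The
# same quantity, the Levenshtein distance, in a few lines of plain Python:
def edit_distance(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, start=1):
            prev, dp[j] = dp[j], min(dp[j] + 1,            # delete ca
                                     dp[j - 1] + 1,        # insert cb
                                     prev + (ca != cb))    # substitute
    return dp[-1]

print(edit_distance("AB123", "AB123"))   # 0
print(edit_distance("AB123", "A8I23"))   # 2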
{"hexsha": "1e4fa9ee50e629f68373ea34486e207d0e65a8ae", "size": 7272, "ext": "py", "lang": "Python", "max_stars_repo_path": "train.py", "max_stars_repo_name": "shuxin-qin/TE-CLPDR-IUS", "max_stars_repo_head_hexsha": "65eba5d6368bd9477ac6a2cc999006a97edf913a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "train.py", "max_issues_repo_name": "shuxin-qin/TE-CLPDR-IUS", "max_issues_repo_head_hexsha": "65eba5d6368bd9477ac6a2cc999006a97edf913a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "train.py", "max_forks_repo_name": "shuxin-qin/TE-CLPDR-IUS", "max_forks_repo_head_hexsha": "65eba5d6368bd9477ac6a2cc999006a97edf913a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-07T10:35:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-07T10:35:13.000Z", "avg_line_length": 49.4693877551, "max_line_length": 126, "alphanum_fraction": 0.6247249725, "include": true, "reason": "import numpy", "num_tokens": 1819}
/* Copyright © 2017 Apple Inc. All rights reserved.
 *
 * Use of this source code is governed by a BSD-3-clause license that can
 * be found in the LICENSE.txt file or at https://opensource.org/licenses/BSD-3-Clause
 */
#include <string>
#include <logger/logger.hpp>
#include <boost/algorithm/string/predicate.hpp>
#include <fileio/curl_downloader.hpp>
#include <fileio/temp_files.hpp>
#include <fileio/file_download_cache.hpp>
#include <fileio/s3_api.hpp>
#include <fileio/sanitize_url.hpp>
#include "export.hpp"

namespace turi {

file_download_cache::~file_download_cache() {
  try {
    clear();
  } catch(...) {
    logstream(LOG_WARNING) << "Error clearing file download cache" << std::endl;
  }
}

std::string file_download_cache::get_file(const std::string& url) {
  // first check if the file has been downloaded.
  // if it has, return the downloaded location
  lock.lock();
  if (url_to_file.count(url)) {
    bool cache_dirty = false;
    if (boost::starts_with(url, "s3://")) {
      std::string last_modified = "";
      try {
        last_modified = get_s3_file_last_modified(url);
      } catch (...) {
        lock.unlock();
        throw;
      }
      if (last_modified != url_to_file[url].last_modified) {
        cache_dirty = true;
      }
    }
    if (!cache_dirty) {
      std::string ret = url_to_file[url].filename;
      lock.unlock();
      return ret;
    }
  }
  lock.unlock();
  // ok. we need to download the file.
  // It is either a local regular file, a file:/// URL, or a remote URL (http://).
  // For remote URLs, download_url downloads the file into a local temporary file.
  // For local URLs, download_url returns the path as is.
  std::string localfile;
  int status;
  bool is_temp;
  std::tie(status, is_temp, localfile) = download_url(url);
  if (status) {
    log_and_throw_io_failure("Fail to download from " + url + ". " +
                             get_curl_error_string(status));
  }
  if (is_temp) {
    // it is a remote file downloaded to a temporary location; remember it in the cache
    lock.lock();
    url_to_file[url].filename = localfile;
    url_to_file[url].last_modified = "";
    lock.unlock();
    return localfile;
  } else {
    // purely a local file. just return it
    return localfile;
  }
}

void file_download_cache::release_cache(const std::string& url) {
  // look for the file in the url_to_file map and delete it.
  lock.lock();
  if (url_to_file.count(url)) {
    delete_temp_file(url_to_file[url].filename);
    url_to_file.erase(url);
  }
  lock.unlock();
}

EXPORT file_download_cache& file_download_cache::get_instance() {
  static file_download_cache cache;
  return cache;
}

EXPORT void file_download_cache::clear() {
  for(auto p: url_to_file) {
    delete_temp_file(p.second.filename);
  }
  url_to_file.clear();
}

} // namespace turi
{"hexsha": "60ec6adc274cea473ac0dc371077f8617ef80c4c", "size": 2797, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "src/fileio/file_download_cache.cpp", "max_stars_repo_name": "TimothyRHuertas/turicreate", "max_stars_repo_head_hexsha": "afa00bee56d168190c6f122e14c9fbc6656b4e97", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1.0, "max_stars_repo_stars_event_min_datetime": "2019-04-16T19:51:18.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-16T19:51:18.000Z", "max_issues_repo_path": "src/fileio/file_download_cache.cpp", "max_issues_repo_name": "tashby/turicreate", "max_issues_repo_head_hexsha": "7f07ce795833d0c56c72b3a1fb9339bed6d178d1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 3.0, "max_issues_repo_issues_event_min_datetime": "2021-09-08T02:18:00.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:39:44.000Z", "max_forks_repo_path": "src/fileio/file_download_cache.cpp", "max_forks_repo_name": "tashby/turicreate", "max_forks_repo_head_hexsha": "7f07ce795833d0c56c72b3a1fb9339bed6d178d1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1.0, "max_forks_repo_forks_event_min_datetime": "2020-10-21T17:46:28.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-21T17:46:28.000Z", "avg_line_length": 27.97, "max_line_length": 86, "alphanum_fraction": 0.6582052199, "num_tokens": 688}
import numpy as np import sklearn import math from scipy.stats import chi2 import matplotlib.pyplot as plt import matplotlib.mlab as mlab from random import * from matplotlib.patches import Ellipse from numpy.linalg import cholesky import pandas as pd import matplotlib as mpl import seaborn as snss # https://github.com/joferkington/oost_paper_code/blob/master/error_ellipse.py def plot_point_cov(points, nstd=2, ax=None, **kwargs): """ Plots an `nstd` sigma ellipse based on the mean and covariance of a point "cloud" (points, an Nx2 array). Parameters ---------- points : An Nx2 array of the data points. nstd : The radius of the ellipse in numbers of standard deviations. Defaults to 2 standard deviations. ax : The axis that the ellipse will be plotted on. Defaults to the current axis. Additional keyword arguments are pass on to the ellipse patch. Returns ------- A matplotlib ellipse artist """ pos = points.mean(axis=0) cov = np.cov(points, rowvar=False) return plot_cov_ellipse(cov, pos, nstd, ax, **kwargs) def plot_cov_ellipse(cov, pos, nstd=2, ax=None, **kwargs): """ Plots an `nstd` sigma error ellipse based on the specified covariance matrix (`cov`). Additional keyword arguments are passed on to the ellipse patch artist. Parameters ---------- cov : The 2x2 covariance matrix to base the ellipse on pos : The location of the center of the ellipse. Expects a 2-element sequence of [x0, y0]. nstd : The radius of the ellipse in numbers of standard deviations. Defaults to 2 standard deviations. ax : The axis that the ellipse will be plotted on. Defaults to the current axis. Additional keyword arguments are pass on to the ellipse patch. Returns ------- A matplotlib ellipse artist """ def eigsorted(cov): vals, vecs = np.linalg.eigh(cov) order = vals.argsort()[::-1] return vals[order], vecs[:,order] if ax is None: ax = plt.gca() vals, vecs = eigsorted(cov) theta = np.degrees(np.arctan2(*vecs[:,0][::-1])) # Width and height are "full" widths, not radius width, height = 2 * np.sqrt(nstd * vals) ellip = Ellipse(xy=pos, width=width, height=height, angle=theta, **kwargs) ax.add_artist(ellip) return ellip # generate a bivariate Gaussian distribution points set def generate_gaussian(mu, sigma, sample_num = 300): # mu = np.array([[1, 5]]) # sigma = np.array([[1, 0.5], [1.5, 3]]) R = cholesky(sigma) s = np.dot(np.random.randn(sample_num, 2), R) + mu return s # plt.plot(s[:,0], s[:,1], 'o') # plt.show() def plot_bivariate_gaussian(**kwargs): mu = np.array([[10, 5]]) sigma = np.array([[3, 0.7], [0.7, 2]]) s1 = generate_gaussian(mu, sigma, sample_num=1000) mu = np.array([[15, 20]]) sigma = np.array([[3, -0.3], [-0.3, 6]]) s2 = generate_gaussian(mu, sigma, sample_num=1000) # kwrg = {'edgecolor':'k', 'linewidth':0.5} plt.scatter(s1[:,0], s1[:,1], c='b', marker='o', s=5) plt.scatter(s2[:,0], s2[:,1], c='b', marker='o', s=5) plot_point_cov(s1, nstd = np.sqrt(chi2.ppf(0.99, 2)), **kwargs) plot_point_cov(s1, nstd = np.sqrt(chi2.ppf(0.999, 2)), **kwargs) plot_point_cov(s1, nstd = np.sqrt(chi2.ppf(0.9999, 2)), **kwargs) plot_point_cov(s2, nstd = np.sqrt(chi2.ppf(0.99, 2)), **kwargs) plot_point_cov(s2, nstd = np.sqrt(chi2.ppf(0.999, 2)), **kwargs) plt.show() plt.xlim(0, 25) plt.ylim(-5, 30) return plt.gca() def plot_single_gaussian(): mu = np.array([[1, 5]]) sigma = np.array([[1, 0.5], [1.5, 3]]) s1 = generate_gaussian(mu, sigma) mu = np.array([[4, 11]]) sigma = np.array([[2.4, 3.1], [1.5, 3.7]]) s2 = generate_gaussian(mu, sigma) X = np.hstack((s1[:,0], s2[:,0])) Y = np.hstack((s1[:,1], s2[:,1])) X.shape = (600, 1) Y.shape 
= (600, 1) points = np.c_[X,Y] kwrg = {'edgecolor':'k', 'linewidth':0.5} plt.plot(s1[:,0], s1[:,1], 'go') plt.plot(s2[:,0], s2[:,1], 'go') plot_point_cov(points, nstd = 2, alpha = 0.7, color = 'pink', **kwrg) plt.show() # %% Exibe exemplo de gaussianas from scipy.spatial import distance np.random.seed(10000) mu1, mu2 = np.array([[10, 5]]), np.array([[10, 17]]) sigma1, sigma2 = np.array([[3, 0.7], [0.7, 2]]), np.array([[3, -0.3], [-0.3, 6]]) sr1 = generate_gaussian(mu1, sigma1, sample_num=50) sr2 = generate_gaussian(mu2, sigma2, sample_num=50) nstd_80 = np.sqrt(chi2.ppf(0.8, 2)) nstd_99 = np.sqrt(chi2.ppf(0.99, 2)) nstd_999 = np.sqrt(chi2.ppf(0.999, 2)) fig, ax = plt.subplots() ax.set_xlim(3, 16) ax.set_ylim(-1, 12) ax.scatter(sr1[:, 0], sr1[:, 1], s=10, zorder=10**2, marker='o') ax.scatter(mu1[0, 0], mu1[0, 1], color='tab:red', s=17, zorder=10**2, marker='D') # ax.scatter(sr2[:, 0], sr2[:, 1], s=4, zorder=10**2) ellkargs = {'edgecolor': 'b', 'linewidth': 0.5, 'fill': True, 'linestyle': '-', 'facecolor': 'tab:blue', 'alpha': 0.2} plot_cov_ellipse(sigma1, mu1[0, :], nstd_99, ax=ax, **ellkargs) dist = distance.cdist(sr1, mu1, metric='mahalanobis', VI=np.linalg.pinv(sigma1)) idx = np.argmax(dist) max_x = sr1[idx] ax.text(max_x[0] - 3, max_x[1] + 0.6, '$s_{lk}(\\mu, x) < \\tau$', fontdict={'fontsize': 10, 'fontweight': 'normal'}) ax.text(max_x[0] + 6, max_x[1] + 1.5, '$s_{lk}(\\mu, x_i) \\geq \\tau$', fontdict={'fontsize': 10, 'fontweight': 'normal'}) ax.set_xlabel('$x_1$') ax.set_ylabel('$x_2$') # plot_cov_ellipse(sigma2, mu2[0, :], nstd_99, ax=ax, **ellkargs) # %% Distância entre todos from scipy.spatial import distance np.random.seed(10000) mu1 = np.array([[10, 5]]) sigma1 = np.array([[3, 0.7], [0.7, 2]]) sr1 = generate_gaussian(mu1, sigma1, sample_num=50) nstd_80 = np.sqrt(chi2.ppf(0.8, 2)) nstd_99 = np.sqrt(chi2.ppf(0.99, 2)) nstd_999 = np.sqrt(chi2.ppf(0.999, 2)) fig, ax = plt.subplots(2, 3, sharex=True, sharey=True) # ax.scatter(sr2[:, 0], sr2[:, 1], s=4, zorder=10**2) ellkargs_m = {'edgecolor': 'b', 'linewidth': 0.5, 'fill': True, 'linestyle': '-', 'facecolor': 'tab:blue', 'alpha': 0.2} dist = distance.cdist(sr1, mu1, metric='mahalanobis', VI=np.linalg.pinv(sigma1)) idx = np.argmax(dist) max_x = sr1[idx] sr1_all = sr1.copy() sr1 = sr1[np.random.randint(0, 50, 10)[5:], :] sr1 = np.vstack((sr1, max_x[None, :])) # ax.scatter(mu1[0, 0], mu1[0, 1], color='tab:red', s=17, zorder=10**2, marker='D') colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple', 'tab:cyan'] markers = ['o', 's', '8', 'D', '*', 'P'] t = ['(a)', '(b)', '(c)', '(d)', '(e)', '(f)'] for i, x in enumerate(sr1): ellkargs = {'edgecolor': 'b', 'linewidth': 0.5, 'fill': True, 'linestyle': '-', 'facecolor': colors[0], 'alpha': 0.2} col = min(i, 2) if (i / 6) < 0.5: row = 0 else: row = 1 col = i - 4 # fig, ax = plt.subplots() ax[row, col].scatter(sr1_all[:, 0], sr1_all[:, 1], s=10, zorder=10**2, marker='o', color='b') # plot_cov_ellipse(sigma1, mu1[0, :], nstd_99, ax=ax[row, col], **ellkargs_m) plot_cov_ellipse(sigma1, x, nstd_99, ax=ax[row, col], **ellkargs) ax[row, col].scatter(sr1[i, 0], sr1[i, 1], s=10, zorder=100**2, marker='o', color='tab:red') ax[row, col].set_xlim(0, 20) ax[row, col].set_ylim(-2, 14) ax[row, col].spines['top'].set_color('white') ax[row, col].spines['right'].set_color('white') ax[row, col].spines['top'].set_color('white') ax[row, col].spines['right'].set_color('white') # ax[row, col].set_xticks([]) # ax[row, col].set_yticks([]) # ax[row, col].set_title('$%s$' % t[i], fontdict={'fontsize': 
10}) # for xa in sr1_all: # ax[row, col].plot([x[0], xa[0]], [x[1], xa[1]], linestyle=':', alpha=0.5, linewidth=1, color='tab:green') # ax.scatter(sr1_all[:, 0], sr1_all[:, 1], s=10, zorder=10**2, marker='o') # plot_cov_ellipse(sigma1, mu1[0, :], nstd_99, ax=ax, **ellkargs_m) # plot_cov_ellipse(sigma1, x, nstd_99, ax=ax, **ellkargs) # ax.scatter(sr1[i, 0], sr1[i, 1], s=17, zorder=100**2, marker='D', color='tab:red') # ax.set_xlim(0, 19) # ax.set_ylim(-1, 15) # # ax.set_xticks([]) # # ax.set_yticks([]) # # ax.set_title('$%s$' % t[i], fontdict={'fontsize': 10}) # for xa in sr1_all: # ax.plot([x[0], xa[0]], [x[1], xa[1]], linestyle=':', alpha=0.5, linewidth=1, color='tab:green') # ax.set_xlabel('$x_1$') # ax.set_ylabel('$x_2$') # %% Região de composição (Apenas uma imagem) from scipy.spatial import distance np.random.seed(10000) mu1 = np.array([[10, 5]]) sigma1 = np.array([[3, 0.7], [0.7, 2]]) sr1 = generate_gaussian(mu1, sigma1, sample_num=50) nstd_95 = np.sqrt(chi2.ppf(0.99, 2)) nstd_90 = np.sqrt(chi2.ppf(0.9, 2)) nstd_10 = np.sqrt(chi2.ppf(0.05, 2)) fig, ax = plt.subplots() dist = distance.cdist(sr1, mu1, metric='mahalanobis', VI=np.linalg.pinv(sigma1)) idxs = dist > (nstd_95) no_in = sr1[idxs[:, 0]] sr1 = sr1[np.invert(idxs[:, 0])] colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple', 'tab:cyan'] markers = ['o', 's', '8', 'D', '*', 'P'] t = ['(a)', '(b)', '(c)', '(d)', '(e)', '(f)'] ellkargs = {'edgecolor': 'purple', 'linewidth': 0.5, 'fill': True, 'linestyle': '-', 'facecolor': 'c', 'alpha': 0.4} ellkargs_m = {'edgecolor': 'b', 'linewidth': 0.5, 'fill': True, 'linestyle': '-', 'facecolor': 'tab:blue', 'alpha': 0.2} ax.scatter(mu1[0, 0], mu1[0, 1], color='tab:red', s=20, zorder=10**2, marker='D') ax.scatter(sr1[:, 0], sr1[:, 1], s=10, zorder=10**2, marker='o', color='b') ax.scatter(no_in[:, 0], no_in[:, 1], s=10, zorder=10**2, marker='o', color='b') plot_cov_ellipse(sigma1, mu1[0, :], nstd_95, ax=ax, **ellkargs_m) ax.set_xlim(0, 17) ax.set_ylim(-1, 15) for i, x in enumerate(sr1): plot_cov_ellipse(sigma1, x, nstd_10, ax=ax, **ellkargs) ax.text(10, 10, 'class region $(d^2 \\leq d^2_{class})$', fontdict={'fontsize': 10}) ax.text(10, 12, 'point region $(d^2 \\leq d^2_{point})$', fontdict={'fontsize': 10}) ax.set_xlabel('$x_1$') ax.set_ylabel('$x_2$') # %% Região de composição (exemplos variando \gamma_{point}) from scipy.spatial import distance import numpy as np np.random.seed(10000) mu1 = np.array([[10, 5]]) sigma1 = np.array([[3, 0.7], [0.7, 2]]) sr1 = generate_gaussian(mu1, sigma1, sample_num=50) nstd_95 = np.sqrt(chi2.ppf(0.99, 2)) nstd_90 = np.sqrt(chi2.ppf(0.9, 2)) nstd_10 = np.sqrt(chi2.ppf(0.05, 2)) ns_ = [0.05, 0.2] nstd = np.sqrt(chi2.ppf(ns_, 2)) fig, ax = plt.subplots(2, 2, sharex=True, sharey=True) plt.subplots_adjust(wspace=0.05, hspace=0.2, left=0.1, right=0.95, top=0.95, bottom=0.1) dist = distance.cdist(sr1, mu1, metric='mahalanobis', VI=np.linalg.pinv(sigma1)) idxs = dist > (nstd_95) no_in = sr1[idxs[:, 0]] sr1 = sr1[np.invert(idxs[:, 0])] colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple', 'tab:cyan'] markers = ['o', 's', '8', 'D', '*', 'P'] t = ['(a)', '(b)', '(c)', '(d)', '(e)', '(f)'] ellkargs = {'edgecolor': 'purple', 'linewidth': 0.5, 'fill': True, 'linestyle': '-', 'facecolor': 'c', 'alpha': 0.4} ellkargs_m = {'edgecolor': 'b', 'linewidth': 0.5, 'fill': True, 'linestyle': '-', 'facecolor': 'tab:blue', 'alpha': 0.2} def get_representative(mat, ns): idx = np.random.randint(0, mat.shape[0]) src = mat.copy() c = src[idx] src = 
np.delete(src, idx, axis=0) p = [] while True: p.append(c) d = distance.cdist(src, c[None, :], metric='mahalanobis', VI=np.linalg.pinv(sigma1)) src = src[(d > ns)[:, 0], :] if src.shape[0] == 0: break else: if src.shape[0] == 1: p.append(src[0]) break idx = np.random.randint(0, src.shape[0]) c = src[idx] src = np.delete(src, idx, axis=0) return np.array(p) for i, ns in enumerate(nstd): ax[i, 0].scatter(sr1[:, 0], sr1[:, 1], s=10, zorder=10**2, marker='o', color='b') ax[i, 0].scatter(no_in[:, 0], no_in[:, 1], s=10, zorder=10**2, marker='o', color='b') plot_cov_ellipse(sigma1, mu1[0, :], nstd_95, ax=ax[i, 0], **ellkargs_m) plot_cov_ellipse(sigma1, mu1[0, :], nstd_95, ax=ax[i, 1], **ellkargs_m) ax[i, 0].set_xlim(0, 19) ax[i, 0].set_ylim(-1, 15) ax[i, 1].set_xlim(0, 19) ax[i, 1].set_ylim(-1, 15) ax[i, 0].spines['top'].set_color('white') ax[i, 0].spines['right'].set_color('white') ax[i, 1].spines['top'].set_color('white') ax[i, 1].spines['right'].set_color('white') fig.text(10,10, '$\gamma_{point} = %.2f$' % ns_[i], zorder=10**100, fontdict={'fontsize': 10}) ax[i, 1].set_title('$\gamma_{point} = %.2f$' % ns_[i], fontdict={'fontsize': 10}) ax[0, 0].set_ylabel('$x_2$') ax[1, 0].set_ylabel('$x_2$') ax[1, 0].set_xlabel('$x_1$') ax[1, 1].set_xlabel('$x_1$') for j, x in enumerate(sr1): plot_cov_ellipse(sigma1, x, ns, ax=ax[i, 0], **ellkargs) src = get_representative(sr1, ns) ax[i, 1].scatter(src[:, 0], src[:, 1], s=10, zorder=10**2, marker='o', color='b') for j, x in enumerate(src): plot_cov_ellipse(sigma1, x, ns, ax=ax[i, 1], **ellkargs) # %% alphas = lambda d, t: (-np.log(t) / (d)) t = 0.1 d95 = chi2.ppf(.95, 2) d99 = chi2.ppf(.99, 2) alpha95 = alphas(d95, t) alpha99 = alphas(d99, t) x = np.linspace(0, max(d95, d99) + 2) y95 = np.exp(-alpha95 * x) y99 = np.exp(-alpha99 * x) # plt.plot(x, y95, lw=1.5) plt.plot(x, y99, lw=1.5) plt.yticks(np.arange(0, 1.1, 0.1)) # plt.xticks((d99, )) plt.grid(linestyle=':') # plt.scatter(d95, t, s=6 ** 2, zorder=1000) plt.scatter(d99, t, s=6 ** 2, zorder=1000) plt.text(d99 - 0.35, t + 0.03, '$s_{lk}(d^2_{max})$', fontdict={'fontsize': 10, 'fontweight': 'normal'}) plt.ylabel('$s_{lk}(d^2)$') plt.xlabel('statistic distance $(d^2)$') # %% Exibe clusters (2d) em `alg`. import numpy as np import sklearn import math from scipy.stats import chi2 import matplotlib.pyplot as plt import matplotlib.mlab as mlab from random import * from matplotlib.patches import Ellipse from numpy.linalg import cholesky import pandas as pd import matplotlib as mpl import seaborn as snss # https://github.com/joferkington/oost_paper_code/blob/master/error_ellipse.py def plot_point_cov(points, nstd=2, ax=None, **kwargs): """ Plots an `nstd` sigma ellipse based on the mean and covariance of a point "cloud" (points, an Nx2 array). Parameters ---------- points : An Nx2 array of the data points. nstd : The radius of the ellipse in numbers of standard deviations. Defaults to 2 standard deviations. ax : The axis that the ellipse will be plotted on. Defaults to the current axis. Additional keyword arguments are pass on to the ellipse patch. Returns ------- A matplotlib ellipse artist """ pos = points.mean(axis=0) cov = np.cov(points, rowvar=False) return plot_cov_ellipse(cov, pos, nstd, ax, **kwargs) def plot_cov_ellipse(cov, pos, nstd=2, ax=None, **kwargs): """ Plots an `nstd` sigma error ellipse based on the specified covariance matrix (`cov`). Additional keyword arguments are passed on to the ellipse patch artist. 
Parameters ---------- cov : The 2x2 covariance matrix to base the ellipse on pos : The location of the center of the ellipse. Expects a 2-element sequence of [x0, y0]. nstd : The radius of the ellipse in numbers of standard deviations. Defaults to 2 standard deviations. ax : The axis that the ellipse will be plotted on. Defaults to the current axis. Additional keyword arguments are pass on to the ellipse patch. Returns ------- A matplotlib ellipse artist """ def eigsorted(cov): vals, vecs = np.linalg.eigh(cov) order = vals.argsort()[::-1] return vals[order], vecs[:,order] if ax is None: ax = plt.gca() vals, vecs = eigsorted(cov) theta = np.degrees(np.arctan2(*vecs[:,0][::-1])) # Width and height are "full" widths, not radius width, height = 2 * np.sqrt(nstd * vals) ellip = Ellipse(xy=pos, width=width, height=height, angle=theta, **kwargs) ax.add_artist(ellip) return ellip np.random.seed(9) ellkargs = {'edgecolor': 'b', 'linewidth': 0.5, 'fill': True, 'linestyle': '-', 'facecolor': 'tab:blue', 'alpha': 0.2} fig, ax = plt.subplots() # ax.set_xlim(-10, 170) # ax.set_ylim(-10, 180) data = pd.read_csv('data/ruspini.csv', sep=';') # data = (data - data.min()) / (data.max() - data.min()) # data = pd.read_csv('data/entrada.csv', sep=';').iloc[:, [37, 38]] IdCurto = np.array(output) idx = np.random.choice(np.arange(0, 148), 148, replace=False) colors = np.array(list(mpl.colors.cnames.keys()))[idx.tolist()] nstd = np.sqrt(chi2.ppf(CONF_MAT_D, 2)) for i, cluster in enumerate(esbm.manager.values()): mu = cluster.mu sigma = np.linalg.pinv(cluster.inv_sigma) D = cluster.D plot_cov_ellipse(sigma, mu[0, :], nstd, ax=ax, **ellkargs) ax.scatter(data.iloc[IdCurto == i + 1, 1], data.iloc[IdCurto == i + 1, 2], s=12, c=colors[i+1], marker='o', edgecolors='k', lw=0.5 ) ax.scatter(D[:, 0], D[:, 1], s=17, zorder=20**2, marker='s', c=colors[i + 1], edgecolors='k', lw=0.2) ax.scatter(mu[0, 0], mu[0, 1], color='tab:red', s=17, zorder=10**2, marker='s') # %% Execução eSBM import pandas as pd import numpy as np np.random.seed(10) data = pd.read_csv('data/ruspini.csv', sep=';') # data = pd.read_csv('data/entrada.csv', sep=';') # data = (data - data.min()) / (data.max() - data.min()) # data_norm.iloc[:, 1:39].plot(legend=None) # ---------------------- CONF_CLUSTER = 0.7 CONF_ADD_POINT = 0.1 CONF_RESIDUE = 0.44 CONF_MAT_D = 0.6 W = 10 config = { 'data_path': 'data', 'sep': ';', 'FROM_SCRATCH': 'TRUE', 'W': W, 'GAMMA_STATS': CONF_CLUSTER, 'GAMMA_POINT': CONF_ADD_POINT, 'GAMMA_CLASS' : CONF_MAT_D, 'PR_ERROR': CONF_RESIDUE, 'TAU': 0.0001 } from algorithms.esbm import eSBM from algorithms.ctools.log import Log from matplotlib.animation import FuncAnimation from matplotlib import pyplot as plt log = Log('{}/{}.log'.format(config['data_path'], 'run'), 'run', True, True) esbm = eSBM(log, config) output = [] q = [] for row in data.iterrows(): row = row[1] timestamp = row['Timestamp'] x = row.values[1:3].astype(float) info, quality = esbm.process_input(x[None, :], timestamp) output.append(info['IdCurto']) q.append(quality) # ---------------------------- # %% Animação base 2d import matplotlib as mpl from matplotlib.patches import Ellipse import pandas as pd import numpy as np from scipy.stats import chi2 from scipy import stats np.random.seed(1011) sigma = np.array([[10, 7], [7, 15]]) mu = np.array([10, 10]) g1 = stats.multivariate_normal.rvs(mean=mu, cov=sigma, size=35) g2 = stats.multivariate_normal.rvs(mean=mu * 5.5, cov=sigma * -1.2, size=35) g3 = stats.multivariate_normal.rvs(mean=mu * 2.5, cov=sigma * -0.8, size=35) data = 
pd.read_csv('data/ruspini.csv', sep=';') # data = pd.DataFrame(columns=['Timestamp', 'x', 'y']) # d = np.vstack((g1, g2, g3)) # data.x = d[:, 0] # data.y = d[:, 1] # data = pd.read_csv('data/entrada.csv', sep=';') # data = (data - data.min()) / (data.max() - data.min()) # data_norm.iloc[:, 1:39].plot(legend=None) # ---------------------- CONF_CLUSTER = 0.999 CONF_ADD_POINT = 0.2 CONF_RESIDUE = 0.75 CONF_MAT_D = 0.999 W = 10 config = { 'data_path': 'data', 'sep': ';', 'FROM_SCRATCH': 'true', 'W': W, 'GAMMA_STATS': CONF_CLUSTER, 'GAMMA_POINT': CONF_ADD_POINT, 'GAMMA_CLASS' : CONF_MAT_D, 'GAMMA_RESIDUE': CONF_RESIDUE, 'TAU': 1e-5 } from algorithms.esbm import eSBM from algorithms.ctools.log import Log from matplotlib.animation import FuncAnimation from matplotlib import animation from matplotlib import pyplot as plt log = Log('{}/{}.log'.format(config['data_path'], 'run'), 'run', True, True) esbm = eSBM(log, config) idx = np.random.choice(np.arange(0, 148), 148, replace=False) colors = np.array(list(mpl.colors.cnames.keys()))[idx.tolist()] unknow_color = 'k' markers = mpl.markers.MarkerStyle.filled_markers fig= plt.figure() ax = plt.subplot(111) ax.set_xlabel('$x_1$') ax.set_ylabel('$x_2$') ax.set_xlim(data.x.min() - 20, data.x.max() + 40) ax.set_ylim(data.y.min() - 20, data.y.max() + 30) # ax.set_xlim(data.x.min() - 0.1, data.x.max() + 0.1) # ax.set_ylim(data.y.min() - 0.1, data.y.max() + 0.1) fig.add_subplot(ax) pargs = {'s': 11, 'color': unknow_color, 'marker': 'o', 'edgecolors': 'k', 'lw' : 0.5} dargs = {'s': 14, 'marker': 's', 'edgecolors': 'k', 'lw' : 0.5} muargs = {'s': 11 ** 2, 'marker': '+', 'color': 'b', 'edgecolors': 'b', 'lw' : 1.2} ellkargs_c = {'edgecolor': 'b', 'linewidth': 0.6, 'fill': False, 'linestyle': '--', 'facecolor': 'tab:blue', 'alpha': 1} ellkargs_s = {'edgecolor': 'r', 'linewidth': 1, 'fill': False, 'linestyle': 'dotted', 'facecolor': 'tab:blue', 'alpha': 1} clusters = list(esbm.manager.values()) nstd_class = np.sqrt(chi2.ppf(CONF_MAT_D, 2)) nstd_stats = np.sqrt(chi2.ppf(CONF_RESIDUE, 2)) points = [] unknow_points = [] frames = [] ecclip1 = [] ecclip2 = [] mul = [] out =[] ests_p = [] ests_a = [] cp = None ell_conf = None def func(frame): global frames global points global unknow_points global mul global ecclip1, ecclip2 global ests_p, ests_a global cp, ell_conf, out print(frame) try: row = data.iloc[frame, :] timestamp = row['Timestamp'] x = row.values[1:3].astype(float) info, quality = esbm.process_input(x[None, :], timestamp) id_ = info['IdCurto'] ax.set_title('$t=%d$' % (frame + 1)) ests, c_max= esbm.ests p = data.iloc[frame, 1:3] [p.remove() for p in ests_p] [p.remove() for p in ests_a] [p.remove() for p in ecclip1] ests_p = [] ests_a = [] ecclip1 = [] if cp is not None: cp.remove() cp = ax.scatter(p.x, p.y, s=30, color='r', marker='8') for c in esbm.manager.values(): sigma = np.linalg.pinv(c.inv_sigma) ecclip1.append(plot_cov_ellipse(sigma, c.mu[0, :], nstd_class, ax, **ellkargs_c)) for e in ests: cl = esbm.manager[e[0]] sigma = np.linalg.pinv(cl.inv_sigma) ests_a.append(plot_cov_ellipse(sigma, e[1][-1][0, :], nstd_stats, ax, **ellkargs_s)) ests_p.append(ax.scatter(e[1][-1][0, 0], e[1][-1][0, 1], zorder=100**5, lw=0.5, edgecolor='k', s=60, marker='*', color=colors[cl.info['IdCurto'] - 1])) if quality == 64: unknow_points.append(ax.scatter(p.x, p.y, **pargs)) frames.append(frame) else: c = esbm.manager[info['IdLongo']] [p.remove() for p in unknow_points] # [u.remove() for u in points] [g.remove() for g in mul] unknow_points = [] points = [] mul = [] 
ax.scatter(data.iloc[frames, 1], data.iloc[frames, 2], color=colors[id_ - 1], s=9, marker='o', lw=0.5, edgecolor='k') ax.scatter(p.x, p.y, color=colors[id_ - 1], s=11, marker='o', lw=0.5, edgecolor='k') points.append(ax.scatter(c.D[:, 0], c.D[:, 1], color=colors[id_ - 1], **dargs)) mul = [ax.scatter(c.mu[0, 0], c.mu[0, 1], **muargs) for c in esbm.manager.values()] frames = [] # if frame + 1 == data.shape[0]: # for e in esbm.manager.values(): # ax.text(10 + e.info['IdCurto'] ** 3, 10, '$\hat{x}_{t, %d}$' % e.info['IdCurto'], fontdict={'fontsize': 9, 'fontweight': 'normal'}) except Exception as e: print(e) # plt.savefig('data/figs/fig_%d.svg' % frame) f = np.arange(0, data.shape[0]) Writer = animation.writers['ffmpeg'] writer = Writer(fps=5, metadata=dict(artist='Me'), bitrate=1800) funcanim = FuncAnimation(fig, func, init_func=lambda : False, frames=f, interval=30, repeat=False, cache_frame_data=False, save_count=1) funcanim.save('esbm_ruspini.mp4', writer) plt.show() # %% Animação 2d eSBM Plus import numpy as np import matplotlib as mpl from matplotlib import pyplot as plt from matplotlib.animation import FuncAnimation from matplotlib import animation from matplotlib.patches import Ellipse import pandas as pd from scipy.stats import chi2 from scipy import stats from fdiidf.helpers import data as data_m from fdiidf.evolving.clustering import esbm as m_esbm np.random.seed(1011) ## Build data seed = 98 np.random.seed(seed) mu_1, mu_2 = 1, 30 sigma_1, sigma_2 = 2, 0.5 num_samples = 3000 changes = {'incip': [ {'add': 50, 'where': (1000, 1300)}, {'add': 0, 'where': (1300, 1450)}, {'add': -50, 'where': (1450, 1500)} ], 'sudden': [ {'add': -25, 'where': (2000, 2200)} ] } x_1, x_2 = data_m.build_2d_gauss_data(mu_1, mu_2, sigma_1, sigma_2, samples=num_samples, changes=changes, w=50, lags=0) data = np.zeros(shape=(num_samples, 2), dtype=float) data[:, 0] = x_1 data[:, 1] = x_2 # %% config = { 'dim': 2, 'k': 10, 'gamma_class': 0.99, 'gamma_point': 0.2, 'gamma_res': 0.5, 'tau': 0.00001 } esbm = m_esbm.eSBMPlus(**config) output = [] for x in data: x = x.reshape(1, 2) q, cstar = esbm.process_input(x) output.append(cstar) fig, axes = plt.subplots(2, 1, sharex=True) axes[0].plot(data) axes[1].plot(output) # %% from fdiidf.evolving.clustering import esbm from fdiidf.diagnosis import contribution esbm_ins = esbm.eSBMPlus esbm_config = { 'dim': 2, 'k': 10, 'gamma_class': 0.99, 'gamma_point': 0.2, 'gamma_res': 0.5, 'tau': 0.00001 } grbc_ins = contribution.GRBC grbc_config = { 'w_train': 33, 'w_diag': 5, 'n_pc': 0.95, 'conf_level': 0.99, 'index': 'combined', 'lags': 0 } ## Monitoring from fdiidf.monitoring import Monitor monitor = Monitor(clustering_method=(esbm_ins, esbm_config), diagnosis_method=(grbc_ins, grbc_config)) output = [] for i, x in enumerate(data): x = x.reshape(1, 2) print('Sample: ', i + 1) with np.errstate(all='raise'): res = monitor.process_input(x) output.append(res) # %% ## Buil animation config = { 'dim': 2, 'k': 10, 'gamma_class': 0.999, 'gamma_point': 0.2, 'gamma_res': 0.5, 'tau': 0.00001 } esbm = m_esbm.eSBMPlus(**config) idx = np.random.choice(np.arange(0, 148), 148, replace=False) colors = np.array(list(mpl.colors.cnames.keys()))[idx.tolist()] unknow_color = 'k' markers = mpl.markers.MarkerStyle.filled_markers fig= plt.figure() ax = plt.subplot(111) ax.set_xlabel('$x_1$') ax.set_ylabel('$x_2$') # ax.set_xlim(data.x.min() - 20, data.x.max() + 40) # ax.set_ylim(data.y.min() - 20, data.y.max() + 30) x_max = np.max(data[:, 0]) x_min = np.min(data[:, 0]) y_max = np.max(data[:, 1]) y_min = 
np.min(data[:, 1]) ax.set_xlim(x_min - 5, x_max + 5) ax.set_ylim(y_min - 5, y_max + 5) fig.add_subplot(ax) pargs = {'s': 11, 'color': unknow_color, 'marker': 'o', 'edgecolors': 'k', 'lw' : 0.5} dargs = {'s': 14, 'marker': 's', 'edgecolors': 'k', 'lw' : 0.5} muargs = {'s': 11 ** 2, 'marker': '+', 'color': 'b', 'edgecolors': 'b', 'lw' : 1.2} ellkargs_c = {'edgecolor': 'b', 'linewidth': 0.6, 'fill': False, 'linestyle': '--', 'facecolor': 'tab:blue', 'alpha': 1} ellkargs_s = {'edgecolor': 'r', 'linewidth': 1, 'fill': False, 'linestyle': 'dotted', 'facecolor': 'tab:blue', 'alpha': 1} nstd_class = chi2.ppf(config['gamma_class'], 2) nstd_stats = chi2.ppf(config['gamma_res'], 2) points = [] unknow_points = [] frames = [] ecclip1 = [] ecclip2 = [] mul = [] out =[] ests_p = [] ests_a = [] cp = None ell_conf = None def func(frame): global frames global points global unknow_points global mul global ecclip1, ecclip2 global ests_p, ests_a global cp, ell_conf, out print(frame) try: x = data[frame].reshape(1, 2) quality, cstar = esbm.process_input(x) id_ = cstar ax.set_title('$t=%d, c=%d$' % (frame + 1, len(esbm.clusters))) ests = esbm.estimations [p.remove() for p in ests_p] [p.remove() for p in ests_a] [p.remove() for p in ecclip1] ests_p = [] ests_a = [] ecclip1 = [] if cp is not None: cp.remove() cp = ax.scatter(x[0, 0], x[0, 1], s=30, color='r', marker='8') for c_id, xhat in esbm.estimations: # print(x, '\n', xhat) c = esbm.clusters[c_id] sigma = np.linalg.pinv(c.inv_cov) ecclip1.append(plot_cov_ellipse(sigma, c.mean[0, :], nstd_class, ax, **ellkargs_c)) ests_a.append(plot_cov_ellipse(sigma, xhat[0], nstd_stats, ax, **ellkargs_s)) ests_p.append(ax.scatter(xhat[0, 0], xhat[0, 1], zorder=100**5, lw=0.5, edgecolor='k', s=60, marker='*', color=colors[cstar - 1])) if not quality: unknow_points.append(ax.scatter(x[0, 0], x[0, 1], **pargs)) frames.append(frame) else: if cstar in esbm.clusters: c = esbm.clusters[cstar] [p.remove() for p in unknow_points] # [u.remove() for u in points] [g.remove() for g in mul] unknow_points = [] points = [] mul = [] ax.scatter(data[frames, 0], data[frames, 1], color=colors[id_ - 1], s=9, marker='o', lw=0.5, edgecolor='k') ax.scatter(x[0, 0], x[0, 1], color=colors[id_ - 1], s=11, marker='o', lw=0.5, edgecolor='k') points.append(ax.scatter(c.D[:, 0], c.D[:, 1], color=colors[id_ - 1], **dargs)) mul = [ax.scatter(c.mean[0, 0], c.mean[0, 1], **muargs) for c in esbm.clusters.values()] frames = [] # if frame + 1 == data.shape[0]: # for e in esbm.manager.values(): # ax.text(10 + e.info['IdCurto'] ** 3, 10, '$\hat{x}_{t, %d}$' % e.info['IdCurto'], fontdict={'fontsize': 9, 'fontweight': 'normal'}) except Exception as e: import traceback print(traceback.format_exc()) # plt.savefig('data/figs/fig_%d.svg' % frame) f = np.arange(0, data.shape[0]) # Writer = animation.writers['ffmpeg'] # writer = Writer(fps=5, metadata=dict(artist='Me'), bitrate=1800) funcanim = FuncAnimation(fig, func, init_func=lambda : False, frames=f, interval=200, repeat=False, cache_frame_data=False, save_count=1) # funcanim.save('esbm_ruspini.mp4', writer) plt.show() # %% import pandas as pd import numpy as np tags = ["CP_30100C_T75_PT_3427=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_PDIT_3793=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3769=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3765=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3768=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3777=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3776=NUM,NORM_MAX_MIN,0.0,3.0", 
"CP_30100C_T75_PIT_3804=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3803=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3778=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3779=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3767=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3802=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3801=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_VXI_3702=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VXI_3704=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VXI_3719=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VXI_3717=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3702=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3704=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3717=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3719=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VZI_3700A=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_VZI_3700B=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_VZI_3715A=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_VZI_3715B=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_FIT_3796=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_FIT_3795=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_PT_3324=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_PT_3327=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_PT_3227=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_PT_3224=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_PT_3127=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_TT_3329=NUM,NORM_MAX_MIN,0.0,200.0", # Este 100 -> 200 (max) "CP_30100C_T75_TT_3321=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_TT_3121=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_TT_3129=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_TT_3221=NUM,NORM_MAX_MIN,0.0,100.0"] tags_dict = {tag.split('=')[0] : (float(tag.split(',')[-2]), float(tag.split(',')[-1])) for tag in tags} data = pd.read_csv('data/entrada.csv', sep=';') data_norm = pd.DataFrame(columns=data.columns) for tag, (min_, max_) in tags_dict.items(): values = data[tag] data_norm[tag] = (values - min_) / (max_ - min_) # data = pd.read_csv('data/entrada_all.csv', sep=';') values = data.iloc[:, 1:39] # data_norm = data.copy() # data_norm.iloc[:, 1:39] = (values - values.min()) / (values.max() - values.min()) # data_norm.iloc[:, 1:39].plot(legend=None) # ---------------------- CONF_CLUSTER = 0.99 CONF_ADD_POINT = 0.1 CONF_RESIDUE = 0.4 CONF_MAT_D = 0.99 W = 40 config = { 'data_path': 'data', 'sep': ';', 'FROM_SCRATCH': 'false', 'W': W, 'GAMMA_STATS': CONF_CLUSTER, 'GAMMA_POINT': CONF_ADD_POINT, 'GAMMA_CLASS' : CONF_MAT_D, 'GAMMA_RESIDUE': CONF_RESIDUE, 'TAU': 0.00001 } from algorithms.esbm import eSBM from algorithms.ctools.log import Log from matplotlib.animation import FuncAnimation from matplotlib import pyplot as plt log = Log('{}/{}.log'.format(config['data_path'], 'run'), 'run', True, True) esbm = eSBM(log, config) # ---------------------- fig, ax = plt.subplots(3, 1, sharex=True) ax[0].plot(data_norm.iloc[:, 1:39]) vline = ax[0].axvline(0, data_norm.iloc[:, 1:39].min().min(), data.iloc[:, 1:39].max().max(), lw=1.5, color='c') # line_var = ax[0].plot([], color='g')[0] line_idcurto = ax[1].plot([], color='b')[0] line_quality = ax[2].plot([], color='k')[0] ax[2].set_ylim(60, 200) vars_v = [] idcurto_v = [] quality_v = [] idx_var = 0 x = [] w = 1000 # ax[0].plot(data_norm.iloc[:, 1:39].iloc[:w, idx_var]) # ax[0].set_title('Sinal') # ax[1].set_title('IdCurto') # ax[2].set_title('Qualidade') def func(frame): row = data.iloc[frame, :] timestamp = row['Timestamp'] point = row.values[1:39].astype(float) info, quality = esbm.process_input(point[None, :], timestamp) # print(quality) 
x.append(frame) vars_v.append(point[idx_var]) idcurto_v.append(info['IdCurto']) quality_v.append(quality) # line_var.set_data(x, np.array(vars_v)) line_idcurto.set_data(x, np.array(idcurto_v)) line_quality.set_data(x, np.array(quality_v)) x_min = 0 if frame > w: x_min = 0 if frame - w < 0 else frame - w vline.set_xdata(frame) [a.set_xlim(x_min, frame + w) for a in ax] # axv.set_xlim(x_min, frame + w) ax[1].set_ylim(0, max(idcurto_v) + 1) # axv.set_title(frame) return line_quality, line_idcurto, vline funcanim = FuncAnimation(fig, func, frames=list(range(0, data_norm.shape[0])), interval=5) plt.tight_layout(pad=0.5, h_pad=0, w_pad=0.1) plt.show() # %% import pandas as pd import numpy as np import matplotlib.pyplot as plt tags = ["CP_30100C_T75_PT_3427=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_PDIT_3793=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3769=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3765=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3768=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3777=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3776=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3804=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3803=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3778=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3779=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3767=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3802=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3801=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_VXI_3702=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VXI_3704=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VXI_3719=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VXI_3717=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3702=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3704=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3717=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3719=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VZI_3700A=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_VZI_3700B=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_VZI_3715A=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_VZI_3715B=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_FIT_3796=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_FIT_3795=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_PT_3324=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_PT_3327=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_PT_3227=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_PT_3224=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_PT_3127=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_TT_3329=NUM,NORM_MAX_MIN,0.0,200.0", # Este 100 -> 200 (max) "CP_30100C_T75_TT_3321=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_TT_3121=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_TT_3129=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_TT_3221=NUM,NORM_MAX_MIN,0.0,100.0"] tags_dict = {tag.split('=')[0] : (float(tag.split(',')[-2]), float(tag.split(',')[-1])) for tag in tags} data = pd.read_csv('data/entrada.csv', sep=';') data_norm = pd.DataFrame(columns=data.columns) for tag, (min_, max_) in tags_dict.items(): values = data[tag] data_norm[tag] = (values - min_) / (max_ - min_) v0 = 0.1 v1 = 1 v2 = 20 v3 = 85 data = data.iloc[:, 1:39] data_norm = data_norm.iloc[:, 1:39] fig = plt.figure() ax4 = plt.subplot(515,) ax3 = plt.subplot(514, sharex=ax4) ax2 = plt.subplot(513, sharex=ax3) ax1 = plt.subplot(512, sharex=ax2) ax0 = plt.subplot(511, sharex=ax1) idx0 = np.nonzero(data.mean() <= v0)[0] ax0.plot(data_norm.iloc[:, idx0]) idx1 = np.nonzero(data.mean() > v0)[0] idx1 = np.nonzero(data.iloc[:, idx1].mean() <= v1)[0] ax1.plot(data_norm.iloc[:, idx1]) idx2 = 
np.nonzero(data.mean() > v1)[0] idx2 = np.nonzero(data.iloc[:, idx2].mean() <= v2)[0] ax2.plot(data_norm.iloc[:, idx2]) idx3 = np.nonzero(data.mean() > v2)[0] idx3 = np.nonzero(data.iloc[:, idx3].mean() <= v3)[0] ax3.plot(data_norm.iloc[:, idx3]) ax4.plot(data_norm.iloc[:, np.nonzero(data.mean() > v3)[0]]) fig.add_subplot(ax0) fig.add_subplot(ax1) fig.add_subplot(ax2) fig.add_subplot(ax3) fig.add_subplot(ax4) tickslabels = ax0.get_xticklabels() ax0.set_xticklabels([]) # ax4.set_xticklabels(tickslabels) plt.tight_layout(pad=0.1, h_pad=0.1, w_pad=0.01) plt.show() # %% import pandas as pd import numpy as np import matplotlib.pyplot as plt tags = ["CP_30100C_T75_PT_3427=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_PDIT_3793=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3769=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3765=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3768=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3777=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3776=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3804=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3803=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3778=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3779=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PIT_3767=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3802=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_PDIT_3801=NUM,NORM_MAX_MIN,0.0,3.0", "CP_30100C_T75_VXI_3702=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VXI_3704=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VXI_3719=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VXI_3717=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3702=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3704=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3717=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VYI_3719=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_VZI_3700A=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_VZI_3700B=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_VZI_3715A=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_VZI_3715B=NUM,NORM_MAX_MIN,-1.0,2.0", "CP_30100C_T75_FIT_3796=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_FIT_3795=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_PT_3324=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_PT_3327=NUM,NORM_MAX_MIN,0.0,250.0", "CP_30100C_T75_PT_3227=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_PT_3224=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_PT_3127=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_TT_3329=NUM,NORM_MAX_MIN,0.0,200.0", # Este 100 -> 200 (max) "CP_30100C_T75_TT_3321=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_TT_3121=NUM,NORM_MAX_MIN,0.0,100.0", "CP_30100C_T75_TT_3129=NUM,NORM_MAX_MIN,0.0,200.0", "CP_30100C_T75_TT_3221=NUM,NORM_MAX_MIN,0.0,100.0"] tags_dict = {tag.split('=')[0] : (float(tag.split(',')[-2]), float(tag.split(',')[-1])) for tag in tags} labels = np.array(['x_{%d}' % i for i in range(1, 39)]) data = pd.read_csv('data/pr/saida.csv', sep=';') data_norm = pd.DataFrame(columns=labels) for i, (tag, (min_, max_)) in enumerate(tags_dict.items()): values = data[tag] data_norm['x_{%d}' % (i + 1)] = (values - min_) / (max_ - min_) v0 = 0.1 v1 = 1 v2 = 20 v3 = 85 # data_norm = data_norm.iloc[:, 1:39] fig, axes = plt.subplots() import seaborn as sns np.random.seed(10001) colors = list(sns.palettes.xkcd_rgb.values()) cm = plt.get_cmap('Accent') NUM_COLORS = 38 # colors = [cm((i + 0)/NUM_COLORS) for i in range(NUM_COLORS)] colors_ = [matplotlib.colors.hex2color(c) for c in np.random.choice(colors, NUM_COLORS, replace=False)] print('UNIQUE:', np.unique(colors_).shape) axes.set_prop_cycle('color', colors_) 
axes.set_ylabel("Signal", fontsize=9) axes.set_xlabel('Time $(t)$', fontsize=9) # axes.set_prop_cycle('color', colors) # plt.ylabel('$aaa$') # idx0 = np.nonzero(data_norm.mean() <= v0)[0] # ax0.set_prop_cycle('color', colors[:idx0.shape[0]]) lines = axes.plot(data_norm, lw=0.8) ticks = np.round(np.arange(0, 1.2, 0.2), decimals=1) axes.set_yticks(ticks) axes.set_yticklabels(ticks) axes.set_ylim(-0.05, 1.05) p = np.array([490, 1339, 1495, 3191, 4324, 8343]) axes.vlines(p-1, -0.05, 1.05, lw=1.1, linestyle='dotted') axes.set_xticks(p-1) axes.set_xticklabels(p, rotation=45, fontdict={'fontsize': 8}) axes.legend(lines, [('$%s$' % l) for l in labels], ncol=9, fontsize=7, framealpha=0.5, columnspacing=0.5, handlelength=0.7, labelspacing=0.25) plt.tight_layout(pad=0.1, h_pad=0.5, w_pad=0.01) # %% Resultado todo período p_created = np.array([40, 525, 1378, 1490, 3169, 3232, 4336, 8382, 9027]) p_detected = np.array(p_created[1:]) - 40 fig = plt.figure(200) gs = plt.GridSpec(4, 1) ax1 = fig.add_subplot(gs[:3, 0]) ax2 = fig.add_subplot(gs[3, 0]) import seaborn as sns np.random.seed(10001) colors = list(sns.palettes.xkcd_rgb.values()) cm = plt.get_cmap('Accent') NUM_COLORS = 38 # colors = [cm((i + 0)/NUM_COLORS) for i in range(NUM_COLORS)] colors_ = [matplotlib.colors.hex2color(c) for c in np.random.choice(colors, NUM_COLORS, replace=False)] print('UNIQUE:', np.unique(colors_).shape) ax1.set_prop_cycle('color', colors_) lines = ax1.plot(data_norm, lw=0.8) # ax1.vlines(p_detected, -0.04, 0.9, lw=1.1, linestyle='--', color='r') ax2.plot(data.IdCurto, '.') ax2.plot(p_detected - 1, data.IdCurto[p_detected-1], linestyle='None', marker=(7, 2, 0), lw=7, color='r') ax2.plot(p_created - 1, data.IdCurto[p_created-1], linestyle='None', marker=(7, 2, 0), lw=4, color='k') ax2.set_yticks(np.unique(data.IdCurto)) ax2.set_yticklabels(np.unique(data.IdCurto), fontsize=8) ax2.set_xticks(p_detected - 1) ax1.set_xticks(p_detected - 1) ax1.set_yticklabels(np.round(ax1.get_yticks(), decimals=1), fontsize=8) ax1.set_ylim(-0.05, 1.05) ax1.set_xticklabels([]) ax2.set_xticklabels(p_detected, fontsize=8, rotation=45) ax2.set_ylim(0, data.IdCurto.max() + 1) ax1.set_ylabel('Signal', fontsize=9) ax2.set_ylabel('$c$', fontsize=9, rotation=0) ax2.set_xlabel('Time $(t)$', fontsize=9) ax1.legend(lines, [('$%s$' % l) for l in labels], ncol=9, fontsize=7, framealpha=0.5, columnspacing=0.5, handlelength=0.7, labelspacing=0.25) plt.tight_layout(pad=0.1, h_pad=0.5, w_pad=0.01) plt.show() # %% Resultado mudança 1 t_min = 480 t_max = 531 p = [490, 1339, 1495, 3191, 4324, 8343] p_created = [40, 525, 1378, 1490, 3169, 3232, 4336, 8382, 9027] p_detected = np.array(p_created[1:]) - 40 fig = plt.figure(200) gs = plt.GridSpec(4, 1) ax1 = fig.add_subplot(gs[:3, 0]) ax2 = fig.add_subplot(gs[3, 0]) import seaborn as sns np.random.seed(10001) colors = list(sns.palettes.xkcd_rgb.values()) cm = plt.get_cmap('Accent') NUM_COLORS = 38 # colors = [cm((i + 0)/NUM_COLORS) for i in range(NUM_COLORS)] colors_ = [matplotlib.colors.hex2color(c) for c in np.random.choice(colors, NUM_COLORS, replace=False)] print('UNIQUE:', np.unique(colors_).shape) ax1.set_prop_cycle('color', colors_) lines = ax1.plot(data_norm.iloc[t_min:t_max, :], lw=0.8) # ax1.vlines(p_detected, -0.04, 0.9, lw=1.1, linestyle='--', color='r') ax2.plot(data.IdCurto[t_min:t_max], '.') ax2.plot(p_detected[0] - 1, data.IdCurto[t_min:t_max][p_detected[0]-1], linestyle='None', marker=(7, 2, 0), lw=7, color='r') ax2.plot(p_created[1] - 1, data.IdCurto[p_created[1]-1], linestyle='None', marker=(7, 2, 0), 
lw=4, color='k') ax2.set_yticks(np.unique(data.IdCurto)[:2]) ax2.set_yticklabels(np.unique(data.IdCurto)[:2]) ax2.set_ylim(0, 3) ax1.set_ylim(-0.05, 1.05) ax1.set_xticklabels([]) ticks = np.arange(t_min, t_max, 5) ax2.set_xticks(ticks - 1) ax1.set_xticks(ticks - 1) ax2.set_xticklabels(ticks) ax1.set_ylabel('Signal', fontsize=9) ax2.set_ylabel('$c$', fontsize=9, rotation=0) ax2.set_xlabel('Time $(t)$', fontsize=9) ax1.legend(lines, [('$%s$' % l) for l in labels], ncol=9, fontsize=7, framealpha=0.5, columnspacing=0.5, handlelength=0.7, labelspacing=0.25) ax1.vlines([p_detected[0] - 1, p_created[1] - 1], -0.05, 1.05, lw=1.1, linestyle='dotted', color='k') plt.tight_layout(pad=0.1, h_pad=0.5, w_pad=0.01) plt.show() # %% Exemplo região residual from scipy.spatial import distance import matplotlib np.random.seed(10000) mu1, mu2 = np.array([[10, 5]]), np.array([[10, 17]]) sigma1, sigma2 = np.array([[3, 0.7], [0.7, 2]]), np.array([[3, -0.3], [-0.3, 6]]) sr1 = generate_gaussian(mu1, sigma1, sample_num=50) sr2 = generate_gaussian(mu2, sigma2, sample_num=50) nstd_80 = np.sqrt(chi2.ppf(0.8, 2)) nstd_99 = np.sqrt(chi2.ppf(0.99, 2)) nstd_999 = np.sqrt(chi2.ppf(0.999, 2)) fig, ax = plt.subplots() ax.set_xlim(3, 16) ax.set_ylim(-1, 12) dist = distance.cdist(sr1, mu1, metric='mahalanobis', VI=np.linalg.pinv(sigma1)) idx = np.argmax(dist) max_x = sr1[idx] # ax.scatter(sr1[:, 0], sr1[:, 1], s=10, zorder=10**2, marker='o', color='b') CONF_CLUSTER = 0.99 CONF_ADD_POINT = 0.2 CONF_RESIDUE = 0.3 CONF_MAT_D = 0.99 W = 40 config = { 'data_path': 'data', 'sep': ';', 'FROM_SCRATCH': 'TRUE', 'W': W, 'GAMMA_STATS': CONF_CLUSTER, 'GAMMA_POINT': CONF_ADD_POINT, 'GAMMA_CLASS' : CONF_MAT_D, 'PR_ERROR': CONF_RESIDUE, 'TAU': 0.0001 } from algorithms.esbm import eSBM from algorithms.ctools.log import Log from matplotlib.animation import FuncAnimation from matplotlib import pyplot as plt log = Log('{}/{}.log'.format(config['data_path'], 'run'), 'run', True, True) esbm = eSBM(log, config) output = [] q = [] for x in sr1: info, quality = esbm.process_input(x[None, :], timestamp=None) output.append(info['IdCurto']) q.append(quality) # ax.scatter(sr2[:, 0], sr2[:, 1], s=4, zorder=10**2) ellkargs = {'edgecolor': 'b', 'linewidth': 0.5, 'fill': True, 'linestyle': '-', 'facecolor': 'tab:blue', 'alpha': 0.2} esbm.process_input(max_x[None, :]) ests, c_max = esbm.ests est = ests[0][-1][1] D = esbm.manager[c_max].D mu = esbm.manager[c_max].mu sigma = np.linalg.pinv(esbm.manager[c_max].inv_sigma) plot_cov_ellipse(sigma, mu[0, :], np.sqrt(chi2.ppf(CONF_MAT_D, 2)), ax=ax, **ellkargs) levels = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99] colormap = plt.cm.nipy_spectral colors = [colormap(i) for i in np.linspace(0.2, 1, 10)] matplotlib.lines.lineStyles art = [] for i, c in enumerate(levels[::-1]): art.append(plot_cov_ellipse(sigma, est[0, :], np.sqrt(chi2.ppf(c, 2)), ax=ax, **{'edgecolor': colors[i], 'linewidth': 1.5, 'fill': False, 'linestyle': '-', 'facecolor': colors[i], 'alpha': 1, 'label': i})) ax.set_xlim(0, 18) ax.set_ylim(0, 15) lg = plt.legend(art[::-1], levels, ncol=2, markerscale=0.25, title='$\gamma_{res}$', **{'fontsize': 9}) ax.text(8, 12, '$\hat{x}_t$', fontsize=9) ax.text(6, 12, '$x_t$', fontsize=9) ax.text(3, 12, 'class region', fontsize=9) ax.set_xlabel('$x_1$') ax.set_ylabel('$x_2$') # [a.set_facecolor('b') for a in lg.get_patches()] # # cs = matplotlib.cm.ScalarMappable(cmap=matplotlib.colors.ListedColormap(c)) # mappable = matplotlib.contour.ContourLabeler(ax=ax, levels=levels[::-1], cmap=c) # cb = plt.colorbar(cs, 
spacing='proportional') # cb.set_ticks(levels[::-1]) # cb.set_label(' $\quad \gamma_{res}$', rotation=0) ax.scatter(max_x[0], max_x[1], s=6**2, zorder=10**2, marker='o', color='r') ax.scatter(mu[0, 0], mu[0, 1], color='tab:red', s=17, zorder=10**2, marker='D') ax.scatter(D[:, 0], D[:, 1], color='b', marker='s', s=7**2) ax.scatter(est[0, 0], est[0, 1], color='r', marker='*', s=6**2, zorder=10*100) #%% import numpy as np; import pandas as pd; import matplotlib; import matplotlib.pyplot as plt; import seaborn as sns # m = 3 data = pd.read_csv('data/saida_at_m3.csv', sep=';') fig = plt.figure(200) gs = plt.GridSpec(4, 1) ax1 = fig.add_subplot(gs[:3, 0]) ax2 = fig.add_subplot(gs[3, 0]) data.iloc[:, 1:39].plot(legend=None, ax=ax1, lw=1) data.IdCurto.plot(ax=ax2, lw=1.2) ax1.set_xticklabels([]) ax1.set_ylabel('Amplitude') ax2.set_yticks(data.IdCurto.unique()) ax2.set_ylim(0, data.IdCurto.max() + 1) ax2.set_ylabel('Índice cluster', fontsize=10) ax2.set_xlabel('Amostra $(t)$') plt.tight_layout(h_pad=0.01, w_pad=0.01) plt.show() # m=2 data = pd.read_csv('data/saida_at_m2.csv', sep=';') fig = plt.figure(201) gs = plt.GridSpec(4, 1) ax1 = fig.add_subplot(gs[:3, 0]) ax2 = fig.add_subplot(gs[3, 0]) data.iloc[:, 1:39].plot(legend=None, ax=ax1, lw=1) data.IdCurto.plot(ax=ax2, lw=1.2) ax1.set_xticklabels([]) ax1.set_ylabel('Amplitude') ax2.set_yticks(data.IdCurto.unique()) ax2.set_ylim(0, data.IdCurto.max() + 1) ax2.set_ylabel('Índice cluster', fontsize=10) ax2.set_xlabel('Amostra $(t)$') plt.tight_layout(h_pad=0.01, w_pad=0.01) plt.show() # m=1.5 data = pd.read_csv('data/saida_at_m1_5.csv', sep=';') fig = plt.figure(202) gs = plt.GridSpec(4, 1) ax1 = fig.add_subplot(gs[:3, 0]) ax2 = fig.add_subplot(gs[3, 0]) data.iloc[:, 1:39].plot(legend=None, ax=ax1, lw=1) data.IdCurto.plot(ax=ax2, lw=1.2) ax1.set_xticklabels([]) ax1.set_ylabel('Amplitude') ax2.set_ylim(0, data.IdCurto.max() + 1) ax2.set_ylabel('Índice cluster', fontsize=10) ax2.set_xlabel('Amostra $(t)$') plt.tight_layout(h_pad=0.01, w_pad=0.01) plt.show() # Segunda base de dados m=3 data = pd.read_csv('data/saida_at_b2_m3.csv', sep=';') fig = plt.figure(2010) gs = plt.GridSpec(4, 1) ax1 = fig.add_subplot(gs[:3, 0]) ax2 = fig.add_subplot(gs[3, 0]) data.iloc[:, 1:39].plot(legend=None, ax=ax1, lw=1) data.IdCurto.plot(ax=ax2, lw=1.2) ax1.set_xticklabels([]) ax1.set_ylabel('Amplitude') ax2.set_ylim(0, data.IdCurto.max() + 1) ax2.set_ylabel('Índice cluster', fontsize=10) ax2.set_xlabel('Amostra $(t)$') plt.tight_layout(h_pad=0.01, w_pad=0.01) plt.show() # Segunda base de dados m=3, w=10 data = pd.read_csv('data/saida_at_b2_m3_w10.csv', sep=';') fig = plt.figure(2011) gs = plt.GridSpec(4, 1) ax1 = fig.add_subplot(gs[:3, 0]) ax2 = fig.add_subplot(gs[3, 0]) data.iloc[:, 1:39].plot(legend=None, ax=ax1, lw=1) data.IdCurto.plot(ax=ax2, lw=1.2) ax1.set_xticklabels([]) ax1.set_ylabel('Amplitude') ax2.set_ylim(0, data.IdCurto.max() + 1) ax2.set_ylabel('Índice cluster', fontsize=10) ax2.set_xlabel('Amostra $(t)$') plt.tight_layout(h_pad=0.01, w_pad=0.01) plt.show() # AutoCloud, eSBM, OEC data = pd.read_csv('data/saida_at_m2.csv', sep=';') fig = plt.figure(203) gs = plt.GridSpec(4, 1) ax1 = fig.add_subplot(gs[:2, 0]) ax2 = fig.add_subplot(gs[2:, 0]) data.iloc[:, 1].plot(legend=None, ax=ax1, lw=1, label=data.columns[1]) lat= data.IdCurto.plot(ax=ax2, lw=1.5, label='AutoCloud', color='b', ls='-') ax1.set_xticklabels([]) ax1.set_ylabel('Amplitude') ax1.legend(fontsize=8, handlelength=1.8) ax2.set_ylabel('Índice cluster', fontsize=10) ax2.set_xlabel('Amostra $(t)$') data 
= pd.read_csv('data/saida_esbm.csv', sep=';') le = data.IdCurto.plot(label='eSBM', lw=1.5, color='g', ax=ax2, ls='dashdot') data = pd.read_csv('data/saida_oec.csv', sep=';') lo = data.IdCurto.plot(label='OEC', lw=1.5, color='k', ax=ax2, ls='--') ax2.legend(fontsize=8, handlelength=1.8) plt.tight_layout(h_pad=0.01, w_pad=0.01) plt.show() # w=1.5 w, 10 data = pd.read_csv('data/saida_at_w10.csv', sep=';') fig = plt.figure(204) gs = plt.GridSpec(4, 1) ax1 = fig.add_subplot(gs[:3, 0]) ax2 = fig.add_subplot(gs[3, 0]) data.iloc[:, 1:39].plot(legend=None, ax=ax1, lw=1) data.IdCurto.plot(ax=ax2, lw=1.2) ax1.set_xticklabels([]) ax1.set_ylabel('Amplitude') ax2.set_ylim(0, data.IdCurto.max() + 1) ax2.set_ylabel('Índice cluster', fontsize=10) ax2.set_xlabel('Amostra $(t)$') plt.tight_layout(h_pad=0.01, w_pad=0.01) plt.show() # w=1.5 w, 15 data = pd.read_csv('data/saida_at_w15.csv', sep=';') fig = plt.figure(205) gs = plt.GridSpec(4, 1) ax1 = fig.add_subplot(gs[:3, 0]) ax2 = fig.add_subplot(gs[3, 0]) data.iloc[:, 1:39].plot(legend=None, ax=ax1, lw=1) data.IdCurto.plot(ax=ax2, lw=1.2) ax1.set_xticklabels([]) ax1.set_ylabel('Amplitude') ax2.set_ylim(0, data.IdCurto.max() + 1) ax2.set_ylabel('Índice cluster', fontsize=10) ax2.set_xlabel('Amostra $(t)$') plt.tight_layout(h_pad=0.01, w_pad=0.01) plt.show()
{"hexsha": "9b3e6a88088f0b899491fa7464b6a966f7f2c60c", "size": 55417, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/models/make_graphs.py", "max_stars_repo_name": "nayronmorais/EMPF", "max_stars_repo_head_hexsha": "4baf87dcbd0689cbe43ec3d8775f489ee7742426", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/models/make_graphs.py", "max_issues_repo_name": "nayronmorais/EMPF", "max_issues_repo_head_hexsha": "4baf87dcbd0689cbe43ec3d8775f489ee7742426", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/models/make_graphs.py", "max_forks_repo_name": "nayronmorais/EMPF", "max_forks_repo_head_hexsha": "4baf87dcbd0689cbe43ec3d8775f489ee7742426", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.2721456693, "max_line_length": 149, "alphanum_fraction": 0.625439847, "include": true, "reason": "import numpy,from numpy,from scipy", "num_tokens": 19790}
import numpy.testing as npt
import numpy as np
import pytest

from cued_sf2_lab.dct import *


class TestDctII:
    def test_basic(self):
        dct4 = dct_ii(4)
        npt.assert_allclose(dct4, [[0.5,     0.5,      0.5,      0.5],
                                   [0.65328, 0.27059, -0.27059, -0.65328],
                                   [0.5,    -0.5,     -0.5,      0.5],
                                   [0.27059, -0.65328, 0.65328, -0.27059]], atol=1e-5)

    @pytest.mark.xfail(raises=ZeroDivisionError)
    def test_zero(self):
        dct0 = dct_ii(0)
        assert dct0.shape == (0, 0)

    def test_one(self):
        dct1 = dct_ii(1)
        assert dct1.shape == (1, 1)
        npt.assert_allclose(dct1, 1)


class TestDCTIV:
    def test_basic(self):
        dct4 = dct_iv(4)
        npt.assert_allclose(dct4, [[ 0.69352,   0.587938,  0.392847,  0.13795],
                                   [ 0.587938, -0.13795,  -0.69352,  -0.392847],
                                   [ 0.392847, -0.69352,   0.13795,   0.587938],
                                   [ 0.13795,  -0.392847,  0.587938, -0.69352]], atol=1e-5)

    @pytest.mark.xfail(raises=ZeroDivisionError)
    def test_zero(self):
        dct0 = dct_iv(0)
        assert dct0.shape == (0, 0)

    def test_one(self):
        dct1 = dct_iv(1)
        assert dct1.shape == (1, 1)
        npt.assert_allclose(dct1, 1)


class TestRegroup:
    def check(self, x, y, m, n):
        assert x.shape == y.shape
        xm, xn = x.shape
        for xi in range(xm):
            i_div, i_mod = divmod(xi, m)
            yi = i_mod*(xm // m) + i_div
            for xj in range(xn):
                j_div, j_mod = divmod(xj, n)
                yj = j_mod*(xn // n) + j_div
                assert x[xi, xj] == y[yi, yj], (xi, xj, yi, yj)

    def test_roundtrip(self):
        x = np.arange(3*4*5*6).reshape(3*4, 5*6)
        y = regroup(x, [3, 5])
        self.check(x, y, 3, 5)
        # regrouping the other axes puts things back the way they were
        z = regroup(y, [4, 6])
        npt.assert_equal(z, x)

    def test_repeated(self):
        x = np.arange(3*4*3*6).reshape(3*4, 3*6)
        y = regroup(x, 3)
        self.check(x, y, 3, 3)

    def test_invalid(self):
        x = np.arange(3*4*5*6).reshape(3*4, 5*6)
        with pytest.raises(ValueError):
            regroup(x, 7)  # 7 is not a divisor


class TestColXFm:
    def test_basic(self):
        C = dct_ii(4)
        X = np.arange(8*2).reshape(8, 2)
        Y = colxfm(X, C)
        assert Y.shape == X.shape
        npt.assert_allclose(Y, [[ 6.,      8.],
                                [-4.4609, -4.4609],
                                [-0.,     -0.],
                                [-0.317,  -0.317],
                                [22.,     24.],
                                [-4.4609, -4.4609],
                                [-0.,     -0.],
                                [-0.317,  -0.317]], atol=1e-4)

    def test_invalid(self):
        C = dct_ii(5)
        X = np.arange(16).reshape(16, 1)
        with pytest.raises(ValueError):
            colxfm(X, C)  # 5 is not a divisor of 16
{"hexsha": "2685041eeb1545e048f90dec9547486a331d1cfc", "size": 2929, "ext": "py", "lang": "Python", "max_stars_repo_path": "tests/test_dct.py", "max_stars_repo_name": "sigproc/cued_sf2_lab", "max_stars_repo_head_hexsha": "d31f5e6725e9c1be64145006d20ddb08ae68e70e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-05-13T10:00:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-17T11:03:13.000Z", "max_issues_repo_path": "tests/test_dct.py", "max_issues_repo_name": "sigproc/cued_sf2_lab", "max_issues_repo_head_hexsha": "d31f5e6725e9c1be64145006d20ddb08ae68e70e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-05-17T11:57:29.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-17T11:57:29.000Z", "max_forks_repo_path": "tests/test_dct.py", "max_forks_repo_name": "sigproc/cued_sf2_lab", "max_forks_repo_head_hexsha": "d31f5e6725e9c1be64145006d20ddb08ae68e70e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-17T11:06:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-20T20:22:51.000Z", "avg_line_length": 28.1634615385, "max_line_length": 70, "alphanum_fraction": 0.4817343803, "include": true, "reason": "import numpy", "num_tokens": 1034}
# Author: Alexandre Bovet <alexandre.bovet@gmail.com> # License: BSD 3 clause from sklearn.linear_model import SGDClassifier from sklearn.model_selection import KFold, GridSearchCV from sklearn.pipeline import Pipeline from sklearn.externals import joblib import time import numpy as np import ujson as json from multiprocessing import cpu_count from baseModule import baseModule class crossValOptimize(baseModule): """ Cross-validation of the classifier. Must be initialized with a dictionary `job` containing keys `features_vect_file`, `labels_vect_file` and `best_params_file`. Estimate the performance of the classifier and optimize classifier parameters with cross-validation. `crossValOptimize` loads the vectorized features and labels (`features_vect_file` and `labels_vect_file`) and saves the results of the optimization to `best_params_file` in JSON format. *Optional parameters:* :undersample_maj_class: if `undersample_maj_class` was set to `False` when building the training set, class weights will be adjusted to take into account different sizes of classes. :ncpu: number of cores to use (default is the number of cpus on your machine minus one). :scoring: The score used to optimize (default is `'f1_micro'`). :n_splits: number of folds (default is 10). :loss: loss function to be used. Default is `'log'` for Logistic Regression. :penalty: penalty of the regularization term (default is `'l2`). :n_iter: number of iterations of the gradient descent algorithm. Default is `5e5/(number of training samples)`. :grid_search_parameters: parameter space to explore during the cross-validation. Default is `{'classifier__alpha' : np.logspace(-1,-7, num=20)}`, i.e. optimizing the regularization strength (`alpha`) between 1e-1 and 1e-7 with 20 logarithmically spaced steps. :verbose: verbosity level of the calssifier (default is 1). See the sklearn Stochastic Gradient Descent user guide (http://scikit-learn.org/0.18/modules/sgd.html#sgd) for recommended settings, the GridSearchCV (http://scikit-learn.org/0.18/modules/generated/sklearn.model_selection.GridSearchCV.html) and the Stochastic Gradient Descent documentations (http://scikit-learn.org/0.18/modules/sgd.html#sgd) for details. 
""" def run(self): #============================================================================== # PARAMETERS #============================================================================== features_vect_file = self.job['features_vect_file'] labels_vect_file = self.job['labels_vect_file'] best_params_file = self.job['best_params_file'] # loading the memmaped features X = joblib.load(features_vect_file) y = joblib.load(labels_vect_file) #============================================================================== # OPTIONAL PARAMETERS #============================================================================== # parameters ncpu = self.job.get('ncpu', cpu_count()-1) # score to optimize scoring = self.job.get('scoring', 'f1_micro') # number of folds for the cross val n_splits = self.job.get('n_splits', 10) # loss function (log = logistic regression) loss = self.job.get('loss', 'log') # regularization (l2 = Ridge (L2 norm)) penalty = self.job.get('penalty', 'l2') # number of iterations of the stochastic gradient descent # SGD should see aounrd 1e6 samples n_iter = self.job.get('n_iter', int(np.ceil(5e6/X.shape[0]))) # parameters to optimize, defaults : alpha = regularization strength grid_search_parameters = self.job.get('grid_search_parameters', {'classifier__alpha' : np.logspace(-1,-7, num=20)}) verbose = self.job.get('CV_verbose', 1) # wether to undersample the majority class or to adjust class_weights undersample_maj_class = self.job.get('undersample_maj_class', True) if undersample_maj_class: # no need to adjust class weights since classes are balanced by # undersampling majority class class_weight=None else: # adjust class weights to balance classes class_weight='balanced' # classifier pipeline pipeline_list = [('classifier', SGDClassifier(verbose=verbose, loss=loss, n_iter=n_iter, penalty=penalty, class_weight=class_weight))] pipeline = Pipeline(pipeline_list) kfold = KFold(n_splits=n_splits, shuffle=True, random_state=34) # # Auto Grid Search # self.grid_search = GridSearchCV(estimator=pipeline, param_grid=grid_search_parameters, cv=kfold, scoring=scoring, verbose=0 , n_jobs=ncpu) print("\nPerforming grid search...") print("pipeline:", [name for name, _ in pipeline.steps]) print("parameters:") print(grid_search_parameters) t0 = time.time() self.grid_search.fit(X, y) self.print_elapsed_time(t0) print("\nBest score: %0.3f" % self.grid_search.best_score_) print("Best parameters set:") best_parameters_np = self.grid_search.best_estimator_.get_params() # prepare dictionary with best parameters default values self.best_parameters = {'classifier__loss': loss, 'classifier__penalty': penalty, 'classifier__n_iter': n_iter, 'classifier__alpha': 0.01, 'classifier__class_weight' : class_weight} # update and print best parameters for param_name in sorted(grid_search_parameters.keys()): print("\t%s: %r" % (param_name, best_parameters_np[param_name])) # convert numpy dtypes to python types if hasattr(best_parameters_np[param_name], 'item'): self.best_parameters[param_name] = best_parameters_np[param_name].item() else: self.best_parameters[param_name] = best_parameters_np[param_name] # save best params to JSON file with open(best_params_file, 'w') as fopen: json.dump(self.best_parameters, fopen)
{"hexsha": "dca351b778146ab7164e67395ee34eaf7ea86cd8", "size": 7204, "ext": "py", "lang": "Python", "max_stars_repo_path": "crossValOptimize.py", "max_stars_repo_name": "alexbovet/twitter_opinion_mining", "max_stars_repo_head_hexsha": "e071fc0447072877518a14f2f8f59f0dd974167f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-02-12T14:41:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-06T14:48:30.000Z", "max_issues_repo_path": "crossValOptimize.py", "max_issues_repo_name": "alexbovet/twitter_opinion_mining", "max_issues_repo_head_hexsha": "e071fc0447072877518a14f2f8f59f0dd974167f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "crossValOptimize.py", "max_forks_repo_name": "alexbovet/twitter_opinion_mining", "max_forks_repo_head_hexsha": "e071fc0447072877518a14f2f8f59f0dd974167f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-02-11T10:50:54.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-07T09:13:35.000Z", "avg_line_length": 43.6606060606, "max_line_length": 115, "alphanum_fraction": 0.56191005, "include": true, "reason": "import numpy", "num_tokens": 1413}
import sys

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import skew
import imp

parameters = imp.load_source("parameters", "../../../data/raw/parameters.py")

selection_of_players = ["EvolvedLookerUp2_2_2", "Tit For Tat", "ZD-Extort-2"]


def main():
    players = parameters.PLAYER_GROUPS["full"]
    player_names = [s.name for s in players]

    df = pd.read_csv("../../../data/processed/full/std/per_opponent/main.csv")
    df["Name"] = df.apply(
        lambda row: player_names[row["Player index"]], axis=1
    )
    df = df[df["Name"].isin(selection_of_players)]

    fig, axarr = plt.subplots(1, 3, figsize=(25, 5))
    for ax, name in zip(axarr, selection_of_players):
        data = df[df["Name"] == name]["residual"]
        ax.hist(data, bins=20, color="black")
        ax.set_title(f"SSE distribution for {name}", size=20)
        ax.set_xlabel("SSE")
        ax.set_ylabel("Count")
    fig.tight_layout()
    fig.savefig("main.pdf")


if __name__ == "__main__":
    main()
{"hexsha": "55f1b380014304fc08b7167e5cf499d64f8535ba", "size": 1060, "ext": "py", "lang": "Python", "max_stars_repo_path": "assets/img/sserror_distribution_for_selection_of_strategies/main.py", "max_stars_repo_name": "drvinceknight/testing_for_ZD", "max_stars_repo_head_hexsha": "a08643849a8e4ed3c1ee86ab8bd4530a97e92154", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/img/sserror_distribution_for_selection_of_strategies/main.py", "max_issues_repo_name": "drvinceknight/testing_for_ZD", "max_issues_repo_head_hexsha": "a08643849a8e4ed3c1ee86ab8bd4530a97e92154", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2019-10-02T09:25:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-27T20:48:06.000Z", "max_forks_repo_path": "assets/img/sserror_distribution_for_selection_of_strategies/main.py", "max_forks_repo_name": "drvinceknight/testing_for_ZD", "max_forks_repo_head_hexsha": "a08643849a8e4ed3c1ee86ab8bd4530a97e92154", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.0909090909, "max_line_length": 78, "alphanum_fraction": 0.6283018868, "include": true, "reason": "import numpy,from scipy", "num_tokens": 278}
program Example
  implicit none
  integer :: n

  n = countsubstring("the three truths", "th")
  write(*,*) n
  n = countsubstring("ababababab", "abab")
  write(*,*) n
  n = countsubstring("abaabba*bbaba*bbab", "a*b")
  write(*,*) n

contains

  ! Count non-overlapping occurrences of s2 in s1.
  function countsubstring(s1, s2) result(c)
    character(*), intent(in) :: s1, s2
    integer :: c, p, posn

    c = 0
    if(len(s2) == 0) return
    p = 1
    do
      posn = index(s1(p:), s2)
      if(posn == 0) return
      c = c + 1
      ! Resume the search at the first character after this match.
      ! (The original "p = p + posn + len(s2)" skipped one extra character and
      ! could miss a match starting immediately after the previous one.)
      p = p + posn + len(s2) - 1
    end do
  end function
end program
{"hexsha": "a9156702ad502ef72c7aa1848b4bad450b2e8803", "size": 524, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "Task/Count-occurrences-of-a-substring/Fortran/count-occurrences-of-a-substring.f", "max_stars_repo_name": "LaudateCorpus1/RosettaCodeData", "max_stars_repo_head_hexsha": "9ad63ea473a958506c041077f1d810c0c7c8c18d", "max_stars_repo_licenses": ["Info-ZIP"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-11-09T22:08:38.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-09T22:08:38.000Z", "max_issues_repo_path": "Task/Count-occurrences-of-a-substring/Fortran/count-occurrences-of-a-substring.f", "max_issues_repo_name": "seanwallawalla-forks/RosettaCodeData", "max_issues_repo_head_hexsha": "9ad63ea473a958506c041077f1d810c0c7c8c18d", "max_issues_repo_licenses": ["Info-ZIP"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Task/Count-occurrences-of-a-substring/Fortran/count-occurrences-of-a-substring.f", "max_forks_repo_name": "seanwallawalla-forks/RosettaCodeData", "max_forks_repo_head_hexsha": "9ad63ea473a958506c041077f1d810c0c7c8c18d", "max_forks_repo_licenses": ["Info-ZIP"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-11-09T22:08:40.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-09T22:08:40.000Z", "avg_line_length": 18.0689655172, "max_line_length": 49, "alphanum_fraction": 0.5858778626, "num_tokens": 200}
import hashlib
import struct
import sys
import logging
from functools import reduce
import numpy as np
from itertools import islice, chain

logger = logging.getLogger(__name__)
logger.setLevel("DEBUG")

COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}
BITS = {"A": "00", "G": "01", "C": "10", "T": "11"}
BASES = {"00": "A", "01": "G", "10": "C", "11": "T"}


def batch(iterable, size):
    """Yield the items of `iterable` in lazy batches of at most `size` elements."""
    sourceiter = iter(iterable)
    while True:
        batchiter = islice(sourceiter, size)
        try:
            first = next(batchiter)
        except StopIteration:
            # Source exhausted. Returning explicitly avoids the RuntimeError that
            # a StopIteration escaping a generator raises on Python 3.7+ (PEP 479).
            return
        yield chain([first], batchiter)


def bitwise_and(bitarrays):
    return reduce(lambda x, y: x & y, bitarrays)


def non_zero_bitarrary_positions(bitarray):
    return np.where(bitarray)[0].tolist()


def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i : i + n]


def reverse_comp(s):
    """Reverse complement of a DNA string; non-ACGT characters are kept as-is."""
    return "".join([COMPLEMENT.get(base, base) for base in reversed(s)])


def convert_query_kmers(kmers):
    for k in kmers:
        yield convert_query_kmer(k)


def convert_query_kmer(kmer):
    return canonical(kmer)


def canonical(k):
    """Lexicographically smaller of a k-mer and its reverse complement."""
    l = [k, reverse_comp(k)]
    l.sort()
    return l[0]


def min_lexo(k):
    l = [k, reverse_comp(k)]
    l.sort()
    return l[0]


def seq_to_kmers(seq, kmer_size):
    for i in range(len(seq) - kmer_size + 1):
        yield seq[i : i + kmer_size]
{"hexsha": "22b2b60a5e642ae5725a20587d3e328903665cf3", "size": 1351, "ext": "py", "lang": "Python", "max_stars_repo_path": "bigsi/utils/fncts.py", "max_stars_repo_name": "Phelimb/bfg", "max_stars_repo_head_hexsha": "bf34abbb9d6f72a9f0c64c40eefc44d810a2502e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 109, "max_stars_repo_stars_event_min_datetime": "2017-12-13T12:25:40.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-18T08:35:44.000Z", "max_issues_repo_path": "bigsi/utils/fncts.py", "max_issues_repo_name": "Phelimb/bfg", "max_issues_repo_head_hexsha": "bf34abbb9d6f72a9f0c64c40eefc44d810a2502e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 25, "max_issues_repo_issues_event_min_datetime": "2017-12-14T04:03:46.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-04T11:50:34.000Z", "max_forks_repo_path": "bigsi/utils/fncts.py", "max_forks_repo_name": "Phelimb/bfg", "max_forks_repo_head_hexsha": "bf34abbb9d6f72a9f0c64c40eefc44d810a2502e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2017-12-22T02:14:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-01T02:49:02.000Z", "avg_line_length": 20.4696969697, "max_line_length": 72, "alphanum_fraction": 0.6247224278, "include": true, "reason": "import numpy", "num_tokens": 394}
import unittest

import scipy.optimize as opt
import numpy as np
from parameterized import parameterized

from lab3.src.methods.simplex_method import simplex_method

simplex_method_testcases = [
    (
        np.array([[1, 2, -1, 2, 4], [0, -1, 2, 1, 3], [1, -3, 2, 2, 0]]),
        np.array([1, 3, 4]),
        np.array([1, -3, 2, 1, 4]),
    ),
    (
        np.array([[1, 3, 0, 2, 1], [2, -1, 1, 2, 3], [1, -1, 2, 1, 0]]),
        np.array([1, 2, 4]),
        np.array([-1, -3, 2, 1, 4]),
    ),
    (
        np.array([[-1, 3, 0, 2, 1], [2, -1, 1, 2, 3], [1, -1, 2, 1, 0]]),
        np.array([1, 4, 5]),
        np.array([-1, 0, -2, 5, 4]),
    ),
    (
        np.array([[2, 3, 1, 2, 1], [2, 1, -3, 2, 1], [2, 1, 2, 1, 0]]),
        np.array([1, 3, 1]),
        np.array([-1, 1, -2, 1, 5]),
    ),
    (
        np.array([[2, 1, 3, 4], [1, -1, 2, 1], [0, 0, 1, 3]]),
        np.array([2, 4, 1]),
        np.array([-2, 3, 4, -1]),
    ),
    (
        np.array([[2, 3, 1, 2], [2, -1, 2, 1], [1, 1, 0, -1]]),
        np.array([3, 4, 1]),
        np.array([-2, 3, -3, 3]),
    ),
    (
        np.array([[2, 3, -1, 2], [1, 1, 1, 1], [2, -1, 0, 2]]),
        np.array([1, 1, 2]),
        np.array([-2, 3, 4, -1]),
    ),
    (
        np.array([[2, 1, 3, 4], [2, -1, 2, 1], [0, 0, 1, 2]]),
        np.array([1, 2, 4]),
        np.array([-2, 3, 4, -1]),
    ),
    (
        np.array([[1, 2, 3, 1, 2, 5], [2, -3, 1, 2, 1, 4]]),
        np.array([1, 2]),
        np.array([-2, 3, 4, -1, 2, 1]),
    ),
    (
        np.array([[3, 2, 1, -3, 2, 1], [1, 1, 0, 0, 1, 1]]),
        np.array([3, 2]),
        np.array([-2, 3, 1, 2, 0, 1]),
    ),
    (
        np.array([[1, 2, 3, 4, 5, 6], [2, 1, -3, 2, 1, -3]]),
        np.array([1, 4]),
        np.array([1, -1, 2, 3, 1, 0]),
    ),
    (
        np.array([[2, 3, -1, 0, 2, 1], [2, 0, 3, 0, 1, 1]]),
        np.array([1, 2]),
        np.array([-2, 3, 4, -1, 2, 1]),
    ),
]

DELTA = 9


class TestSimplexMethod(unittest.TestCase):

    @parameterized.expand(simplex_method_testcases)
    def test_simplex_method(self, A, b, c):
        t = simplex_method(A, b, c)
        result = t if t is not None else (None, None)
        expected_result = opt.linprog(c=-c, A_eq=A, b_eq=b, method='simplex')
        self.check_result(result, expected_result)

    def check_result(self, result, expected_result):
        res, x = result
        if not expected_result.success:
            self.assertIsNone(x)
        else:
            np.testing.assert_array_almost_equal(res, -expected_result.fun, DELTA)
{"hexsha": "568178dbecc05d78c401cdd45785ab72dee0642b", "size": 2856, "ext": "py", "lang": "Python", "max_stars_repo_path": "lab3/test/test_simplex_method.py", "max_stars_repo_name": "pavponn/optimization-methods", "max_stars_repo_head_hexsha": "00db08c1b28a1ffad781fb918869247a4f2ab329", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lab3/test/test_simplex_method.py", "max_issues_repo_name": "pavponn/optimization-methods", "max_issues_repo_head_hexsha": "00db08c1b28a1ffad781fb918869247a4f2ab329", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lab3/test/test_simplex_method.py", "max_forks_repo_name": "pavponn/optimization-methods", "max_forks_repo_head_hexsha": "00db08c1b28a1ffad781fb918869247a4f2ab329", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.4432989691, "max_line_length": 82, "alphanum_fraction": 0.3764005602, "include": true, "reason": "import numpy,import scipy", "num_tokens": 1113}
[STATEMENT] lemma cf_adj_eqI: assumes "\<Phi> : \<FF> \<rightleftharpoons>\<^sub>C\<^sub>F \<GG> : \<CC> \<rightleftharpoons>\<rightleftharpoons>\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>" and "\<Phi>' : \<FF>' \<rightleftharpoons>\<^sub>C\<^sub>F \<GG>' : \<CC>' \<rightleftharpoons>\<rightleftharpoons>\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>'" and "\<CC> = \<CC>'" and "\<DD> = \<DD>'" and "\<FF> = \<FF>'" and "\<GG> = \<GG>'" and "\<Phi>\<lparr>AdjNT\<rparr> = \<Phi>'\<lparr>AdjNT\<rparr>" shows "\<Phi> = \<Phi>'" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<Phi> = \<Phi>' [PROOF STEP] proof- [PROOF STATE] proof (state) goal (1 subgoal): 1. \<Phi> = \<Phi>' [PROOF STEP] interpret \<Phi>: is_cf_adjunction \<alpha> \<CC> \<DD> \<FF> \<GG> \<Phi> [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<Phi> : \<FF> \<rightleftharpoons>\<^sub>C\<^sub>F \<GG> : \<CC> \<rightleftharpoons>\<rightleftharpoons>\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD> [PROOF STEP] by (rule assms(1)) [PROOF STATE] proof (state) goal (1 subgoal): 1. \<Phi> = \<Phi>' [PROOF STEP] interpret \<Phi>': is_cf_adjunction \<alpha> \<CC>' \<DD>' \<FF>' \<GG>' \<Phi>' [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<Phi>' : \<FF>' \<rightleftharpoons>\<^sub>C\<^sub>F \<GG>' : \<CC>' \<rightleftharpoons>\<rightleftharpoons>\<^sub>C\<^bsub>\<alpha>\<^esub> \<DD>' [PROOF STEP] by (rule assms(2)) [PROOF STATE] proof (state) goal (1 subgoal): 1. \<Phi> = \<Phi>' [PROOF STEP] show ?thesis [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<Phi> = \<Phi>' [PROOF STEP] proof(rule vsv_eqI) [PROOF STATE] proof (state) goal (4 subgoals): 1. vsv \<Phi> 2. vsv \<Phi>' 3. \<D>\<^sub>\<circ> \<Phi> = \<D>\<^sub>\<circ> \<Phi>' 4. \<And>a. a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> \<Phi> \<Longrightarrow> \<Phi>\<lparr>a\<rparr> = \<Phi>'\<lparr>a\<rparr> [PROOF STEP] have dom: "\<D>\<^sub>\<circ> \<Phi> = 3\<^sub>\<nat>" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<D>\<^sub>\<circ> \<Phi> = 3\<^sub>\<nat> [PROOF STEP] by (cs_concl cs_shallow cs_simp: V_cs_simps adj_cs_simps) [PROOF STATE] proof (state) this: \<D>\<^sub>\<circ> \<Phi> = 3\<^sub>\<nat> goal (4 subgoals): 1. vsv \<Phi> 2. vsv \<Phi>' 3. \<D>\<^sub>\<circ> \<Phi> = \<D>\<^sub>\<circ> \<Phi>' 4. \<And>a. a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> \<Phi> \<Longrightarrow> \<Phi>\<lparr>a\<rparr> = \<Phi>'\<lparr>a\<rparr> [PROOF STEP] show "\<D>\<^sub>\<circ> \<Phi> = \<D>\<^sub>\<circ> \<Phi>'" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<D>\<^sub>\<circ> \<Phi> = \<D>\<^sub>\<circ> \<Phi>' [PROOF STEP] by (cs_concl cs_shallow cs_simp: V_cs_simps adj_cs_simps dom) [PROOF STATE] proof (state) this: \<D>\<^sub>\<circ> \<Phi> = \<D>\<^sub>\<circ> \<Phi>' goal (3 subgoals): 1. vsv \<Phi> 2. vsv \<Phi>' 3. \<And>a. a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> \<Phi> \<Longrightarrow> \<Phi>\<lparr>a\<rparr> = \<Phi>'\<lparr>a\<rparr> [PROOF STEP] from assms(4-7) [PROOF STATE] proof (chain) picking this: \<DD> = \<DD>' \<FF> = \<FF>' \<GG> = \<GG>' \<Phi>\<lparr>AdjNT\<rparr> = \<Phi>'\<lparr>AdjNT\<rparr> [PROOF STEP] have sup: "\<Phi>\<lparr>AdjLeft\<rparr> = \<Phi>'\<lparr>AdjLeft\<rparr>" "\<Phi>\<lparr>AdjRight\<rparr> = \<Phi>'\<lparr>AdjRight\<rparr>" "\<Phi>\<lparr>AdjNT\<rparr> = \<Phi>'\<lparr>AdjNT\<rparr>" [PROOF STATE] proof (prove) using this: \<DD> = \<DD>' \<FF> = \<FF>' \<GG> = \<GG>' \<Phi>\<lparr>AdjNT\<rparr> = \<Phi>'\<lparr>AdjNT\<rparr> goal (1 subgoal): 1. 
\<Phi>\<lparr>AdjLeft\<rparr> = \<Phi>'\<lparr>AdjLeft\<rparr> &&& \<Phi>\<lparr>AdjRight\<rparr> = \<Phi>'\<lparr>AdjRight\<rparr> &&& \<Phi>\<lparr>AdjNT\<rparr> = \<Phi>'\<lparr>AdjNT\<rparr> [PROOF STEP] by (simp_all add: adj_cs_simps) [PROOF STATE] proof (state) this: \<Phi>\<lparr>AdjLeft\<rparr> = \<Phi>'\<lparr>AdjLeft\<rparr> \<Phi>\<lparr>AdjRight\<rparr> = \<Phi>'\<lparr>AdjRight\<rparr> \<Phi>\<lparr>AdjNT\<rparr> = \<Phi>'\<lparr>AdjNT\<rparr> goal (3 subgoals): 1. vsv \<Phi> 2. vsv \<Phi>' 3. \<And>a. a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> \<Phi> \<Longrightarrow> \<Phi>\<lparr>a\<rparr> = \<Phi>'\<lparr>a\<rparr> [PROOF STEP] show "a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> \<Phi> \<Longrightarrow> \<Phi>\<lparr>a\<rparr> = \<Phi>'\<lparr>a\<rparr>" for a [PROOF STATE] proof (prove) goal (1 subgoal): 1. a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> \<Phi> \<Longrightarrow> \<Phi>\<lparr>a\<rparr> = \<Phi>'\<lparr>a\<rparr> [PROOF STEP] by (unfold dom, elim_in_numeral, insert sup) (auto simp: adj_field_simps) [PROOF STATE] proof (state) this: ?a \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> \<Phi> \<Longrightarrow> \<Phi>\<lparr>?a\<rparr> = \<Phi>'\<lparr>?a\<rparr> goal (2 subgoals): 1. vsv \<Phi> 2. vsv \<Phi>' [PROOF STEP] qed (auto simp: \<Phi>.L.vsv_axioms \<Phi>'.vsv_axioms) [PROOF STATE] proof (state) this: \<Phi> = \<Phi>' goal: No subgoals! [PROOF STEP] qed
{"llama_tokens": 2272, "file": "CZH_Universal_Constructions_czh_ucategories_CZH_UCAT_Adjoints", "length": 18}
(* Title: Jive Data and Store Model Author: Norbert Schirmer <schirmer at informatik.tu-muenchen.de>, 2003 Maintainer: Nicole Rauch <rauch at informatik.uni-kl.de> License: LGPL *) section \<open>Location\<close> theory Location imports AttributesIndep "../Isabelle/Value" begin text \<open>A storage location can be a field of an object, a static field, the length of an array, or the contents of an array. \<close> datatype Location = objLoc CAttId ObjectId \<comment> \<open>field in object\<close> | staticLoc AttId \<comment> \<open>static field in concrete class\<close> | arrLenLoc Arraytype ObjectId \<comment> \<open>length of an array\<close> | arrLoc Arraytype ObjectId nat \<comment> \<open>contents of an array\<close> text \<open>We only directly support one-dimensional arrays. Multidimensional arrays can be simulated by arrays of references to arrays. \<close> text \<open>The function \<open>ltype\<close> yields the content type of a location.\<close> definition ltype:: "Location \<Rightarrow> Javatype" where "ltype l = (case l of objLoc cf a \<Rightarrow> rtype (att cf) | staticLoc f \<Rightarrow> rtype f | arrLenLoc T a \<Rightarrow> IntgT | arrLoc T a i \<Rightarrow> at2jt T)" lemma ltype_simps [simp]: "ltype (objLoc cf a) = rtype (att cf)" "ltype (staticLoc f) = rtype f" "ltype (arrLenLoc T a) = IntgT" "ltype (arrLoc T a i) = at2jt T" by (simp_all add: ltype_def) text \<open>Discriminator functions to test whether a location denotes an array length or whether it denotes a static object. Currently, the discriminator functions for object and array locations are not specified. They can be added if they are needed. \<close> definition isArrLenLoc:: "Location \<Rightarrow> bool" where "isArrLenLoc l = (case l of objLoc cf a \<Rightarrow> False | staticLoc f \<Rightarrow> False | arrLenLoc T a \<Rightarrow> True | arrLoc T a i \<Rightarrow> False)" definition isStaticLoc:: "Location \<Rightarrow> bool" where "isStaticLoc l = (case l of objLoc cff a \<Rightarrow> False | staticLoc f \<Rightarrow> True | arrLenLoc T a \<Rightarrow> False | arrLoc T a i \<Rightarrow> False)" lemma isStaticLoc_simps [simp]: "isStaticLoc (objLoc cf a) = False" "isStaticLoc (staticLoc f) = True" "isStaticLoc (arrLenLoc T a) = False" "isStaticLoc (arrLoc T a i) = False" by (simp_all add: isStaticLoc_def) text \<open>The function \<open>ref\<close> yields the object or array containing the location that is passed as argument (see the function \<open>obj\<close> in \cite[p. 43 f.]{Poetzsch-Heffter97specification}). Note that for static locations the result is \<open>nullV\<close> since static locations are not associated to any object. \label{ref_def} \<close> definition ref:: "Location \<Rightarrow> Value" where "ref l = (case l of objLoc cf a \<Rightarrow> objV (cls cf) a | staticLoc f \<Rightarrow> nullV | arrLenLoc T a \<Rightarrow> arrV T a | arrLoc T a i \<Rightarrow> arrV T a)" lemma ref_simps [simp]: "ref (objLoc cf a) = objV (cls cf) a" "ref (staticLoc f) = nullV" "ref (arrLenLoc T a) = arrV T a" "ref (arrLoc T a i) = arrV T a" by (simp_all add: ref_def) text \<open>The function \<open>loc\<close> denotes the subscription of an object reference with an attribute.\<close> primrec loc:: "Value \<Rightarrow> AttId \<Rightarrow> Location" ("_.._" [80,80] 80) where "loc (objV c a) f = objLoc (catt c f) a" text \<open>Note that we only define subscription properly for object references. For all other values we do not provide any defining equation, so they will internally be mapped to \<open>arbitrary\<close>. 
\<close> text \<open>The length of an array can be selected with the function \<open>arr_len\<close>.\<close> primrec arr_len:: "Value \<Rightarrow> Location" where "arr_len (arrV T a) = arrLenLoc T a" text \<open>Arrays can be indexed by the function \<open>arr_loc\<close>.\<close> primrec arr_loc:: "Value \<Rightarrow> nat \<Rightarrow> Location" ("_.[_]" [80,80] 80) where "arr_loc (arrV T a) i = arrLoc T a i" text \<open>The functions @{term "loc"}, @{term "arr_len"} and @{term "arr_loc"} define the interface between the basic store model (based on locations) and the programming language Java. Instance field access {\tt obj.x} is modelled as @{term "obj..x"} or \<open>loc obj x\<close> (without the syntactic sugar), array length {\tt a.length} with @{term "arr_len a"}, array indexing {\tt a[i]} with @{term "a.[i]"} or \<open>arr_loc a i\<close>. The accessing of a static field {\tt C.f} can be expressed by the location itself \<open>staticLoc C'f\<close>. Of course one can build more infrastructure to make access to instance fields and static fields more uniform. We could for example define a function \<open>static\<close> which indicates whether a field is static or not and based on that create an @{term "objLoc"} location or a @{term "staticLoc"} location. But this will only complicate the actual proofs and we can already easily perform the distinction whether a field is static or not in the \jive-frontend and therefore keep the verification simpler. \<close> lemma ref_loc [simp]: "\<lbrakk>isObjV r; typeof r \<le> dtype f\<rbrakk> \<Longrightarrow> ref (r..f) = r" apply (case_tac r) apply (case_tac [!] f) apply (simp_all) done lemma obj_arr_loc [simp]: "isArrV r \<Longrightarrow> ref (r.[i]) = r" by (cases r) simp_all lemma obj_arr_len [simp]: "isArrV r \<Longrightarrow> ref (arr_len r) = r" by (cases r) simp_all end
{"author": "data61", "repo": "PSL", "sha": "2a71eac0db39ad490fe4921a5ce1e4344dc43b12", "save_path": "github-repos/isabelle/data61-PSL", "path": "github-repos/isabelle/data61-PSL/PSL-2a71eac0db39ad490fe4921a5ce1e4344dc43b12/SeLFiE/Example/afp-2020-05-16/thys/JiveDataStoreModel/Isabelle_Store/Location.thy"}
#!/usr/bin/python # -*- coding: utf-8 -*- ''' 2/24/2021 This script takes outputs from a regional climate model (RCM) - e.g. MERRA, MAR - for a particular site and puts that data into a pandas dataframe. The output can be fed to RCMpkl_to_spin.py to generate a time series to force the CFM YOU MAY HAVE TO EDIT THIS SCRIPT A LOT TO MAKE IT WORK WITH YOUR FILE STRUCTURE AND WHAT CLIMATE FILES YOU HAVE. And, for now there are little things you need to search out and change manually, like the reference climate interval. Sorry! @author: maxstev ''' import netCDF4 as nc import numpy as np import scipy.io import csv import math import sys import decimal import os import sys import matplotlib.pyplot as plt from dateutil import rrule from datetime import datetime, timedelta, date import pandas as pd import fnmatch from scipy.spatial import cKDTree from sklearn import datasets, linear_model from sklearn.metrics import mean_squared_error, r2_score from sklearn.svm import SVR import time import xarray as xr import glob import hl_analytic as hla def find_indices(points,lon,lat,tree=None): ''' find the grid point nearest a given coordinate. ''' if tree is None: # lon,lat = lon.T,lat.T lonlat = np.column_stack((lon.ravel(),lat.ravel())) tree = cKDTree(lonlat) dist,idx = tree.query(points,k=[1]) ind = np.column_stack(np.unravel_index(idx,lon.shape)) print(ind) for i,j in ind: ii=i jj=j return ii,jj #, [(i,j) for i,j in ind] def read_netcdfs_merra(files, dim, ii, jj, vv, transform_func=None): ''' Read merra files and concatenate into a pandas dataframe ''' def process_one_path(path): with xr.open_dataset(path) as ds: # transform_func should do some sort of selection or # aggregation # if transform_func is not None: # ds = transform_func(ds) # load all data from the transformed dataset, to ensure we can # use it after closing each original file ds = ds[vv].isel(lat=ii,lon=jj) ds.load() return ds datasets = [process_one_path(p) for p in files] combined = xr.concat(datasets, dim) df1 = combined.to_dataframe() return (df1.drop(labels=['lon','lat'],axis=1)).sort_index() def read_netcdfs_mar(files, dim, ii, jj, vv): ''' Read mar files and concatenate into a pandas dataframe ''' def process_one_path(path): with xr.open_dataset(path) as ds: dsd = {} for v in vv: # print(v) if len(ds[v].dims)==4: dsd[v] = ds[v][:,0,ii,jj].to_dataframe() else: dsd[v] = ds[v][:,ii,jj].to_dataframe() df_list = [v for k,v in dsd.items()] df1 = pd.concat(df_list, axis=1) return df1[df1.columns.intersection(vv)] datasets = [process_one_path(p) for p in files] return (pd.concat(datasets)).sort_index() def effectiveT(T): ''' The Arrhenius mean temperature. ''' Q = -1 * 60.0e3 R = 8.314 k = np.exp(Q/(R*T)) km = np.mean(k) return Q/(R*np.log(km)) def getClimate(lat_int,lon_int,writer=True,datatype='MERRA',timeres='1D',melt=False,runtype='local',dsource = None): ''' Load data from MERRA or MAR or whatever. Put it into a pandas dataframe, called df_CLIM. index must be datetimeindex for resampling. df_CLIM can have any number of columns: BDOT, TSKIN, SMELT, RAIN, SUBLIMATION (use capital letters. We use SMELT because melt is a pandas function) Hopefully this makes it easy to adapt for the different climate products. write df_CLIM into a pickle for future use. Reference for Summit, Greenland (my favorite test site): lat = 72.57972 lon = -38.50454 DYE-2 (my favorite wet test site): lat = 66.5 lon = -46.2 UNITS FOR MASS FLUXES IN THE DATAFRAMES ARE kg/m^2 PER TIME STEP SIZE IN THE DATA FRAME. e.g. 
if you have hourly data in the dataframe, the units for accumulation are kg/m^2/hour - the mass of precip that fell during that time interval. Parameters ---------- lat_int: float the latitude of the site you want to build a climate history for lon_int: float the longitude of the site you want to build a climate history for writer: boolean Whether or not you want to write the pandas dataframe to a pickle datatype: string The type of RCM data you are using 'MERRA' or 'MAR' for now. melt: boolean Whether or not to put melt into the pandas dataframe Tinterp: 'mean', 'effective', or 'weighted' how to resample the temperature; mean is regular mean, 'effective' is Arrhenius mean; 'weighted' is accumulation-weighted mean runtype: 'local' or 'remote' Allows you easily switch between directory structures if you are testing code locally and running on a remote server dsource: 'ERA10k', 'ERA6k', or 'NCEP20k' MAR has several flavors; choose which one. Returns ------- df_CLIM: pandas dataframe Dataframe containing the time series of each pertinent variable for the site, pulled from the RCM data. Index is a datetimeindex. ''' if not writer: print('Files will not be written!') SPY = 365.25*24*3600 todaystring = date.today().strftime("%Y%m%d") # write_out_dir = 'inputdata{}/'.format(todaystring) + datatype + 'input' write_out_dir = 'pickle' if writer: try: os.makedirs(write_out_dir) except: pass if datatype == 'MERRA': ''' smb has dimensions of (time,lat,lon) smb has units of kg m^-2 s^-1 per day (because I sum the hourly values to get a value for each day, but do not divde by 24 after that) (pretty sure, at least!) temperature has dimensions of (time,lat,lon) temperature has units K ''' ### Set directory to find climate files. if lat_int < 0: # Antarctica if runtype=='local': # ddir = 'PATH/TO/LOCAL/DATA/MERRA/Antarctica/Hourly' ddir = '/Volumes/Samsung_T1/MERRA/Antarctica/daily_melt' elif runtype=='remote': ddir = 'PATH/TO/REMOTE/DATA/MERRA/Antarctica/Hourly' elif runtype=='differentremote': ddir = 'PATH/TO/OTHER/REMOTE/DATA/CFM/MERRA/Antarctica/Hourly' # Adjust these as you see fit to set the Reference Climate Interval (RCI) spin_date_st = 1980 spin_date_end = 2019 else: # Greenland if runtype=='local': # ddir = 'PATH/TO/LOCAL/DATA/MERRA/Greenland/Hourly' # ddir = '/Volumes/Samsung_T1/MERRA/Greenland/daily_melt' ddir = '/Users/cdsteve2/RCMdata/MERRA2/Greenland/daily_melt' elif runtype=='remote': ddir = 'PATH/TO/REMOTE/DATA/MERRA/Greenland/Hourly' elif runtype == 'loki': ddir = '/home/maxstev/CFM_main/MERRA/Greenland/daily_melt' # Adjust these as you see fit to set the Reference Climate Interval (RCI) spin_date_st = 1980 spin_date_end = 1995 # input_datetimes = [dparser.parse((re.search(r'\d{8}',xx)).group()) for xx in ff] # this will extract the dates for each file # yy = np.array([float((re.search(r'\d{8}',xx)).group()[0:4]) for xx in glob.glob(ddir+'/TS/*.nc*')]) # yrs = np.arange(min(yy),max(yy)+1) fn_ll = glob.glob(ddir + '/*.nc*') nc_ll = nc.Dataset(fn_ll[0],'r') lat_ll = nc_ll.variables['lat'][:] lon_ll = nc_ll.variables['lon'][:] ii, lat_val = min(enumerate(lat_ll), key=lambda x: abs(x[1]-lat_int)) jj, lon_val = min(enumerate(lon_ll), key=lambda x: abs(x[1]-lon_int)) nc_ll.close() print('lat_val: ', lat_val) print('lon_val: ', lon_val) if runtype=='local': # pickle_folder = '/PUT/PICKLES/HERE/MERRA/IDSpickle/pickle/' pickle_folder = 'example_pickle/' else: pickle_folder = 'IDS/pickle/' pickle_name = pickle_folder + 'MERRA2_CLIM_df_{}_{}.pkl'.format(lat_val,lon_val) if not 
os.path.exists(pickle_folder): os.makedirs(pickle_folder) if os.path.isfile(pickle_name): print('pickle found') writer = False loadnetcdf = False df_CLIM = pd.read_pickle(pickle_name) # try: # df_BDOT = pd.DataFrame(df_CLIM['PRECTOT']) # df_TS = pd.DataFrame(df_CLIM['TS']) # df_CLIM.rename(columns={'PRECTOT':'BDOT','TS':'TSKIN'},inplace=True) # except Exception: # df_BDOT = pd.DataFrame(xx['BDOT']) # df_TS = pd.DataFrame(xx['TSKIN']) # if df_CLIM.BDOT.resample('1A').sum().mean()<1: # df_CLIM.BDOT = df_CLIM.BDOT *3600 #get rid of seconds dimension - MERRA is hourly, so this gives precip per hour. else: vv=['TS','EVAP','SMELT','PRECTOT','PRECSNO'] # flist_TS = glob.glob(ddir+'/TS/*.nc*') # df_TS = read_netcdfs_merra(flist_TS, dim='time',ii=ii,jj=jj,vv='TS') # df_TS.rename(columns={'TS':'TSKIN'},inplace=True) # flist_SMB = glob.glob(ddir+'/SMB/*.nc*') # df_BDOT = read_netcdfs_merra(flist_SMB, dim='time',ii=ii,jj=jj,vv='PRECTOT') # [kg m^-2 s^-1] # df_BDOT = (df_BDOT.rename(columns={'PRECTOT':'BDOT'}))*3600 # [kg m^-2 hour^-1] (this is amount of precip per MERRA time interval) df_merra = read_netcdfs_merra(fn_ll, dim='time',ii=ii,jj=jj,vv=vv) df_CLIM = df_merra # ACCVAR = 'PRECTOT' # TVAR = 'TS' # df_MELT = None # df_RAIN = None #################### #### end MERRA ##### elif datatype == 'MAR': spin_date_st = 1980 spin_date_end = 1995 print('Using MAR') if lat_int < 0: print('no Antarctic MAR data') sys.exit() else: if runtype=='local': ddir = '/Volumes/Samsung_T1/MAR311/Greenland/Daily' if not dsource: dsource = 'ERA10k' print('using MAR ', dsource) if dsource == 'ERA10k': d2 = '/ERA_1958-2019-10km/' vv = ['ME','SF','ST2','RF','SU','TT'] elif dsource == 'ERA6k': d2 = '/ERA_1979-2020-6km/' vv = ['ME','SF','ST2','RF','TT'] elif dsource == 'NCEP20k': d2 = '/NCEP1_1948-2020_20km/' vv = ['ME','SF','ST2','RF','SU','TT'] pickle_folder = ddir + '/pickles' + d2 print(pickle_folder) if not os.path.exists(pickle_folder): os.makedirs(pickle_folder) # searchdir = ddir + d2 + '/*.nc' flist = glob.glob(ddir + d2 + '*.nc') rgr = nc.Dataset(flist[0],'r') lat = rgr['LAT'][:,:] lon = rgr['LON'][:,:] ii,jj = find_indices((lon_int,lat_int),lon,lat) lat_val = lat[ii,jj] lon_val = lon[ii,jj] print('lat_val: ', lat_val) print('lon_val: ', lon_val) rgr.close() PN = pickle_folder + 'MAR_{}_CLIM_df_{}_{}.pkl'.format(dsource,lat_val,lon_val) if os.path.isfile(PN): df_CLIM = pd.read_pickle(PN) print('Pickle found!') df_BDOT = pd.DataFrame(df_CLIM.BDOT) df_TS = pd.DataFrame(df_CLIM.TSKIN) # vv = ['ST2','SMB'] else: df_CLIM = (read_netcdfs_mar(flist,'TIME',ii=ii,jj=jj,vv=vv))[str(spin_date_st):] if 'SMB' in df_CLIM.columns: df_BDOT = pd.DataFrame(df_CLIM['SMB']/1000*917).rename(columns = ['BDOT']) #put into units kg/m^2/day (i.e. per time resolution in the files)) df_MELT = None df_RAIN = None else: if 'SU' in df_CLIM.columns: df_BDOT = pd.DataFrame(((df_CLIM['SF']-df_CLIM['SU'])/1000*917),columns=['BDOT']) #put into units kg/m^2/day (i.e. per time resolution in the files)) df_CLIM['BDOT'] = df_BDOT.BDOT.values df_CLIM.drop(['SF','SU'],axis=1,inplace=True) else: df_BDOT = pd.DataFrame((df_CLIM['SF'])/1000*917).rename(columns={'SF':'BDOT'}) #put into units kg/m^2/day (i.e. per time resolution in the files)) df_CLIM['BDOT'] = df_BDOT.BDOT.values df_CLIM.drop(['SF'],axis=1,inplace=True) df_CLIM['ME'] = df_CLIM['ME']/1000*917 #put into units kg/m^2/day (i.e. per time resolution in the files)) df_CLIM['RF'] = df_CLIM['RF']/1000*917 #put into units kg/m^2/day (i.e. 
per time resolution in the files)) # df_MELT = pd.DataFrame(df_CLIM['ME']/1000*917/3600).rename(columns={'ME':'MELT'}) #put into equivalent units to the merra data (kg/m^2/s) # df_RAIN = pd.DataFrame(df_CLIM['RF']/1000*917/3600).rename(columns={'RF':'RAIN'}) #put into equivalent units to the merra data (kg/m^2/s) df_TS = pd.DataFrame(df_CLIM[['ST2','TT']]).rename(columns = {'ST2':'TSKIN','TT':'T2M'}) + 273.15 drn = {'ME':'SMELT','SU':'SUBLIMATION','SF':'BDOT','RF':'RAIN','ST2':'TSKIN','SMB':'BDOT','TT':'T2M'} df_CLIM.rename(mapper=drn,axis=1,inplace=True) df_CLIM.TSKIN = df_CLIM.TSKIN + 273.15 df_CLIM.T2M = df_CLIM.T2M + 273.15 ############### ### end MAR ### ############### elif datatype == 'RACMO': ### Set directory to find climate files. if lat_int < 0: # Antarctica if runtype=='local': ddir = '/Volumes/Samsung_T1/RACMO/Antarctica' elif runtype=='remote': ddir = 'PATH/TO/REMOTE/DATA/RACMO/Antarctica/Hourly' elif runtype=='differentremote': ddir = 'PATH/TO/OTHER/REMOTE/DATA/RACMO/Antarctica/Hourly' # Adjust these as you see fit to set the Reference Climate Interval (RCI) spin_date_st = 1980 spin_date_end = 2019 else: # Greenland if runtype=='local': # ddir = 'PATH/TO/LOCAL/DATA/MERRA/Greenland/Hourly' ddir = '/Volumes/Samsung_T1/RACMO/Greenland' elif runtype=='remote': ddir = 'PATH/TO/REMOTE/DATA/RACMO/Greenland/Hourly' elif runtype == 'differentremote': ddir = 'PATH/TO/OTHER/REMOTE/DATA/RACMO/Greenland/Hourly' spin_date_st = 1980 spin_date_end = 1995 flist = glob.glob(ddir + '/*1958-2016*.nc*')[0] rgr = nc.Dataset(flist[0],'r') lat = rgr['LAT'][:,:] lon = rgr['LON'][:,:] ii,jj = find_indices((lon_int,lat_int),lon,lat) lat_val = lat[ii,jj] lon_val = lon[ii,jj] print('lat_val: ', lat_val) print('lon_val: ', lon_val) rgr.close() if writer: if datatype =='MERRA': df_CLIM.to_pickle(pickle_folder + 'MERRA2_CLIM_df_{}_{}.pkl'.format(lat_val,lon_val)) elif datatype == 'MAR': df_CLIM.to_pickle(pickle_folder + 'MAR_{}_CLIM_df_{}_{}.pkl'.format(dsource,lat_val,lon_val)) return df_CLIM # return CD, stepsperyear, depth_S1, depth_S2, desired_depth if __name__ == '__main__': tic = time.time() LLpair = sys.argv[1] nn = np.fromstring(LLpair,dtype =float, sep=' ') lat_int = nn[0] lon_int = nn[1] writer=True datatype='MERRA' runtype = 'local' df_CLIM = getClimate(lat_int,lon_int,writer = True, runtype = runtype) print(time.time()-tic)
{"hexsha": "e0783995f34d1a18371acc0a213f57497d162465", "size": 15923, "ext": "py", "lang": "Python", "max_stars_repo_path": "CFM_main/siteClimate_from_RCM.py", "max_stars_repo_name": "UWGlaciology/CommunityFirnModel", "max_stars_repo_head_hexsha": "820f8b3cfd8355b0c3085058a51f7488cac17fbe", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2019-03-28T13:56:51.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-28T12:39:10.000Z", "max_issues_repo_path": "CFM_main/siteClimate_from_RCM.py", "max_issues_repo_name": "UWGlaciology/CommunityFirnModel", "max_issues_repo_head_hexsha": "820f8b3cfd8355b0c3085058a51f7488cac17fbe", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-06-10T06:53:49.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-12T22:07:02.000Z", "max_forks_repo_path": "CFM_main/siteClimate_from_RCM.py", "max_forks_repo_name": "UWGlaciology/CommunityFirnModel", "max_forks_repo_head_hexsha": "820f8b3cfd8355b0c3085058a51f7488cac17fbe", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2017-10-09T08:16:25.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-11T03:51:40.000Z", "avg_line_length": 37.554245283, "max_line_length": 169, "alphanum_fraction": 0.5794762294, "include": true, "reason": "import numpy,import scipy,from scipy", "num_tokens": 4406}
#!/usr/bin/env python """About TUI window 2003-12-17 ROwen 2004-03-08 ROwen Expanded the text and made it center-justified. Moved the code to a separate class. Added test code. 2004-05-18 ROwen Stopped obtaining TUI model in addWindow; it was ignored. Thus stopped importing TUI.TUIModel in the main code. 2005-10-24 ROwen Updated the acknowledgements to include WingIDE. 2006-06-01 ROwen Updated the acknowledgements to include Fritz Stauffer. 2007-04-17 ROwen Updated the acknowledgements to add "scripts". 2009-04-21 ROwen Updated for tuiModel root->tkRoot. 2010-03-10 ROwen Added WindowName 2010-03-18 ROwen Added special file paths to the information. Removed Wingware from the acknowledgements. 2010-04-23 ROwen Stopped using Exception.message to make Python 2.6 happier. 2011-02-18 ROwen Acknowledge Joseph Huehnerhoff for the Windows builds. 2012-10-15 ROwen Assume matplotlib is installed. Report pygame version, if installed. 2013-09-05 ROwen Change "import Image" to "from PIL import Image" for compatibility with Pillow. 2014-09-16 ROwen Modified to use astropy instead of pyfits, if available. 2014-10-28 ROwen Improved version display if pyfits used instead of astropy. """ import os.path import sys from PIL import Image import matplotlib import numpy try: import astropy astropyVers = "astropy: %s" % (astropy.__version__,) except ImportError: import pyfits astropyVers = "pyfits: %s" % (pyfits.__version__,) try: import pygame pygameVersion = pygame.__version__ except ImportError: pygameVersion = "not installed" import RO.Wdg from RO.StringUtil import strFromException import TUI.TUIModel import TUI.TUIPaths import TUI.Version WindowName = "%s.About %s" % (TUI.Version.ApplicationName, TUI.Version.ApplicationName) def addWindow(tlSet): tlSet.createToplevel( name = WindowName, resizable = False, visible = False, wdgFunc = AboutWdg, ) def getInfoDict(): global astropyVers global pygameVersion tuiModel = TUI.TUIModel.getModel() res = {} res["tui"] = TUI.Version.VersionStr res["python"] = sys.version.split()[0] res["tcltk"] = tuiModel.tkRoot.call("info", "patchlevel") res["matplotlib"] = matplotlib.__version__ res["numpy"] = numpy.__version__ res["astropy"] = astropyVers # Image uses VERSION, but PILLOW supports __version__ res["pil"] = getattr(Image, "VERSION", getattr(Image, "__version__", "unknown")) res["pygame"] = pygameVersion res["specialFiles"] = getSpecialFileStr() return res def getSpecialFileStr(): """Return a string describing where the special files are """ def strFromPath(filePath): if os.path.exists(filePath): return filePath return "%s (not found)" % (filePath,) outStrList = [] for name, func in ( ("Preferences", TUI.TUIPaths.getPrefsFile), ("Window Geom.", TUI.TUIPaths.getGeomFile), ("User Presets", TUI.TUIPaths.getUserPresetsFile) ): try: filePath = func() pathStr = strFromPath(filePath) except Exception as e: pathStr = "?: %s" % (strFromException(e),) outStrList.append("%s: %s" % (name, pathStr)) tuiAdditionsDirs = TUI.TUIPaths.getAddPaths(ifExists=False) for ind, filePath in enumerate(tuiAdditionsDirs): pathStr = strFromPath(filePath) outStrList.append("%sAdditions %d: %s" % (TUI.Version.ApplicationName, ind + 1, pathStr)) outStrList.append("Error Log: %s" % (sys.stderr.name,)) return "\n".join(outStrList) class AboutWdg(RO.Wdg.StrLabel): def __init__(self, master): versDict = getInfoDict() RO.Wdg.StrLabel.__init__( self, master = master, text = u"""APO 3.5m Telescope User Interface Version %(tui)s by Russell Owen Special files: %(specialFiles)s Library versions: Python: %(python)s 
Tcl/Tk: %(tcltk)s matplotlib: %(matplotlib)s numpy: %(numpy)s %(astropy)s PIL: %(pil)s pygame: %(pygame)s With special thanks to: - Joseph Huehnerhoff for the Windows builds - Craig Loomis and Fritz Stauffer for the APO hub - Bob Loewenstein for Remark - Dan Long for the photograph used for the icon - APO observing specialists and users for suggestions, scripts and bug reports """ % (versDict), justify = "left", borderwidth = 10, ) if __name__ == "__main__": import TUI.TUIModel root = RO.Wdg.PythonTk() tm = TUI.TUIModel.getModel(True) addWindow(tm.tlSet) tm.tlSet.makeVisible('TUI.About TUI') getSpecialFileStr() root.lower() root.mainloop()
{"hexsha": "560e573a83b57b0eed6b06e84e17c6ec08b0dfd4", "size": 4778, "ext": "py", "lang": "Python", "max_stars_repo_path": "TUI/TUIMenu/AboutWindow.py", "max_stars_repo_name": "r-owen/TUI", "max_stars_repo_head_hexsha": "8f130368254161a2748167b7c8260cc24170c28c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-04-29T20:28:20.000Z", "max_stars_repo_stars_event_max_datetime": "2015-04-29T20:28:20.000Z", "max_issues_repo_path": "TUI/TUIMenu/AboutWindow.py", "max_issues_repo_name": "ApachePointObservatory/TUI", "max_issues_repo_head_hexsha": "8f130368254161a2748167b7c8260cc24170c28c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-06-05T22:53:58.000Z", "max_issues_repo_issues_event_max_datetime": "2017-06-05T22:53:58.000Z", "max_forks_repo_path": "TUI/TUIMenu/AboutWindow.py", "max_forks_repo_name": "r-owen/TUI", "max_forks_repo_head_hexsha": "8f130368254161a2748167b7c8260cc24170c28c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-01-28T06:28:02.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-28T06:28:02.000Z", "avg_line_length": 31.4342105263, "max_line_length": 99, "alphanum_fraction": 0.6732942654, "include": true, "reason": "import numpy,import astropy", "num_tokens": 1266}
""" Geothermal: Climate signal: What happens when assuming a climate change is linear, when in fact it was abrupt? """ import numpy from fatiando import logger, utils from fatiando.geothermal import climsig from fatiando.vis import mpl log = logger.get() log.info(logger.header()) log.info(__doc__) # Generating synthetic data using an ABRUPT model amp = 3 age = 54 zp = numpy.arange(0, 100, 1) temp, error = utils.contaminate(climsig.abrupt(amp, age, zp), 0.02, percent=True, return_stddev=True) # Preparing for the inversion assuming that the change was LINEAR p, residuals = climsig.ilinear(temp, zp) est_amp, est_age = p mpl.figure(figsize=(12,5)) mpl.subplot(1, 2, 1) mpl.title("Climate signal\n(true is abrupt but inverted using linear)") mpl.plot(temp, zp, 'ok', label='Observed') mpl.plot(temp - residuals, zp, '--r', linewidth=3, label='Predicted') mpl.legend(loc='lower right', numpoints=1) mpl.xlabel("Temperature (C)") mpl.ylabel("Z") mpl.ylim(100, 0) ax = mpl.subplot(1, 2, 2) ax2 = mpl.twinx() mpl.title("Age and amplitude") width = 0.3 ax.bar([1 - width], [age], width, color='b', label="True") ax.bar([1], [est_age], width, color='r', label="Estimate") ax2.bar([2 - width], [amp], width, color='b') ax2.bar([2], [est_amp], width, color='r') ax.legend(loc='upper center', numpoints=1) ax.set_ylabel("Age (years)") ax2.set_ylabel("Amplitude (C)") ax.set_xticks([1, 2]) ax.set_xticklabels(['Age', 'Amplitude']) ax.set_ylim(0, 150) ax2.set_ylim(0, 4) mpl.show()
{"hexsha": "0b54bd8fec29eebcfaff6ab8925cb51f7ba50299", "size": 1482, "ext": "py", "lang": "Python", "max_stars_repo_path": "_static/cookbook/geothermal_climsig_wrong.py", "max_stars_repo_name": "fatiando/v0.1", "max_stars_repo_head_hexsha": "1ab9876b247c67834b8e1c874d5b1d86f82802e2", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_static/cookbook/geothermal_climsig_wrong.py", "max_issues_repo_name": "fatiando/v0.1", "max_issues_repo_head_hexsha": "1ab9876b247c67834b8e1c874d5b1d86f82802e2", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_static/cookbook/geothermal_climsig_wrong.py", "max_forks_repo_name": "fatiando/v0.1", "max_forks_repo_head_hexsha": "1ab9876b247c67834b8e1c874d5b1d86f82802e2", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.64, "max_line_length": 74, "alphanum_fraction": 0.7044534413, "include": true, "reason": "import numpy", "num_tokens": 463}
#include "hsm/details/has_action.h" #include "hsm/details/state.h" #include "hsm/details/traits.h" #include "hsm/details/transition_table.h" #include "hsm/front/transition_tuple.h" #include <gtest/gtest.h> #include <boost/hana.hpp> using namespace ::testing; namespace { class TraitsTests : public Test { }; struct e1 { }; struct S1 { constexpr auto on_entry(){ } constexpr auto on_exit(){ } static constexpr auto make_transition_table() { } constexpr auto make_internal_transition_table() { return 0; } constexpr auto defer_events() { return hsm::events<e1>; } }; struct S2 { }; } TEST_F(TraitsTests, should_recognize_exit_state) { constexpr auto exit = hsm::state_t<hsm::Exit<S1, S2>> {}; static_assert(hsm::is_exit_state(exit)); } TEST_F(TraitsTests, should_recognize_transition_table) { static_assert(hsm::has_transition_table(hsm::state_t<S1> {})); } TEST_F(TraitsTests, should_not_recognize_transition_table) { static_assert(!hsm::has_transition_table(hsm::state_t<S2> {})); } TEST_F(TraitsTests, should_recognize_internal_transition_table) { static_assert(hsm::has_internal_transition_table(hsm::state_t<S1> {})); } TEST_F(TraitsTests, should_recognize_on_entry_function) { static_assert(hsm::has_entry_action(hsm::state_t<S1> {})); } TEST_F(TraitsTests, should_recognize_on_exit_function) { static_assert(hsm::has_exit_action(hsm::state_t<S1> {})); } TEST_F(TraitsTests, should_recognize_history_state) { static_assert(hsm::is_history_state(hsm::state_t<hsm::History<S1>> {})); } TEST_F(TraitsTests, should_recognize_initial_state) { static_assert(hsm::is_initial_state(hsm::state_t<hsm::Initial<S1>> {})); } TEST_F(TraitsTests, should_recognize_defered_events) { static_assert(hsm::has_deferred_events(hsm::state_t<S1> {})); } TEST_F(TraitsTests, should_recognize_no_action) { static_assert(hsm::is_no_action(hsm::noAction {})); } TEST_F(TraitsTests, should_recognize_substate_initial_state_entry_action) { struct SubState { static auto constexpr make_transition_table() { return hsm::transition_table(hsm::transition( hsm::initial_t<S1> {}, hsm::event_t<e1> {}, hsm::noGuard {}, hsm::noAction {}, hsm::state_t<S1> {})); } }; static_assert(hsm::has_substate_initial_state_entry_action(hsm::state_t<SubState> {})); }
{"hexsha": "bb0664814771e960236ebc081c9407608d8b8d12", "size": 2487, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "test/unit/traits_tests.cpp", "max_stars_repo_name": "erikzenker/hsm", "max_stars_repo_head_hexsha": "02369b68b36faa2c3e101b66725b5e38f15250a8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 137.0, "max_stars_repo_stars_event_min_datetime": "2020-01-15T07:58:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-08T17:15:01.000Z", "max_issues_repo_path": "test/unit/traits_tests.cpp", "max_issues_repo_name": "wuyadie/hsm", "max_issues_repo_head_hexsha": "02369b68b36faa2c3e101b66725b5e38f15250a8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 85.0, "max_issues_repo_issues_event_min_datetime": "2019-08-04T17:22:19.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-12T16:48:49.000Z", "max_forks_repo_path": "test/unit/traits_tests.cpp", "max_forks_repo_name": "wuyadie/hsm", "max_forks_repo_head_hexsha": "02369b68b36faa2c3e101b66725b5e38f15250a8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10.0, "max_forks_repo_forks_event_min_datetime": "2019-12-01T14:03:38.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-05T18:17:31.000Z", "avg_line_length": 22.2053571429, "max_line_length": 91, "alphanum_fraction": 0.6924004825, "num_tokens": 632}
# -*- coding: utf-8 -*- """ Created on Wed Feb 27 13:34:37 2019 With help from : https://pytorch.org/tutorials/beginner/data_loading_tutorial.html https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py @author: abobashe """ import os import numpy as np import pandas as pd import torchvision.datasets as dset import torchvision.transforms as transforms class JocondeDataset(dset.ImageFolder): """Joconde dataset.""" def __init__(self, csv_file, images_root_dir, dataset_name = '', label_column='label', exclude_labels=[] , filename_column='imagePath', filter_dict = {}, add_columns = ['ref'], transform=None, target_transform=None, multiple_labels=False): """ Args: csv_file (string): Path to the csv file with image file information. root_dir (string): Directory with all the images. label_column (string): Column in the csv file that defines the image label exclude_lables (list of strings): Exclude items with labels from this list from the dataset filter_dict (dictionary): Define the filters as a dictionary key =<column name> value=[list of values] add_columns (list of strings): Additional items included with the sample file transform (callable, optional): Optional transform to be applied on a sample. target_transform (callable, optional): A function/transform that takes in the target and transforms it. """ # to curcumvent the exception in the base class constructor # output an empty file fake = self.__output_fake_dir(os.path.dirname(csv_file)) super(JocondeDataset, self).__init__(images_root_dir,#os.path.dirname(csv_file), transform=transform, target_transform=target_transform) self.__remove_fake_dir(fake) classes, class_to_idx, samples = self._make_dataset(images_root_dir, csv_file, label_column=label_column, exclude_labels=exclude_labels , multi_label=multiple_labels, filename_column= filename_column, filter_dict=filter_dict, add_columns=add_columns) self.name = dataset_name self.classes = classes self.class_to_idx = class_to_idx self.samples = samples self.targets = [s[1] for s in samples] self.imgs = self.samples self.descr_file = csv_file if isinstance(self.targets[0], int): #single label labels_per_class = [sum([x == i for x in self.targets]) for i in range(len(self.classes))] else: #multi label labels_per_class = [sum([x[i] for x in self.targets]) for i in range(len(self.classes))] self.labels_count = dict(zip(self.classes, labels_per_class)) def _make_dataset(self, root, dataset_file, label_column='label', exclude_labels=[] , multi_label = False, filename_column='imagePath', filter_dict = {}, add_columns = [] ): images = [] root = os.path.expanduser(root) df = pd.read_csv(dataset_file, na_filter=False) # get classes #classes = df[label_column].dropna().unique() classes = [x for x in df[label_column].unique() if len(x) > 0] classes = [ elem for elem in classes if elem not in exclude_labels] if(multi_label): #exclude compositions of classes based on assumption that #labels will be separated by '+' #classes = [ elem for elem in classes if elem.find('+') < 0 ] classes = sorted(set('+'.join(classes).split('+'))) else: classes.sort() class_to_idx = {classes[i]: i for i in range(len(classes))} #filter the datatset if(len(exclude_labels) > 0): df = df.loc[df[label_column].isin(classes)] for column, values in filter_dict.items(): df = df[df[column].isin(values)] #create the imagepath and target(s) list for index, row in df.iterrows(): path = os.path.normpath(os.path.join(root, row[filename_column].strip('/\\'))) if(multi_label): target = [1 if key in 
row[label_column].split('+') else 0 for key in class_to_idx.keys()] else: target = class_to_idx[row[label_column]] item = (path , target) for i in range(len(add_columns)): item = item + (row[add_columns[i]],) images.append(item) return classes, class_to_idx, images def __getitem__(self, index): """ Args: index (int): Index Returns: tuple: (sample, target) where target is class_index of the target class. """ path = self.samples[index][0] target = self.samples[index][1] sample = self.loader(path) if self.transform is not None: sample = self.transform(sample) if self.target_transform is not None: target = self.target_transform(target) return sample, target def __output_fake_dir(self, dir): fake_dir = os.path.join(dir, 'foo') if not os.path.exists(fake_dir ): os.makedirs(fake_dir) fake_file = os.path.join(fake_dir, 'bar.png') if not os.path.exists(fake_file ): open(fake_file, 'a').close() return fake_dir def __remove_fake_dir(self, fake_dir): import shutil shutil.rmtree(fake_dir) def extra_repr(self): return '\n'.join(['Description file: {}'.format(self.descr_file), 'Number of classes: {}'.format(len(self.classes)), 'Number of uniqie labels: {}'.format(np.unique(np.array(self.targets), axis=0).shape[0], 'Number of class labels: {}'.format(sum(self.labels_count.values())))]) def get_norm_values(self): for trans in self.transform.transforms: if(isinstance(trans, transforms.transforms.Normalize)) : return trans return transforms.transforms.Normalize(mean = [ 0.5, 0.5, 0.5 ], std = [ 0.5, 0.5, 0.5 ]) #%% #TODO: Test class JocondeDataset_ext(JocondeDataset): def __init__(self, csv_file, images_root_dir, dataset_name = '', label_column='label', exclude_labels=[] , filename_column='imagePath', filter_dict = {}, add_columns = ['ref'], transform=None, target_transform=None, multiple_labels=False): """ Args: the same as in JocondeDataset """ super(JocondeDataset_ext, self ).__init__(csv_file, images_root_dir, dataset_name, label_column, exclude_labels, filename_column, filter_dict, add_columns, transform, target_transform, multiple_labels) def __getitem__(self, index): sample, target = super(JocondeDataset_ext, self ).__getitem__(index) if len(self.samples[index]) > 1: extra_data = self.samples[index][-1] else: extra_data = None return sample, target, extra_data #%% # Alternative approach to delivering extra data class JocondeDataset_wrap(object): def __init__(self, Joconde_dataset): if isinstance(Joconde_dataset, JocondeDataset): raise TypeError() self.dataset = Joconde_dataset def __getitem__(self, index): sample, target = self.dataset.__getitem__(index) if len(self.samples[index]) > 1: extra_data = self.samples[index][-1] else: extra_data = None return sample, target, extra_data
{"hexsha": "882806b3f7d30acc1edc5f2eb311dd4b6eb2c291", "size": 9456, "ext": "py", "lang": "Python", "max_stars_repo_path": "MonaLIA/data/image_dataset.py", "max_stars_repo_name": "Wimmics/MonaLIA", "max_stars_repo_head_hexsha": "448cbcf08ddcd837f63cd959a5b7f1ff393e60d3", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MonaLIA/data/image_dataset.py", "max_issues_repo_name": "Wimmics/MonaLIA", "max_issues_repo_head_hexsha": "448cbcf08ddcd837f63cd959a5b7f1ff393e60d3", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MonaLIA/data/image_dataset.py", "max_forks_repo_name": "Wimmics/MonaLIA", "max_forks_repo_head_hexsha": "448cbcf08ddcd837f63cd959a5b7f1ff393e60d3", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.9759036145, "max_line_length": 120, "alphanum_fraction": 0.5039128596, "include": true, "reason": "import numpy", "num_tokens": 1758}
''' Copyright 2022 Airbus SAS Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' import unittest import numpy as np import pandas as pd from os.path import join, dirname from pandas import DataFrame, read_csv from scipy.interpolate import interp1d from sos_trades_core.execution_engine.execution_engine import ExecutionEngine from sos_trades_core.tests.core.abstract_jacobian_unit_test import AbstractJacobianUnittest class MacroEconomicsJacobianDiscTest(AbstractJacobianUnittest): #AbstractJacobianUnittest.DUMP_JACOBIAN = True def setUp(self): self.name = 'Test' self.ee = ExecutionEngine(self.name) self.year_start = 2020 self.year_end = 2050 self.time_step = 1 self.years = np.arange(self.year_start, self.year_end + 1, self.time_step) self.nb_per = round( (self.year_end - self.year_start) / self.time_step + 1) # ------------------------- # csv data # energy production self.data_dir = join(dirname(__file__), 'data') brut_net = 1/1.45 #prepare energy df energy_outlook = pd.DataFrame({ 'year': [2010, 2017, 2018, 2025, 2030, 2035, 2040, 2050, 2060, 2100], 'energy': [149.483879, 162.7848774, 166.4685636, 180.7072889, 189.6932084, 197.8418842, 206.1201182, 220.000, 250.0, 300.0]}) f2 = interp1d(energy_outlook['year'], energy_outlook['energy']) #Find values for 2020, 2050 and concat dfs energy_supply = f2(np.arange(self.year_start, self.year_end+1)) energy_supply_values = energy_supply * brut_net energy_supply_df = pd.DataFrame({'years': self.years, 'Total production': energy_supply_values}) energy_supply_df.index = self.years energy_supply_df.loc[2021, 'Total production'] = 116.1036348 self.energy_supply_df = energy_supply_df # ------------------------- # csv data # co2 emissions energy_supply_csv = read_csv(join(self.data_dir, 'energy_supply_data_onestep.csv')) energy_supply_start = energy_supply_csv.loc[energy_supply_csv['years'] >= self.year_start] energy_supply_end = energy_supply_csv.loc[energy_supply_csv['years'] <= self.year_end] energy_supply_df = pd.merge(energy_supply_start, energy_supply_end) # energy production divided by 1e3 (scaling factor production) energy_supply_csv['cumulative_total_energy_supply'] = energy_supply_csv['cumulative_total_energy_supply'] / 1e3 self.co2_emissions_gt = energy_supply_df.rename( columns={'total_CO2_emitted': 'Total CO2 emissions'}) self.co2_emissions_gt.index = self.years for i in np.arange(2021, self.year_end+1): emission_vefore = self.co2_emissions_gt.loc[i-1, 'Total CO2 emissions'] self.co2_emissions_gt.loc[i,'Total CO2 emissions'] = emission_vefore*(1.02) self.default_co2_efficiency = pd.DataFrame( {'years': self.years, 'CO2_tax_efficiency': 40.0}, index=self.years) # ------------------------- # csv data # damage damage_csv = read_csv(join(self.data_dir, 'damage_data_onestep.csv')) # adapt lenght to the year range damage_df_start = damage_csv.loc[damage_csv['years'] >= self.year_start] damage_df_end = damage_csv.loc[damage_csv['years'] <= self.year_end] damage_df = pd.merge(damage_df_start, damage_df_end) self.damage_df = damage_df[['years', 'damage_frac_output']] self.damage_df.index = 
self.years # ------------------------- # csv data # population global_data_dir = join(dirname(dirname(__file__)), 'data') population_csv = read_csv( join(global_data_dir, 'population_df.csv')) population_df_start = population_csv.loc[population_csv['years'] >= self.year_start] population_df_end = population_csv.loc[population_csv['years'] <= self.year_end] self.population_df = pd.merge(population_df_start, population_df_end) self.population_df.index = self.years # energy invest divided by 1e2 (scaling factor invest) energy_invest = np.asarray([2.6] * self.nb_per) self.total_invest = np.asarray([27.0] * self.nb_per) self.total_invest = DataFrame( {'years': self.years, 'share_investment': self.total_invest}) self.share_energy_investment = DataFrame( {'years': self.years, 'share_investment': energy_invest}) # default CO2 tax self.default_CO2_tax = pd.DataFrame( {'years': self.years, 'CO2_tax': 50.0}, index = self.years) self.default_CO2_tax.loc[2020, 'CO2_tax'] = 5000.0 self.default_CO2_tax.loc[2021, 'CO2_tax'] = 120.0 #Population workforce self.working_age_population_df = pd.DataFrame( {'years': self.years, 'population_1570': 6300}, index=self.years) # energy_capital nb_per = len(self.years) energy_capital_year_start = 16.09 energy_capital = [] energy_capital.append(energy_capital_year_start) for year in np.arange(1, nb_per): energy_capital.append(energy_capital[year - 1] * 1.02) self.energy_capital = pd.DataFrame({'years': self.years, 'energy_capital': energy_capital}) def analytic_grad_entry(self): return [ self.test_macro_economics_analytic_grad, self.test_macro_economics_analytic_grad_damageproductivity, self.test_macro_economics_analytic_grad_max_damage, self.test_macro_economics_analytic_grad_gigantic_invest, self.test_macro_economics_very_high_emissions, self.test_macro_economics_negativeco2_emissions, self.test_macro_economics_negativeco2_tax ] def test_macro_economics_analytic_grad(self): self.model_name = 'Macroeconomics' ns_dict = {'ns_witness': f'{self.name}', 'ns_energy_mix': f'{self.name}', 'ns_public': f'{self.name}', 'ns_functions': f'{self.name}', 'ns_ref':f'{self.name}' } self.ee.ns_manager.add_ns_def(ns_dict) mod_path = 'climateeconomics.sos_wrapping.sos_wrapping_witness.macroeconomics.macroeconomics_discipline.MacroeconomicsDiscipline' builder = self.ee.factory.get_builder_from_module( self.model_name, mod_path) self.ee.factory.set_builders_to_coupling_builder(builder) self.ee.configure() self.ee.display_treeview_nodes() inputs_dict = {f'{self.name}.year_start': self.year_start, f'{self.name}.year_end': self.year_end, f'{self.name}.time_step': self.time_step, f'{self.name}.init_rate_time_pref': 0.015, f'{self.name}.conso_elasticity': 1.45, f'{self.name}.{self.model_name}.damage_to_productivity': False, f'{self.name}.frac_damage_prod': 0.3, f'{self.name}.share_energy_investment': self.share_energy_investment, f'{self.name}.energy_production': self.energy_supply_df, f'{self.name}.damage_df': self.damage_df, f'{self.name}.population_df': self.population_df, f'{self.name}.total_investment_share_of_gdp': self.total_invest, f'{self.name}.CO2_taxes': self.default_CO2_tax, f'{self.name}.{self.model_name}.CO2_tax_efficiency': self.default_co2_efficiency, f'{self.name}.co2_emissions_Gt': self.co2_emissions_gt, f'{self.name}.working_age_population_df' : self.working_age_population_df, f'{self.name}.energy_capital': self.energy_capital, f'{self.name}.alpha': 0.5 } self.ee.load_study_from_input_dict(inputs_dict) disc_techno = self.ee.root_process.sos_disciplines[0] 
self.check_jacobian(location=dirname(__file__), filename=f'jacobian_macroeconomics_discipline.pkl', discipline=disc_techno, step=1e-15, derr_approx='complex_step', inputs=[f'{self.name}.energy_production', f'{self.name}.damage_df', f'{self.name}.share_energy_investment', f'{self.name}.total_investment_share_of_gdp', f'{self.name}.co2_emissions_Gt', f'{self.name}.CO2_taxes', f'{self.name}.population_df', f'{self.name}.working_age_population_df', f'{self.name}.energy_capital'], outputs=[f'{self.name}.economics_df', f'{self.name}.energy_investment', f'{self.name}.pc_consumption_constraint', f'{self.name}.global_investment_constraint', f'{self.name}.emax_enet_constraint', f'{self.name}.delta_capital_objective', f'{self.name}.delta_capital_objective_weighted', f'{self.name}.delta_capital_constraint', f'{self.name}.delta_capital_constraint_dc', f'{self.name}.delta_capital_lintoquad']) def test_macro_economics_analytic_grad_damageproductivity(self): self.model_name = 'Macroeconomics' ns_dict = {'ns_witness': f'{self.name}', 'ns_energy_mix': f'{self.name}', 'ns_public': f'{self.name}', 'ns_functions': f'{self.name}', 'ns_ref':f'{self.name}'} self.ee.ns_manager.add_ns_def(ns_dict) mod_path = 'climateeconomics.sos_wrapping.sos_wrapping_witness.macroeconomics.macroeconomics_discipline.MacroeconomicsDiscipline' builder = self.ee.factory.get_builder_from_module( self.model_name, mod_path) self.ee.factory.set_builders_to_coupling_builder(builder) self.ee.configure() self.ee.display_treeview_nodes() inputs_dict = {f'{self.name}.year_start': self.year_start, f'{self.name}.year_end': self.year_end, f'{self.name}.time_step': self.time_step, f'{self.name}.init_rate_time_pref': 0.015, f'{self.name}.conso_elasticity': 1.45, f'{self.name}.{self.model_name}.damage_to_productivity': True, f'{self.name}.frac_damage_prod': 0.3, f'{self.name}.share_energy_investment': self.share_energy_investment, # f'{self.name}.share_non_energy_investment': # share_non_energy_investment, f'{self.name}.energy_production': self.energy_supply_df, f'{self.name}.damage_df': self.damage_df, f'{self.name}.population_df': self.population_df, f'{self.name}.total_investment_share_of_gdp': self.total_invest, f'{self.name}.CO2_taxes': self.default_CO2_tax, f'{self.name}.{self.model_name}.CO2_tax_efficiency': self.default_co2_efficiency, f'{self.name}.co2_emissions_Gt': self.co2_emissions_gt, f'{self.name}.working_age_population_df' : self.working_age_population_df, f'{self.name}.energy_capital': self.energy_capital, f'{self.name}.alpha': 0.5 } self.ee.load_study_from_input_dict(inputs_dict) disc_techno = self.ee.root_process.sos_disciplines[0] self.check_jacobian(location=dirname(__file__), filename=f'jacobian_macroeconomics_discipline_grad_damageproductivity.pkl', discipline=disc_techno, step=1e-15, derr_approx='complex_step', inputs=[f'{self.name}.energy_production', f'{self.name}.damage_df', f'{self.name}.share_energy_investment', f'{self.name}.total_investment_share_of_gdp', f'{self.name}.co2_emissions_Gt', f'{self.name}.CO2_taxes', f'{self.name}.population_df', f'{self.name}.working_age_population_df', f'{self.name}.energy_capital'], outputs=[f'{self.name}.economics_df', f'{self.name}.energy_investment', f'{self.name}.pc_consumption_constraint', f'{self.name}.global_investment_constraint', f'{self.name}.emax_enet_constraint', f'{self.name}.delta_capital_objective', f'{self.name}.delta_capital_objective_weighted', f'{self.name}.delta_capital_constraint', f'{self.name}.delta_capital_constraint_dc']) def 
test_macro_economics_analytic_grad_max_damage(self): self.model_name = 'Macroeconomics' ns_dict = {'ns_witness': f'{self.name}', 'ns_energy_mix': f'{self.name}', 'ns_public': f'{self.name}', 'ns_functions': f'{self.name}', 'ns_ref': f'{self.name}'} self.ee.ns_manager.add_ns_def(ns_dict) mod_path = 'climateeconomics.sos_wrapping.sos_wrapping_witness.macroeconomics.macroeconomics_discipline.MacroeconomicsDiscipline' builder = self.ee.factory.get_builder_from_module( self.model_name, mod_path) self.ee.factory.set_builders_to_coupling_builder(builder) self.ee.configure() self.ee.display_treeview_nodes() self.damage_df['damage_frac_output'] = 0.9 inputs_dict = {f'{self.name}.year_start': self.year_start, f'{self.name}.year_end': self.year_end, f'{self.name}.time_step': self.time_step, f'{self.name}.init_rate_time_pref': 0.015, f'{self.name}.conso_elasticity': 1.45, f'{self.name}.{self.model_name}.damage_to_productivity': False, f'{self.name}.frac_damage_prod': 0.3, f'{self.name}.share_energy_investment': self.share_energy_investment, # f'{self.name}.share_non_energy_investment': # share_non_energy_investment, f'{self.name}.energy_production': self.energy_supply_df, f'{self.name}.damage_df': self.damage_df, f'{self.name}.population_df': self.population_df, f'{self.name}.total_investment_share_of_gdp': self.total_invest, f'{self.name}.CO2_taxes': self.default_CO2_tax, f'{self.name}.{self.model_name}.CO2_tax_efficiency': self.default_co2_efficiency, f'{self.name}.co2_emissions_Gt': self.co2_emissions_gt, f'{self.name}.working_age_population_df' : self.working_age_population_df, f'{self.name}.energy_capital': self.energy_capital, f'{self.name}.alpha': 0.5 } self.ee.load_study_from_input_dict(inputs_dict) disc_techno = self.ee.root_process.sos_disciplines[0] self.check_jacobian(location=dirname(__file__), filename=f'jacobian_macroeconomics_discipline_grad_max_damage.pkl', discipline=disc_techno, step=1e-15, derr_approx='complex_step', inputs=[f'{self.name}.energy_production', f'{self.name}.damage_df', f'{self.name}.share_energy_investment', f'{self.name}.total_investment_share_of_gdp', f'{self.name}.co2_emissions_Gt', f'{self.name}.CO2_taxes', f'{self.name}.population_df', f'{self.name}.working_age_population_df', f'{self.name}.energy_capital'], outputs=[f'{self.name}.economics_df', f'{self.name}.energy_investment', f'{self.name}.pc_consumption_constraint', f'{self.name}.global_investment_constraint', f'{self.name}.emax_enet_constraint', f'{self.name}.delta_capital_objective', f'{self.name}.delta_capital_objective_weighted', f'{self.name}.delta_capital_constraint', f'{self.name}.delta_capital_constraint_dc']) def test_macro_economics_analytic_grad_gigantic_invest(self): self.model_name = 'Macroeconomics' ns_dict = {'ns_witness': f'{self.name}', 'ns_energy_mix': f'{self.name}', 'ns_public': f'{self.name}', 'ns_functions': f'{self.name}', 'ns_ref':f'{self.name}'} self.ee.ns_manager.add_ns_def(ns_dict) mod_path = 'climateeconomics.sos_wrapping.sos_wrapping_witness.macroeconomics.macroeconomics_discipline.MacroeconomicsDiscipline' builder = self.ee.factory.get_builder_from_module( self.model_name, mod_path) self.ee.factory.set_builders_to_coupling_builder(builder) self.ee.configure() self.ee.display_treeview_nodes() energy_invest = np.asarray([60.0] * self.nb_per) total_invest = np.asarray([80.0] * self.nb_per) total_invest = DataFrame( {'years': self.years, 'share_investment': total_invest}) share_energy_investment = DataFrame( {'years': self.years, 'share_investment': energy_invest}) inputs_dict = 
{f'{self.name}.year_start': self.year_start, f'{self.name}.year_end': self.year_end, f'{self.name}.time_step': self.time_step, f'{self.name}.init_rate_time_pref': 0.015, f'{self.name}.conso_elasticity': 1.45, f'{self.name}.{self.model_name}.damage_to_productivity': False, f'{self.name}.frac_damage_prod': 0.3, f'{self.name}.share_energy_investment': share_energy_investment, # f'{self.name}.share_non_energy_investment': # share_non_energy_investment, f'{self.name}.energy_production': self.energy_supply_df, f'{self.name}.damage_df': self.damage_df, f'{self.name}.population_df': self.population_df, f'{self.name}.total_investment_share_of_gdp': total_invest, f'{self.name}.CO2_taxes': self.default_CO2_tax, f'{self.name}.{self.model_name}.CO2_tax_efficiency': self.default_co2_efficiency, f'{self.name}.co2_emissions_Gt': self.co2_emissions_gt, f'{self.name}.working_age_population_df' : self.working_age_population_df, f'{self.name}.energy_capital': self.energy_capital, f'{self.name}.alpha': 0.5 } self.ee.load_study_from_input_dict(inputs_dict) disc_techno = self.ee.root_process.sos_disciplines[0] self.check_jacobian(location=dirname(__file__), filename=f'jacobian_macroeconomics_discipline_grad_gigantic_invest.pkl', discipline=disc_techno, step=1e-15, derr_approx='complex_step', inputs=[f'{self.name}.energy_production', f'{self.name}.damage_df', f'{self.name}.share_energy_investment', f'{self.name}.total_investment_share_of_gdp', f'{self.name}.co2_emissions_Gt', f'{self.name}.CO2_taxes', f'{self.name}.population_df', f'{self.name}.working_age_population_df', f'{self.name}.energy_capital'], outputs=[f'{self.name}.economics_df', f'{self.name}.energy_investment', f'{self.name}.pc_consumption_constraint', f'{self.name}.global_investment_constraint', f'{self.name}.emax_enet_constraint', f'{self.name}.delta_capital_objective', f'{self.name}.delta_capital_objective_weighted', f'{self.name}.delta_capital_constraint', f'{self.name}.delta_capital_constraint_dc']) def test_macro_economics_very_high_emissions(self): self.model_name = 'Macroeconomics' ns_dict = {'ns_witness': f'{self.name}', 'ns_energy_mix': f'{self.name}', 'ns_public': f'{self.name}', 'ns_functions': f'{self.name}', 'ns_ref':f'{self.name}'} self.ee.ns_manager.add_ns_def(ns_dict) mod_path = 'climateeconomics.sos_wrapping.sos_wrapping_witness.macroeconomics.macroeconomics_discipline.MacroeconomicsDiscipline' builder = self.ee.factory.get_builder_from_module( self.model_name, mod_path) self.ee.factory.set_builders_to_coupling_builder(builder) self.ee.configure() self.ee.display_treeview_nodes() #- retrieve co2_emissions_gt input energy_supply_csv = read_csv( join(self.data_dir, 'energy_supply_data_onestep_high_CO2.csv')) # adapt lenght to the year range energy_supply_start = energy_supply_csv.loc[energy_supply_csv['years'] >= self.year_start] energy_supply_end = energy_supply_csv.loc[energy_supply_csv['years'] <= self.year_end] energy_supply_df = pd.merge(energy_supply_start, energy_supply_end) energy_supply_df["years"] = energy_supply_df['years'] co2_emissions_gt = energy_supply_df.rename( columns={'total_CO2_emitted': 'Total CO2 emissions'}) co2_emissions_gt.index = self.years inputs_dict = {f'{self.name}.year_start': self.year_start, f'{self.name}.year_end': self.year_end, f'{self.name}.time_step': self.time_step, f'{self.name}.init_rate_time_pref': 0.015, f'{self.name}.conso_elasticity': 1.45, f'{self.name}.{self.model_name}.damage_to_productivity': True, f'{self.name}.frac_damage_prod': 0.3, f'{self.name}.share_energy_investment': 
self.share_energy_investment, # f'{self.name}.share_non_energy_investment': # share_non_energy_investment, f'{self.name}.energy_production': self.energy_supply_df, f'{self.name}.damage_df': self.damage_df, f'{self.name}.population_df': self.population_df, f'{self.name}.total_investment_share_of_gdp': self.total_invest, f'{self.name}.CO2_taxes': self.default_CO2_tax, f'{self.name}.{self.model_name}.CO2_tax_efficiency': self.default_co2_efficiency, f'{self.name}.co2_emissions_Gt': co2_emissions_gt, f'{self.name}.working_age_population_df' : self.working_age_population_df, f'{self.name}.energy_capital': self.energy_capital, f'{self.name}.alpha': 0.5 } self.ee.load_study_from_input_dict(inputs_dict) disc_techno = self.ee.root_process.sos_disciplines[0] self.check_jacobian(location=dirname(__file__), filename=f'jacobian_macroeconomics_discipline_very_high_emissions.pkl', discipline=disc_techno, step=1e-15, derr_approx='complex_step', inputs=[f'{self.name}.energy_production', f'{self.name}.damage_df', f'{self.name}.share_energy_investment', f'{self.name}.total_investment_share_of_gdp', f'{self.name}.co2_emissions_Gt', f'{self.name}.CO2_taxes', f'{self.name}.population_df', f'{self.name}.working_age_population_df', f'{self.name}.energy_capital'], outputs=[f'{self.name}.economics_df', f'{self.name}.energy_investment', f'{self.name}.pc_consumption_constraint', f'{self.name}.global_investment_constraint', f'{self.name}.emax_enet_constraint', f'{self.name}.delta_capital_objective', f'{self.name}.delta_capital_objective_weighted', f'{self.name}.delta_capital_constraint', f'{self.name}.delta_capital_constraint_dc']) def test_macro_economics_negativeco2_emissions(self): self.model_name = 'Macroeconomics' ns_dict = {'ns_witness': f'{self.name}', 'ns_energy_mix': f'{self.name}', 'ns_public': f'{self.name}', 'ns_functions': f'{self.name}', 'ns_ref':f'{self.name}'} self.ee.ns_manager.add_ns_def(ns_dict) mod_path = 'climateeconomics.sos_wrapping.sos_wrapping_witness.macroeconomics.macroeconomics_discipline.MacroeconomicsDiscipline' builder = self.ee.factory.get_builder_from_module( self.model_name, mod_path) self.ee.factory.set_builders_to_coupling_builder(builder) self.ee.configure() self.ee.display_treeview_nodes() #- retrieve co2_emissions_gt input energy_supply_csv = read_csv( join(self.data_dir, 'energy_supply_data_onestep_negative_CO2.csv')) # adapt lenght to the year range energy_supply_start = energy_supply_csv.loc[energy_supply_csv['years'] >= self.year_start] energy_supply_end = energy_supply_csv.loc[energy_supply_csv['years'] <= self.year_end] energy_supply_df = pd.merge(energy_supply_start, energy_supply_end) energy_supply_df["years"] = energy_supply_df['years'] co2_emissions_gt = energy_supply_df.rename( columns={'total_CO2_emitted': 'Total CO2 emissions'}) co2_emissions_gt.index = self.years inputs_dict = {f'{self.name}.year_start': self.year_start, f'{self.name}.year_end': self.year_end, f'{self.name}.time_step': self.time_step, f'{self.name}.init_rate_time_pref': 0.015, f'{self.name}.conso_elasticity': 1.45, f'{self.name}.{self.model_name}.damage_to_productivity': True, f'{self.name}.frac_damage_prod': 0.3, f'{self.name}.share_energy_investment': self.share_energy_investment, # f'{self.name}.share_non_energy_investment': # share_non_energy_investment, f'{self.name}.energy_production': self.energy_supply_df, f'{self.name}.damage_df': self.damage_df, f'{self.name}.population_df': self.population_df, f'{self.name}.total_investment_share_of_gdp': self.total_invest, f'{self.name}.CO2_taxes': 
self.default_CO2_tax, f'{self.name}.{self.model_name}.CO2_tax_efficiency': self.default_co2_efficiency, f'{self.name}.co2_emissions_Gt': co2_emissions_gt, f'{self.name}.working_age_population_df' : self.working_age_population_df, f'{self.name}.energy_capital': self.energy_capital, f'{self.name}.alpha': 0.5 } self.ee.load_study_from_input_dict(inputs_dict) disc_techno = self.ee.root_process.sos_disciplines[0] self.check_jacobian(location=dirname(__file__), filename=f'jacobian_macroeconomics_discipline_negative_emissions.pkl', discipline=disc_techno, step=1e-15, derr_approx='complex_step', inputs=[f'{self.name}.energy_production', f'{self.name}.damage_df', f'{self.name}.share_energy_investment', f'{self.name}.total_investment_share_of_gdp', f'{self.name}.co2_emissions_Gt', f'{self.name}.CO2_taxes', f'{self.name}.population_df', f'{self.name}.working_age_population_df', f'{self.name}.energy_capital'], outputs=[f'{self.name}.economics_df', f'{self.name}.energy_investment', f'{self.name}.pc_consumption_constraint', f'{self.name}.global_investment_constraint', f'{self.name}.emax_enet_constraint', f'{self.name}.delta_capital_objective', f'{self.name}.delta_capital_objective_weighted', f'{self.name}.delta_capital_constraint', f'{self.name}.delta_capital_constraint_dc']) def test_macro_economics_negativeco2_tax(self): self.model_name = 'Macroeconomics' ns_dict = {'ns_witness': f'{self.name}', 'ns_energy_mix': f'{self.name}', 'ns_public': f'{self.name}', 'ns_functions': f'{self.name}', 'ns_ref':f'{self.name}'} self.ee.ns_manager.add_ns_def(ns_dict) mod_path = 'climateeconomics.sos_wrapping.sos_wrapping_witness.macroeconomics.macroeconomics_discipline.MacroeconomicsDiscipline' builder = self.ee.factory.get_builder_from_module( self.model_name, mod_path) self.ee.factory.set_builders_to_coupling_builder(builder) self.ee.configure() self.ee.display_treeview_nodes() self.default_CO2_tax = pd.DataFrame( {'years': self.years, 'CO2_tax': np.linspace(50, -50, len(self.years))}, index=self.years) inputs_dict = {f'{self.name}.year_start': self.year_start, f'{self.name}.year_end': self.year_end, f'{self.name}.time_step': self.time_step, f'{self.name}.init_rate_time_pref': 0.015, f'{self.name}.conso_elasticity': 1.45, f'{self.name}.{self.model_name}.damage_to_productivity': True, f'{self.name}.frac_damage_prod': 0.3, f'{self.name}.share_energy_investment': self.share_energy_investment, # f'{self.name}.share_non_energy_investment': # share_non_energy_investment, f'{self.name}.energy_production': self.energy_supply_df, f'{self.name}.damage_df': self.damage_df, f'{self.name}.population_df': self.population_df, f'{self.name}.total_investment_share_of_gdp': self.total_invest, f'{self.name}.CO2_taxes': self.default_CO2_tax, f'{self.name}.{self.model_name}.CO2_tax_efficiency': self.default_co2_efficiency, f'{self.name}.co2_emissions_Gt': self.co2_emissions_gt, f'{self.name}.working_age_population_df' : self.working_age_population_df, f'{self.name}.energy_capital': self.energy_capital, f'{self.name}.alpha': 0.5 } self.ee.load_study_from_input_dict(inputs_dict) disc_techno = self.ee.root_process.sos_disciplines[0] self.check_jacobian(location=dirname(__file__), filename=f'jacobian_macroeconomics_discipline_negative_co2_tax.pkl', discipline=disc_techno, step=1e-15, derr_approx='complex_step', inputs=[f'{self.name}.energy_production', f'{self.name}.damage_df', f'{self.name}.share_energy_investment', f'{self.name}.total_investment_share_of_gdp', f'{self.name}.co2_emissions_Gt', f'{self.name}.CO2_taxes', 
f'{self.name}.population_df', f'{self.name}.working_age_population_df', f'{self.name}.energy_capital'], outputs=[f'{self.name}.economics_df', f'{self.name}.energy_investment', f'{self.name}.pc_consumption_constraint', f'{self.name}.global_investment_constraint', f'{self.name}.emax_enet_constraint', f'{self.name}.delta_capital_objective', f'{self.name}.delta_capital_objective_weighted', f'{self.name}.delta_capital_constraint', f'{self.name}.delta_capital_constraint_dc']) if '__main__' == __name__: cls = MacroEconomicsJacobianDiscTest() cls.setUp() cls.test_macro_economics_negativeco2_tax()
{"hexsha": "1830080ef159161b0af3b9031b1ef4874c55bb92", "size": 35947, "ext": "py", "lang": "Python", "max_stars_repo_path": "climateeconomics/tests/l1_test_gradient_macroeconomics_discipline.py", "max_stars_repo_name": "os-climate/witness-core", "max_stars_repo_head_hexsha": "3ef9a44d86804c5ad57deec3c9916348cb3bfbb8", "max_stars_repo_licenses": ["MIT", "Apache-2.0", "BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-14T06:37:42.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-14T06:37:42.000Z", "max_issues_repo_path": "climateeconomics/tests/l1_test_gradient_macroeconomics_discipline.py", "max_issues_repo_name": "os-climate/witness-core", "max_issues_repo_head_hexsha": "3ef9a44d86804c5ad57deec3c9916348cb3bfbb8", "max_issues_repo_licenses": ["MIT", "Apache-2.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "climateeconomics/tests/l1_test_gradient_macroeconomics_discipline.py", "max_forks_repo_name": "os-climate/witness-core", "max_forks_repo_head_hexsha": "3ef9a44d86804c5ad57deec3c9916348cb3bfbb8", "max_forks_repo_licenses": ["MIT", "Apache-2.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.331738437, "max_line_length": 137, "alphanum_fraction": 0.5602971041, "include": true, "reason": "import numpy,from scipy", "num_tokens": 7422}
## Standard Library Imports

## Library Imports
import numpy as np
from IPython.core import debugger
breakpoint = debugger.set_trace

## Local Imports
from .shared_constants import *

def gamma_tonemap(img, gamma=1/2.2):
    assert(gamma <= 1.0), "Gamma should be <= 1"
    assert(0.0 <= gamma), "Gamma should be non-neg"
    tmp_img = np.power(img, gamma)
    return tmp_img / tmp_img.max()

def calc_fov(n_rows, n_cols, fov_major_axis):
    '''
    Calculate fov for the horizontal and vertical axes
    '''
    if(n_rows > n_cols):
        fov_vert = fov_major_axis
        fov_horiz = fov_major_axis * (float(n_cols) / float(n_rows))
    else:
        fov_horiz = fov_major_axis
        fov_vert = fov_major_axis * (float(n_rows) / float(n_cols))
    return (float(fov_horiz), float(fov_vert))

def calc_spherical_coords(fov_horiz, fov_vert, n_rows, n_cols, is_deg=True):
    '''
    Given the FoV along each axis, generate a view direction image where each
    element corresponds to the angle made between the normal along the camera
    center and that pixel.
    Inputs:
        * fov_horiz: field of view along the horizontal direction (columns direction)
        * fov_vert: field of view along the vertical direction (rows direction)
        * n_rows: Number of rows
        * n_cols: Number of columns
        * is_deg: Are FoV in radians or degrees
    Outputs: Spherical coordinates for each pixel.
        * theta_img = angle with the vertical direction (positive vertical direction, UP direction)
        * phi_img = angle with the horizontal direction (positive horizontal direction, RIGHT direction)
    '''
    offset = 90 if is_deg else 0.5*np.pi
    phi_range = offset - np.linspace(-0.5*fov_horiz, 0.5*fov_horiz, n_cols)
    theta_range = np.linspace(-0.5*fov_vert, 0.5*fov_vert, n_rows) + offset
    (phi_img, theta_img) = np.meshgrid(phi_range, theta_range)
    return (phi_img, theta_img)

def spherical2xyz(r, phi, theta, is_deg=True):
    '''
    Compute cartesian coordinates given spherical ones.
    Here we assume that X, Y are the horizontal and vertical directions of the
    camera, and that positive Z points outwards of the camera.
    This convention might be a bit different from what is in the Spherical
    coordinates Wikipedia article.
    '''
    if(is_deg):
        x = r*np.cos(phi*np.pi/180.)*np.sin(theta*np.pi/180.)
        y = r*np.cos(theta*np.pi/180.)
        z = r*np.sin(phi*np.pi/180.)*np.sin(theta*np.pi/180.)
    else:
        x = r*np.cos(phi)*np.sin(theta)
        y = r*np.cos(theta)
        z = r*np.sin(phi)*np.sin(theta)
    return (x, y, z)
{"hexsha": "dd54bf271549b23813482abdc07030db85d7e7ea", "size": 2671, "ext": "py", "lang": "Python", "max_stars_repo_path": "improc_ops.py", "max_stars_repo_name": "felipegb94/fgb_research_utils", "max_stars_repo_head_hexsha": "8328b9c65bf22d6e84df54106f9bd2d2029b6aa5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-03-28T04:56:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T17:44:47.000Z", "max_issues_repo_path": "improc_ops.py", "max_issues_repo_name": "felipegb94/fgb_research_utils", "max_issues_repo_head_hexsha": "8328b9c65bf22d6e84df54106f9bd2d2029b6aa5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "improc_ops.py", "max_forks_repo_name": "felipegb94/fgb_research_utils", "max_forks_repo_head_hexsha": "8328b9c65bf22d6e84df54106f9bd2d2029b6aa5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.8656716418, "max_line_length": 127, "alphanum_fraction": 0.6645451142, "include": true, "reason": "import numpy", "num_tokens": 668}
# Data Management
import pandas

# External Interfaces
import glob
import kaggle
import os
from zipfile import ZipFile

# Evaluation
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Processing
import numpy
import scipy
from scipy.stats import chi2

# Modeling
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn import svm
from sklearn.svm import OneClassSVM
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import IsolationForest

# ------------------------------------------------------------------------------
# Function : retrieve_combine_and_pickle()
# Engineer : Christian Westbrook
# Abstract : This function retrieves the CICIDS2017 dataset archive, extracts the
#            CSV files it contains, and loads each CSV into a pandas dataframe.
#            Once all CSV files have been loaded, the dataframes are merged into
#            a single large dataframe representing the dataset. This final
#            dataframe is then written to disk in pickle format.
# ------------------------------------------------------------------------------
def retrieve_combine_and_pickle():
    # Check if a root /data directory exists, and create it if it doesn't
    if not os.path.exists("../data/"):
        os.makedirs("../data")

    # Retrieve the dataset in .zip archive format
    # (the original line used the IPython shell escape `!kaggle datasets download ...`,
    #  which is not valid in a plain .py script; the kaggle API is used instead)
    kaggle.api.authenticate()
    kaggle.api.dataset_download_files("cicdataset/cicids2017", path=".", quiet=True)

    # Move the dataset into the root /data directory.
    os.replace("./cicids2017.zip", "../data/cicids2017.zip")

    # Unzip the dataset in place
    with ZipFile('../data/cicids2017.zip', 'r') as zipObj:
        zipObj.extractall(path="../data/")

    # Grab all CSV file paths in the root /data directory
    file_paths = glob.glob("../data/**/*.csv", recursive=True)

    # Move all CSV files from the unzipped folder structure into the root /data directory
    for index, path in enumerate(file_paths):
        os.replace(path, "../data/" + os.path.basename(path))

    # Grab all CSV file paths in the root /data directory
    file_paths = glob.glob("../data/**/*.csv", recursive=True)

    # Read each CSV into a pandas dataframe
    frames = []
    for index, path in enumerate(file_paths):
        frames.append(pandas.read_csv(path))

    # Merge dataframes vertically
    combined_frame = pandas.concat(frames, axis=0)

    # Reset row indices
    combined_frame = combined_frame.reset_index(drop=True)

    # Write combined dataframe to disk
    combined_frame.to_pickle("../data/cicids2017.pkl")

    # Clean up the root /data directory
    for index, path in enumerate(file_paths):
        os.remove(path)
    os.rmdir("../data/MachineLearningCSV/MachineLearningCVE/")
    os.rmdir("../data/MachineLearningCSV/")
    os.remove("../data/cicids2017.zip")
    os.remove("../data/MachineLearningCSV.md5")

retrieve_combine_and_pickle()
{"hexsha": "a615a6bab95a3a8674d3e5748821fddd32c4391a", "size": 3200, "ext": "py", "lang": "Python", "max_stars_repo_path": "scripts/retrieve.py", "max_stars_repo_name": "christian-westbrook/intrusion-detection", "max_stars_repo_head_hexsha": "7f7e8470327ead1cd122918452d1238a90361c75", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "scripts/retrieve.py", "max_issues_repo_name": "christian-westbrook/intrusion-detection", "max_issues_repo_head_hexsha": "7f7e8470327ead1cd122918452d1238a90361c75", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scripts/retrieve.py", "max_forks_repo_name": "christian-westbrook/intrusion-detection", "max_forks_repo_head_hexsha": "7f7e8470327ead1cd122918452d1238a90361c75", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.5555555556, "max_line_length": 89, "alphanum_fraction": 0.685625, "include": true, "reason": "import numpy,import scipy,from scipy", "num_tokens": 686}
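Once `retrieve_combine_and_pickle()` has produced `../data/cicids2017.pkl`, downstream scripts would typically reload it and split it for the evaluation and modeling imports listed at the top of the file. A minimal sketch of that next step follows; the label column name is an assumption here (in the CICIDS2017 CSVs the exact name may differ slightly, e.g. carry leading whitespace).

.. code-block:: python

    import pandas
    from sklearn.model_selection import train_test_split

    frame = pandas.read_pickle("../data/cicids2017.pkl")

    label_column = "Label"            # assumed name of the ground-truth column
    X = frame.drop(columns=[label_column])
    y = frame[label_column]

    # hold out 25% of rows, keeping the class balance roughly intact
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=42
    )
    print(X_train.shape, X_test.shape)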
julia> horner2([-19,7,-4,6], 3)
128
{"hexsha": "eaef6d16068e29d90459095ae36590fa86dab71f", "size": 36, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "lang/Julia/horners-rule-for-polynomial-evaluation-4.jl", "max_stars_repo_name": "ethansaxenian/RosettaDecode", "max_stars_repo_head_hexsha": "8ea1a42a5f792280b50193ad47545d14ee371fb7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lang/Julia/horners-rule-for-polynomial-evaluation-4.jl", "max_issues_repo_name": "ethansaxenian/RosettaDecode", "max_issues_repo_head_hexsha": "8ea1a42a5f792280b50193ad47545d14ee371fb7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lang/Julia/horners-rule-for-polynomial-evaluation-4.jl", "max_forks_repo_name": "ethansaxenian/RosettaDecode", "max_forks_repo_head_hexsha": "8ea1a42a5f792280b50193ad47545d14ee371fb7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 12.0, "max_line_length": 31, "alphanum_fraction": 0.5833333333, "num_tokens": 20}
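The Julia entry above only records a REPL call and its result; the `horner2` definition itself is not part of this entry. As a sketch of the underlying rule (written in Python, the dominant language of the surrounding entries), with coefficients given in ascending order of power as in the call above:

.. code-block:: python

    def horner(coeffs, x):
        """Evaluate sum(c_i * x**i) with coefficients in ascending order of power."""
        acc = 0
        for c in reversed(coeffs):
            acc = acc * x + c
        return acc

    assert horner([-19, 7, -4, 6], 3) == 128  # matches the Julia result above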
import MLJModelInterface
import Soss

function predict_particles(predictor::SossMLJPredictor, Xnew)
    args = predictor.args
    pars = Soss.particles(predictor.post)
    pred = predictor.pred
    transform = predictor.model.transform
    dist = pred(merge(args, transform(Xnew), pars))
    return Soss.particles(dist)
end

function predict_particles(sm::SossMLJModel, fitresult, Xnew; response = sm.response)
    predictor_joint = MLJModelInterface.predict_joint(sm, fitresult, Xnew)
    return getproperty(predict_particles(predictor_joint, Xnew), response)
end
{"hexsha": "007e86167a8f3cf4a12ada4dacc48d91fb941ebe", "size": 646, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/particles.jl", "max_stars_repo_name": "cscherrer/SossMLJ.jl", "max_stars_repo_head_hexsha": "0baac3355802b8af2c682f98845d29de6e1f2901", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2020-05-23T18:42:11.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T20:30:10.000Z", "max_issues_repo_path": "src/particles.jl", "max_issues_repo_name": "cscherrer/SossMLJ.jl", "max_issues_repo_head_hexsha": "0baac3355802b8af2c682f98845d29de6e1f2901", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 133, "max_issues_repo_issues_event_min_datetime": "2020-08-10T19:16:13.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-03T00:47:45.000Z", "max_forks_repo_path": "src/particles.jl", "max_forks_repo_name": "cscherrer/SossMLJ.jl", "max_forks_repo_head_hexsha": "0baac3355802b8af2c682f98845d29de6e1f2901", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-08-10T18:52:39.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-11T01:06:59.000Z", "avg_line_length": 32.3, "max_line_length": 74, "alphanum_fraction": 0.6764705882, "num_tokens": 140}
import sys

import numpy as np
import pandas as pd

from helicalc import helicalc_dir, helicalc_data
from helicalc.coil import CoilIntegrator
from helicalc.busbar import ArcIntegrator3D
from helicalc.geometry import read_solenoid_geom_combined
from helicalc.solenoid_geom_funcs import load_all_geoms
from helicalc.constants import dxyz_dict, dxyz_arc_bar_dict

from tqdm import tqdm

# output info
output_dir = helicalc_data+'Bmaps/helicalc_validation/'
# coil only map
# full
#save_name = output_dir+'Mau14.DS1_region.standard-helicalc.coil_56_full.pkl'
# y=0 plane
save_name = output_dir+'Mau14.DS1_region_plane.standard-helicalc.coil_56_full.pkl'

# load coil geometry
paramdir = helicalc_dir + 'dev/params/'
# paramname = 'Mu2e_V13'
paramname = 'Mu2e_V14'  # correct
geom_df = read_solenoid_geom_combined(paramdir, paramname).iloc[55:].copy()
df_DS1 = geom_df.query('Coil_Num == 56').copy().iloc[0]

# load chunk data
chunk_file = helicalc_data+'Bmaps/aux/batch_N_helicalc_03-16-22.txt'
df_chunks = pd.read_csv(chunk_file)
N_chunk_coil = df_chunks.query(f'Nt_Ri == {df_DS1.Nt_Ri}').iloc[0].N_field_points

# load interlayer geometry
df_dict = load_all_geoms(version=14, return_dict=True)
df_DS1_IL = df_dict['interlayers'].query('`cond N` == 56').copy().iloc[0]
# kludge for better column naming
df_DS1_IL['cond N'] = f'{int(df_DS1_IL["cond N"])}_il'
dxyz_interlayer = dxyz_arc_bar_dict[1]
N_chunk_inter = 10000

# load OPERA dataframe for grid to calculate on
opera_file = helicalc_data+'Bmaps/single_coil_Mau13/DSMap_V14_DS1only.pkl'
df_O = pd.read_pickle(opera_file)
# y=0 plane
df_O = df_O.query('(Y==0) & (-4.796 <= X <= -2.996)').copy()


def DS1_calc(df=df_O, df_coil=df_DS1, df_interlayer=df_DS1_IL, outfile=save_name):
    df_ = df.copy()
    # loop over two layers
    for layer, dev in zip([1, 2], [1, 2]):
        # create coil
        myCoil = CoilIntegrator(df_coil, dxyz=dxyz_dict[df_coil.dxyz], layer=layer, dev=dev)
        # integrate on grid and add to dataframe
        df_ = myCoil.integrate_grid(df_, N_batch=N_chunk_coil, tqdm=tqdm)
    # interlayer connect
    # create interlayer
    myArc = ArcIntegrator3D(df_interlayer, dxyz=dxyz_interlayer, dev=3)
    # integrate on grid and add to dataframe
    df_ = myArc.integrate_grid(df_, N_batch=N_chunk_inter, tqdm=tqdm)
    # add coil components
    for i in ['x', 'y', 'z']:
        df_.eval(f'B{i}_helicalc = B{i}_helicalc_c56_l1 + B{i}_helicalc_c56_l2 + B{i}_bus_arc_cn_56_il', inplace=True)
        # Tesla to Gauss
        df_.eval(f'B{i} = B{i} * 1e4', inplace=True)
        df_.eval(f'B{i}_helicalc = B{i}_helicalc * 1e4', inplace=True)
        df_.eval(f'B{i}_delta = B{i}_helicalc - B{i}', inplace=True)
    # save
    df_.to_pickle(outfile)
    return df_


if __name__ == '__main__':
    df_result = DS1_calc(df_O, df_DS1, df_DS1_IL, save_name)
    print(df_result.columns)
    print(df_result)
{"hexsha": "28d614aea2f3848647acab2b760ec98874d75257", "size": 2879, "ext": "py", "lang": "Python", "max_stars_repo_path": "scripts/validation/DS1/compare_DS1_gen.py", "max_stars_repo_name": "FMS-Mu2e/helicalc", "max_stars_repo_head_hexsha": "557ab63696459807998a9ab44f92badd62e93a2a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "scripts/validation/DS1/compare_DS1_gen.py", "max_issues_repo_name": "FMS-Mu2e/helicalc", "max_issues_repo_head_hexsha": "557ab63696459807998a9ab44f92badd62e93a2a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scripts/validation/DS1/compare_DS1_gen.py", "max_forks_repo_name": "FMS-Mu2e/helicalc", "max_forks_repo_head_hexsha": "557ab63696459807998a9ab44f92badd62e93a2a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-22T15:54:38.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-04T23:51:27.000Z", "avg_line_length": 36.9102564103, "max_line_length": 118, "alphanum_fraction": 0.7290725947, "include": true, "reason": "import numpy", "num_tokens": 924}
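The end of `DS1_calc` above combines the per-layer and interlayer field components, converts Tesla to Gauss, and forms a residual against the reference OPERA map. A toy sketch of that bookkeeping on a made-up dataframe (column names shortened; the factors and the column arithmetic mirror the script, the numbers are invented):

.. code-block:: python

    import pandas as pd

    # toy frame standing in for the real field map (values are invented)
    df = pd.DataFrame({
        "Bx":    [1.0e-4, 2.0e-4],   # reference field, Tesla
        "Bx_l1": [0.6e-4, 1.1e-4],   # layer-1 contribution, Tesla
        "Bx_l2": [0.3e-4, 0.8e-4],   # layer-2 contribution, Tesla
        "Bx_il": [0.1e-4, 0.1e-4],   # interlayer contribution, Tesla
    })

    df["Bx_helicalc"] = df.Bx_l1 + df.Bx_l2 + df.Bx_il   # sum the components
    for col in ["Bx", "Bx_helicalc"]:                     # Tesla -> Gauss
        df[col] = df[col] * 1e4
    df["Bx_delta"] = df.Bx_helicalc - df.Bx               # residual vs. reference
    print(df)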
"popen_ex.py" from subprocess import Popen import os from astropy.io import fits flg = 1 # SDSS nproc = 8 ## ############################ # "Parallel" if flg == 0: print('Running BOSS!') boss_cat_fil = os.environ.get('BOSSPATH')+'/DR10/BOSSLyaDR10_cat_v2.1.fits.gz' bcat_hdu = fits.open(boss_cat_fil) t_boss = bcat_hdu[1].data nqso = len(t_boss) outroot = 'Output/BOSS_DR10Lya_PCA_values_nocut' elif flg == 1: print('Running SDSS!') sdss_cat_fil = os.environ.get('SDSSPATH')+'/DR7_QSO/dr7_qso.fits.gz' scat_hdu = fits.open(sdss_cat_fil) t_sdss = scat_hdu[1].data nqso = len(t_sdss) outroot = 'Output/SDSS_DR7Lya_PCA_values_nocut' #nqso = 800 #20000 # Testing nsub = nqso // nproc cut_Lya = False # Setup the Processes for ii in range(nproc): # Generate istrt = ii * nsub if ii == (nproc-1): iend = nqso else: iend = (ii+1)*nsub outfil = outroot+str(ii)+'.fits' Popen(['python', './fit_boss_qsos.py', str(flg), str(istrt), str(iend), outfil])
{"hexsha": "e501a9162197d1da250216309955f9bf35dcc046", "size": 1042, "ext": "py", "lang": "Python", "max_stars_repo_path": "py/desisim/qso_template/run_qso_fits.py", "max_stars_repo_name": "HiramHerrera/desisim", "max_stars_repo_head_hexsha": "3ae76e4c921f72b71ff7522462740e904136f428", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2015-12-16T22:01:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-14T07:31:55.000Z", "max_issues_repo_path": "py/desisim/qso_template/run_qso_fits.py", "max_issues_repo_name": "HiramHerrera/desisim", "max_issues_repo_head_hexsha": "3ae76e4c921f72b71ff7522462740e904136f428", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 455, "max_issues_repo_issues_event_min_datetime": "2015-04-06T03:11:27.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-28T18:11:16.000Z", "max_forks_repo_path": "py/desisim/qso_template/run_qso_fits.py", "max_forks_repo_name": "HiramHerrera/desisim", "max_forks_repo_head_hexsha": "3ae76e4c921f72b71ff7522462740e904136f428", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2015-01-26T17:45:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-22T19:46:20.000Z", "avg_line_length": 24.2325581395, "max_line_length": 84, "alphanum_fraction": 0.6285988484, "include": true, "reason": "from astropy", "num_tokens": 374}
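The driver script above splits `nqso` catalog objects into `nproc` contiguous chunks, with the last process absorbing the remainder, and launches one `fit_boss_qsos.py` subprocess per chunk. A small sketch of that boundary arithmetic with made-up numbers (the real `nqso` comes from the catalog being read):

.. code-block:: python

    def chunk_bounds(n_items, n_proc):
        """Reproduce the (istrt, iend) ranges used when launching the workers."""
        nsub = n_items // n_proc
        bounds = []
        for ii in range(n_proc):
            istrt = ii * nsub
            iend = n_items if ii == n_proc - 1 else (ii + 1) * nsub
            bounds.append((istrt, iend))
        return bounds

    # e.g. 805 objects over 8 processes: the last chunk absorbs the remainder
    print(chunk_bounds(805, 8))
    # [(0, 100), (100, 200), ..., (600, 700), (700, 805)]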
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.datasets import imdb
from keras.datasets import reuters
from keras.datasets import mnist
from sklearn.datasets import load_digits

def vetorizar_sequencias(sequencias, dimensao = 10000):
    resultados = np.zeros((len(sequencias), dimensao))
    for i, sequencia in enumerate(sequencias):
        resultados[i, sequencia] = 1.
    return resultados

def base_qualquer(caminho):
    dados = pd.read_csv(caminho)
    X = dados.drop(['classe'], axis=1).values
    Y = dados['classe'].values
    return (X, Y)

def base_iris(caminho = ''):
    dados = pd.read_csv(caminho+'iris.csv')
    X = dados.drop(['classe'], axis=1).values
    Y = dados['classe'].values
    return (X, Y)

def base_mnist64():
    digitos = load_digits()
    X = digitos.data
    Y = digitos.target
    return (X, Y)

def base_mnist():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    entradas = x_train.tolist()
    saidas = y_train.tolist()
    X = np.array(entradas).reshape(60000, 784)
    Y = np.array(saidas, dtype=np.int64)
    return (X, Y)

def base_letras(caminho = ''):
    dados = pd.read_csv(caminho + 'letras.csv')
    #X, Xt, Y, Yt = train_test_split(dados.drop(['classe'], axis=1).values, dados['classe'].values, train_size=0.75, test_size=0.25, stratify=dados['classe'].values)
    X = dados.drop(['classe'], axis=1).values
    Y = dados['classe'].values
    return (X, Y)

def base_mnist_fashion(caminho = ''):
    dados = pd.read_csv(caminho + 'fashion_mnist.csv')
    X = dados.drop(['classe'], axis=1).values
    Y = dados['classe'].values
    return (X, Y)

def base_imdb():
    (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=5000)
    x_train = vetorizar_sequencias(x_train, dimensao=5000)
    x_test = vetorizar_sequencias(x_test, dimensao=5000)
    entradas = x_train.tolist()
    entradas.extend(x_test.tolist())
    saidas = y_train.tolist()
    saidas.extend(y_test.tolist())
    X = np.array(entradas)
    Y = np.array(saidas)
    return (X, Y)

def base_reuters():
    (x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=500)
    x_train = vetorizar_sequencias(x_train, dimensao=500)
    x_test = vetorizar_sequencias(x_test, dimensao=500)
    entradas = x_train.tolist()
    entradas.extend(x_test.tolist())
    saidas = y_train.tolist()
    saidas.extend(y_test.tolist())
    X = np.array(entradas)
    Y = np.array(saidas)
    return (X, Y)
{"hexsha": "9fc68b77a211cef0efc220018d9534f0aec00330", "size": 2552, "ext": "py", "lang": "Python", "max_stars_repo_path": "BaseDados.py", "max_stars_repo_name": "brunnovicente/SKNN", "max_stars_repo_head_hexsha": "8a201cb3b24f1e725ba7077c82af11be3eb68398", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "BaseDados.py", "max_issues_repo_name": "brunnovicente/SKNN", "max_issues_repo_head_hexsha": "8a201cb3b24f1e725ba7077c82af11be3eb68398", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "BaseDados.py", "max_forks_repo_name": "brunnovicente/SKNN", "max_forks_repo_head_hexsha": "8a201cb3b24f1e725ba7077c82af11be3eb68398", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0235294118, "max_line_length": 163, "alphanum_fraction": 0.664184953, "include": true, "reason": "import numpy", "num_tokens": 757}
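`vetorizar_sequencias` above turns each list of word indices into a fixed-size multi-hot vector, which is how the IMDB and Reuters word-index sequences are fed to the downstream models. A quick check of what it produces, with a tiny dimension chosen only for illustration:

.. code-block:: python

    import numpy as np

    def vetorizar_sequencias(sequencias, dimensao=10000):
        # same logic as in BaseDados.py: one row per sequence, 1.0 at every index present
        resultados = np.zeros((len(sequencias), dimensao))
        for i, sequencia in enumerate(sequencias):
            resultados[i, sequencia] = 1.
        return resultados

    print(vetorizar_sequencias([[1, 3], [2, 2]], dimensao=5))
    # [[0. 1. 0. 1. 0.]
    #  [0. 0. 1. 0. 0.]]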
import ase.db
import warnings
import numpy
import matplotlib.pyplot as plt
from ase.data import covalent_radii
from scipy.stats import linregress
import os, os.path
from scipy.constants import pi, epsilon_0

db_file = "../../data/gpaw_data/c2db.db"
if not os.path.exists(db_file):
    raise FileExistsError(("Please download the c2db data into ../../data/gpaw_data/ folder,"
                           "from https://cmr.fysik.dtu.dk/_downloads/c2db.db"))

db = ase.db.connect(db_file)

candidates = db.select(selection="bse_binding>0,dir_gap_gw>0,alphax>0")

materials = []
alpha_x = []
E_opt = []

for mol in candidates:
    if "Cr" in mol.formula:  # CrS2 stuffs are not correct?
        continue
    print("{0}-{1}".format(mol.formula, mol.prototype))
    materials.append("{0}-{1}".format(mol.formula, mol.prototype))
    alpha_x.append((mol.alphax + mol.alphay) / 2)
    E_opt.append(mol.dir_gap_gw - mol.bse_binding)

alpha_x = numpy.array(alpha_x)
E_opt = numpy.array(E_opt)

img_path = "../../tmp_img/"
plt.style.use("science")

# x-direction
plt.figure(figsize=(3.5, 3.5))
plt.plot(E_opt, 1 / (alpha_x), "o", alpha=0.5)
k, b, r, *_ = linregress(x=E_opt, y=1 / alpha_x)
print(k, b, r)
xx = numpy.linspace(0.0, 8)
yy = k * xx + b
plt.xlim(0, 8)
plt.plot(xx, yy, "--")
plt.xlabel("$E_{\\rm{g}}^{\\rm{opt}}$ (eV)")
plt.ylabel("$(4 \\pi \\varepsilon_0)/\\alpha^{\parallel}_{\\rm{2D}}$ ($\\AA^{-1}$)")
plt.savefig(os.path.join(img_path, "alpha_x_E_opt.svg"))
{"hexsha": "4021a682840d514cbe93bee8dc67d0326a58b12d", "size": 1461, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/gpaw_analysis/alpha_opt_gap.py", "max_stars_repo_name": "lovaulonze/paper.2D_dielectric", "max_stars_repo_head_hexsha": "df6718840e74807a7ea3a969cd7d88bcbdac9284", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/gpaw_analysis/alpha_opt_gap.py", "max_issues_repo_name": "lovaulonze/paper.2D_dielectric", "max_issues_repo_head_hexsha": "df6718840e74807a7ea3a969cd7d88bcbdac9284", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/gpaw_analysis/alpha_opt_gap.py", "max_forks_repo_name": "lovaulonze/paper.2D_dielectric", "max_forks_repo_head_hexsha": "df6718840e74807a7ea3a969cd7d88bcbdac9284", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.5660377358, "max_line_length": 93, "alphanum_fraction": 0.6632443532, "include": true, "reason": "import numpy,from scipy", "num_tokens": 462}
This editor can edit this entry and tell us a bit about themselves by clicking the Edit icon.

20100521 11:07:33   Welcome to the Wiki
Howdy Mr. Knights, and welcome to the wiki! My name's Evan, pleased to meet you! Thanks for adding the comment about WiFi at Giedt Hall, but also feel free to edit the entry and add it directly. Everything on the wiki was created by an editor like you. Once again, welcome to the wiki! Users/JabberWokky Evan JabberWokky Edwards
{"hexsha": "ff9d4943a10b2fdcd1cf5f91ca6404ccf2959b7c", "size": 473, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "lab/davisWiki/saviorknights.f", "max_stars_repo_name": "voflo/Search", "max_stars_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lab/davisWiki/saviorknights.f", "max_issues_repo_name": "voflo/Search", "max_issues_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lab/davisWiki/saviorknights.f", "max_forks_repo_name": "voflo/Search", "max_forks_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 78.8333333333, "max_line_length": 375, "alphanum_fraction": 0.778012685, "num_tokens": 121}
module GreekModule

σ(x) = 1 ./ (1 + exp.(-x))

logσ(x) = - log1p.(exp.(-x))

end
{"hexsha": "e95db1dd8955d147e312371783e191edf9028fea", "size": 89, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "inst/examples/GreekModule.jl", "max_stars_repo_name": "bakaburg1/JuliaConnectoR", "max_stars_repo_head_hexsha": "d0b2d2ac974ddee52fb3bbe7fcc92c4eab7dc477", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 82, "max_stars_repo_stars_event_min_datetime": "2019-04-10T15:20:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-16T13:53:27.000Z", "max_issues_repo_path": "inst/examples/GreekModule.jl", "max_issues_repo_name": "bakaburg1/JuliaConnectoR", "max_issues_repo_head_hexsha": "d0b2d2ac974ddee52fb3bbe7fcc92c4eab7dc477", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 15, "max_issues_repo_issues_event_min_datetime": "2019-12-23T14:13:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-08T14:32:39.000Z", "max_forks_repo_path": "inst/examples/GreekModule.jl", "max_forks_repo_name": "bakaburg1/JuliaConnectoR", "max_forks_repo_head_hexsha": "d0b2d2ac974ddee52fb3bbe7fcc92c4eab7dc477", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2020-06-02T07:01:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-10T16:11:35.000Z", "avg_line_length": 17.8, "max_line_length": 32, "alphanum_fraction": 0.4719101124, "num_tokens": 38}
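GreekModule.jl above defines an element-wise sigmoid and log-sigmoid. A Python/NumPy rendering is sketched below; note that `-log1p(exp(-x))` mirrors the Julia definition exactly but can overflow for large negative `x`, so the sketch also shows a numerically safer variant (that stabilization is an addition of mine, not something in the original module):

.. code-block:: python

    import numpy as np

    def sigmoid(x):
        # same as GreekModule.σ
        return 1.0 / (1.0 + np.exp(-x))

    def log_sigmoid(x):
        # direct transliteration of GreekModule.logσ
        return -np.log1p(np.exp(-x))

    def log_sigmoid_stable(x):
        # same value, but avoids exp overflow when x is very negative:
        # -log(1 + e^(-x)) == x - log(1 + e^x) for x < 0
        x = np.asarray(x, dtype=float)
        return np.where(x >= 0, -np.log1p(np.exp(-x)), x - np.log1p(np.exp(x)))

    x = np.array([-3.0, 0.0, 3.0])
    print(sigmoid(x), log_sigmoid(x), log_sigmoid_stable(x))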
from pathlib import Path import os import random import math import torch import numpy as np from torch.utils.data.dataset import Dataset from torchaudio.sox_effects import apply_effects_file from collections import Counter from itertools import accumulate import pdb CLASSES = [1,2,3,4,5] PERTURBATION={'speed': (lambda x: ['speed', f"{x}"]), 'trim': (lambda x: ['trim', "0", f"{-x}"]), 'pad': (lambda x: ['pad', "0", f"{x}"]), 'tempo': (lambda x: ['tempo', f"{x}"]), 'pitch': (lambda x: ['pitch', f"{x}"]), } PERTURBATION_MODE=['none', 'fixed', 'random'] def generate_apply_effect_file_commands(length, perturb_type='none', perturb_ratio=None): apply_effect_file_list = [] if perturb_type == 'none': for i in range(length): apply_effect_file_list.append([ ["channels", "1"], ["rate", "16000"], ["norm"], ]) return apply_effect_file_list assert perturb_type in list(PERTURBATION.keys()), "Invalid perturbation type." for i in range(length): perturb = PERTURBATION[perturb_type](perturb_ratio) apply_effect_file_list.append([ ["channels", "1"], ["rate", "16000"], perturb, ["norm"], ]) return apply_effect_file_list class VoiceMOSDataset(Dataset): def __init__(self, mos_list, ld_score_list, wav_folder, corpus_name, perturb_mode='none', perturb_types=[], perturb_ratios=[], total_length=-1, valid=False): self.wav_folder = Path(wav_folder) self.mos_list = mos_list self.ld_score_list = ld_score_list self.corpus_name = corpus_name self.perturb_mode = perturb_mode self.perturb_types = perturb_types self.perturb_ratios = perturb_ratios self.apply_effect_file_list = [] self.class_num = 5 self.class2index = {CLASSES[i]: i for i in range(len(CLASSES))} self._JUDGE = 4 self.valid = valid self.total_length = total_length if (total_length != -1) else len(self.mos_list) # generate list of effects for apply_effect_file() assert self.perturb_mode in PERTURBATION_MODE, "Invalid perturbation mode" self.apply_effect_file_list += generate_apply_effect_file_commands(self.total_length) if self.perturb_mode == 'fixed': for perturb_type, perturb_ratio in zip(self.perturb_types, self.perturb_ratios): self.apply_effect_file_list += generate_apply_effect_file_commands((self.total_length), perturb_type=perturb_type, perturb_ratio=perturb_ratio) print(f"[Dataset Information] - MOS Score dataset \'{corpus_name}\' using perturbation type \'{perturb_mode}\'. 
Dataset length={len(self.apply_effect_file_list)}") def __len__(self): return len(self.apply_effect_file_list) def __getitem__(self, idx): list_idx = idx % self.total_length % len(self.mos_list) wav_name, mos = self.mos_list.loc[list_idx] wav_path = self.wav_folder / wav_name effects = self.apply_effect_file_list[idx] if self.perturb_mode == 'random': perturb_type, perturb_ratio = random.choice(list(zip(self.perturb_types, self.perturb_ratios))) effects = generate_apply_effect_file_commands(1, perturb_type=perturb_type, perturb_ratio=perturb_ratio)[0] wav, _ = apply_effects_file( str(wav_path), effects ) wav = wav.view(-1) system_name = wav_name.split("-")[0] corpus_name = self.corpus_name judge_id = 0 prob = np.zeros(self.class_num) # If not in validation, then probability is needed if self.valid == False: wav_ld_score_list = list(self.ld_score_list[self.ld_score_list[1] == wav_name][2]) wav_ld_score_index_list = [self.class2index[score] for score in wav_ld_score_list] wav_ld_score_counter = np.zeros(self.class_num) for index, value in Counter(wav_ld_score_index_list).items(): wav_ld_score_counter[index] = value prob = wav_ld_score_counter / np.sum(wav_ld_score_counter) return wav.numpy(), system_name, wav_name, corpus_name, mos, prob, judge_id def collate_fn(self, samples): return zip(*samples) class VoiceMOSLDScoreDataset(Dataset): def __init__(self, ld_score_list, wav_folder, corpus_name, perturb_mode='none', perturb_types=[], perturb_ratios=[], idtable=''): self.wav_folder = Path(wav_folder) self.ld_score_list = ld_score_list self.corpus_name = corpus_name self.perturb_mode = perturb_mode self.perturb_types = perturb_types self.perturb_ratios = perturb_ratios self.apply_effect_file_list = [] self.class_num = 5 self.class2index = {CLASSES[i]: i for i in range(len(CLASSES))} self._JUDGE = 4 self.total_length = len(self.ld_score_list) # generate list of effects for apply_effect_file() assert self.perturb_mode in PERTURBATION_MODE, "Invalid perturbation mode" self.apply_effect_file_list += generate_apply_effect_file_commands(self.total_length) if self.perturb_mode == 'fixed': for perturb_type, perturb_ratio in zip(self.perturb_types, self.perturb_ratios): self.apply_effect_file_list += generate_apply_effect_file_commands((self.total_length), perturb_type=perturb_type, perturb_ratio=perturb_ratio) print(f"[Dataset Information] - Listener Dependent Score dataset \'{corpus_name}\' using perturbation type \'{perturb_mode}\'. 
Dataset length={len(self.apply_effect_file_list)}") # Load idtable assert Path.is_file(idtable), f"Can't find idtable file: {idtable}" self.idtable = torch.load(idtable) for i, judge_i in enumerate(self.ld_score_list[self._JUDGE]): self.ld_score_list[self._JUDGE][i] = self.idtable[judge_i] def __len__(self): return len(self.apply_effect_file_list) def __getitem__(self, idx): list_idx = idx % self.total_length system_name, wav_name, opinion_score, _, judge_id = self.ld_score_list.loc[list_idx] wav_path = self.wav_folder / wav_name effects = self.apply_effect_file_list[idx] if self.perturb_mode == 'random': perturb_type, perturb_ratio = random.choice(list(zip(self.perturb_types, self.perturb_ratios))) effects = generate_apply_effect_file_commands(1, perturb_type=perturb_type, perturb_ratio=perturb_ratio)[0] wav, _ = apply_effects_file( str(wav_path), effects ) wav = wav.view(-1) system_name = wav_name.split("-")[0] corpus_name = self.corpus_name prob = np.zeros(self.class_num) prob[self.class2index[opinion_score]] = 1 return wav.numpy(), system_name, wav_name, corpus_name, opinion_score, prob, judge_id def collate_fn(self, samples): return zip(*samples)
{"hexsha": "582329386c4dd7da861b7fece5ac07168b7a0733", "size": 7213, "ext": "py", "lang": "Python", "max_stars_repo_path": "s3prl/downstream/voiceMOS/dataset.py", "max_stars_repo_name": "RayTzeng/voiceMOS", "max_stars_repo_head_hexsha": "65ad6b4c8a9c572b5a69126a68e8c9886267e886", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-14T00:25:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-14T00:25:53.000Z", "max_issues_repo_path": "s3prl/downstream/voiceMOS/dataset.py", "max_issues_repo_name": "RayTzeng/voiceMOS", "max_issues_repo_head_hexsha": "65ad6b4c8a9c572b5a69126a68e8c9886267e886", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "s3prl/downstream/voiceMOS/dataset.py", "max_forks_repo_name": "RayTzeng/voiceMOS", "max_forks_repo_head_hexsha": "65ad6b4c8a9c572b5a69126a68e8c9886267e886", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-03-05T13:46:48.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T05:48:06.000Z", "avg_line_length": 37.1804123711, "max_line_length": 186, "alphanum_fraction": 0.651739914, "include": true, "reason": "import numpy", "num_tokens": 1714}
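In `VoiceMOSDataset.__getitem__` above, each utterance's listener-dependent scores are folded into a length-5 probability vector over the MOS classes 1-5 using a `Counter`. A stripped-down sketch of that step, with listener scores invented for illustration:

.. code-block:: python

    import numpy as np
    from collections import Counter

    CLASSES = [1, 2, 3, 4, 5]
    class2index = {c: i for i, c in enumerate(CLASSES)}

    def score_distribution(listener_scores, class_num=5):
        # mirrors the Counter-based histogram and normalization in VoiceMOSDataset
        indices = [class2index[s] for s in listener_scores]
        counts = np.zeros(class_num)
        for idx, value in Counter(indices).items():
            counts[idx] = value
        return counts / counts.sum()

    print(score_distribution([4, 4, 5, 3, 4]))  # -> [0.  0.  0.2 0.6 0.2]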
#= Copyright (c) 2015, Intel Corporation Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Intel Corporation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. =# include("../../src/Sparso.jl") include("../../src/simple-show.jl") using Sparso using CompilerTools using CompilerTools.CFGs using CompilerTools.OptFramework using CompilerTools.LivenessAnalysis function dump_liveness(func_ast :: Expr, func_arg_types :: Tuple, func_args) assert(typeof(func_ast)== Expr) assert(func_ast.head == :lambda) LivenessAnalysis.set_use_inplace_naming_convention() liveness = LivenessAnalysis.from_expr(func_ast)#, no_mod = Sparso.create_unmodified_args_dict()) println("Liveness:\n", liveness) func_ast end sparse_pass = OptFramework.optPass(dump_liveness, true) OptFramework.setOptPasses([sparse_pass]) #Sparso.set_debug_level(2) include("./cg.jl") A = matrix_market_read(ARGS[1], true, true) m = size(A, 1) x = zeros(Float64, m) b = ones(Float64, m) tol = 1e-10 maxiter = 1000 @acc x, k, rel_err = cg(x, A, b, tol, maxiter)
{"hexsha": "cb6196986957de0b7ed04b9e254d2536502edc4a", "size": 2417, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "test/correctness/liveness-test1.jl", "max_stars_repo_name": "IntelLabs/Sparso", "max_stars_repo_head_hexsha": "570e7a18a96045e490f4ebf27ea948592e0bfa0b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2016-07-11T15:11:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T00:32:08.000Z", "max_issues_repo_path": "test/correctness/liveness-test1.jl", "max_issues_repo_name": "IntelLabs/Sparso", "max_issues_repo_head_hexsha": "570e7a18a96045e490f4ebf27ea948592e0bfa0b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-09-15T13:37:36.000Z", "max_issues_repo_issues_event_max_datetime": "2016-12-09T19:30:32.000Z", "max_forks_repo_path": "test/correctness/liveness-test1.jl", "max_forks_repo_name": "IntelLabs/Sparso", "max_forks_repo_head_hexsha": "570e7a18a96045e490f4ebf27ea948592e0bfa0b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-01-03T03:11:19.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-03T03:11:19.000Z", "avg_line_length": 40.2833333333, "max_line_length": 100, "alphanum_fraction": 0.7645841953, "num_tokens": 546}
#include <cstdlib> #include <iostream> #include <complex> #include <type_traits> #include <algorithm> #include <boost/numeric/ublas/vector.hpp> #include <boost/numeric/ublas/matrix.hpp> #include <boost/numeric/ublas/banded.hpp> #include <boost/numeric/bindings/ublas/vector.hpp> #include <boost/numeric/bindings/ublas/banded.hpp> #include <boost/numeric/bindings/blas/level1.hpp> #include <boost/numeric/bindings/blas/level2.hpp> #include <boost/numeric/bindings/lapack/driver.hpp> #include "random.hpp" namespace ublas=boost::numeric::ublas; namespace blas=boost::numeric::bindings::blas; namespace lapack=boost::numeric::bindings::lapack; int main(int argc, char *argv[]) { typedef std::complex<double> complex; typedef ublas::vector<complex> vector; typedef ublas::vector<double> d_vector; typedef ublas::banded_matrix<complex, ublas::column_major> matrix; typedef ublas::matrix<complex, ublas::column_major> dense_matrix; typedef typename std::make_signed<vector::size_type>::type size_type; rand_normal<complex>::reset(); size_type n=1024; matrix A(n, n, 1, 1); for (size_type i=0; i<n; ++i) A(i, i)=std::abs(rand_normal<complex>::get())+1; for (size_type i=0; i<n-1; ++i) { A(i+1, i)=complex(0); A(i, i+1)=complex(0); } for (int k=0; k<n-1; ++k) { // generate a random 2x2 unitary matrix double phi(rand_uniform<double>::get(0, 1.5707963267948966192)); double alpha(rand_uniform<double>::get(0, 6.2831853071795864770)); double psi(rand_uniform<double>::get(0, 6.2831853071795864770)); double chi(rand_uniform<double>::get(0, 6.2831853071795864770)); dense_matrix u(2, 2); u(0, 0)=complex(std::cos(alpha+psi), std::sin(alpha+psi))*std::cos(phi); u(1, 0)=-complex(std::cos(alpha-chi), std::sin(alpha-chi))*std::sin(phi); u(0, 1)=complex(std::cos(alpha+chi), std::sin(alpha+chi))*std::sin(phi); u(1, 1)=complex(std::cos(alpha-psi), std::sin(alpha-psi))*std::cos(phi); dense_matrix a(2, 2); a(0, 0)=A(k, k); a(1, 0)=A(k+1, k); a(0, 1)=A(k, k+1); a(1, 1)=A(k+1, k+1); a=ublas::prod(ublas::trans(ublas::conj(u)), a); a=ublas::prod(a, u); A(k, k)=a(0, 0); A(k+1, k)=a(1, 0); A(k, k+1)=a(0, 1); A(k+1, k+1)=a(1, 1); } vector b(n); for (size_type i=0; i<n; ++i) b(i)=rand_normal<complex>::get(); vector x(b); d_vector d(n); vector e(n-1); for (size_type i=0; i<n; ++i) d(i)=A(i ,i).real(); for (size_type i=0; i<n-1; ++i) e(i)=A(i+1, i); int info=lapack::ptsv(d, e, x); // solve if (info==0) { // res <- A*x - b vector res(b); blas::gbmv(complex(1, 0), A, x, complex(-1, 0), res); std::cout << "norm of residual : " << blas::nrm2(res) << '\n'; } else if (info>0) std::cout << "singular matrix\n"; else std::cout << "illegal arguments\n"; return EXIT_SUCCESS; }
{"hexsha": "a6d8f122e52c9956af65e90f40a7311450aaf848", "size": 2858, "ext": "cc", "lang": "C++", "max_stars_repo_path": "examples/lapack/ptsv.cc", "max_stars_repo_name": "rabauke/numeric_bindings", "max_stars_repo_head_hexsha": "f4de93bd7a01a8b31c9367fad35c81d086768f99", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/lapack/ptsv.cc", "max_issues_repo_name": "rabauke/numeric_bindings", "max_issues_repo_head_hexsha": "f4de93bd7a01a8b31c9367fad35c81d086768f99", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/lapack/ptsv.cc", "max_forks_repo_name": "rabauke/numeric_bindings", "max_forks_repo_head_hexsha": "f4de93bd7a01a8b31c9367fad35c81d086768f99", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.0238095238, "max_line_length": 77, "alphanum_fraction": 0.6319104269, "num_tokens": 1003}
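The C++ example above builds a Hermitian tridiagonal system, solves it with LAPACK's `ptsv` driver, and reports the norm of the residual. The same sanity check can be sketched in a few lines of NumPy; a dense solve stands in for the banded LAPACK routine here, the matrix construction is simplified (no random unitary rotations), and the problem size and values are arbitrary:

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8

    # Hermitian tridiagonal matrix: real diagonal, complex off-diagonals
    d = np.abs(rng.normal(size=n)) + 2.0
    e = rng.normal(size=n - 1) + 1j * rng.normal(size=n - 1)
    A = np.diag(d).astype(complex)
    A += np.diag(e, -1) + np.diag(e.conj(), 1)

    b = rng.normal(size=n) + 1j * rng.normal(size=n)
    x = np.linalg.solve(A, b)          # dense stand-in for the banded LAPACK solve
    print("norm of residual:", np.linalg.norm(A @ x - b))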
//------------------------------------------------------------------------------ /* Copyright (c) 2012, 2013 Ripple Labs Inc. Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL , DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ //============================================================================== #include <ripple/protocol/STTx.h> #include <ripple/basics/contract.h> #include <ripple/basics/Log.h> #include <ripple/basics/safe_cast.h> #include <ripple/basics/StringUtilities.h> #include <ripple/protocol/HashPrefix.h> #include <ripple/protocol/jss.h> #include <ripple/protocol/Protocol.h> #include <ripple/protocol/PublicKey.h> #include <ripple/protocol/Sign.h> #include <ripple/protocol/STAccount.h> #include <ripple/protocol/STArray.h> #include <ripple/protocol/TxFlags.h> #include <ripple/protocol/UintTypes.h> #include <ripple/json/to_string.h> #include <boost/format.hpp> #include <array> #include <memory> #include <type_traits> #include <utility> namespace ripple { static auto getTxFormat (TxType type) { auto format = TxFormats::getInstance().findByType (type); if (format == nullptr) { Throw<std::runtime_error> ( "Invalid transaction type " + std::to_string ( safe_cast<std::underlying_type_t<TxType>>(type))); } return format; } STTx::STTx (STObject&& object) noexcept (false) : STObject (std::move (object)) { tx_type_ = safe_cast<TxType> (getFieldU16 (sfTransactionType)); applyTemplate (getTxFormat (tx_type_)->getSOTemplate()); // may throw tid_ = getHash(HashPrefix::transactionID); } STTx::STTx (SerialIter& sit) noexcept (false) : STObject (sfTransaction) { int length = sit.getBytesLeft (); if ((length < txMinSizeBytes) || (length > txMaxSizeBytes)) Throw<std::runtime_error> ("Transaction length invalid"); if (set (sit)) Throw<std::runtime_error> ("Transaction contains an object terminator"); tx_type_ = safe_cast<TxType> (getFieldU16 (sfTransactionType)); applyTemplate (getTxFormat (tx_type_)->getSOTemplate()); // May throw tid_ = getHash(HashPrefix::transactionID); } STTx::STTx ( TxType type, std::function<void(STObject&)> assembler) : STObject (sfTransaction) { auto format = getTxFormat (type); set (format->getSOTemplate()); setFieldU16 (sfTransactionType, format->getType ()); assembler (*this); tx_type_ = safe_cast<TxType>(getFieldU16 (sfTransactionType)); if (tx_type_ != type) LogicError ("Transaction type was mutated during assembly"); tid_ = getHash(HashPrefix::transactionID); } std::string STTx::getFullText () const { std::string ret = "\""; ret += to_string (getTransactionID ()); ret += "\" = {"; ret += STObject::getFullText (); ret += "}"; return ret; } boost::container::flat_set<AccountID> STTx::getMentionedAccounts () const { boost::container::flat_set<AccountID> list; for (auto const& it : *this) { if (auto sacc = dynamic_cast<STAccount const*> (&it)) { assert(! sacc->isDefault()); if (! 
sacc->isDefault()) list.insert(sacc->value()); } else if (auto samt = dynamic_cast<STAmount const*> (&it)) { auto const& issuer = samt->getIssuer (); if (! isXRP (issuer)) list.insert(issuer); } } return list; } static Blob getSigningData (STTx const& that) { Serializer s; s.add32 (HashPrefix::txSign); that.addWithoutSigningFields (s); return s.getData(); } uint256 STTx::getSigningHash () const { return STObject::getSigningHash (HashPrefix::txSign); } Blob STTx::getSignature () const { try { return getFieldVL (sfTxnSignature); } catch (std::exception const&) { return Blob (); } } void STTx::sign ( PublicKey const& publicKey, SecretKey const& secretKey) { auto const data = getSigningData (*this); auto const sig = ripple::sign ( publicKey, secretKey, makeSlice(data)); setFieldVL (sfTxnSignature, sig); tid_ = getHash(HashPrefix::transactionID); } std::pair<bool, std::string> STTx::checkSign(bool allowMultiSign) const { std::pair<bool, std::string> ret {false, ""}; try { if (allowMultiSign) { // Determine whether we're single- or multi-signing by looking // at the SigningPubKey. It it's empty we must be // multi-signing. Otherwise we're single-signing. Blob const& signingPubKey = getFieldVL (sfSigningPubKey); ret = signingPubKey.empty () ? checkMultiSign () : checkSingleSign (); } else { ret = checkSingleSign (); } } catch (std::exception const&) { ret = {false, "Internal signature check failure."}; } return ret; } Json::Value STTx::getJson (JsonOptions) const { Json::Value ret = STObject::getJson (JsonOptions::none); ret[jss::hash] = to_string (getTransactionID ()); return ret; } Json::Value STTx::getJson (JsonOptions options, bool binary) const { if (binary) { Json::Value ret; Serializer s = STObject::getSerializer (); ret[jss::tx] = strHex (s.peekData ()); ret[jss::hash] = to_string (getTransactionID ()); return ret; } return getJson(options); } std::string const& STTx::getMetaSQLInsertReplaceHeader () { static std::string const sql = "INSERT OR REPLACE INTO Transactions " "(TransID, TransType, FromAcct, FromSeq, LedgerSeq, Status, RawTxn, TxnMeta)" " VALUES "; return sql; } std::string STTx::getMetaSQL (std::uint32_t inLedger, std::string const& escapedMetaData) const { Serializer s; add (s); return getMetaSQL (s, inLedger, txnSqlValidated, escapedMetaData); } // VFALCO This could be a free function elsewhere std::string STTx::getMetaSQL (Serializer rawTxn, std::uint32_t inLedger, char status, std::string const& escapedMetaData) const { static boost::format bfTrans ("('%s', '%s', '%s', '%d', '%d', '%c', %s, %s)"); std::string rTxn = sqlEscape (rawTxn.peekData ()); auto format = TxFormats::getInstance().findByType (tx_type_); assert (format != nullptr); return str (boost::format (bfTrans) % to_string (getTransactionID ()) % format->getName () % toBase58(getAccountID(sfAccount)) % getSequence () % inLedger % status % rTxn % escapedMetaData); } std::pair<bool, std::string> STTx::checkSingleSign () const { // We don't allow both a non-empty sfSigningPubKey and an sfSigners. // That would allow the transaction to be signed two ways. So if both // fields are present the signature is invalid. 
if (isFieldPresent (sfSigners)) return {false, "Cannot both single- and multi-sign."}; bool validSig = false; try { bool const fullyCanonical = (getFlags() & tfFullyCanonicalSig); auto const spk = getFieldVL (sfSigningPubKey); if (publicKeyType (makeSlice(spk))) { Blob const signature = getFieldVL (sfTxnSignature); Blob const data = getSigningData (*this); validSig = verify ( PublicKey (makeSlice(spk)), makeSlice(data), makeSlice(signature), fullyCanonical); } } catch (std::exception const&) { // Assume it was a signature failure. validSig = false; } if (validSig == false) return {false, "Invalid signature."}; return {true, ""}; } std::pair<bool, std::string> STTx::checkMultiSign () const { // Make sure the MultiSigners are present. Otherwise they are not // attempting multi-signing and we just have a bad SigningPubKey. if (!isFieldPresent (sfSigners)) return {false, "Empty SigningPubKey."}; // We don't allow both an sfSigners and an sfTxnSignature. Both fields // being present would indicate that the transaction is signed both ways. if (isFieldPresent (sfTxnSignature)) return {false, "Cannot both single- and multi-sign."}; STArray const& signers {getFieldArray (sfSigners)}; // There are well known bounds that the number of signers must be within. if (signers.size() < minMultiSigners || signers.size() > maxMultiSigners) return {false, "Invalid Signers array size."}; // We can ease the computational load inside the loop a bit by // pre-constructing part of the data that we hash. Fill a Serializer // with the stuff that stays constant from signature to signature. Serializer const dataStart {startMultiSigningData (*this)}; // We also use the sfAccount field inside the loop. Get it once. auto const txnAccountID = getAccountID (sfAccount); // Determine whether signatures must be full canonical. bool const fullyCanonical = (getFlags() & tfFullyCanonicalSig); // Signers must be in sorted order by AccountID. AccountID lastAccountID (beast::zero); for (auto const& signer : signers) { auto const accountID = signer.getAccountID (sfAccount); // The account owner may not multisign for themselves. if (accountID == txnAccountID) return {false, "Invalid multisigner."}; // No duplicate signers allowed. if (lastAccountID == accountID) return {false, "Duplicate Signers not allowed."}; // Accounts must be in order by account ID. No duplicates allowed. if (lastAccountID > accountID) return {false, "Unsorted Signers array."}; // The next signature must be greater than this one. lastAccountID = accountID; // Verify the signature. bool validSig = false; try { Serializer s = dataStart; finishMultiSigningData (accountID, s); auto spk = signer.getFieldVL (sfSigningPubKey); if (publicKeyType (makeSlice(spk))) { Blob const signature = signer.getFieldVL (sfTxnSignature); validSig = verify ( PublicKey (makeSlice(spk)), s.slice(), makeSlice(signature), fullyCanonical); } } catch (std::exception const&) { // We assume any problem lies with the signature. validSig = false; } if (!validSig) return {false, std::string("Invalid signature on account ") + toBase58(accountID) + "."}; } // All signatures verified. 
return {true, ""}; } //------------------------------------------------------------------------------ static bool isMemoOkay (STObject const& st, std::string& reason) { if (!st.isFieldPresent (sfMemos)) return true; auto const& memos = st.getFieldArray (sfMemos); // The number 2048 is a preallocation hint, not a hard limit // to avoid allocate/copy/free's Serializer s (2048); memos.add (s); // FIXME move the memo limit into a config tunable if (s.getDataLength () > 1024) { reason = "The memo exceeds the maximum allowed size."; return false; } for (auto const& memo : memos) { auto memoObj = dynamic_cast <STObject const*> (&memo); if (!memoObj || (memoObj->getFName() != sfMemo)) { reason = "A memo array may contain only Memo objects."; return false; } for (auto const& memoElement : *memoObj) { auto const& name = memoElement.getFName(); if (name != sfMemoType && name != sfMemoData && name != sfMemoFormat) { reason = "A memo may contain only MemoType, MemoData or " "MemoFormat fields."; return false; } // The raw data is stored as hex-octets, which we want to decode. auto optData = strUnHex (memoElement.getText ()); if (!optData) { reason = "The MemoType, MemoData and MemoFormat fields may " "only contain hex-encoded data."; return false; } if (name == sfMemoData) continue; // The only allowed characters for MemoType and MemoFormat are the // characters allowed in URLs per RFC 3986: alphanumerics and the // following symbols: -._~:/?#[]@!$&'()*+,;=% static std::array<char, 256> const allowedSymbols = [] { std::array<char, 256> a; a.fill(0); std::string symbols ( "0123456789" "-._~:/?#[]@!$&'()*+,;=%" "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "abcdefghijklmnopqrstuvwxyz"); for(char c : symbols) a[c] = 1; return a; }(); for (auto c : *optData) { if (!allowedSymbols[c]) { reason = "The MemoType and MemoFormat fields may only " "contain characters that are allowed in URLs " "under RFC 3986."; return false; } } } } return true; } // Ensure all account fields are 160-bits static bool isAccountFieldOkay (STObject const& st) { for (int i = 0; i < st.getCount(); ++i) { auto t = dynamic_cast<STAccount const*>(st.peekAtPIndex (i)); if (t && t->isDefault ()) return false; } return true; } bool passesLocalChecks (STObject const& st, std::string& reason) { if (!isMemoOkay (st, reason)) return false; if (!isAccountFieldOkay (st)) { reason = "An account field is invalid."; return false; } if (isPseudoTx(st)) { reason = "Cannot submit pseudo transactions."; return false; } return true; } std::shared_ptr<STTx const> sterilize (STTx const& stx) { Serializer s; stx.add(s); SerialIter sit(s.slice()); return std::make_shared<STTx const>(std::ref(sit)); } bool isPseudoTx(STObject const& tx) { auto t = tx[~sfTransactionType]; if (!t) return false; auto tt = safe_cast<TxType>(*t); return tt == ttAMENDMENT || tt == ttFEE; } } // ripple
{"hexsha": "4fe91d6b0ace58d2acbecd989c6951e97ece11a0", "size": 15417, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "src/ripple/protocol/impl/STTx.cpp", "max_stars_repo_name": "ripplealpha/ripple-alpha-core", "max_stars_repo_head_hexsha": "509118209407d46ce29d2889b982b8999fb1eeaa", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 1.0, "max_stars_repo_stars_event_min_datetime": "2020-06-19T11:32:39.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-19T11:32:39.000Z", "max_issues_repo_path": "src/ripple/protocol/impl/STTx.cpp", "max_issues_repo_name": "ripplealpha/ripple-alpha-core", "max_issues_repo_head_hexsha": "509118209407d46ce29d2889b982b8999fb1eeaa", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/ripple/protocol/impl/STTx.cpp", "max_forks_repo_name": "ripplealpha/ripple-alpha-core", "max_forks_repo_head_hexsha": "509118209407d46ce29d2889b982b8999fb1eeaa", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": 1.0, "max_forks_repo_forks_event_min_datetime": "2020-02-17T02:16:02.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-17T02:16:02.000Z", "avg_line_length": 28.9793233083, "max_line_length": 88, "alphanum_fraction": 0.5853278848, "num_tokens": 3643}
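`STTx::checkMultiSign` above enforces a few structural rules before any cryptographic verification: the signer list must be within size bounds, sorted by account ID, contain no duplicates, and must not include the transaction's own account. A schematic Python rendering of just those ordering rules follows; it is not the real implementation, omits signature verification entirely, and takes the size bounds as parameters rather than the protocol constants used in the C++ code:

.. code-block:: python

    def check_multisign_structure(txn_account, signer_accounts, min_signers=1, max_signers=8):
        """Return (ok, reason) for the non-cryptographic checks in STTx::checkMultiSign."""
        if not (min_signers <= len(signer_accounts) <= max_signers):
            return False, "Invalid Signers array size."
        last = None
        for account in signer_accounts:
            if account == txn_account:
                return False, "Invalid multisigner."        # account may not sign for itself
            if last is not None and account == last:
                return False, "Duplicate Signers not allowed."
            if last is not None and account < last:
                return False, "Unsorted Signers array."
            last = account
        return True, ""

    print(check_multisign_structure("alice", ["bob", "carol", "dave"]))   # (True, '')
    print(check_multisign_structure("alice", ["carol", "bob"]))           # unsorted -> rejected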
\documentclass[11pt]{report} \usepackage[margin=2cm]{geometry} \usepackage{graphicx} \usepackage{float} \usepackage{times} \usepackage{url} \newcommand{\Gap}{\texorpdfstring{\hfill}{}} \newcommand{\Rec}{\texorpdfstring{{\small\emph{\color{blue}{\fbox{High Leverage}}}}}{}} \newcommand{\HighRisk}{\texorpdfstring{{\small\emph{\color{orange}{\fbox{Uncertain Impact}}}}}{}} \newcommand{\Longterm}{\texorpdfstring{{\small\emph{\color{OliveGreen}{\fbox{Long-term}}}}}{}} \usepackage[dvipsnames]{xcolor} \begin{document} \section{Individual Action\texorpdfstring{\hfill\textit{by Natasha Jaques}}{}} \label{sec:tools-individuals} Individuals may worry that they are powerless to affect climate change, or lack clarity on which of their behaviors are most important to change. In fact, there are actions which can meaningfully reduce each person's carbon footprint, and, if widely adopted, could have a significant impact on mitigating global emissions \cite{ccneedsbehaviorchange,hawken2017drawdown}. AI can help to identify those behaviors, inform individuals, and provide constructive opportunities by modeling individual behavior. \subsection{Understanding personal carbon footprint} \label{sec:personal_carbon_footprint} We as individuals are constantly confronted with decisions that affect our carbon footprint, but we may lack the data and knowledge to know which decisions are most impactful. Fortunately, ML can help determine an individual's carbon footprint from their personal and household data\footnote{See e.g.~\url{https://www.tmrow.com/}}. For example, natural language processing can be used to extract the flights a person takes from their email, or determine specific grocery items purchased from a bill, making it possible to predict the associated emissions. Systems that combine this information with data obtained from the user's smartphone (e.g.~from a ride-sharing app) can then help consumers who wish to identify which behaviors result in the highest emissions. Given such a ML model, counterfactual reasoning can potentially be used to demonstrate to consumers how much their emissions would be reduced for each behavior they changed. As a privacy-conscious alternative, emissions estimates could be directly incorporated into grocery labels \cite{supermarketfuture} or interfaces for purchasing flights. Such information can empower people to understand how they can best help mitigate climate change through behavior change. Residences are responsible for a large share of GHG emissions \cite{ipcc_global_2018} (see also \S\ref{sec:buildings-cities}). A large meta-analysis found that significant residential energy savings can be achieved \cite{ehrhardt2010advanced}, by targeting the right interventions to the right households \cite{albert2016predictive, allcott2011social, allcott2014short}. ML can predict a household's emissions in transportation, energy, water, waste, foods, goods, and services, as a function of its characteristics \cite{jones2011quantifying}. These predictions can be used to tailor customized interventions for high-emissions households \cite{jones2014spatial}. Changing behavior both helps mitigate climate change and benefits individuals; studies have shown that many carbon mitigation strategies also provide cost savings to consumers \cite{jones2011quantifying}. 
Household energy disaggregation breaks down overall electricity consumption into energy use by individual appliances (see also \S\ref{sec:indv}) \cite{armel2013disaggregation}, which can help facilitate behavior change \cite{sundramoorthy2011domesticating}. For example, it can be used to inform consumers of high-energy appliances of which they were previously unaware. This alone could have a significant impact, since many devices consume a large amount of electricity even when not in use; standby power consumption accounts for roughly 8\% of residential electricity demand \cite{mackay2008sustainable}. A variety of ML techniques have been used to effectively disaggregate household energy, such as spectral clustering, Hidden Markov Models, and neural networks \cite{armel2013disaggregation}. ML can also be used to predict the marginal emissions of energy consumption in real time, on a scale of hours\footnote{\url{https://www.watttime.org/}}, potentially allowing consumers to effectively schedule activities such as charging an electric vehicle when the emissions (and prices \cite{klenert2018making}) will be lowest \cite{olivierElectricitymap}. Combining these predictions with disaggregated energy data allows for the efficient automation of household energy consumption, ideally through products that present interpretable insights to the consumer (e.g.~\cite{strbac2008demand, schweppe1989algorithms}). Methods like reinforcement learning can be used to learn how to optimally schedule household appliances to consume energy more efficiently and sustainably \cite{mocanu2018line, remani2018residential}. Multi-agent learning has also been applied to this problem, to ensure that groups of homes can coordinate to balance energy consumption to keep peak demand low \cite{ramchurn2011agent2, ygge1999homebots}. \subsection{Facilitating behavior change \Gap \Rec} \label{sec:behavior_change} ML is highly effective at modeling human preferences, and this can be leveraged to help mitigate climate change. Using ML, we can model and cluster individuals based on their climate knowledge, preferences, demographics, and consumption characteristics (e.g.~\cite{beiser2018assessing,carr2011exploring,de2018graph,gabe2016householders,yang2013review}), and thus predict who will be most amenable to new technologies and sustainable behavior change. Such techniques have improved the enrollment rate of customers in an energy savings program by 2-3x \cite{albert2016predictive}. Other works have used ML to predict how much consumers are willing to pay to avoid potential environmental harms of energy consumption \cite{de2013willingness}, finding that some groups were totally insensitive to cost and would pay the maximum amount to mitigate harm, while other groups were willing to pay nothing. Given such disparate types of consumers, targeting interventions toward particular households may be especially worthwhile; all the more so because data show that the size and composition of household carbon footprints varies dramatically across geographic regions and demographics \cite{jones2011quantifying}. Citizens who would like to engage with policy decisions, or explore different options to reduce their personal carbon footprint, can have difficulty understanding existing laws and policies due to their complexity. They may benefit from tools that make policy information more manageable and relevant to the individual (e.g.~based on where the individual lives). 
There is the potential for natural language processing to derive understandable insights from policy texts for these applications, similar to automated compliance checking \cite{doi:10.1061/(ASCE)CP.1943-5487.0000427, bell2016systems}. Understanding individual behavior can help signal how it can be nudged. For example, path analysis has shown that an individual's \textit{psychological distance} to climate change (on geographic, temporal, social, and uncertainty dimensions) fully mediates their level of climate change concern \cite{jones2017future}. This suggests that interventions minimizing psychological distance to the effects of climate change may be most effective. Similarly, ML has revealed that cross-cultural support for international climate programs is not reduced, even when individuals are exposed to information about other countries' climate behavior \cite{beiser2019commitment}. To make the effects of climate change more real for consumers, and thus help motivate those who wish to act, image generation techniques such as CycleGANs have been used to visualize the potential consequences of extreme weather events on houses and cities \cite{schmidt2019visualizing}. Gamification via deep learning has been proposed to further allow individuals to explore their personal energy usage \cite{konstantakopoulos2019deep}. All of these programs may be an incredibly cost-effective way to reduce energy consumption; behavior change programs can cost as little as 3 cents to save a kilowatt hour of electricity, whereas generating one kWh would cost 5-6 cents with a coal or wind power plant, and 10 cents with solar \cite{peerpressureOPower, forbesEnergyCost}. \subsection{Discussion} While individuals can sometimes feel that their contributions to climate change are dwarfed by other factors, in reality individual actions can have a significant impact in mitigating climate change. ML can aid this process by empowering consumers to understand which of their behaviors lead to the highest emissions, automatically scheduling energy consumption, and providing insights into how to facilitate behavior change. \end{document}
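To make the counterfactual-footprint idea in the section above concrete, the following is a minimal illustrative sketch in Python, not taken from the paper: it assumes a tabular dataset of household characteristics with a known annual-emissions column, fits a regressor, and compares predictions before and after a single behavior change. All feature names, coefficients, and data below are hypothetical, and the model choice (a scikit-learn gradient-boosted regressor) is just one reasonable option.

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical household features (annual values).
rng = np.random.default_rng(0)
households = pd.DataFrame({
    "car_km_per_year": rng.uniform(0, 20000, 500),
    "flights_per_year": rng.poisson(2, 500),
    "home_kwh": rng.uniform(1000, 10000, 500),
    "meat_meals_per_week": rng.integers(0, 15, 500),
})
# Synthetic emissions target (tCO2e/year), purely for demonstration.
emissions = (0.0002 * households["car_km_per_year"]
             + 0.5 * households["flights_per_year"]
             + 0.0004 * households["home_kwh"]
             + 0.05 * households["meat_meals_per_week"])

model = GradientBoostingRegressor().fit(households, emissions)

# Counterfactual query: how much would household 0 save by halving its driving?
base = households.iloc[[0]].copy()
changed = base.copy()
changed["car_km_per_year"] *= 0.5
saving = model.predict(base)[0] - model.predict(changed)[0]
print(f"estimated saving: {saving:.2f} tCO2e/year")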
{"hexsha": "f6b2a6e28c7051cd4f231900a2abdcc818b3f1d0", "size": 8958, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tacling_climate_change_source/sections/toolsIndividuals.tex", "max_stars_repo_name": "mirandrom/climatechange-ml", "max_stars_repo_head_hexsha": "2aa36c90f047ba7b10310c66df2ecfc0aa90e304", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tacling_climate_change_source/sections/toolsIndividuals.tex", "max_issues_repo_name": "mirandrom/climatechange-ml", "max_issues_repo_head_hexsha": "2aa36c90f047ba7b10310c66df2ecfc0aa90e304", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tacling_climate_change_source/sections/toolsIndividuals.tex", "max_forks_repo_name": "mirandrom/climatechange-ml", "max_forks_repo_head_hexsha": "2aa36c90f047ba7b10310c66df2ecfc0aa90e304", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-29T19:28:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-29T19:28:13.000Z", "avg_line_length": 186.625, "max_line_length": 1230, "alphanum_fraction": 0.8260772494, "num_tokens": 1870}
import numpy as np
import nltk
import pandas as pd
from ast import literal_eval
from collections import Counter


def sampleFromDirichlet(alpha):
    return np.random.dirichlet(alpha)


def sampleFromCategorical(theta):
    # theta = theta / np.sum(theta)
    return np.random.multinomial(1, theta).argmax()


def word_indices(doc_sent_word_dict, sent_index):
    """
    :param doc_sent_word_dict:
    :param sent_index:
    :return:
    """
    sentence = doc_sent_word_dict[sent_index]
    for idx in sentence:
        yield idx


class STMD_Gibbs_Sampler:
    def __init__(self, numTopics, alpha, beta, gamma, max_vocab_size=10000, max_sentence=50, numSentiments=2):
        self.alpha = alpha
        self.beta = beta
        self.gamma = gamma
        self.numTopics = numTopics
        self.numSentiments = numSentiments
        self.MAX_VOCAB_SIZE = max_vocab_size
        self.maxSentence = max_sentence

    def build_dataset(self, reviews):
        """
        :param reviews: review data, e.g. [[[sentence 1 of doc 1], [sentence 2 of doc 1]],
                        [[sentence 1 of doc 2], [sentence 2 of doc 2]], ...]
        :return:
        """
        corpus = [word for review in reviews for sentence in review for word in sentence]
        text = nltk.Text(corpus)
        freq = nltk.FreqDist(text)
        keywords = [tup[0] for tup in freq.most_common(self.MAX_VOCAB_SIZE)]  # keep the most frequent words
        word2idx = {}  # key: word, value: index
        for index, key in enumerate(keywords):
            word2idx[key] = index
        idx2word = dict(zip(word2idx.values(), word2idx.keys()))  # key: index, value: word
        doc_sent_word_dict = {}  # key: document index, value: [[word indices of sent 1], [word indices of sent 2], ...]
        numSentence = {}  # key: document index, value: number of sentences in that document
        wordCountSentence = {}  # key: document index, value: word count per sentence of that document
        for index, review in enumerate(reviews):
            doc_sent_lst = []
            doc_sent_count = []
            for sent in review:
                word_indices = [word2idx[word] for word in sent if word in word2idx]
                doc_sent_lst.append(word_indices)
                counts = Counter(word_indices)
                doc_sent_count.append(counts)
            numSentence[index] = len(doc_sent_lst)
            doc_sent_word_dict[index] = doc_sent_lst
            wordCountSentence[index] = doc_sent_count
        return word2idx, idx2word, doc_sent_word_dict, wordCountSentence, numSentence

    def _initialize_(self, reviews):
        self.word2idx, self.idx2word, self.doc_sent_word_dict, self.wordCountSentence, self.numSentence = self.build_dataset(
            reviews)
        numDocs = len(self.doc_sent_word_dict.keys())
        vocabSize = len(self.word2idx.keys())

        # Pseudocounts
        self.n_wkl = np.zeros((vocabSize, self.numTopics, self.numSentiments))  # times word i is assigned to topic k, sentiment l
        self.n_kl = np.zeros((self.numTopics, self.numSentiments))  # number of words assigned to topic k, sentiment l
        self.ns_d = np.zeros((numDocs))  # number of sentences in document d
        self.ns_dkl = np.zeros((numDocs, self.numTopics, self.numSentiments))  # sentences in document d assigned to topic k, sentiment l
        self.ns_dk = np.zeros((numDocs, self.numTopics))  # sentences in document d assigned to topic k
        self.topics = {}
        self.sentiments = {}
        # self.priorSentiment = {}
        alphaVec = self.alpha * np.ones(self.numTopics)
        gammaVec = self.gamma * np.ones(self.numSentiments)

        # The original sentiment-LDA set prior sentiment using SentiWordNet;
        # with word2vec, similarity to a class vector could be used instead.
        # for i, word in enumerate(self.vectorizer.get_feature_names()):
        #     synsets = swn.senti_synsets(word)
        #     posScore = np.mean([s.pos_score() for s in synsets])
        #     negScore = np.mean([s.neg_score() for s in synsets])
        #     if posScore >= 0.1 and posScore > negScore:
        #         self.priorSentiment[i] = 1
        #     elif negScore >= 0.1 and negScore > posScore:
        #         self.priorSentiment[i] = 0

        for d in range(numDocs):
            topicDistribution = sampleFromDirichlet(alphaVec)
            sentimentDistribution = np.zeros((self.numTopics, self.numSentiments))
            for t in range(self.numTopics):
                sentimentDistribution[t, :] = sampleFromDirichlet(gammaVec)
            for m in range(self.numSentence[d]):
                t = sampleFromCategorical(topicDistribution)
                s = sampleFromCategorical(sentimentDistribution[t, :])
                self.topics[(d, m)] = t  # topic of sentence m in document d
                self.sentiments[(d, m)] = s  # sentiment of sentence m in document d
                self.ns_d[d] += 1
                self.ns_dkl[d, t, s] += 1
                self.ns_dk[d, t] += 1
                for i, w in enumerate(word_indices(self.doc_sent_word_dict[d], m)):  # iterate over the words of sentence m in document d
                    self.n_wkl[w, t, s] += 1  # times word w is assigned to topic t, sentiment s
                    self.n_kl[t, s] += 1  # number of words assigned to topic t, sentiment s

    def conditionalDistribution(self, d, m, w):
        """
        Calculates the (topic, sentiment) probability for sentence m in document d
        Returns: a matrix (numTopics x numSentiments) storing the probabilities
        """
        probabilities_ts = np.ones((self.numTopics, self.numSentiments))
        # adjusted first factor
        firstFactor = (self.n_wkl[w, :, :] + self.beta) / \
                      (self.n_kl + self.n_wkl.shape[0] * self.beta)  # dim(K x L)
        secondFactor = (self.ns_dk[d, :] + self.alpha) / \
                       (self.ns_d[d] + self.numTopics * self.alpha)  # dim(K x 1)
        thirdFactor = (self.ns_dkl[d, :, :] + self.gamma) / \
                      (self.ns_dk[d] + self.numSentiments * self.gamma)[:, np.newaxis]  # dim (K x L)
        probabilities_ts *= firstFactor * thirdFactor
        probabilities_ts *= secondFactor[:, np.newaxis]
        probabilities_ts /= np.sum(probabilities_ts)
        return probabilities_ts

    def run(self, reviews, maxIters=10):
        self._initialize_(reviews)
        numDocs = len(self.doc_sent_word_dict.keys())
        for iteration in range(maxIters):
            if (iteration + 1) % 10 == 0:
                print("Starting iteration %d of %d" % (iteration + 1, maxIters))
            for d in range(numDocs):
                for m in range(self.numSentence[d]):
                    t = self.topics[(d, m)]
                    s = self.sentiments[(d, m)]
                    self.ns_d[d] -= 1
                    self.ns_dkl[d, t, s] -= 1
                    self.ns_dk[d, t] -= 1
                    for i, w in enumerate(word_indices(self.doc_sent_word_dict[d], m)):
                        self.n_wkl[w, t, s] -= 1  # times word w is assigned to topic t, sentiment s
                        self.n_kl[t, s] -= 1  # number of words assigned to topic t, sentiment s

                    probabilities_ts = self.conditionalDistribution(d, m, w)
                    ind = sampleFromCategorical(probabilities_ts.flatten())
                    t, s = np.unravel_index(ind, probabilities_ts.shape)

                    self.topics[(d, m)] = t
                    self.sentiments[(d, m)] = s
                    self.ns_d[d] += 1
                    self.ns_dkl[d, t, s] += 1
                    self.ns_dk[d, t] += 1
                    for i, w in enumerate(word_indices(self.doc_sent_word_dict[d], m)):
                        self.n_wkl[w, t, s] += 1  # times word w is assigned to topic t, sentiment s
                        self.n_kl[t, s] += 1  # number of words assigned to topic t, sentiment s


data = pd.read_csv("E:/dataset/MasterThesis/elec_df_brand2vec.csv", nrows=1000)
data['reviewSentence'] = data.reviewSentence.apply(lambda row: literal_eval(row))
data['reviewSentence_tagged'] = data.reviewSentence_tagged.apply(lambda row: literal_eval(row))
tagged_text_list = list(data['reviewSentence_tagged'])
sampler = STMD_Gibbs_Sampler(numTopics=5, alpha=0.1, beta=0.1, gamma=0.5, numSentiments=2)
sampler._initialize_(tagged_text_list)
sampler.run(tagged_text_list)
print("end")
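A small helper sketch, not part of the original script: once sampler.run(...) has finished, the word distribution phi for each (topic, sentiment) pair can be read off the pseudocounts the sampler already keeps (n_wkl, n_kl), using the same beta smoothing as conditionalDistribution. The helper name top_words is introduced here for illustration only.

import numpy as np

def top_words(sampler, n=10):
    vocab_size = sampler.n_wkl.shape[0]
    # phi[w, k, l] proportional to (n_wkl + beta) / (n_kl + V * beta)
    phi = (sampler.n_wkl + sampler.beta) / (sampler.n_kl + vocab_size * sampler.beta)
    for k in range(sampler.numTopics):
        for l in range(sampler.numSentiments):
            best = np.argsort(phi[:, k, l])[::-1][:n]
            words = [sampler.idx2word[w] for w in best]
            print(f"topic {k}, sentiment {l}: {' '.join(words)}")

# top_words(sampler)  # call after sampler.run(tagged_text_list)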
{"hexsha": "6cd4456b5b179c47022299336306a8ce21a85df5", "size": 8271, "ext": "py", "lang": "Python", "max_stars_repo_path": "STMD-hs.py", "max_stars_repo_name": "dedert/python_lda", "max_stars_repo_head_hexsha": "7ffb792ccee468c0d6afc41f38efd63c33fa59fe", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "STMD-hs.py", "max_issues_repo_name": "dedert/python_lda", "max_issues_repo_head_hexsha": "7ffb792ccee468c0d6afc41f38efd63c33fa59fe", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "STMD-hs.py", "max_forks_repo_name": "dedert/python_lda", "max_forks_repo_head_hexsha": "7ffb792ccee468c0d6afc41f38efd63c33fa59fe", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.4662921348, "max_line_length": 126, "alphanum_fraction": 0.5745375408, "include": true, "reason": "import numpy", "num_tokens": 2221}
#redirect University Village
{"hexsha": "869400517b141f1981b212b2f55f8b75d2dc6de8", "size": 29, "ext": "f", "lang": "FORTRAN", "max_stars_repo_path": "lab/davisWiki/Sterling_University_Vista.f", "max_stars_repo_name": "voflo/Search", "max_stars_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lab/davisWiki/Sterling_University_Vista.f", "max_issues_repo_name": "voflo/Search", "max_issues_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lab/davisWiki/Sterling_University_Vista.f", "max_forks_repo_name": "voflo/Search", "max_forks_repo_head_hexsha": "55088b2fe6a9d6c90590f090542e0c0e3c188c7d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 14.5, "max_line_length": 28, "alphanum_fraction": 0.8620689655, "num_tokens": 5}
import os import sys import time import math import torch import torch.nn as nn import torch.nn.init as init from torch.utils.data import DataLoader import torch.utils.data as Data from torchvision import datasets,models,transforms from torch.utils import data from PIL import Image import numpy as np import torch # 0, 29, 67, 76, 128, 149, 151, 217, 225 class EcpDataset(data.Dataset): def __init__(self,root,type = 'train',transform = True): self.root = root self._transform = transform self.files = None self.labels = None self.label_dict = { 0:0, 29:1, 67:2, 76:3, 128:4, 149:5, 151:6, 217:7, 225:8 } image_filelist = os.listdir(self.root+'/images') label_filelist = os.listdir(self.root+'/labels') filterlist = list(map(lambda x:''.join(x.replace('.jpg','_mask.png')),image_filelist)) image_filelist = list(map(lambda x:x.replace('_mask.png','.jpg'), list(set(filterlist)&set(label_filelist)))) self.image_filelist = image_filelist[:80] if type == 'train' else image_filelist[80:] def __len__(self): return len(self.image_filelist) def __getitem__(self,index): img_path = self.root+'/images/'+self.image_filelist[index] lbl_path = self.root+'/labels/'+self.image_filelist[index].replace('.jpg','_mask.png') img = np.array(Image.open(img_path)) img = img.transpose(2,0,1) lbl = self.process_label(lbl_path) img = torch.from_numpy(img).float() # img = img.permute(0,2,3,1) lbl = torch.from_numpy(lbl).type(torch.LongTensor) return img,lbl def process_label(self,lbl_filename): label = np.array(Image.open(lbl_filename).convert('L')) relabel = np.zeros([label.shape[0],label.shape[1]]) for i in range(label.shape[0]): for j in range(label.shape[1]): # print(label[i,j]) relabel[i,j] = self.label_dict[label[i,j]] return relabel # gg = EcpDataset('data') # print(gg[:9]) def loaddata(imagepath,labelpath,kind = True): if kind: x = 60000 else: x = 10000 image = np.fromfile(imagepath,'float').reshape(x,784,3).transpose(0,2,1) label = np.fromfile(labelpath,'float').reshape(x,784) images = torch.from_numpy(image).type(torch.FloatTensor) labels = torch.from_numpy(label).type(torch.FloatTensor) dataset = Data.TensorDataset(data_tensor=images, target_tensor=labels) loader = Data.DataLoader(dataset=dataset, batch_size=128, shuffle=True) return images,labels,loader def saveimage(grayimage,filename): # print(grayimage) grayimage = np.squeeze(grayimage) nummatrix = np.array([ [255, 255, 0], [128 ,255, 255] ,[255 ,128 , 0] ,[ 0 ,255, 0] ,[128 ,128 ,128] ,[255 , 0 , 0] ,[128 , 0 ,255] ,[ 0 , 0 ,255] ,[0, 0 ,0]]) numdict = { 0:nummatrix[8], 1:nummatrix[7], 2:nummatrix[6], 3:nummatrix[5], 4:nummatrix[4], 5:nummatrix[3], 6:nummatrix[2], 7:nummatrix[1], 8:nummatrix[0], } image = np.array(list(map(lambda x:nummatrix[8-x],grayimage))) Image.fromarray(np.uint8(image)).save(filename) def get_mean_and_std(dataset): dataloader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True, num_workers=2) mean = torch.zeros(3) std = torch.zeros(3) print('==> Computing mean and std..') for inputs, targets in dataloader: for i in range(3): mean[i] += inputs[:,i,:,:].mean() std[i] += inputs[:,i,:,:].std() mean.div_(len(dataset)) std.div_(len(dataset)) return mean, std def init_params(net): for m in net.modules(): if isinstance(m, nn.Conv2d): init.kaiming_normal(m.weight, mode='fan_out') if m.bias: init.constant(m.bias, 0) elif isinstance(m, nn.BatchNorm2d): init.constant(m.weight, 1) init.constant(m.bias, 0) elif isinstance(m, nn.Linear): init.normal(m.weight, std=1e-3) if m.bias: init.constant(m.bias, 0) class TripletLossFunc(nn.Module): def 
__init__(self,anchor,positive,negative,beta): super(TripletLossFunc,self).__init__() self.anchor = anchor self.positive = positive self.negative = negative self.beta = beta def forward(self): matched = torch.pow(self.anchor-self.positive,2) mimatched = torch.pow(self.anchor-self.negative,2) distance = matched-mimatched+self.beta loss = torch.max(distance,0) return loss _, term_width = os.popen('stty size', 'r').read().split() term_width = int(term_width) TOTAL_BAR_LENGTH = 65. last_time = time.time() begin_time = last_time def progress_bar(current, total, msg=None): global last_time, begin_time,tot_time if current == 0: begin_time = time.time() # Reset for new bar. cur_len = int(TOTAL_BAR_LENGTH*current/total) rest_len = int(TOTAL_BAR_LENGTH - cur_len) - 1 sys.stdout.write(' [') for i in range(cur_len): sys.stdout.write('=') sys.stdout.write('>') for i in range(rest_len): sys.stdout.write('.') sys.stdout.write(']') cur_time = time.time() step_time = cur_time - last_time last_time = cur_time tot_time = cur_time - begin_time L = [] L.append(' Step: %s' % format_time(step_time)) L.append(' | Tot: %s' % format_time(tot_time)) if msg: L.append(' | ' + msg) msg = ''.join(L) sys.stdout.write(msg) for i in range(term_width-int(TOTAL_BAR_LENGTH)-len(msg)-3): sys.stdout.write(' ') # Go back to the center of the bar. for i in range(term_width-int(TOTAL_BAR_LENGTH/2)+2): sys.stdout.write('\b') sys.stdout.write(' %d/%d ' % (current+1, total)) if current < total-1: sys.stdout.write('\r') else: sys.stdout.write('\n') sys.stdout.flush() def format_time(seconds): days = int(seconds / 3600/24) seconds = seconds - days*3600*24 hours = int(seconds / 3600) seconds = seconds - hours*3600 minutes = int(seconds / 60) seconds = seconds - minutes*60 secondsf = int(seconds) seconds = seconds - secondsf millis = int(seconds*1000) f = '' i = 1 if days > 0: f += str(days) + 'D' i += 1 if hours > 0 and i <= 2: f += str(hours) + 'h' i += 1 if minutes > 0 and i <= 2: f += str(minutes) + 'm' i += 1 if secondsf > 0 and i <= 2: f += str(secondsf) + 's' i += 1 if millis > 0 and i <= 2: f += str(millis) + 'ms' i += 1 if f == '': f = '0ms' return f
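For comparison only: the TripletLossFunc above takes the anchor/positive/negative tensors in its constructor and its forward() returns the tuple produced by torch.max(distance, 0). A conventional triplet margin loss instead receives the embeddings in forward() and clamps the margin term at zero per sample. The sketch below shows that standard formulation (class name and default margin are assumptions, not from this repository); torch.nn.TripletMarginLoss provides the same behaviour out of the box.

import torch
import torch.nn as nn

class StandardTripletLoss(nn.Module):
    def __init__(self, margin=1.0):
        super().__init__()
        self.margin = margin

    def forward(self, anchor, positive, negative):
        # squared Euclidean distances per sample
        d_pos = (anchor - positive).pow(2).sum(dim=1)
        d_neg = (anchor - negative).pow(2).sum(dim=1)
        # hinge on the margin, averaged over the batch
        return torch.clamp(d_pos - d_neg + self.margin, min=0).mean()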
{"hexsha": "3a1eac95e9a0bac73825a9faffe9b03e96c2bd30", "size": 6912, "ext": "py", "lang": "Python", "max_stars_repo_path": "mutils.py", "max_stars_repo_name": "liuhantang/DeepFacade", "max_stars_repo_head_hexsha": "3751d01dee46cde5396d14b724dfd7f3f9499b66", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2018-11-07T19:57:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T03:02:56.000Z", "max_issues_repo_path": "mutils.py", "max_issues_repo_name": "liuhantang/DeepFacade", "max_issues_repo_head_hexsha": "3751d01dee46cde5396d14b724dfd7f3f9499b66", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-12-01T12:32:52.000Z", "max_issues_repo_issues_event_max_datetime": "2019-09-05T12:46:34.000Z", "max_forks_repo_path": "mutils.py", "max_forks_repo_name": "liuhantang/DeepFacade", "max_forks_repo_head_hexsha": "3751d01dee46cde5396d14b724dfd7f3f9499b66", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2018-11-21T04:29:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-13T20:57:20.000Z", "avg_line_length": 30.449339207, "max_line_length": 117, "alphanum_fraction": 0.5872395833, "include": true, "reason": "import numpy", "num_tokens": 1897}
\section{ITK Introduction} \centeredlargetext{white}{black}{ ITK Introduction } \begin{frame} \frametitle{ITK is a Templated Library} You will typically do: \begin{itemize} \item Include headers \pause \item Pick pixel type \pause \item Pick image dimension \pause \item Instantiate image type \pause \item Instantiate filter type \pause \item Create filters \pause \item Connect pipeline \pause \item Run pipeline \end{itemize} \end{frame} { \setbeamertemplate{navigation symbols}{} \begin{frame}[fragile] \frametitle{Basic Filtering - Median Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITK.cxx} \begin{itemize} \item Include headers \end{itemize} \lstlistingwithnumber{19}{21}{BasicImageFilteringITK.cxx} \pause \begin{itemize} \item Read images from files \item Write images from files \item Apply a Median filter in an image \end{itemize} \end{frame} } { \setbeamertemplate{navigation symbols}{} \begin{frame}[fragile] \frametitle{Basic Filtering - Median Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITK.cxx} \begin{itemize} \item Declare pixel types and image dimension \lstlistingwithnumber{32}{35}{BasicImageFilteringITK.cxx} \end{itemize} \pause \begin{itemize} \item Declare input and output image types \lstlistingwithnumber{37}{38}{BasicImageFilteringITK.cxx} \end{itemize} \end{frame} } { \setbeamertemplate{navigation symbols}{} \begin{frame}[fragile] \frametitle{Basic Filtering - Median Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITK.cxx} \begin{itemize} \item Declare the types for reader and writer \lstlistingwithnumber{40}{41}{BasicImageFilteringITK.cxx} \end{itemize} \pause \begin{itemize} \item Instantiate the reader and writer objects (source and sink) \lstlistingwithnumber{43}{44}{BasicImageFilteringITK.cxx} \end{itemize} \pause \begin{itemize} \item Set input and output filenames \lstlistingwithnumber{46}{47}{BasicImageFilteringITK.cxx} \end{itemize} \end{frame} } { \setbeamertemplate{navigation symbols}{} \begin{frame}[fragile] \frametitle{Basic Filtering - Median Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITK.cxx} \begin{itemize} \item Declare the Median filter type \lstlistingwithnumber{49}{50}{BasicImageFilteringITK.cxx} \end{itemize} \pause \begin{itemize} \item Create the filter \lstlistingwithnumber{51}{51}{BasicImageFilteringITK.cxx} \end{itemize} \end{frame} } { \setbeamertemplate{navigation symbols}{} \begin{frame}[fragile] \frametitle{Basic Filtering - Median Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITK.cxx} \begin{itemize} \item Define the Median kernel radius (Manhattan Radius) \lstlistingwithnumber{55}{58}{BasicImageFilteringITK.cxx} \end{itemize} \pause \begin{itemize} \item Connect the pipeline \lstlistingwithnumber{60}{61}{BasicImageFilteringITK.cxx} \end{itemize} \end{frame} } { \setbeamertemplate{navigation symbols}{} \begin{frame}[fragile] \frametitle{Basic Filtering - Median Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITK.cxx} \begin{itemize} \item Trigger the pipeline execution by calling Update(). 
\lstlistingwithnumber{63}{71}{BasicImageFilteringITK.cxx} \end{itemize} \pause \begin{itemize} \item ITK uses C++ exceptions for error management \item Exceptions are typically thrown during Update() calls \item Applications must catch the exceptions and solve them \end{itemize} \end{frame} } \begin{frame} \frametitle{How to Configure and Build} \framesubtitle{cmake-gui} \begin{itemize} \item Create a binary directory \item Configure the code with CMake \item Build (compile and link an executable) \item Run it in example image \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{How to Configure and Build} \framesubtitle{cmake-gui} \begin{itemize} \item Create a binary directory \begin{verbatim} cd ~/bin mkdir itkexercise1b cd itkexercise1b \end{verbatim} \end{itemize} \vspace{1in} \textcolor{gray}{For convenience, a pre-built version is available in \texttt{\textasciitilde/bin/itkexercise1}} \end{frame} \begin{frame}[fragile] \frametitle{How to Configure and Build} \framesubtitle{cmake-gui} \begin{itemize} \item Run ``cmake-gui'' \end{itemize} \begin{center} \includegraphics[width=0.5\paperwidth]{Screenshot-CMakeGUI-01.png} \end{center} \end{frame} \begin{frame}[fragile] \frametitle{How to Configure and Build} \framesubtitle{cmake-gui} \begin{itemize} \item Set ``Source Directory'' (where the source code is) \item Set ``Binary Directory'' (where to build the executable) \item Click on ``Configure'' \end{itemize} \begin{center} \includegraphics[width=0.7\paperwidth]{Screenshot-CMakeGUI-02.png} \end{center} \end{frame} \begin{frame}[fragile] \frametitle{How to Configure and Build} \framesubtitle{cmake-gui} \begin{itemize} \item You will get an error message \end{itemize} \begin{center} \includegraphics[width=0.4\paperwidth]{Screenshot-CMakeGUI-03.png} \end{center} \begin{itemize} \item Because the project needs ITK and OpenCV \item and we have not provided ITK\_DIR or OpenCV\_DIR yet \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{How to Configure and Build} \framesubtitle{cmake-gui} \begin{itemize} \item Provide the path to ITK in the ITK\_DIR variable \item /home/tutorial/bin/ITKVideo/Release \pause \item Provide the path to OpenCV in the OpenCV\_DIR variable \item /home/tutorial/bin/opencv/Release \end{itemize} \begin{center} \includegraphics[width=0.7\paperwidth]{Screenshot-CMakeGUI-04.png} \end{center} \end{frame} \begin{frame}[fragile] \frametitle{How to Configure and Build} \framesubtitle{cmake-gui} \begin{itemize} \item Click on ``Configure'' \item Click on ``Generate'' \end{itemize} \begin{center} \includegraphics[width=0.7\paperwidth]{Screenshot-CMakeGUI-05.png} \end{center} \end{frame} \begin{frame}[fragile] \frametitle{How to Build} \framesubtitle{make} \begin{itemize} \item In the command line do: \begin{verbatim} cd /home/tutorial/bin/itkexercise1b make \end{verbatim} \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{How to Run} \framesubtitle{/home/tutorial/bin/itkexercise1b} \begin{itemize} \item While in the binary directory: \begin{verbatim} /home/tutorial/bin/itkexercise1b \end{verbatim} \item In the command line type: \begin{verbatim} ./BasicImageFilteringITK \ ~/data/mandrillgray.png \ ./mandrillgrayMedian.png \ 3 3 \end{verbatim} \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{How to View the Result} \framesubtitle{Image viewing application ``eye of gnome'': eog} \begin{itemize} \item In the command line type: \begin{verbatim} eog ~/data/mandrillgray.jpg & eog ./mandrillgrayMedian.png & \end{verbatim} \end{itemize} 
\end{frame} \begin{frame}[fragile] \frametitle{Result of Median Filter} \begin{center} \includegraphics[width=0.45\paperwidth]{Screenshot-mandrillgray-01.png} \includegraphics[width=0.45\paperwidth]{Screenshot-mandrillgrayMedian-01.png} \end{center} \end{frame} \begin{frame}[fragile] \frametitle{Excercise 1} \framesubtitle{Replace the filter with another one} \begin{itemize} \item Select a Filter from the Doxygen documentation\\ (e.g. MeanImageFilter) \item Replace the MedianImageFilter with the selected filter \item Recompile \item Rerun \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{ITK Doxygen Documentation} \begin{center} \includegraphics[scale=0.3]{Screenshot-ITKDoxygen-01.png} \end{center} \end{frame} \begin{frame} \frametitle{Excercise 1} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITKAnswer1.cxx} \begin{itemize} \item First we replace the Header file: \lstlistingwithnumber{21}{21}{BasicImageFilteringITKAnswer1.cxx} \pause \item Then we replace the Filter instantiation: \lstlistingwithnumber{49}{49}{BasicImageFilteringITKAnswer1.cxx} \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{How to Build} \framesubtitle{make} \begin{itemize} \item In the command line do: \begin{verbatim} cd /home/tutorial/bin/itkexercise1b make \end{verbatim} \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{How to Run} \framesubtitle{/home/tutorial/bin/itkexercise1b} \begin{itemize} \item While in the binary directory: \begin{verbatim} /home/tutorial/bin/itkexercise1b \end{verbatim} \item In the command line type: \begin{verbatim} ./BasicImageFilteringITK \ ~/data/mandrillgray.png \ ./mandrillgrayMean.png \ 3 3 \end{verbatim} \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{How to View the Result} \framesubtitle{Image viewing application ``eye of gnome'': eog} \begin{itemize} \item In the command line type: \begin{verbatim} eog ~/data/mandrillgray.jpg & eog ./mandrillgrayMean.png & \end{verbatim} \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{Result of Mean Filter} \begin{center} \includegraphics[width=0.45\paperwidth]{Screenshot-mandrillgray-01.png} \includegraphics[width=0.45\paperwidth]{Screenshot-mandrillgrayMean-01.png} \end{center} \end{frame} \begin{frame}[fragile] \frametitle{Find All Other Exercises} \begin{itemize} \item Go to the binary directory \begin{verbatim} cd ~/bin/ITK-OpenCV-Bridge-Tutorial/Exercises \end{verbatim} \end{itemize} \end{frame} \begin{frame} \frametitle{Basic Filtering - Canny Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITKAnswer2.cxx} \begin{itemize} \item Some filters expect specific pixel types \pause \item Canny Edge detection is an example \pause \item Here we Cast the image before Canny \pause \item Then we Cast/Rescale it after Canny \end{itemize} \end{frame} \begin{frame} \frametitle{Basic Filtering - Canny Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITKAnswer2.cxx} \begin{itemize} \item Let's start with the relevant headers: \pause \lstlistingwithnumber{21}{23}{BasicImageFilteringITKAnswer2.cxx} \pause \item We then declare the relevant pixel types \pause \lstlistingwithnumber{34}{36}{BasicImageFilteringITKAnswer2.cxx} \pause \item Then we declare the relevant image types \pause \lstlistingwithnumber{38}{40}{BasicImageFilteringITKAnswer2.cxx} \end{itemize} \end{frame} \begin{frame} \frametitle{Basic Filtering - Canny Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITKAnswer2.cxx} \begin{itemize} \item We declare the Casting 
filter and instantiate it: \pause \lstlistingwithnumber{52}{55}{BasicImageFilteringITKAnswer2.cxx} \pause \item We declare and instantiate the Canny filter: \pause \lstlistingwithnumber{58}{61}{BasicImageFilteringITKAnswer2.cxx} \pause \item and do the same for the RescaleIntensity filter: \pause \lstlistingwithnumber{64}{67}{BasicImageFilteringITKAnswer2.cxx} \end{itemize} \end{frame} \begin{frame} \frametitle{Basic Filtering - Canny Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITKAnswer2.cxx} \begin{itemize} \item We connect the pipeline: \pause \lstlistingwithnumber{70}{73}{BasicImageFilteringITKAnswer2.cxx} \pause \item Set the parameters of the Canny Edge detection filter: \pause \lstlistingwithnumber{76}{78}{BasicImageFilteringITKAnswer2.cxx} \end{itemize} \end{frame} \begin{frame} \frametitle{Basic Filtering - Canny Filter} \framesubtitle{ITKIntroduction/exercise1/BasicImageFilteringITKAnswer2.cxx} \begin{itemize} \item Trigger the execution of the pipeline: \pause \lstlistingwithnumber{81}{89}{BasicImageFilteringITKAnswer2.cxx} \pause \item Note that the pipeline only runs when we call Update() \pause \item That's the point where we should catch exceptions \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{How to Run} \framesubtitle{/home/tutorial/bin/itkexercise1b} \begin{itemize} \item While in the binary directory: \begin{verbatim} /home/tutorial/bin/itkexercise1b \end{verbatim} \pause \item In the command line type: \begin{verbatim} ./BasicImageFilteringITKAnswer2 \ ~/data/mandrillgray.png \ ./mandrillgrayCanny.png \ 6 1 8 \end{verbatim} \pause \item 6 = Variance for Gaussian \item 1 = Lower Threshold \item 8 = Upper Threshold \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{How to View the Result} \framesubtitle{Image viewing application ``eye of gnome'': eog} \begin{itemize} \item In the command line type: \begin{verbatim} eog ~/data/mandrillgray.jpg & eog ./mandrillgrayCanny.png & \end{verbatim} \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{Result of Canny Filter} \begin{center} \includegraphics[width=0.45\paperwidth]{Screenshot-mandrillgray-01.png} \includegraphics[width=0.45\paperwidth]{Screenshot-mandrillgrayCanny-01.png} \end{center} \end{frame}
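As a side note to the C++ exercises above, the same read -> median filter -> write pipeline can be sketched from Python via SimpleITK, which wraps the corresponding ITK filters. This is an illustration under the assumption that SimpleITK is installed; it is not part of the tutorial's exercises, and the function name median_filter is introduced here only for the example.

import SimpleITK as sitk

def median_filter(in_path, out_path, radius=3):
    image = sitk.ReadImage(in_path)       # analogous to itk::ImageFileReader
    median = sitk.MedianImageFilter()     # analogous to itk::MedianImageFilter
    median.SetRadius(radius)
    filtered = median.Execute(image)      # runs the filter, like calling Update()
    sitk.WriteImage(filtered, out_path)   # analogous to itk::ImageFileWriter

# median_filter("mandrillgray.png", "mandrillgrayMedian.png", radius=3)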
{"hexsha": "142de0ac693545270deb37c660258fb8ffb00f57", "size": 12659, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documents/Tutorial/ITKIntroduction.tex", "max_stars_repo_name": "InsightSoftwareConsortium/ITK-OpenCV-Bridge-Tutorial", "max_stars_repo_head_hexsha": "0c47e0a06d61f21acd27ad4339ce0e42c8260a0c", "max_stars_repo_licenses": ["CC-BY-3.0", "Apache-2.0"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2015-05-10T23:24:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-23T11:44:12.000Z", "max_issues_repo_path": "Documents/Tutorial/ITKIntroduction.tex", "max_issues_repo_name": "InsightSoftwareConsortium/ITK-OpenCV-Bridge-Tutorial", "max_issues_repo_head_hexsha": "0c47e0a06d61f21acd27ad4339ce0e42c8260a0c", "max_issues_repo_licenses": ["CC-BY-3.0", "Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-08-31T14:01:43.000Z", "max_issues_repo_issues_event_max_datetime": "2016-09-06T02:43:30.000Z", "max_forks_repo_path": "Documents/Tutorial/ITKIntroduction.tex", "max_forks_repo_name": "InsightSoftwareConsortium/ITK-OpenCV-Bridge-Tutorial", "max_forks_repo_head_hexsha": "0c47e0a06d61f21acd27ad4339ce0e42c8260a0c", "max_forks_repo_licenses": ["CC-BY-3.0", "Apache-2.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2015-04-04T15:28:26.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-23T18:57:47.000Z", "avg_line_length": 25.6255060729, "max_line_length": 112, "alphanum_fraction": 0.779524449, "num_tokens": 3889}
import pandas as pd import numpy as np import re import calendar from datetime import datetime from sklearn import linear_model from sklearn.cluster import KMeans from sklearn.decomposition import PCA from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import PolynomialFeatures from sklearn.svm import SVR from sklearn.ensemble import VotingRegressor import statsmodels.api as sm import json from pmdarima.arima import auto_arima df = pd.read_csv('data\order.csv') df['date'] = pd.to_datetime(df.date) df['month_year'] = pd.to_datetime(df['date']).dt.to_period('M') def totalYear(): df2 = df[['total', 'year']] total_year = df2.groupby(['year'], as_index=False).sum( ).sort_values('year', ascending=True) data=total_year[['year','total']] data['year']=data['year'].astype(str) data.to_json('total_year.json', orient='index') return total_year def lastYear(): """lastYear() function: return the total revenue of the nearest year Returns: [type]: [description] """ lastYear = totalYear().tail(1) last_year = lastYear.iloc[0]['total'].round(2) return last_year def totalMonth(): """This is a function to calculate the total revenue for each month. + Using 'pandas' library Returns: [dataframe]: A list of total revenue for each month """ df4 = df[['total', 'month_year']] total_month = df4.groupby(['month_year'], as_index=False).sum( ).sort_values('month_year', ascending=True) data1=total_month[['month_year','total']] data1['month_year']=data1['month_year'].astype(str) data1.to_json('total_month.json', orient='index') return total_month def lastMonth(): """lastMonth() function: return the total revenue of the nearest month Returns: [type]: [description] """ lastMonth = totalMonth().tail(1) last_month = lastMonth.iloc[0]['total'].round(2) return last_month def totalDate(): """[summary] This is a function to calculate the total revenue of each day Returns: [type]: [description] """ df5 = df[['total', 'date']] total_date = df5.groupby(['date'], as_index=False).sum( ).sort_values('date', ascending=True) data1=total_date[['date','total']] data1['date'] = pd.to_datetime(data1['date']).dt.date return data1 def lastDate(): """[summary] lastDate() function: return the total revenue of the nearest day Returns: [type]: [description] """ lastDate = totalDate().tail(1) last_date = lastDate.iloc[0]['total'].round(2) return last_date def saleYear(): df6 = df[['quantity', 'year']] sale_year = df6.groupby(['year'], as_index=False).sum( ).sort_values('year', ascending=True) return sale_year def saleMonth(): df7 = df[['quantity', 'month_year']] sale_month = df7.groupby(['month_year'], as_index=False).sum( ).sort_values('month_year', ascending=True) data1=sale_month[['month_year','quantity']] data1['month_year']=data1['month_year'].astype(str) data1.to_json('sale_month.json', orient='index') return sale_month def saleDate(): df8 = df[['quantity', 'date']] sale_date = df8.groupby(['date'], as_index=False).sum( ).sort_values('date', ascending=True) data1=sale_date[['date','quantity']] data1['date'] = pd.to_datetime(data1['date']).dt.date return data1 def lastYearSale(): lastYearSale = saleYear().tail(1) last_year_sale = lastYearSale.iloc[0]['quantity'] return last_year_sale def lastMonthSale(): lastMonthSale = saleMonth().tail(1) last_month_sale = lastMonthSale.iloc[0]['quantity'] return last_month_sale def lastDateSale(): lastDateSale = saleDate().tail(1) last_date_sale = lastDateSale.iloc[0]['quantity'] return last_date_sale def percentageMethod(): 
df9=df[['method']] percent=(df9['method'].value_counts()/df9['method'].count())*100 percent.to_json('percent.json', orient='split') return percent def stationary_trend(data): result = sm.tsa.adfuller(data.dropna(), regression='c') # print('ADF Statistic: %f' % result[0]) # print('p-value: %f' % result[1]) # print('Critical Values:') # for key, value in result[4].items(): # print('\t%s: %.3f' % (key, value)) return result def stationary(data): result = sm.tsa.adfuller(data.dropna(), regression='ct') # print('ADF Statistic: %f' % result[0]) # print('p-value: %f' % result[1]) # print('Critical Values:') # for key, value in result[4].items(): # print('\t%s: %.3f' % (key, value)) return result def fit_model_stationary(data): model = auto_arima(data, start_p=0, start_q=0, max_p=5, max_q=5, m=12, start_P=0, seasonal=True, d=0, D=1, trace=True, error_action='ignore', suppress_warnings=True, stepwise=True) print(model.aic()) return model def fit_model_non_stationary(data): model = auto_arima(data, start_p=0, start_q=0, max_p=5, max_q=5, m=12, start_P=0, seasonal=True, d=1, D=1, trace=True, error_action='ignore', suppress_warnings=True, stepwise=True) print(model.aic()) return model def fit_model_fast(data): model = auto_arima(data, start_p=5, start_q=0, max_p=5, max_q=0, m=12,start_Q=0, max_Q=0, start_P=2,max_P=2, seasonal=True, d=1, D=1, trace=True, error_action='ignore', suppress_warnings=True, stepwise=True) print(model.aic()) return model def fit_model_fast_stationary(data): model = auto_arima(data, start_p=5, start_q=0, max_p=5, m=12,start_Q=0, max_Q=0, start_P=2, seasonal=True, d=0, D=1, trace=True, error_action='ignore', suppress_warnings=True, stepwise=True) print(model.aic()) return model # def show_result(model): # return model.summary() # def check_model(model): # model_sarima.plot_diagnostics(figsize=(15, 12)) # url='/static/images/plot.png' # plt.savefig(url) # return url
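A short usage sketch for the helpers defined above (it assumes data\order.csv is present and that a 12-month horizon is wanted; both are assumptions for illustration): build the monthly revenue series, fit one of the auto_arima wrappers, and forecast ahead with pmdarima's predict(n_periods=...).

if __name__ == "__main__":
    monthly = totalMonth()             # DataFrame with 'month_year' and 'total'
    series = monthly["total"].values
    model = fit_model_fast(series)     # auto_arima wrapper defined above
    forecast = model.predict(n_periods=12)
    print(forecast)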
{"hexsha": "158d383f78bf7aa66754b2705d3bb97e405f8a51", "size": 6592, "ext": "py", "lang": "Python", "max_stars_repo_path": "model/main.py", "max_stars_repo_name": "hydrogen1999/flask-admin-boilerplate", "max_stars_repo_head_hexsha": "0abf95ddfb48789764d3d06939eb9cbaf93a3149", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "model/main.py", "max_issues_repo_name": "hydrogen1999/flask-admin-boilerplate", "max_issues_repo_head_hexsha": "0abf95ddfb48789764d3d06939eb9cbaf93a3149", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "model/main.py", "max_forks_repo_name": "hydrogen1999/flask-admin-boilerplate", "max_forks_repo_head_hexsha": "0abf95ddfb48789764d3d06939eb9cbaf93a3149", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.9483568075, "max_line_length": 74, "alphanum_fraction": 0.6101334951, "include": true, "reason": "import numpy,import statsmodels", "num_tokens": 1647}
# pylint: disable=missing-docstring import unittest import numpy as np import tensorflow as tf from absl.testing import parameterized from tf_encrypted.test import tf_execution_context class TestExecutionContext(parameterized.TestCase): @parameterized.parameters({"run_eagerly": True}, {"run_eagerly": False}) def test_tf_execution_mode(self, run_eagerly): context = tf_execution_context(run_eagerly) with context.scope(): x = tf.fill(dims=(2, 2), value=5.0) assert tf.executing_eagerly() == run_eagerly assert isinstance(x, tf.Tensor) actual_result = context.evaluate(x) assert isinstance(actual_result, np.ndarray) expected_result = np.array([[5.0, 5.0], [5.0, 5.0]]) np.testing.assert_equal(actual_result, expected_result) if __name__ == "__main__": unittest.main()
{"hexsha": "b20d6278367708a2554348cef9da19af4f43cec7", "size": 869, "ext": "py", "lang": "Python", "max_stars_repo_path": "primitives/tf_encrypted/test/execution_context_test.py", "max_stars_repo_name": "wqruan/tf-encrypted", "max_stars_repo_head_hexsha": "50ee4ae3ba76b7c1f70a90e18f875191adea0a07", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 825, "max_stars_repo_stars_event_min_datetime": "2019-04-18T09:21:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T05:55:26.000Z", "max_issues_repo_path": "primitives/tf_encrypted/test/execution_context_test.py", "max_issues_repo_name": "wqruan/tf-encrypted", "max_issues_repo_head_hexsha": "50ee4ae3ba76b7c1f70a90e18f875191adea0a07", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 354, "max_issues_repo_issues_event_min_datetime": "2019-04-18T08:42:40.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T18:06:31.000Z", "max_forks_repo_path": "primitives/tf_encrypted/test/execution_context_test.py", "max_forks_repo_name": "wqruan/tf-encrypted", "max_forks_repo_head_hexsha": "50ee4ae3ba76b7c1f70a90e18f875191adea0a07", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 161, "max_forks_repo_forks_event_min_datetime": "2019-05-02T16:43:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T01:35:03.000Z", "avg_line_length": 29.9655172414, "max_line_length": 76, "alphanum_fraction": 0.7008055236, "include": true, "reason": "import numpy", "num_tokens": 204}
import numpy as np import torch import os import pandas as pd import pickle import json import os.path as op import re import pathlib def nparams(model): return sum([p.numel() for p in model.parameters()]) def get_eval_idx(save_dir): model_dir = op.join(save_dir, 'model_ckpt') filenames = os.listdir(model_dir) pattern = r'^ckpt([0-9]+)\.pth$' valid = [ (f, re.search(pattern, f)[1]) for f in filenames \ if re.search(pattern, f) ] return max([int(i) for _, i in valid] + [-1]) def load_model_ckpt(model, save_dir, idx=None): model_dir = op.join(save_dir, 'model_ckpt') if idx is None: filenames = os.listdir(model_dir) pattern = r'^ckpt([0-9]+)\.pth$' valid = [ (f, re.search(pattern, f)[1]) for f in filenames \ if re.search(pattern, f) ] idx = max([int(i) for _, i in valid] + [-1]) if idx == -1: print('No model to load, skipping') return model print(f'Loading model checkpoint ckpt{idx}.pth') load_path = op.join(model_dir, f'ckpt{idx}.pth') model.load_state_dict(torch.load(load_path)) return model def load_dumped_split(data_dir_path, ): with open(op.join(data_dir_path, 'train_type_dict.json'), 'r') as fp: train_type_dict = json.load(fp) with open(op.join(data_dir_path, 'test_type_dict.json'), 'r') as fp: test_type_dict = json.load(fp) with open(op.join(data_dir_path, 'train_descriptions.json'), 'r') as fp: train_descr = json.load(fp) with open(op.join(data_dir_path, 'test_descriptions.json'), 'r') as fp: test_descr = json.load(fp) return train_type_dict, test_type_dict, train_descr, test_descr def save_model_ckpt(model, save_dir, idx=None): model_dir = op.join(save_dir, 'model_ckpt') pathlib.Path(model_dir).mkdir(parents=True, exist_ok=True) if idx is None: filenames = os.listdir(model_dir) pattern = r'^ckpt([0-9]+)\.pth$' valid = [ (f, re.search(pattern, f)[1]) for f in filenames \ if re.search(pattern, f) ] max_index = max([int(i) for _, i in valid] + [-1]) idx = max_index + 1 print(f'Saving model checkpoint ckpt{idx}.pth') torch.save(model.state_dict(), op.join(model_dir, f'ckpt{idx}.pth')) def read_raw_file(raw_data_file): with open(raw_data_file, 'rb') as fp: raw_data = pickle.load(fp) id2description_raw = raw_data['id2description'] if 'description2id' in raw_data.keys(): description2id_raw = raw_data['description2id'] else: description2id_raw = None obs_raw = raw_data['obs'] descriptions_ids_raw = raw_data['descriptions_ids'] if os.path.isfile(raw_data_file[:-3] + '_count.pk'): with open(raw_data_file[:-3] + '_count.pk', 'rb') as fp: count_dict = pickle.load(fp) else: count_dict = None return obs_raw, descriptions_ids_raw, id2description_raw, description2id_raw, count_dict def sanity_check_descriptions(descriptions_ids_raw, id2description_raw, id2description): ids_to_remove = [id for id in id2description_raw.keys() if id not in id2description.keys()] descriptions_ids_all = [] for d_ids in descriptions_ids_raw: new_d_ids = [] for d in d_ids: new_d_ids.append([id for id in d if id not in ids_to_remove]) descriptions_ids_all.append(new_d_ids) return descriptions_ids_all def compute_metrics(pred_probas, rewards): y_pred = (pred_probas > 0.5).to(torch.float32).squeeze().cpu() y_true = rewards.cpu() tp = torch.mul(y_true, y_pred).sum().to(torch.float32).cpu() tn = ((1 - y_true) * (1 - y_pred)).sum().to(torch.float32).cpu() fp = ((1 - y_true) * y_pred).sum().to(torch.float32).cpu() fn = (y_true * (1 - y_pred)).sum().to(torch.float32).cpu() accuracy = ((tp + tn) / (tp + tn + fp + fn)).float() precision = (tp / (tp + fp)).float() recall = (tp / (tp + fn)).float() f1 = 2 * precision * recall / 
(precision + recall) return accuracy, precision, recall, f1 def evaluate_test_metrics(model, state_idx_buffer, states, id2one_hot, size_dataset, proportion_pos_reward, batch_size, logging, use_cuda=False, eval_idx=0): logging.info('Evaluating {}'.format(str(eval_idx))) model.eval() if use_cuda: device = torch.device('cuda') else: device = torch.device('cpu') output_dict = {} count = 0 with torch.no_grad(): for id in list(state_idx_buffer): if count % 100 == 0: logging.info('Descr {}/{}'.format(str(count), str(len(state_idx_buffer)))) bodies = [] objs = [] rewards = [] descrs = [] if len(state_idx_buffer[id]['pos_reward']) < size_dataset * proportion_pos_reward // 2: size_pos = int(len(state_idx_buffer[id]['pos_reward'])) else: size_pos = int(size_dataset * proportion_pos_reward // 2) size_neg = int(size_dataset - size_pos) for pos_idx in state_idx_buffer[id]['pos_reward'][:size_pos]: bodies.append(states[pos_idx][0]) objs.append(states[pos_idx][1]) descrs.append(id2one_hot[id]) rewards.append(1) for neg_idx in state_idx_buffer[id]['neg_reward'][:size_neg]: bodies.append(states[neg_idx][0]) objs.append(states[neg_idx][1]) descrs.append(id2one_hot[id]) rewards.append(0) n_batch = int(size_dataset / batch_size) pred_probas = torch.tensor([], dtype=torch.float32).to(device) for batch in range(n_batch + 1): ind1 = batch * batch_size if (batch + 1) * batch_size > size_dataset: ind2 = size_dataset else: ind2 = (batch + 1) * batch_size bodies_batch = torch.tensor(bodies[ind1:ind2], dtype=torch.float32).to(device) objs_batch = torch.tensor(objs[ind1:ind2], dtype=torch.float32).to(device) descrs_batch = torch.tensor(descrs[ind1:ind2], dtype=torch.float32).to(device) pred_probas_batch = model(objs_batch, bodies_batch, descrs_batch) pred_probas = torch.cat([pred_probas, pred_probas_batch]) rewards = torch.tensor(rewards, dtype=torch.float32).to(device) accuracy, precision, recall, f1 = compute_metrics(pred_probas, rewards) output_dict[id] = (accuracy, precision, recall, f1) count += 1 model.train() return output_dict def compute_metric_by_type(metric_dict_test, id2description_test, type_dict, logging): description2id = {v: k for k, v in id2description_test.items()} output_dict = {} for k, v in type_dict.items(): f1_type_list = [] for descr in v: if descr in description2id.keys(): f1_type_list.append(metric_dict_test[description2id[descr]][3]) else: logging.info('FLAG description: ' + str(descr) + ' is missing from testing data') output_dict[k] = np.mean(np.nan_to_num(f1_type_list)) logging.info(str(k) + str(output_dict[k])) return output_dict def write_f1_type_to_df(df, metric_dict_by_type): dict_f1 = {'f1_{}'.format(t): f1 for t, f1 in metric_dict_by_type.items()} new_df = pd.DataFrame(dict_f1, index=[0]) df = df.append(new_df) return df def write_f1_to_df(df, metric_dict, id2description): dict_f1 = {'f1_{}'.format(descr): metric_dict[id][3].tolist() for id, descr in id2description.items() if id in metric_dict.keys()} f1_mean = np.nanmean([metric_dict[id][3] for id in metric_dict.keys()]) dict_f1['f1_mean'] = f1_mean new_df = pd.DataFrame(dict_f1, index=[0]) df = df.append(new_df) return df def append_value_to_dict_list(dict_list, tmp_dict): for k in dict_list.keys(): dict_list[k].append(tmp_dict[k]) return dict_list def mean_train_metrics_over_steps(train_metrics_dict): output_dict = {} for k in train_metrics_dict.keys(): output_dict['mean_' + k] = np.mean(np.nan_to_num(train_metrics_dict[k])) return output_dict
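A quick sanity-check sketch for compute_metrics(), using synthetic tensors (the values below are made up for illustration): when every predicted probability falls on the correct side of 0.5, all four metrics should come out as 1.0.

if __name__ == "__main__":
    import torch
    probas = torch.tensor([[0.9], [0.2], [0.8], [0.1]])   # predicted probabilities
    labels = torch.tensor([1.0, 0.0, 1.0, 0.0])           # ground-truth rewards
    acc, prec, rec, f1 = compute_metrics(probas, labels)
    print(acc.item(), prec.item(), rec.item(), f1.item())  # 1.0 1.0 1.0 1.0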
{"hexsha": "46e88833ca73bfbeef92b875035d55e8e1cdf653", "size": 8382, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/experiment/utils.py", "max_stars_repo_name": "flowersteam/spatio-temporal-language-transformers", "max_stars_repo_head_hexsha": "a33a9bc4748586ef08f9768de2aafd76de71823c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-11-26T18:04:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-02T05:28:04.000Z", "max_issues_repo_path": "src/experiment/utils.py", "max_issues_repo_name": "flowersteam/spatio-temporal-language-transformers", "max_issues_repo_head_hexsha": "a33a9bc4748586ef08f9768de2aafd76de71823c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/experiment/utils.py", "max_forks_repo_name": "flowersteam/spatio-temporal-language-transformers", "max_forks_repo_head_hexsha": "a33a9bc4748586ef08f9768de2aafd76de71823c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.974248927, "max_line_length": 119, "alphanum_fraction": 0.6208542114, "include": true, "reason": "import numpy", "num_tokens": 2104}
import torch import torch.nn.functional as F from torch.utils.data import DataLoader import skimage.io import argparse import numpy as np import time import os import cv2 import math # from memory_profiler import profile import nets import dataloader from dataloader import transforms from utils import utils from utils.file_io import write_pfm import webcamgrabber import visualise IMAGENET_MEAN = [0.485, 0.456, 0.406] IMAGENET_STD = [0.229, 0.224, 0.225] parser = argparse.ArgumentParser() parser.add_argument('--mode', default='test', type=str, help='Validation mode on small subset or test mode on full test data') parser.add_argument('--num_workers', default=0, type=int, help='Number of workers for data loading') parser.add_argument('--img_height', default=576, type=int, help='Image height for inference') parser.add_argument('--img_width', default=960, type=int, help='Image width for inference') # Model parser.add_argument('--seed', default=326, type=int, help='Random seed for reproducibility') parser.add_argument('--output_dir', default='output', type=str, help='Directory to save inference results') parser.add_argument('--max_disp', default=192, type=int, help='Max disparity') # AANet parser.add_argument('--feature_type', default='aanet', type=str, help='Type of feature extractor') parser.add_argument('--no_feature_mdconv', action='store_true', help='Whether to use mdconv for feature extraction') parser.add_argument('--feature_pyramid', action='store_true', help='Use pyramid feature') parser.add_argument('--feature_pyramid_network', action='store_true', help='Use FPN') parser.add_argument('--feature_similarity', default='correlation', type=str, help='Similarity measure for matching cost') parser.add_argument('--num_downsample', default=2, type=int, help='Number of downsample layer for feature extraction') parser.add_argument('--aggregation_type', default='adaptive', type=str, help='Type of cost aggregation') parser.add_argument('--num_scales', default=3, type=int, help='Number of stages when using parallel aggregation') parser.add_argument('--num_fusions', default=6, type=int, help='Number of multi-scale fusions when using parallel' 'aggragetion') parser.add_argument('--num_stage_blocks', default=1, type=int, help='Number of deform blocks for ISA') parser.add_argument('--num_deform_blocks', default=3, type=int, help='Number of DeformBlocks for aggregation') parser.add_argument('--no_intermediate_supervision', action='store_true', help='Whether to add intermediate supervision') parser.add_argument('--deformable_groups', default=2, type=int, help='Number of deformable groups') parser.add_argument('--mdconv_dilation', default=2, type=int, help='Dilation rate for deformable conv') parser.add_argument('--refinement_type', default='stereodrnet', help='Type of refinement module') parser.add_argument('--pretrained_aanet', default=None, type=str, help='Pretrained network') parser.add_argument('--save_type', default='png', choices=['pfm', 'png', 'npy'], help='Save file type') parser.add_argument('--visualize', action='store_true', help='Visualize disparity map') # Log args = parser.parse_args() model_name = os.path.basename(args.pretrained_aanet)[:-4] model_dir = os.path.basename(os.path.dirname(args.pretrained_aanet)) args.output_dir = os.path.join(args.output_dir, model_dir + '-' + model_name) utils.check_path(args.output_dir) utils.save_command(args.output_dir) # @profile def main(): # cam = webcamgrabber.Arducam("rtsp://192.168.1.70:8554/test") # cam = webcamgrabber.Arducam("parallel_dining.mp4") 
cam = webcamgrabber.Arducam("udpsrc port=5000 ! application/x-rtp, media=video, encoding-name=JPEG, payload=96 ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink") left, right = cam.read() img_height, img_width= left.shape[:2] vis = visualise.Visualiser(cam.Q_) # For reproducibility torch.manual_seed(args.seed) torch.cuda.manual_seed(args.seed) np.random.seed(args.seed) torch.backends.cudnn.benchmark = True device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # Test loader test_transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD)]) print(f"Creating AANet...") aanet = nets.AANet(args.max_disp, num_downsample=args.num_downsample, feature_type=args.feature_type, no_feature_mdconv=args.no_feature_mdconv, feature_pyramid=args.feature_pyramid, feature_pyramid_network=args.feature_pyramid_network, feature_similarity=args.feature_similarity, aggregation_type=args.aggregation_type, num_scales=args.num_scales, num_fusions=args.num_fusions, num_stage_blocks=args.num_stage_blocks, num_deform_blocks=args.num_deform_blocks, no_intermediate_supervision=args.no_intermediate_supervision, refinement_type=args.refinement_type, mdconv_dilation=args.mdconv_dilation, deformable_groups=args.deformable_groups).to(device) # print(aanet) if os.path.exists(args.pretrained_aanet): print('=> Loading pretrained AANet:', args.pretrained_aanet) utils.load_pretrained_net(aanet, args.pretrained_aanet, no_strict=True) else: raise Exception(f'Model not found! {args.pretrained_aanet}') if torch.cuda.device_count() > 1: print('=> Use %d GPUs' % torch.cuda.device_count()) aanet = torch.nn.DataParallel(aanet) # Inference aanet.eval() inference_time = 0 framecount = 0 print(f"Finished warmup, starting inference...") while True: print(f"Frame {framecount}") left_img, right_img = cam.read() cv2.imshow("left", left_img) cv2.imshow("right", right_img) img = {'left': left_img, 'right': right_img} img = test_transform(img) left = img['left'].unsqueeze(0).to(device) # [B, 3, H, W] right = img['right'].unsqueeze(0).to(device) # Pad ori_height, ori_width = left.size()[2:] factor = 48 if args.refinement_type != 'hourglass' else 96 img_height = math.ceil(ori_height / factor) * factor img_width = math.ceil(ori_width / factor) * factor if ori_height < img_height or ori_width < img_width: top_pad = img_height - ori_height right_pad = img_width - ori_width # Pad size: (left_pad, right_pad, top_pad, bottom_pad) left = F.pad(left, (0, right_pad, top_pad, 0)) right = F.pad(right, (0, right_pad, top_pad, 0)) framecount += left.size(0) # print("Performing inference...") with torch.no_grad(): time_start = time.perf_counter() pred_disp = aanet(left, right) # [B, C, H, W] inference_time += time.perf_counter() - time_start if pred_disp.size(-1) < left.size(-1): print("Interpolating disparity...") pred_disp = pred_disp.unsqueeze(1) # [B, 1, H, W] pred_disp = F.interpolate(pred_disp, (left.size(-2), left.size(-1)), mode='bilinear', align_corners=True, recompute_scale_factor=True) * (left.size(-1) / pred_disp.size(-1)) pred_disp = pred_disp.cpu().squeeze(1) # [B, H, W] # Crop if ori_height < img_height or ori_width < img_width: if right_pad != 0: pred_disp = pred_disp[:, top_pad:, :-right_pad] else: pred_disp = pred_disp[:, top_pad:] disp = pred_disp[0].detach().cpu().numpy() vis.update(disp, left_img) if cv2.waitKey(1) & 0xFF == ord('q'): cam.release() vis.release() break # save image disp = 255 * disp img = disp.astype(np.uint8) cv2.imwrite("disparity.png", img) 
cv2.imwrite("left.png", left_img) cv2.imwrite("right.png", right_img) print('=> Mean inference time for %d images: %.3fs' % (framecount, inference_time / framecount)) if __name__ == '__main__': main()
{"hexsha": "817dfb37e56e1910223a26eedf4c828011033a39", "size": 8689, "ext": "py", "lang": "Python", "max_stars_repo_path": "webcam_inference.py", "max_stars_repo_name": "jwpleow/aanet", "max_stars_repo_head_hexsha": "b83e7b11dfee117114ae7b35645b85e886d3d436", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "webcam_inference.py", "max_issues_repo_name": "jwpleow/aanet", "max_issues_repo_head_hexsha": "b83e7b11dfee117114ae7b35645b85e886d3d436", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "webcam_inference.py", "max_forks_repo_name": "jwpleow/aanet", "max_forks_repo_head_hexsha": "b83e7b11dfee117114ae7b35645b85e886d3d436", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.0148514851, "max_line_length": 167, "alphanum_fraction": 0.6434572448, "include": true, "reason": "import numpy", "num_tokens": 2006}
import os

import numpy as np
import tensorflow.contrib.keras as kr
import torch
from torch.utils.data import Dataset


# Read the vocabulary file
def read_vocab(vocab_dir):
    with open(vocab_dir, 'r', encoding='utf-8', errors='ignore') as fp:
        words = [_.strip() for _ in fp.readlines()]
    word_to_id = dict(zip(words, range(len(words))))
    return words, word_to_id


# Read the (fixed) list of news categories
def read_category():
    categories = ['体育', '财经', '房产', '家居', '教育', '科技', '时尚', '时政', '游戏', '娱乐']
    cat_to_id = dict(zip(categories, range(len(categories))))
    return categories, cat_to_id


# Convert a data file into its id representation
def process_file(filename, word_to_id, cat_to_id, max_length=600):
    contents, labels = [], []
    with open(filename, 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:
            try:
                label, content = line.strip().split('\t')
                if content:
                    contents.append(list(content))
                    labels.append(label)
            except ValueError:
                pass

    data_id, label_id = [], []
    for i in range(len(contents)):
        data_id.append([word_to_id[x] for x in contents[i] if x in word_to_id])  # map each character to its id
        label_id.append(cat_to_id[labels[i]])  # category id of each sample

    # Use keras' pad_sequences to pad every text to a fixed length
    x_pad = kr.preprocessing.sequence.pad_sequences(data_id, max_length)
    x_pad = torch.LongTensor(x_pad)
    # y_pad = kr.utils.to_categorical(label_id, num_classes=len(cat_to_id))  # one-hot labels (unused)

    return x_pad, torch.LongTensor(label_id)


class textData(Dataset):
    """
    Data loading and initialization are handled here.
    """

    def __init__(self, train=False, val=False):
        categories, cat_to_id = read_category()
        words, word_to_id = read_vocab('./dataset/cnews.vocab.txt')
        if train:
            # Load the training data: character ids of each sample and the label ids
            self.data, self.label = process_file('./dataset/cnews.train.txt', word_to_id, cat_to_id, 600)
        if val:
            self.data, self.label = process_file('./dataset/cnews.val.txt', word_to_id, cat_to_id, 600)
        if not train and not val:
            self.data, self.label = process_file('./dataset/cnews.test.txt', word_to_id, cat_to_id, 600)

    def __getitem__(self, index):
        return self.data[index], self.label[index]

    def __len__(self):
        return self.data.shape[0]
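A minimal usage sketch for the textData dataset above, assuming the ./dataset/cnews.* files referenced in __init__ are present; the batch size and shuffling are arbitrary choices here.

from torch.utils.data import DataLoader

train_set = textData(train=True)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

for x_batch, y_batch in train_loader:
    # x_batch: [64, 600] LongTensor of character ids, y_batch: [64] LongTensor of label ids
    print(x_batch.shape, y_batch.shape)
    break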
{"hexsha": "b2a213af9e2487fa822f6ae418fc4d112fa766f2", "size": 2391, "ext": "py", "lang": "Python", "max_stars_repo_path": "cnews_loader.py", "max_stars_repo_name": "RikkyLai/CNews", "max_stars_repo_head_hexsha": "ee8c3597d44c2f765a65a7e5bafa432b305feec5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-06-26T08:06:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-06T07:29:38.000Z", "max_issues_repo_path": "cnews_loader.py", "max_issues_repo_name": "RikkyLai/CNews", "max_issues_repo_head_hexsha": "ee8c3597d44c2f765a65a7e5bafa432b305feec5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2020-09-25T22:42:55.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-10T02:04:43.000Z", "max_forks_repo_path": "cnews_loader.py", "max_forks_repo_name": "RikkyLai/CNews", "max_forks_repo_head_hexsha": "ee8c3597d44c2f765a65a7e5bafa432b305feec5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.2083333333, "max_line_length": 105, "alphanum_fraction": 0.6277708072, "include": true, "reason": "import numpy", "num_tokens": 694}
[STATEMENT]
lemma OclOr_false2[simp]: "(Y or false) = Y"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. (Y or false) = Y
[PROOF STEP]
by(simp add: OclOr_def)
{"llama_tokens": 74, "file": "Featherweight_OCL_UML_Logic", "length": 1}
# Imports from ast import literal_eval as make_tuple import configparser import os import time from PIL import Image import cv2 import imutils import numpy as np from encrypt_archive import p7zip fn_config = 'biometric.cfg' class FacialCamera: def __init__(self, pn_output="./"): """ Initialize application which uses OpenCV + Tkinter. It displays a video stream in a Tkinter window and stores current snapshot on disk. """ # Initialize the video stream, then allow the camera sensor to warm up print("[INFO] starting video stream...") self.vs = cv2.VideoCapture(0) # Capture video frames, 0 is default video camera time.sleep(2.0) # Load config config = configparser.ConfigParser() config.read(fn_config) self.pn_guest_images = config['DEFAULT']['pn_guest_images_archive'] self.guest_archive = p7zip(self.pn_guest_images) self.camera_rot = int(config['DEFAULT']['camera_rot']) self.image_width = int(config['DEFAULT']['image_width']) self.max_capture_interval = float(config['DEFAULT']['capture_interval']) self.max_capture_length = int(config['DEFAULT']['max_capture_length']) self.max_images = int(config['DEFAULT']['max_images']) # Capture Vars self.curr_pic = None # Current image from the camera self.gst_capture = None self.start_time = time.time() self.save_time = time.time() self.pic_num = None self.pn_gstcap_out = None # Face Detection Model self.min_detec_conf = float(config['DEFAULT']['min_detec_conf']) self.min_face_px = make_tuple(config['DEFAULT']['min_face_px']) pn_detector_model = config['DEFAULT']['pn_detector_model'] self.trainRBGavg = make_tuple(config['DEFAULT']['detector_trainrgbavg']) print("[INFO] loading face detector and embedding model...") protoPath = os.path.sep.join([pn_detector_model, "deploy.prototxt"]) modelPath = os.path.sep.join([pn_detector_model, "res10_300x300_ssd_iter_140000.caffemodel"]) self.detector = cv2.dnn.readNetFromCaffe(protoPath, modelPath) self.detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU) # Face Recognition (extract/recognize embeddings) Model self.min_recog_prob = float(config['DEFAULT']['min_recog_prob']) fn_embedding_model = config['DEFAULT']['fn_embedding_model'] self.embedder = cv2.dnn.readNetFromTorch(fn_embedding_model) self.embedder.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU) self.gst_identify = False self.guest_ids = {} # Guest Info (update outside of function) self.known_guest_meta = None def query_camera(self): """ Query camera for current image. """ ok, orig_pic = self.vs.read() # Read video stream if ok: # If no errors orig_pic = imutils.rotate(orig_pic, angle=self.camera_rot) curr_pic = imutils.resize(orig_pic, width=self.image_width) return curr_pic, orig_pic else: return None, None def convert_imgpil(self, pic): """ Convert image to something that can be saved. """ curr_pic = cv2.cvtColor(pic, cv2.COLOR_BGR2RGBA) return Image.fromarray(curr_pic) # Convert image for PIL def save_pic(self, path_pic, pic): """ Save image to disk. """ path_dir = os.path.dirname(path_pic) if not os.path.exists(path_dir): print("[INFO] Directory \"{}\" does not exist, creating..." .format(path_dir)) os.makedirs(path_dir) cv2.imwrite(path_pic, pic) def save_pic_archive(self, path_pic, pic): """ Save image to archive on disk. """ self.guest_archive.add_image(path_pic, pic) def guest_capture_func(self, pic_save, pic_display): """ Capture guest (capture images to be trained upon in separate step). 
""" capture_time_curr = time.time() (pic_height, pic_width) = pic_display.shape[:2] image_blob = cv2.dnn.blobFromImage( cv2.resize(pic_display, (300, 300)), 1.0, (300, 300), self.trainRBGavg, swapRB=False, crop=False) # Use previously loaded face detector on the blob self.detector.setInput(image_blob) detections = self.detector.forward() # Loop over detected faces for i in range(0, detections.shape[2]): curr_conf = detections[0, 0, i, 2] if curr_conf > self.min_detec_conf: bound_box = detections[0, 0, i, 3:7] * np.array([pic_width, pic_height, pic_width, pic_height]) (x_start, y_start, x_end, y_end) = bound_box.astype("int") # Only detect faces fully in the frame if x_start < 0 or y_start < 0: continue if x_end > self.image_width or y_end > self.image_width: continue # Draw face bounding box cv2.rectangle(pic_display, (x_start, y_start), (x_end, y_end), (0, 255, 0), 2) elap_seconds = capture_time_curr - self.capture_time_prev if elap_seconds >= self.max_capture_interval and \ np.any(detections[0, 0, :, 2] > self.min_detec_conf): # Capture image and save to file pn_pic = os.path.sep.join([self.pn_gstcap_out, "{}.png".format( str(self.pic_num).zfill(5))]) self.save_pic_archive(pn_pic, pic_save) self.pic_num += 1 self.capture_time_prev = capture_time_curr print("[INFO] Saved image {}.".format(self.pic_num)) if capture_time_curr - self.start_time > self.max_capture_length: print('[INFO] {} seconds elapsed, completed capturing images.' .format(self.max_capture_length)) self.gst_capture = False if self.pic_num >= self.max_images: print('[INFO] Max of {} images captured, completed capturing images.' .format(self.max_images)) self.gst_capture = False return pic_display def determine_guest_info(self, known_guest_meta, guest_id): """ Return guest metadata (if available). """ if known_guest_meta is not None: if guest_id in known_guest_meta: guest_info = known_guest_meta[guest_id] else: print('[ERROR] {} not in guest trained data. ' 'Please delete the folder {}/{} and ' 'run "Embed & Train" again.' .format(guest_id, self.pn_guest_images, guest_id)) guest_info = 'No Guest Info' else: print('[ERROR] Guest data not present.') guest_info = 'Guest Data Load Error' return guest_info def guest_identify_func(self, pic_display): """ Identify guests within a picture. 
""" (pic_height, pic_width) = pic_display.shape[:2] # Create OpenCV image blob image_blob = cv2.dnn.blobFromImage( cv2.resize(pic_display, (300, 300)), 1.0, (300, 300), self.trainRBGavg, swapRB=False, crop=False) # Use previously loaded face detector on the blob self.detector.setInput(image_blob) detections = self.detector.forward() self.guest_ids = {} for i in range(0, detections.shape[2]): # Determine detection confidence curr_confidence = detections[0, 0, i, 2] # Threshold confidence via configuration file if curr_confidence > self.min_detec_conf: # Return bounding box (x,y)-coordinates bound_box = detections[0, 0, i, 3:7] * np.array([pic_width, pic_height, pic_width, pic_height]) (x_start, y_start, x_end, y_end) = bound_box.astype("int") # Return face region of interest dimensions face = pic_display[y_start:y_end, x_start:x_end] # Skip faces below a min size (face_height, face_width) = face.shape[:2] if face_height < self.min_face_px[0] \ or face_width < self.min_face_px[1]: continue # Only detect faces fully in the frame if x_start < 0 or y_start < 0: continue if x_end > self.image_width or y_end > self.image_width: continue # Create OpenCV blob for face region of interest face_blob = cv2.dnn.blobFromImage(cv2.resize(face, (96, 96)), 1.0 / 255, (96, 96), (0, 0, 0), swapRB=True, crop=False) # Pass face blob into embedder, # return 128-D describing vector self.embedder.setInput(face_blob) face_vec = self.embedder.forward() # Use previously loaded recognizer on the face blob # to recognize the face preds = self.recognizer.predict_proba(face_vec)[0] max_pred_ind = np.argmax(preds) prob = preds[max_pred_ind] guest_id = self.label_encoder.classes_[max_pred_ind] # Filter out low classification probabilies # I.e. camera images must have a facial detection # of min_detect_conf and facial recognition # classification probability of min_recog_prob #print(curr_confidence, # Keep for optimizing the detect/recog % # prob, # self.determine_guest_info(self.known_guest_meta, # guest_id)) if prob >= self.min_recog_prob: # Store guest_id info as dict of {guest_id:prob} self.guest_ids[guest_id] = round(prob, 4) # Print guest_info from known_guest_meta data guest_info = self.determine_guest_info(self.known_guest_meta, guest_id) # Write out guest_info and recog probability text = "{:.2f}%: {}".format(round(prob*100, 2), guest_info) y = y_start - 15 if y_start - 15 > 15 else y_start + 15 cv2.rectangle(pic_display, (x_start, y_start), (x_end, y_end), (17, 190, 252), 2) cv2.putText(pic_display, text, (x_start, y), cv2.FONT_HERSHEY_SIMPLEX, 0.45, (17, 190, 252), 2) return pic_display # Show the output frame def destructor(self): """ Destroy the root object and release all resources. """ cv2.destroyAllWindows()
{"hexsha": "8ecd2cc069c8df17bf4eafdadf92b91903ee2393", "size": 12168, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/camera.py", "max_stars_repo_name": "blakeflei/biometric_camera_signin", "max_stars_repo_head_hexsha": "e7c0c1e56d9193bf6bfc9bd9141a1bf4960f305e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/camera.py", "max_issues_repo_name": "blakeflei/biometric_camera_signin", "max_issues_repo_head_hexsha": "e7c0c1e56d9193bf6bfc9bd9141a1bf4960f305e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/camera.py", "max_forks_repo_name": "blakeflei/biometric_camera_signin", "max_forks_repo_head_hexsha": "e7c0c1e56d9193bf6bfc9bd9141a1bf4960f305e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-07T19:46:03.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-07T19:46:03.000Z", "avg_line_length": 41.387755102, "max_line_length": 88, "alphanum_fraction": 0.5290927022, "include": true, "reason": "import numpy", "num_tokens": 2460}
import random, os import argparse import numpy as np import torch import torch.optim as optim import torch.nn.functional as F from tqdm import tqdm from torch.autograd import Variable from transformers import * from models import inference_model from data_loader import DataLoader from torch.nn import NLLLoss import logging import json logger = logging.getLogger(__name__) def eval_result(predicts, labels): main_label = 0 main_correct_count = 0 correct_sum = 0 main_predicted_count = 0 main_total_count = 0 assert len(predicts) == len(labels) for i in range(len(predicts)): if labels[i] <= 1: predicted_label = predicts[i] gold_label = labels[i] if gold_label == predicted_label: correct_sum += 1 if predicted_label == main_label: main_predicted_count += 1 if gold_label == main_label: main_total_count += 1 if predicted_label == gold_label and gold_label == main_label: main_correct_count += 1 p = (float(main_correct_count) / float(main_predicted_count)) if (main_predicted_count > 0) else 0.0 r = (float(main_correct_count) / float(main_total_count)) if (main_total_count > 0) else 0.0 f = (2.0 * p * r / (p + r)) if (p + r > 0.0) else 0.0 f05 = ((1.0 + 0.5 * 0.5) * p * r / ((0.5 * 0.5 * p) + r)) if (p + r > 0.0) else 0.0 return {"p":p, "r":r, "f":f, "f05":f05} def eval_model(model, validset_reader, args): model.eval() predicts = list() labels = list() with torch.no_grad(): for inp_tensor, msk_tensor, seg_tensor, score_tensor in validset_reader: prob = model(inp_tensor, msk_tensor, seg_tensor) predict = torch.max(prob, -1)[1].type_as(score_tensor) predict = predict.view([-1, args.evi_num, args.max_len * 2 + 3]) score_tensor = score_tensor.view([-1, args.evi_num, args.max_len * 2 + 3]) score_tensor = score_tensor[:, 0] predict = predict[:, 0, :] predict = predict.contiguous().view(-1).tolist() score = score_tensor.contiguous().view(-1).tolist() predicts.extend(predict) labels.extend(score) results = eval_result(predicts, labels) return results if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument('--test_path', help='train path') parser.add_argument("--batch_size", default=16, type=int, help="Total batch size for training.") parser.add_argument('--bert_pretrain', required=True) parser.add_argument('--checkpoint', required=True) parser.add_argument("--bert_hidden_dim", default=768, type=int, help="Total batch size for training.") parser.add_argument("--evi_num", default=5, type=int, help="evidence number") parser.add_argument("--max_len", default=120, type=int, help="The maximum total input sequence length after WordPiece tokenization. Sequences " "longer than this will be truncated, and sequences shorter than this will be padded.") args = parser.parse_args() handlers = [logging.StreamHandler()] logging.basicConfig(format='[%(asctime)s] %(levelname)s: %(message)s', level=logging.DEBUG, datefmt='%d-%m-%Y %H:%M:%S', handlers=handlers) logger.info(args) logger.info('Start training!') tokenizer = AutoTokenizer.from_pretrained(args.bert_pretrain) logger.info("loading training set") validset_reader = DataLoader(args.test_path, tokenizer, args, batch_size=args.batch_size, hyp_flag=False, test=True) logger.info('initializing estimator model') bert_model = AutoModel.from_pretrained(args.bert_pretrain) bert_model = bert_model.cuda() model = inference_model(bert_model, args) model.load_state_dict(torch.load(args.checkpoint)['model']) model = model.cuda() logger.info('Start eval!') predict_dict = eval_model(model, validset_reader, args) print (predict_dict)
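A quick, hand-checked example for eval_result above: label 0 is treated as the main class, and entries with gold labels greater than 1 are ignored.

toy_predicts = [0, 0, 0, 1]   # predicted labels
toy_labels = [0, 1, 0, 1]     # gold labels
print(eval_result(toy_predicts, toy_labels))
# p = 2/3 (two of three predicted 0s are correct), r = 1.0 (both gold 0s found),
# f = 0.8, f05 = 5/7 ~ 0.714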
{"hexsha": "f4b5c7cfdf2a0907b6205d4257a618328be1d86d", "size": 4083, "ext": "py", "lang": "Python", "max_stars_repo_path": "model/test_src.py", "max_stars_repo_name": "andreaschari/VERNet", "max_stars_repo_head_hexsha": "e148ad1b0314d77838f5d0035aa946f01597b037", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 31, "max_stars_repo_stars_event_min_datetime": "2021-05-06T13:19:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-24T12:37:14.000Z", "max_issues_repo_path": "model/test_src.py", "max_issues_repo_name": "andreaschari/VERNet", "max_issues_repo_head_hexsha": "e148ad1b0314d77838f5d0035aa946f01597b037", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "model/test_src.py", "max_forks_repo_name": "andreaschari/VERNet", "max_forks_repo_head_hexsha": "e148ad1b0314d77838f5d0035aa946f01597b037", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-09T12:28:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-10T15:11:10.000Z", "avg_line_length": 41.6632653061, "max_line_length": 120, "alphanum_fraction": 0.6505020818, "include": true, "reason": "import numpy", "num_tokens": 987}
#!/usr/bin/python3 """ Generates plots from flow records and fitted models (requires `pandas` and `scipy`). """ import argparse import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt import numpy as np from .lib.data import UNITS, LINE_NBINS, load_data from .lib.plot import plot_pdf, plot_cdf, plot_avg, save_figure, matplotlib_config, MODES_PDF, MODES_CDF from .lib.util import logmsg X_VALUES = ['length', 'size', 'duration', 'rate'] SIZE = 0.6 FIGSIZE = [SIZE * 11.2, SIZE * 6.8] def plot(objects, x_val='length', ext='png', single=False, normalize=True, fft=False, cdf_modes=(), pdf_modes=(), avg_modes=()): data = load_data(objects) idx = None if single: fig = plt.figure(figsize=[FIGSIZE[0] * 2.132, FIGSIZE[1] * 2]) ax = plt.subplot(2, 2, 1) else: fig = plt.figure(figsize=FIGSIZE) ax = plt.subplot(1, 1, 1) plt.subplots_adjust(0, 0, 1, 1) for obj, df in data.items(): if idx is None: idx = np.unique(np.rint(np.geomspace(df.index.min(), df.index.max(), LINE_NBINS)).astype(np.int64)) for what in ['flows', 'packets', 'octets']: logmsg('Drawing CDF', obj, what) plot_cdf(df, idx, x_val, what, mode={'line', 'mixture', *cdf_modes}) ax.set_xlabel(f'Flow {x_val} [{UNITS[x_val]}]') ax.set_ylabel('CDF (Fraction of)') if not single: out = 'cdf' logmsg('Saving', out) save_figure(fig, out, ext=ext) plt.close(fig) logmsg('Done', out) for n, what in enumerate(['flows', 'packets', 'octets']): if single: ax = plt.subplot(2, 2, n + 2, sharex=ax) else: fig, ax = plt.subplots(figsize=FIGSIZE) plt.subplots_adjust(0, 0, 1, 1) for obj, df in data.items(): logmsg('Drawing PDF', obj, what) plot_pdf(df, idx, x_val, what, mode={'line', 'mixture', *pdf_modes}, normalize=normalize, fft=fft) ax.set_xlabel(f'Flow {x_val} [{UNITS[x_val]}]') ax.set_ylabel(f'PDF of {what}') if not single: out = f'pdf-{what}' logmsg('Saving', out) save_figure(fig, out, ext=ext) plt.close(fig) logmsg('Done', out) if single: out = 'single' logmsg('Saving', out) save_figure(fig, out, ext=ext) plt.close(fig) logmsg('Done', out) for what in ['packets', 'octets', 'packet_size']: fig, ax = plt.subplots(figsize=FIGSIZE) for obj, df in data.items(): logmsg('Drawing AVG', obj, what) plot_avg(df, idx, x_val, what, mode={'line', 'mixture', *avg_modes}) ax.set_xlabel(f'Flow {x_val} [{UNITS[x_val]}]') ax.set_ylabel(f"Average {what.replace('_', ' ')} [bytes]") out = f'avg-{what}' logmsg('Saving', out) save_figure(fig, out, ext=ext) plt.close(fig) logmsg('Done', out) def parser(): p = argparse.ArgumentParser(description=__doc__) p.add_argument('--format', default='png', choices=['png', 'pdf'], help='plot file format') p.add_argument('--single', action='store_true', help='plot PDF and CDF in single file') p.add_argument('--no-normalize', action='store_false', help='do not normalize PDF datapoints') p.add_argument('--fft', action='store_true', help='use FFT for calculating KDE') p.add_argument('-P', action='append', default=[], choices=MODES_PDF, help='additional PDF plot modes (can be specified multiple times)') p.add_argument('-C', action='append', default=[], choices=MODES_CDF, help='additional CDF plot modes (can be specified multiple times)') p.add_argument('-x', default='length', choices=X_VALUES, help='x axis value') p.add_argument('histogram', help='csv_hist file to plot') p.add_argument('mixture', nargs='?', help='mixture directory to plot') return p def main(): app_args = parser().parse_args() files = [app_args.histogram] if app_args.mixture: files.append(app_args.mixture) with matplotlib_config(latex=False): plot(files, app_args.x, 
app_args.format, app_args.single, app_args.no_normalize, app_args.fft, app_args.C, app_args.P) if __name__ == '__main__': main()
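A programmatic invocation sketch equivalent to running the module from the command line, mirroring main() above; 'histogram.csv' is a placeholder path for a csv_hist file, and the optional mixture directory is omitted.

app_args = parser().parse_args(['--single', '-x', 'length', 'histogram.csv'])
with matplotlib_config(latex=False):
    plot([app_args.histogram], app_args.x,
         app_args.format, app_args.single, app_args.no_normalize,
         app_args.fft, app_args.C, app_args.P)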
{"hexsha": "a59f5babbb335d7613fe164366a75b4c24aee343", "size": 4264, "ext": "py", "lang": "Python", "max_stars_repo_path": "flow_models/plot.py", "max_stars_repo_name": "piotrjurkiewicz/flow_stats", "max_stars_repo_head_hexsha": "cc97a8381275cb9dd23ed0c3432abffaf4198431", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2019-07-08T09:53:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-19T07:50:11.000Z", "max_issues_repo_path": "flow_models/plot.py", "max_issues_repo_name": "ElsevierSoftwareX/SOFTX-D-21-00003", "max_issues_repo_head_hexsha": "cc97a8381275cb9dd23ed0c3432abffaf4198431", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-02-23T16:01:21.000Z", "max_issues_repo_issues_event_max_datetime": "2021-04-03T02:06:32.000Z", "max_forks_repo_path": "flow_models/plot.py", "max_forks_repo_name": "ElsevierSoftwareX/SOFTX-D-21-00003", "max_forks_repo_head_hexsha": "cc97a8381275cb9dd23ed0c3432abffaf4198431", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-09-27T14:52:54.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-25T07:58:24.000Z", "avg_line_length": 36.7586206897, "max_line_length": 140, "alphanum_fraction": 0.6125703565, "include": true, "reason": "import numpy", "num_tokens": 1155}
[STATEMENT]
lemma module_pair_with_imp_module_with[explicit_ab_group_add]:
  "module_on_with S (+) (-) uminus 0 s"
  "module_on_with T (+) (-) uminus 0 t"
  if "module_pair_on_with S T (+) (-) uminus 0 s (+) (-) uminus 0 t"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. module_on_with S (+) (-) uminus (0::'a) s &&& module_on_with T (+) (-) uminus (0::'b) t
[PROOF STEP]
using that
[PROOF STATE]
proof (prove)
using this:
module_pair_on_with S T (+) (-) uminus (0::'a) s (+) (-) uminus (0::'b) t

goal (1 subgoal):
 1. module_on_with S (+) (-) uminus (0::'a) s &&& module_on_with T (+) (-) uminus (0::'b) t
[PROOF STEP]
unfolding module_pair_on_with_def
[PROOF STATE]
proof (prove)
using this:
module_on_with S (+) (-) uminus (0::'a) s \<and> module_on_with T (+) (-) uminus (0::'b) t

goal (1 subgoal):
 1. module_on_with S (+) (-) uminus (0::'a) s &&& module_on_with T (+) (-) uminus (0::'b) t
[PROOF STEP]
by simp_all
{"llama_tokens": 422, "file": null, "length": 3}
import pandas as pd
from datetime import datetime, timedelta

BUFFER_MAX = 700
BASE_SIZE = 500


class DataBuffer(object):
    def __init__(self, market, timeframe):
        self.market = market
        self.timeframe = timeframe
        self.buffer = []
        self.last_time = None

    def add(self, tohlcv_list):
        if tohlcv_list is None:
            return []
        if len(tohlcv_list) == 0:
            return []
        if len(tohlcv_list) >= BUFFER_MAX:
            print('Bad Data size')
            return []
        for tohlcv in tohlcv_list:
            t = tohlcv[0]
            if self.last_time is None:
                self.buffer.append(tohlcv)
                self.last_time = t
                continue
            if t <= self.last_time:
                continue
            self.buffer.append(tohlcv)
            print('t:', t, 'last:', self.last_time)
            print("(+) Market: ", self.market, self.timeframe.symbol, " Data: ", tohlcv)
            self.last_time = t
        return self.flush()

    def flush(self):
        n = len(self.buffer)
        if n >= BUFFER_MAX:
            d = self.buffer.copy()
            out = d[:BASE_SIZE]
            self.buffer = d[BASE_SIZE:]
            self.last_time = self.buffer[-1][0]
            return out
        else:
            return []

    def clear(self):
        out = self.buffer.copy()
        self.buffer = []
        self.last_time = None
        return out

    def data(self):
        n = len(self.buffer)
        return (n, self.buffer, self.last_time)

    def save(self, filepath):
        df = pd.DataFrame(data=self.buffer,
                          columns=['Time', 'Open', 'High', 'Low', 'Close', 'Volume'])
        df.to_csv(filepath, index=False)

    @classmethod
    def baseSize(cls):
        return BASE_SIZE

    @classmethod
    def maxSize(cls):
        return BUFFER_MAX


if __name__ == '__main__':
    from types import SimpleNamespace

    t0 = datetime.now()
    t1 = datetime(t0.year, t0.month, t0.day, t0.hour, t0.minute)
    # The constructor requires a market name and a timeframe object with a
    # `symbol` attribute; placeholder values are used for this self-test.
    data = DataBuffer('TEST-MARKET', SimpleNamespace(symbol='M10'))
    for i in range(3):
        t1 += timedelta(minutes=10)
        t2 = t1 + timedelta(minutes=1)
        t3 = t2 + timedelta(minutes=1)
        t4 = t3 + timedelta(minutes=1)
        t5 = t4 + timedelta(minutes=1)
        tohlcv = [[t1, 100.0, 200.0, 300.0, 400.0, 0.0],
                  [t2, 100.0, 200.0, 300.0, 400.0, 0.0],
                  [t3, 100.0, 200.0, 300.0, 400.0, 0.0],
                  [t4, 100.0, 200.0, 300.0, 400.0, 0.0],
                  [t5, 100.0, 200.0, 300.0, 400.0, 0.0]]
        print("---", i, "----")
        print("Before Buffer:", data.buffer)
        flu = data.add(tohlcv)
        print("       Flush:", flu)
        print("After Buffer:", data.buffer)
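A separate short illustration of the flush threshold: once the buffer holds BUFFER_MAX (700) rows, flush() hands back the oldest BASE_SIZE (500) rows and keeps the remaining 200. The constructor arguments are placeholders, since add() is not exercised on this path.

buf = DataBuffer('TEST-MARKET', None)
buf.buffer = [[i, 0.0, 0.0, 0.0, 0.0, 0.0] for i in range(DataBuffer.maxSize())]
out = buf.flush()
print(len(out), len(buf.buffer))   # 500 200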
{"hexsha": "867192044fda9cb55afd551c2708e4dffa36f597", "size": 2791, "ext": "py", "lang": "Python", "max_stars_repo_path": "common/DataBuffer.py", "max_stars_repo_name": "Aquaware/MarketAlertWithXM", "max_stars_repo_head_hexsha": "6cfbc26f7b32880ff9a6911599b4a9614345e505", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "common/DataBuffer.py", "max_issues_repo_name": "Aquaware/MarketAlertWithXM", "max_issues_repo_head_hexsha": "6cfbc26f7b32880ff9a6911599b4a9614345e505", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "common/DataBuffer.py", "max_forks_repo_name": "Aquaware/MarketAlertWithXM", "max_forks_repo_head_hexsha": "6cfbc26f7b32880ff9a6911599b4a9614345e505", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.4795918367, "max_line_length": 103, "alphanum_fraction": 0.5019706198, "include": true, "reason": "import numpy", "num_tokens": 753}
# -*- coding: utf-8 -*- """ This module contains a local planner to perform low-level waypoint following based on PID controllers. """ # Author: Runsheng Xu <rxx3386@ucla.edu> # License: MIT from collections import deque from enum import Enum import statistics import math import carla import numpy as np from opencda.core.common.misc import distance_vehicle, draw_trajetory_points, cal_distance_angle from opencda.core.plan.spline import Spline2D class RoadOption(Enum): """ RoadOption represents the possible topological configurations when moving from a segment of lane to other. """ VOID = -1 LEFT = 1 RIGHT = 2 STRAIGHT = 3 LANEFOLLOW = 4 CHANGELANELEFT = 5 CHANGELANERIGHT = 6 class LocalPlanner(object): """ LocalPlanner implements the basic behavior of following a trajectory of waypoints that is generated on-the-fly. The low-level motion of the vehicle is computed by using lateral and longitudinal PID controllers. When multiple paths are available (intersections) this local planner makes a random choice. Parameters -agent : carla.agent The carla.agent that applying vehicle contorl. -carla_map : carla.map The HD map of the current simulation world. -config : dict The configuration dictionary of the trajectory planning module. Attributes -_vehicle : carla.vehicle The caral vehicle objcet. -_ego_pos : carla.position The current position of the ego vehicle. -_ego_speed : float The current speed of the ego vehicle. -waypoints_queue : deque The waypoint deque of the current plan. -_waypoint_buffer : deque A buffer deque to store waypoints of the next steps. -_long_plan_debug : list A list that stores the waypoints of global plan for debug purposes. -_trajectory_buffer : deque A deque buffer that stores the current trajectory. -_history_buffer : deque A deque buffer that stores the trajectory history of the ego vehicle. -lane_change : boolean A indicator used to identify whether lane change is operated -lane_id_change : boolean In some corner cases, the id is not changed but we regard it as lane change due to large lateral diff. """ # Minimum distance to target waypoint as a percentage # (e.g. within 80% of total distance) def __init__(self, agent, carla_map, config_yaml): self._vehicle = agent.vehicle self._map = carla_map self._ego_pos = None self._ego_speed = None # waypoint pop out thresholding self._min_distance = config_yaml['min_dist'] self._buffer_size = config_yaml['buffer_size'] # global route self.waypoints_queue = deque(maxlen=20000) # waypoint route self._waypoint_buffer = deque(maxlen=self._buffer_size) # trajectory buffer self._long_plan_debug = [] self._trajectory_buffer = deque(maxlen=30) self._history_buffer = deque(maxlen=3) self.trajectory_update_freq = config_yaml['trajectory_update_freq'] # trajectory sampling rate self.dt = config_yaml['trajectory_dt'] # used to identify whether lane change is operated self.lane_change = False # In some corner cases, the id is not changed but we regard it as lane change due to large lateral diff self.lane_id_change = False # debug option self.debug = config_yaml['debug'] self.debug_trajectory = config_yaml['debug_trajectory'] def set_global_plan(self, current_plan, clean=False): """ Sets new global plan. Args: -clean (boolean): Indicator of whether to clear the global plan. -current_plan (list): list of waypoints in the actual plan. 
""" for elem in current_plan: self.waypoints_queue.append(elem) if clean: self._waypoint_buffer.clear() for _ in range(self._buffer_size): if self.waypoints_queue: self._waypoint_buffer.append( self.waypoints_queue.popleft()) else: break def update_information(self, ego_pos, ego_speed): """ Update the ego position and speed for trajectory planner. Args: -ego_pos (carla.Transform): Ego position from localization module. -ego_speed (float): Ego speed(km/h) from localization module. """ self._ego_pos = ego_pos self._ego_speed = ego_speed def get_trajetory(self): """ Get the trajetory. """ return self._trajectory_buffer def generate_path(self): """ Generate the smooth path using cubic spline. Returns: -rx (list): List of planned path points' x coordinates. -ry (list): List of planned path points' y coordinates. -ryaw (list): List of planned path points' yaw angles. -rk (list): List of planned path points' curvatures. """ # used to save all key spline node x = [] y = [] # [m] distance of each interpolated points ds = 0.1 # retrieve current location, yaw angle todo: this should comes from self._egopos current_location = self._ego_pos.location current_yaw = self._ego_pos.rotation.yaw # retrieve the corresponding waypoint of the current location current_wpt = self._map.get_waypoint(current_location).next(1)[0] current_wpt_loc = current_wpt.transform.location # retrieve the future and past waypoint to check whether a lane change is gonna operated future_wpt = self._waypoint_buffer[-1][0] previous_wpt = self._history_buffer[0][0] if len(self._history_buffer) > 0 else current_wpt # check lateral offset from previous waypoint to current waypoint vec_norm, angle = cal_distance_angle(previous_wpt.transform.location, future_wpt.transform.location, future_wpt.transform.rotation.yaw) # distance in the lateral direction lateral_diff = abs(vec_norm * math.sin(math.radians(angle - 1 if angle > 90 else angle + 1))) boundingbox = self._vehicle.bounding_box veh_width = 2 * abs(boundingbox.location.y - boundingbox.extent.y) lane_width = current_wpt.lane_width is_lateral_within_range = veh_width < lateral_diff < 2 * lane_width # check if the vehicle is in lane change based on lane id and lateral offset self.lane_id_change = (future_wpt.lane_id != current_wpt.lane_id or previous_wpt.lane_id != future_wpt.lane_id) self.lane_change = self.lane_id_change or is_lateral_within_range _, angle = cal_distance_angle(self._waypoint_buffer[0][0].transform.location, current_location, current_yaw) # we consider history waypoint to generate trajectory index = 0 for i in range(len(self._history_buffer)): prev_wpt = self._history_buffer[i][0].transform.location _, angle = cal_distance_angle(prev_wpt, current_location, current_yaw) # make sure the history waypoint is already passed by if angle > 90 and not self.lane_change: x.append(prev_wpt.x) y.append(prev_wpt.y) index += 1 if self.lane_change: x.append(prev_wpt.x) y.append(prev_wpt.y) index += 1 # to make sure the vehicle is stable during lane change, we don't include any current position if self.lane_change: _, angle = cal_distance_angle(self._waypoint_buffer[0][0].transform.location, current_location, current_yaw) print('lane change') # if the vehicle starts lane change at the very start if len(x) == 0 or len(y) == 0: x.append(current_location.x) y.append(current_location.y) else: _, angle = cal_distance_angle(current_wpt_loc, current_location, current_yaw) # we prefer to use waypoint as the current position for path generation if the waypoint is # in front of us. 
This is because waypoint always sits in the center if angle < 90: x.append(current_wpt_loc.x) y.append(current_wpt_loc.y) else: x.append(current_location.x) y.append(current_location.y) # used to filter the waypoints that are too close prev_x = x[max(0, index - 1)] if self.lane_change else x[index] prev_y = y[max(0, index - 1)] if self.lane_change else y[index] for i in range(len(self._waypoint_buffer)): cur_x = self._waypoint_buffer[i][0].transform.location.x cur_y = self._waypoint_buffer[i][0].transform.location.y if abs(prev_x - cur_x) < 0.5 and abs(prev_y - cur_y) < 0.5: continue prev_x = cur_x prev_y = cur_y x.append(cur_x) y.append(cur_y) # Cubic Spline Interpolation calculation sp = Spline2D(x, y) diff_x = current_location.x - sp.sx.y[0] diff_y = current_location.y - sp.sy.y[0] diff_s = np.hypot(diff_x, diff_y) # we only need the interpolation points after current position s = np.arange(diff_s, sp.s[-1], ds) # calculate interpolation points rx, ry, ryaw, rk = [], [], [], [] self._long_plan_debug = [] # we only need the interpolation points until next waypoint for (i, i_s) in enumerate(s): ix, iy = sp.calc_position(i_s) if abs(ix - x[index]) <= ds and abs(iy - y[index]) <= ds: continue if i <= len(s) //2: self._long_plan_debug.append(carla.Transform(carla.Location(ix, iy, 0))) rx.append(ix) ry.append(iy) rk.append(max(min(sp.calc_curvature(i_s), 0.2), -0.2)) ryaw.append(sp.calc_yaw(i_s)) return rx, ry, rk, ryaw def generate_trajectory(self, rx, ry, rk): """ Sampling the generated path and assign speed to each point. Args: -rx (list): List of planned path points' x coordinates. -ry (list): List of planned path points' y coordinates. -rk (list): List of planned path points' curvatures. -debug (boolean): whether to draw the whole plan path """ # unit distance for interpolation points ds = 0.1 # unit sampling resolution dt = self.dt target_speed = self._target_speed current_speed = self._ego_speed # sample the trajectory by 0.1 second sample_num = 2.0 // dt break_flag = False current_speed = current_speed / 3.6 sample_resolution = 0 # use mean curvature to constrain the speed mean_k = 0.0001 if len(rk) < 2 else abs(statistics.mean(rk)) # v^2 <= a_lat_max / curvature, we assume 3.6 is the maximum lateral acceleration target_speed = min(target_speed, np.sqrt(5.0 / (mean_k + 10e-6)) * 3.6) # print('Vehicle Id:%d, current speed %f and target speed is %f' % (self._vehicle.id, # current_speed * 3.6, target_speed)) max_acc = 3.5 # todo: hard-coded, need to be tuned acceleration = max(min(max_acc, (target_speed / 3.6 - current_speed) / dt), -6.5) for i in range(1, int(sample_num) + 1): sample_resolution += current_speed * dt + 0.5 * acceleration * dt ** 2 current_speed += acceleration * dt # print(sample_resolution) if int(sample_resolution // ds - 1) >= len(rx): sample_x = rx[-1] sample_y = ry[-1] break_flag = True else: sample_x = rx[max(0, int(sample_resolution // ds - 1))] sample_y = ry[max(0, int(sample_resolution // ds - 1))] self._trajectory_buffer.append((carla.Transform(carla.Location(sample_x, sample_y, self._waypoint_buffer[0][ 0].transform.location.z + 0.5)), target_speed)) if break_flag: break def pop_buffer(self, vehicle_transform): """ Remove waypoints the ego vehicle has achieved. 
""" max_index = -1 for i, (waypoint, _) in enumerate(self._waypoint_buffer): if distance_vehicle( waypoint, vehicle_transform) < self._min_distance: max_index = i if max_index >= 0: for i in range(max_index + 1): if self._history_buffer: prev_wpt = self._history_buffer[-1] incoming_wpt = self._waypoint_buffer.popleft() if abs(prev_wpt[0].transform.location.x - incoming_wpt[0].transform.location.x) > 0.5 or \ abs(prev_wpt[0].transform.location.y - incoming_wpt[0].transform.location.y) > 0.5: self._history_buffer.append(incoming_wpt) else: self._history_buffer.append(self._waypoint_buffer.popleft()) if self._trajectory_buffer: max_index = -1 for i, (waypoint, _,) in enumerate(self._trajectory_buffer): if distance_vehicle( waypoint, vehicle_transform) < max(self._min_distance - 1, 1): max_index = i if max_index >= 0: for i in range(max_index + 1): self._trajectory_buffer.popleft() def run_step(self, rx, ry, rk, target_speed=None, trajectory=None, following=False): """ Execute one step of local planning which involves running the longitudinal and lateral PID controllers to follow the smooth waypoints trajectory. Args: -rx (list): List of planned path points' x coordinates. -ry (list): List of planned path points' y coordinates. -ryaw (list): List of planned path points' yaw angles. -rk (list): List of planned path points' curvatures. -following (boolean): Indicator of whether the vehicle is under following status. -trajectory (list): Pre-generated car-following trajectory only for platoon members. -target_speed (float): The ego vehicle's desired speed. Returns: -speed (float): Next trajectory point's target speed -waypoint (carla.waypoint): Next trajectory point's waypoint. """ self._target_speed = target_speed # Buffering the waypoints. Always keep the waypoint buffer alive todo:remove the hard coded if len(self._waypoint_buffer) < 9: for i in range(self._buffer_size - len(self._waypoint_buffer)): if self.waypoints_queue: self._waypoint_buffer.append( self.waypoints_queue.popleft()) else: break # we will generate the trajectory only if it is not a following vehicle in the platooning if not trajectory and len(self._trajectory_buffer) < self.trajectory_update_freq and not following: self._trajectory_buffer.clear() self.generate_trajectory(rx, ry, rk) elif trajectory: self._trajectory_buffer = trajectory.copy() # Target waypoint self.target_waypoint, self._target_speed = \ self._trajectory_buffer[min(1, len(self._trajectory_buffer) - 1)] # Purge the queue of obsolete waypoints vehicle_transform = self._ego_pos self.pop_buffer(vehicle_transform) if self.debug_trajectory: draw_trajetory_points(self._vehicle.get_world(), self._long_plan_debug, color=carla.Color(0, 255, 0), size=0.05, lt=0.1) # draw_trajetory_points(self._vehicle.get_world(), # self._trajectory_buffer, size=0.1, arrow_size=0.2, z=0.1, lt=0.1) if self.debug: draw_trajetory_points(self._vehicle.get_world(), self._waypoint_buffer, z=0.1, size=0.1, color=carla.Color(0, 0, 255), lt=0.2) draw_trajetory_points(self._vehicle.get_world(), self._history_buffer, z=0.1, size=0.1, color=carla.Color(255, 0, 255), lt=0.2) return self._target_speed, self.target_waypoint.transform.location \ if hasattr(self.target_waypoint, 'is_junction') else self.target_waypoint.location
{"hexsha": "49597fcebc57d475af73fd0f984eb036b05548a5", "size": 17603, "ext": "py", "lang": "Python", "max_stars_repo_path": "opencda/core/plan/local_planner_behavior.py", "max_stars_repo_name": "xiaxin2000/OpenCDA-Documents", "max_stars_repo_head_hexsha": "1ad4b368d4287dae8b282bac1665816a496d57c6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-17T10:45:33.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-17T10:45:33.000Z", "max_issues_repo_path": "opencda/core/plan/local_planner_behavior.py", "max_issues_repo_name": "xiaxin2000/OpenCDA-Documents", "max_issues_repo_head_hexsha": "1ad4b368d4287dae8b282bac1665816a496d57c6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "opencda/core/plan/local_planner_behavior.py", "max_forks_repo_name": "xiaxin2000/OpenCDA-Documents", "max_forks_repo_head_hexsha": "1ad4b368d4287dae8b282bac1665816a496d57c6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-10-01T19:41:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-01T19:41:52.000Z", "avg_line_length": 40.8422273782, "max_line_length": 119, "alphanum_fraction": 0.5891609385, "include": true, "reason": "import numpy", "num_tokens": 3828}
import numpy as np import tensorflow as tf import json import baseline import os from tensorflow.python.framework.errors_impl import NotFoundError import mead.utils import mead.exporters from mead.tf.signatures import SignatureInput, SignatureOutput from mead.tf.preprocessor import PreprocessorCreator from baseline.utils import export from baseline.tf.tfy import get_vocab_file_suffixes FIELD_NAME = 'text/tokens' __all__ = [] @export(__all__) class TensorFlowExporter(mead.exporters.Exporter): DEFAULT_VOCABS = {"word", "char"} def __init__(self, task): super(TensorFlowExporter, self).__init__(task) def _run(self, sess, model_file, embeddings_set): pass def get_raw_post(self, tf_example): return tf_example[FIELD_NAME] def restore_model(self, sess, basename): saver = tf.train.Saver() sess.run(tf.tables_initializer()) sess.run(tf.global_variables_initializer()) try: saver.restore(sess, basename) except NotFoundError: saver.restore(sess, basename + ".model") def run(self, model_file, embeddings, output_dir, model_version, use_preproc): embeddings_set = mead.utils.index_by_label(embeddings) with tf.Graph().as_default(): config_proto = tf.ConfigProto(allow_soft_placement=True) with tf.Session(config=config_proto) as sess: sig_input, sig_output, sig_name = self._run(sess, model_file, embeddings_set, use_preproc=use_preproc) output_path = os.path.join(tf.compat.as_bytes(output_dir), tf.compat.as_bytes(str(model_version))) print('Exporting trained model to %s' % output_path) builder = self._create_builder(sess, output_path, sig_input, sig_output, sig_name) builder.save() print('Successfully exported model to %s' % output_dir) @staticmethod def read_vocab(basename, ty): if ty is None: vocab_file = basename ty = 'word' else: vocab_file = "{}-{}.vocab".format(basename, ty) print('Reading {}'.format(vocab_file)) with open(vocab_file, 'r') as f: vocab = json.load(f) # Make a vocab list vocab_list = [''] * (len(vocab) + 1) for v, i in vocab.items(): vocab_list[i] = v tok2index = tf.contrib.lookup.index_table_from_tensor( tf.constant(vocab_list), default_value=0, dtype=tf.string, name='%s2index' % ty ) return tok2index, vocab def load_labels(self, basename): label_file = '%s.labels' % basename with open(label_file, 'r') as f: labels = json.load(f) return labels def _create_example(self, extra_features_required): serialized_tf_example = tf.placeholder(tf.string, name='tf_example') feature_configs = { FIELD_NAME: tf.FixedLenFeature(shape=[], dtype=tf.string), } for other in extra_features_required: feature_configs[other] = tf.FixedLenFeature(shape=[], dtype=tf.string) tf_example = tf.parse_example(serialized_tf_example, feature_configs) return serialized_tf_example, tf_example def _create_vocabs(self, model_file): """ :model_file the path-like object to the model and model name. :vocab_suffixes the list of vocab types. e.g. 'word', 'char', 'ner'. 
""" vocabs = {} indices = {} if os.path.exists(model_file + '.vocab'): indices['word'], vocabs['word'] = TensorFlowExporter.read_vocab(model_file + '.vocab', ty=None) else: vocab_suffixes = get_vocab_file_suffixes(model_file) for suffix in vocab_suffixes: indices[suffix], vocabs[suffix] = TensorFlowExporter.read_vocab(model_file, suffix) return indices, vocabs def assign_char_lookup(self): upchars = tf.constant([chr(i) for i in range(65, 91)]) self.lchars = tf.constant([chr(i) for i in range(97, 123)]) self.upchars_lut = tf.contrib.lookup.index_table_from_tensor(mapping=upchars, num_oov_buckets=1, default_value=-1) def _initialize_embeddings_map(self, vocabs, embeddings_set): """ generate a mapping of vocab_typ (word, char) to the embedding object. """ embeddings = {} for vocab_type in vocabs.keys(): dimension_size = self._get_embedding_dsz(embeddings_set, vocab_type) embeddings[vocab_type] = self._initialize_embedding(dimension_size, vocabs[vocab_type]) return embeddings def _get_embedding_dsz(self, embeddings_set, embed_type): if embed_type == 'word': word_embeddings = self.task.config_params["word_embeddings"] return embeddings_set[word_embeddings["label"]]["dsz"] elif embed_type == 'char': return self.task.config_params["charsz"] else: extra_info = self.task.config_params["extended_embed_info"] if embed_type not in extra_info: raise ValueError("could not find embedding type in configuration. If \ the embedding is not of type 'word' or 'char', please fill in and put \ { %s : {'dsz' : [ENTER_DIMENSION_SIZE_HERE] } } in the \ 'extended_embed_info config object." % (embed_type)) return extra_info[embed_type]['dsz'] def _initialize_embedding(self, dimensions_size, vocab): return baseline.RandomInitVecModel(dimensions_size, vocab, False) def _run_preproc(self, model_params, vocabs, model_file, indices, extra_features_required): serialized_tf_example, tf_example = self._create_example(extra_features_required) raw_posts = tf_example[FIELD_NAME] preprocessed, lengths = self._create_preprocessed_input(tf_example, model_file, indices, extra_features_required) model_params["x"] = preprocessed['word'] if 'char' in vocabs: model_params['xch'] = preprocessed['char'] for other in extra_features_required: model_params[other] = preprocessed[other] return serialized_tf_example, tf_example, raw_posts, lengths def _create_preprocessed_input(self, tf_example, model_file, indices, extra_features_required): """ Create a preprocessor chain inside of the tensorflow graph. """ mxlen, mxwlen = self._get_max_lens(model_file) preprocessor = PreprocessorCreator( indices, self.lchars, self.upchars_lut, self.task, FIELD_NAME, extra_features_required, mxlen, mxwlen ) types = {k: tf.int64 for k in indices} preprocessed, lengths = tf.map_fn( preprocessor.preproc_post, tf_example, dtype=(types, tf.int32), back_prop=False ) return preprocessed, lengths def _get_max_lens(self, base_name): mxlen = self.task.config_params['preproc']['mxlen'] mxwlen = self.task.config_params['preproc'].get('mxwlen') state = baseline.utils.read_json("{}.state".format(base_name)) if 'mxlen' in state: mxlen = state['mxlen'] # What should be called mxwlen is called maxw in the state object of this is for backwards compatibility. if 'maxw' in state: mxwlen = state['maxw'] if 'mxwlen' in state: mxwlen = state['mxwlen'] return mxlen, mxwlen def _create_builder(self, sess, output_path, sig_input, sig_output, sig_name): """ create the SavedModelBuilder with standard endpoints. 
we reuse the classify constants from tensorflow to define the predict endpoint so that we can call the output by classes/scores. """ builder = tf.saved_model.builder.SavedModelBuilder(output_path) classes_output_tensor = tf.saved_model.utils.build_tensor_info( sig_output.classes) output_def_map = { tf.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES: classes_output_tensor } if sig_output.scores is not None: scores_output_tensor = tf.saved_model.utils.build_tensor_info(sig_output.scores) output_def_map[tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES] = scores_output_tensor prediction_signature = ( tf.saved_model.signature_def_utils.build_signature_def( inputs=sig_input.predict, outputs=output_def_map, # we reuse classify constants here. method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME ) ) legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op') definition = {} definition[sig_name] = prediction_signature builder.add_meta_graph_and_variables( sess, [tf.saved_model.tag_constants.SERVING], signature_def_map=definition, legacy_init_op=legacy_init_op) return builder @export(__all__) class ClassifyTensorFlowExporter(TensorFlowExporter): def __init__(self, task): super(ClassifyTensorFlowExporter, self).__init__(task) def _run(self, sess, model_file, embeddings_set, use_preproc=True): indices, vocabs = self._create_vocabs(model_file) extra_features_required = [x for x in vocabs.keys() if x not in TensorFlowExporter.DEFAULT_VOCABS] self.assign_char_lookup() labels = self.load_labels(model_file) mxlen, mxwlen = self._get_max_lens(model_file) model_params = self.task.config_params["model"] if use_preproc: serialized_tf_example, tf_example, raw_posts, lengths = self._run_preproc(model_params, vocabs, model_file, indices, extra_features_required) model_params["pkeep"] = 1 model_params["sess"] = sess model_params["maxs"] = mxlen model_params["maxw"] = mxwlen print(model_params) embeddings = self._initialize_embeddings_map(vocabs, embeddings_set) model = baseline.tf.classify.create_model(embeddings, labels, **model_params) softmax_output = tf.nn.softmax(model.logits) values, indices = tf.nn.top_k(softmax_output, len(labels)) class_tensor = tf.constant(model.labels) table = tf.contrib.lookup.index_to_string_table_from_tensor(class_tensor) classes = table.lookup(tf.to_int64(indices)) self.restore_model(sess, model_file) if use_preproc: sig_input = SignatureInput(serialized_tf_example, raw_posts, extra_features_required) else: sig_input = SignatureInput(None, None, extra_features_required, model=model) sig_output = SignatureOutput(classes, values) return sig_input, sig_output, 'predict_text' @export(__all__) class TaggerTensorFlowExporter(TensorFlowExporter): def __init__(self, task): super(TaggerTensorFlowExporter, self).__init__(task) def _create_model(self, vocabs, labels, embeddings_set, mxlen, model_params): embeddings = self._initialize_embeddings_map(vocabs, embeddings_set) model = baseline.tf.tagger.create_model(labels, embeddings, **model_params) model.create_loss() softmax_output = tf.nn.softmax(model.probs) values, indices = tf.nn.top_k(softmax_output, 1) start_np = np.full((1, 1, len(labels)), -1e4, dtype=np.float32) start_np[:, 0, labels['<GO>']] = 0 start = tf.constant(start_np) model.probs = tf.concat([start, model.probs], 1) if model.crf is True: indices, _ = tf.contrib.crf.crf_decode(model.probs, model.A, tf.constant([mxlen + 1]))## We are assuming the batchsz is 1 here indices = indices[:, 1:] list_of_labels = [''] * len(labels) for label, 
idval in labels.items(): list_of_labels[idval] = label class_tensor = tf.constant(list_of_labels) table = tf.contrib.lookup.index_to_string_table_from_tensor(class_tensor) classes = table.lookup(tf.to_int64(indices)) return classes, values, model def _run(self, sess, model_file, embeddings_set, use_preproc=True): mxlen, mxwlen = self._get_max_lens(model_file) indices, vocabs = self._create_vocabs(model_file) self.assign_char_lookup() labels = self.load_labels(model_file) extra_features_required = [x for x in vocabs.keys() if x not in TensorFlowExporter.DEFAULT_VOCABS] model_params = self.task.config_params["model"] lengths = [] if use_preproc: serialized_tf_example, tf_example, raw_posts, lengths = self._run_preproc(model_params, vocabs, model_file, indices, extra_features_required) model_params["lengths"] = lengths model_params["pkeep"] = 1 model_params["sess"] = sess model_params["maxs"] = mxlen model_params["maxw"] = mxwlen model_params['span_type'] = self.task.config_params['train'].get('span_type') print(model_params) classes, values, model = self._create_model(vocabs, labels, embeddings_set, mxlen, model_params) self.restore_model(sess, model_file) if use_preproc: sig_input = SignatureInput(serialized_tf_example, raw_posts, extra_features_required) else: sig_input = SignatureInput(None, None, extra_features_required + ['lengths'], model=model) sig_output = SignatureOutput(classes, values) return sig_input, sig_output, 'tag_text' @export(__all__) class Seq2SeqTensorFlowExporter(TensorFlowExporter): def __init__(self, task): super(Seq2SeqTensorFlowExporter, self).__init__(task) @staticmethod def read_input_vocab(basename): vocab_file = '%s-1.vocab' % basename with open(vocab_file, 'r') as f: vocab = json.load(f) # Make a vocab list vocab_list = [''] * len(vocab) for v, i in vocab.items(): vocab_list[i] = v word2input = tf.contrib.lookup.index_table_from_tensor( tf.constant(vocab_list), default_value=0, dtype=tf.string, name='word2input' ) return word2input, vocab @staticmethod def read_output_vocab(basename): vocab_file = '%s-2.vocab' % basename with open(vocab_file, 'r') as f: vocab = json.load(f) # Make a vocab list vocab_list = [''] * len(vocab) for v, i in vocab.items(): vocab_list[i] = v output2word = tf.contrib.lookup.index_to_string_table_from_tensor( tf.constant(vocab_list), default_value='<PAD>', name='output2word' ) return output2word, vocab def get_dsz(self, embeddings_set): embeddings_section = self.task.config_params['word_embeddings'] if embeddings_section.get('label', None) is not None: embed_label = embeddings_section['label'] dsz = embeddings_set[embed_label]['dsz'] else: dsz = embeddings_section['dsz'] return dsz def _preproc_post_creator(self): word2input = self.word2input def preproc_post(raw_post): # raw_post is a "scalar string tensor" # (https://www.tensorflow.org/versions/r0.12/api_docs/python/image/encoding_and_decoding) # Split the input string, assuming that whitespace is splitter # The client should perform any required tokenization for us and join on ' ' #raw_post = tf.Print(raw_post, [raw_post]) mxlen = self.task.config_params['preproc']['mxlen'] raw_tokens = tf.string_split(tf.reshape(raw_post, [-1])).values npost = tf.reduce_join(raw_tokens[:mxlen], separator=" ") tokens = tf.string_split(tf.reshape(npost, [-1])) sentence_length = tf.size(tokens) # Convert the string values to word indices (ints) indices = word2input.lookup(tokens) # Reshape them out to the proper length reshaped = tf.sparse_reshape(indices, shape=[-1]) reshaped = 
tf.sparse_reset_shape(reshaped, new_shape=[mxlen])
            # Now convert to a dense representation
            dense = tf.sparse_tensor_to_dense(reshaped)
            dense = tf.contrib.framework.with_shape([mxlen], dense)
            dense = tf.cast(dense, tf.int32)
            return dense, sentence_length

        return preproc_post

    def _run(self, sess, model_file, embeddings_set):
        self.word2input, vocab1 = Seq2SeqTensorFlowExporter.read_input_vocab(model_file)
        self.output2word, vocab2 = Seq2SeqTensorFlowExporter.read_output_vocab(model_file)

        # Make the TF example, network input
        serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
        feature_configs = {
            FIELD_NAME: tf.FixedLenFeature(shape=[], dtype=tf.string),
        }
        tf_example = tf.parse_example(serialized_tf_example, feature_configs)
        raw_posts = tf_example[FIELD_NAME]

        # Run for each post
        dense, length = tf.map_fn(self._preproc_post_creator(), raw_posts, dtype=(tf.int32, tf.int32))

        model_params = self.task.config_params["model"]
        model_params["dsz"] = self.get_dsz(embeddings_set)
        model_params["src"] = dense
        model_params["src_len"] = length
        model_params["mx_tgt_len"] = self.task.config_params["preproc"]["mxlen"]
        model_params["tgt_len"] = 1
        model_params["pkeep"] = 1
        model_params["sess"] = sess
        model_params["predict"] = True
        print(model_params)
        model = baseline.tf.seq2seq.create_model(vocab1, vocab2, **model_params)

        output = self.output2word.lookup(tf.cast(model.best, dtype=tf.int64))
        self.restore_model(sess, model_file)

        sig_input = SignatureInput(serialized_tf_example, raw_posts)
        # the decoded output tokens are exposed as the 'classes' output; no scores here
        sig_output = SignatureOutput(output, None)
        return sig_input, sig_output, 'suggest_text'
{"hexsha": "6b4826fff05aa867f151452b38c5ce2afe074923", "size": 19452, "ext": "py", "lang": "Python", "max_stars_repo_path": "python/mead/tf/exporters.py", "max_stars_repo_name": "bjayakumar/test_vendor", "max_stars_repo_head_hexsha": "e32c1a69754cedcec46d3e76e43a72743ebb8ed8", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "python/mead/tf/exporters.py", "max_issues_repo_name": "bjayakumar/test_vendor", "max_issues_repo_head_hexsha": "e32c1a69754cedcec46d3e76e43a72743ebb8ed8", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "python/mead/tf/exporters.py", "max_forks_repo_name": "bjayakumar/test_vendor", "max_forks_repo_head_hexsha": "e32c1a69754cedcec46d3e76e43a72743ebb8ed8", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.617107943, "max_line_length": 138, "alphanum_fraction": 0.6128418672, "include": true, "reason": "import numpy", "num_tokens": 4084}
""" Problem Statement inner The inner tool returns the inner product of two arrays. import numpy A = numpy.array([0, 1]) B = numpy.array([3, 4]) print numpy.inner(A, B) #Output : 4 outer The outer tool returns the outer product of two arrays. import numpy A = numpy.array([0, 1]) B = numpy.array([3, 4]) print numpy.outer(A, B) #Output : [[0 0] # [3 4]] Task You are given two arrays: A and B. Your task is to compute their inner and outer product. Input Format The first line contains the space separated elements of array A. The second line contains the space separated elements of array B. Output Format First, print the inner product. Second, print the outer product. Sample Input 0 1 2 3 Sample Output 3 [[0 0] [2 3]] """ import numpy a = numpy.array(map(int, raw_input().split()), dtype=numpy.int) b = numpy.array(map(int, raw_input().split()), dtype=numpy.int) print numpy.inner(a, b) print numpy.outer(a, b)
{"hexsha": "b0e2868ee66f4586e914506dedfe5dbc7ad247c4", "size": 981, "ext": "py", "lang": "Python", "max_stars_repo_path": "hackerrank/domain/python/numpy/inner_outer.py", "max_stars_repo_name": "spradeepv/dive-into-python", "max_stars_repo_head_hexsha": "ec27d4686b7b007d21f9ba4f85d042be31ee2639", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hackerrank/domain/python/numpy/inner_outer.py", "max_issues_repo_name": "spradeepv/dive-into-python", "max_issues_repo_head_hexsha": "ec27d4686b7b007d21f9ba4f85d042be31ee2639", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hackerrank/domain/python/numpy/inner_outer.py", "max_forks_repo_name": "spradeepv/dive-into-python", "max_forks_repo_head_hexsha": "ec27d4686b7b007d21f9ba4f85d042be31ee2639", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.2105263158, "max_line_length": 65, "alphanum_fraction": 0.6727828746, "include": true, "reason": "import numpy", "num_tokens": 259}
import torch from torch.utils.data import DataLoader import os.path as osp import cv2 import numpy as np import albumentations as A from albumentations.pytorch import ToTensorV2 from typing import List, Optional, Callable, List, Any from torch.utils.data.dataset import Dataset from .eyepacs import data_transformation class Diagnos(Dataset): def __init__( self, data_root: str, split: str = "test", transformer: Optional[Callable] = None, return_id: bool = False, ): self.data_root = data_root self.split = split self.transformer = transformer self.return_id = return_id self.num_classes = 5 self.img_dir = osp.join(self.data_root, "Images") self.load_list() def load_list(self): split_file = osp.join(self.data_root, self.split + ".csv") self.img_names = [] self.labels = [] with open(split_file, "r") as f: f.readline() # skip the head line for line in f: name, label = line.strip().split(",") self.img_names.append(name) self.labels.append(int(label)) def load_img(self, ind: int) -> np.ndarray: img_path = osp.join(self.img_dir, self.img_names[ind]) img = cv2.imread(img_path) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) return img def __getitem__(self, ind) -> List[Any]: img = self.load_img(ind) label = self.labels[ind] if self.transformer is not None: result = self.transformer(image=img) img = result["image"] ret = [img, label] if self.return_id: ret.append(self.img_names[ind]) return ret def __len__(self) -> int: return len(self.img_names) def __repr__(self) -> str: return ( "Diagnos(data_root={}, split={})\tSamples : {}".format( self.data_root, self.split, self.__len__() ) ) def get_dataset( data_root: str, split: str = "test", return_id: bool = False, ): assert split in [ "test", ], "Split '{}' not supported".format(split) transformer = A.Compose([ A.Normalize(), ]) dataset = Diagnos( data_root=data_root, split=split, transformer=transformer, return_id=return_id, ) return dataset
{"hexsha": "9eeeb9c1ffe0a9a2d682268edca5b0fc102fa6c7", "size": 2418, "ext": "py", "lang": "Python", "max_stars_repo_path": "retinal/data/diagnos.py", "max_stars_repo_name": "by-liu/RetinalApp", "max_stars_repo_head_hexsha": "53173b2b20dfcf613a3a22d6caa5178771d14225", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "retinal/data/diagnos.py", "max_issues_repo_name": "by-liu/RetinalApp", "max_issues_repo_head_hexsha": "53173b2b20dfcf613a3a22d6caa5178771d14225", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "retinal/data/diagnos.py", "max_forks_repo_name": "by-liu/RetinalApp", "max_forks_repo_head_hexsha": "53173b2b20dfcf613a3a22d6caa5178771d14225", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.7234042553, "max_line_length": 67, "alphanum_fraction": 0.5847808106, "include": true, "reason": "import numpy", "num_tokens": 562}
#!/bin/env python3.5 #Author: Saurabh Pathak (phoenix) import matplotlib.pyplot as pl, sys, math as m from decimal import Decimal from numpy import * from numpy.linalg import inv,det tSRiver = tSLand = None class Sampler: '''handles sampling and contains the mouse click handler''' def __init__(self, maxcount): self._samples = [[0]*4] self.maxcount = maxcount self.handle = pl.gcf().canvas.mpl_connect('button_press_event', self) def __len__(self): return len(self.samples) @property def samples(self): if len(self._samples) <= self.maxcount: return self._samples[1:] else: return self._samples[1:self.maxcount + 1] #discard if extra def __call__(self, event): '''click in images to collect samples. samples 9 neighbors in and around the clicked pixel on each click.''' x, y = int(event.xdata), int(event.ydata) xmin = x - 1 if 0 < x else x xmax = x + 2 if x < 511 else x ymin = y - 1 if 0 < y else y ymax = y + 2 if y < 511 else y self._samples = append(self._samples, im[xmin:xmax, ymin:ymax].reshape(abs(xmin-xmax) * abs(ymin-ymax), 4), 0) pl.title('{} pixels sampled...'.format(len(self))) if len(self) == self.maxcount: pl.title('Done! Please close the window.') pl.gcf().canvas.mpl_disconnect(self.handle) pl.draw() def trainSetAccumulate(): global tSRiver, tSLand, im1 displayim(im4, 'Select 50 river samples: Click 6 places', 'Select from RGBI') tSRiver = collectSamples(50) displayim(im4, 'Select 100 non river samples.', 'Select from RGBI') tSLand = collectSamples(100) def collectSamples(size): s = Sampler(size) pl.show() return s.samples def naiveBayes(): global tSRiver, tSLand, im, outimg covRiver, covLand = cov(tSRiver, rowvar=0), cov(tSLand, rowvar=0) icr, icl= inv(covRiver), inv(covLand) meanRiver, meanLand = mean(tSRiver, axis=0), mean(tSLand, axis=0) for i in range(512): for j in range(512): devRiver, devLand = subtract(im[i,j], meanRiver), subtract(im[i,j], meanLand) rivClass = dot(dot(devRiver, icr), devRiver.T) nrivClass = dot(dot(devLand, icl), devLand.T) try: p1 = Decimal(-0.5)/Decimal(m.sqrt(det(covRiver))) * Decimal(m.exp(rivClass)) except OverflowError: outimg[i,j] = 255 continue# as e: print(e, 'Variable in exp: ', rivClass) try: p2 = Decimal(-0.5)/Decimal(m.sqrt(det(covLand))) * Decimal(m.exp(nrivClass)) except OverflowError: continue# as e: # print(e, 'Variable in exp: ', rivClass, ' Setting pixel to 0') class1 = Decimal(P1)*p1 class2 = Decimal(P2)*p2 if class1 >= class2: outimg[i,j] = 255 def displayim(img, prompt, title): pl.imshow(img, origin='upper') pl.title(prompt) pl.gcf().canvas.set_window_title(title) pl.xlim(0,512) pl.ylim(512,0) def printUsage(): print('Usage: bayesian.py P1 P2') quit(1) im1 = pl.imread('data/1.gif')[:,:,0] im2 = pl.imread('data/2.gif')[:,:,0] im3 = pl.imread('data/3.gif')[:,:,0] im4 = pl.imread('data/4.gif') im = dstack((im1,im2,im3,im4[:,:,0])) P1, P2 = None, None if len(sys.argv) != 3: printUsage() P1, P2 = sys.argv[1], sys.argv[2] outimg = zeros((512, 512, 3), dtype='bool') trainSetAccumulate() print('Working...', end='', flush=True) naiveBayes() print('done.') displayim(outimg, 'P(river): '+str(P1)+' P(Non-river): '+str(P2), 'Naive Bayes Output') pl.show()
{"hexsha": "57f65a49c712b9b0bbbf49f1bc5ed554180f8719", "size": 3627, "ext": "py", "lang": "Python", "max_stars_repo_path": "ai_ml_projects/masters_courses/machine_learning/bayesian/bayesian.py", "max_stars_repo_name": "5aurabhpathak/src", "max_stars_repo_head_hexsha": "dda72beba2aaae67542a2f10e89048e86d04cb28", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-07T06:51:18.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-07T06:51:18.000Z", "max_issues_repo_path": "ai_ml_projects/masters_courses/machine_learning/bayesian/bayesian.py", "max_issues_repo_name": "5aurabhpathak/all-I-ve-done", "max_issues_repo_head_hexsha": "dda72beba2aaae67542a2f10e89048e86d04cb28", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ai_ml_projects/masters_courses/machine_learning/bayesian/bayesian.py", "max_forks_repo_name": "5aurabhpathak/all-I-ve-done", "max_forks_repo_head_hexsha": "dda72beba2aaae67542a2f10e89048e86d04cb28", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-11T09:53:22.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-11T09:53:22.000Z", "avg_line_length": 34.875, "max_line_length": 118, "alphanum_fraction": 0.6112489661, "include": true, "reason": "from numpy", "num_tokens": 1064}
% These lines are necessary to adjust the spacing of the heading
\titleformat{\chapter}[hang]{\huge\bfseries}{\thechapter}{1em}{}
\titlespacing{\chapter}{0pt}{0pt}{1cm}

\chapter{Acknowledgments}

Thank your professors, colleagues, funding agencies, friends, and family.

Usually, people sign off by giving the place and date at which the thesis was written.
\\\\
D\"usseldorf, \today
{"hexsha": "20d47498813654a62769788ced142b104f825e18", "size": 378, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/chapters/acknowledgements/acknowledgements.tex", "max_stars_repo_name": "ibrsam/matsci-thesis", "max_stars_repo_head_hexsha": "270b9917a398ae4004e35be1bba6a9c9800fd12f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-05-17T01:21:21.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-17T02:41:58.000Z", "max_issues_repo_path": "source/chapters/acknowledgements/acknowledgements.tex", "max_issues_repo_name": "ibrsam/matsci-thesis", "max_issues_repo_head_hexsha": "270b9917a398ae4004e35be1bba6a9c9800fd12f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-05-17T12:53:37.000Z", "max_issues_repo_issues_event_max_datetime": "2020-05-17T12:53:37.000Z", "max_forks_repo_path": "source/chapters/acknowledgements/acknowledgements.tex", "max_forks_repo_name": "ibrsam/matsci-thesis", "max_forks_repo_head_hexsha": "270b9917a398ae4004e35be1bba6a9c9800fd12f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-16T14:18:40.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-16T14:18:40.000Z", "avg_line_length": 34.3636363636, "max_line_length": 82, "alphanum_fraction": 0.7645502646, "num_tokens": 97}
# This file is part of the Astrometry.net suite. # Licensed under a 3-clause BSD style license - see LICENSE from __future__ import print_function import matplotlib matplotlib.use('Agg') import pylab as plt import sys from astrometry.sdss.dr8 import * import numpy as np def test_astrans(sdss, r,c,f,b): bandnum = band_index(b) sdss.retrieve('frame', r, c, f, b) frame = sdss.readFrame(r, c, f, b) astrans = frame.getAsTrans() sdss.retrieve('photoObj', r, c, f) obj = sdss.readPhotoObj(r, c, f) tab = obj.getTable() #tab.about() x,y = tab.colc[:,bandnum], tab.rowc[:,bandnum] ra,dec = tab.ra, tab.dec for r,d in zip(ra,dec): print('ra,dec', r,d) #print 'py:' x1,y1 = astrans.radec_to_pixel_single_py(r, d) print(' py', x1,y1) #print 'c:' x2,y2 = astrans.radec_to_pixel_single_c(r, d) print(' c', x2,y2) assert(np.abs(x1 - x2) < 1e-6) assert(np.abs(y1 - y2) < 1e-6) r2,d2 = astrans.pixel_to_radec(x, y) plt.clf() plt.plot(ra, dec, 'r.') plt.plot(r2, d2, 'bo', mec='b', mfc='none') plt.savefig('rd.png') r3,d3 = [],[] for xi,yi in zip(x,y): ri,di = astrans.pixel_to_radec(xi, yi) r3.append(ri) d3.append(di) plt.clf() plt.plot(ra, dec, 'r.') plt.plot(r3, d3, 'bo', mec='b', mfc='none') plt.savefig('rd3.png') x2,y2 = astrans.radec_to_pixel(ra, dec) plt.clf() plt.plot(x, y, 'r.') plt.plot(x2, y2, 'bo', mec='b', mfc='none') plt.savefig('xy.png') x3,y3 = [],[] for ri,di in zip(ra, dec): xi,yi = astrans.radec_to_pixel(ri, di) x3.append(xi) y3.append(yi) plt.clf() plt.plot(x, y, 'r.') plt.plot(x3, y3, 'bo', mec='b', mfc='none') plt.savefig('xy3.png') if __name__ == '__main__': sdss = DR8() #test_astrans(sdss, 4623, 1, 203, 'r') test_astrans(sdss, 5065, 1, 68, 'r') sys.exit(0) fnew = sdss.readFrame(4623, 1, 203, 'r', filename='frame-r-004623-1-0203.fits') print('fnew:', fnew) forig = sdss.readFrame(4623, 1, 203, 'r', 'frame-r-004623-1-0203.fits.orig') print('forig:', forig) frame = sdss.readFrame(3712, 3, 187, 'r') print('frame:', frame) img = frame.getImage() print(' image', img.shape) fpobj = sdss.readFpObjc(6581, 2, 135) print('fpobj:', fpobj) fpm = sdss.readFpM(6581, 2, 135, 'i') print('fpm:', fpm) psf = sdss.readPsField(6581, 2, 135) print('psfield:', psf)
{"hexsha": "813abd5597f5078bf564dca9f6eb8c20584ae4e1", "size": 2525, "ext": "py", "lang": "Python", "max_stars_repo_path": "sdss/test_dr8.py", "max_stars_repo_name": "juandesant/astrometry.net", "max_stars_repo_head_hexsha": "47849f0443b890c4a875360f881d2e60d1cba630", "max_stars_repo_licenses": ["Net-SNMP", "Xnet"], "max_stars_count": 460, "max_stars_repo_stars_event_min_datetime": "2015-01-06T13:20:04.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T00:37:55.000Z", "max_issues_repo_path": "sdss/test_dr8.py", "max_issues_repo_name": "juandesant/astrometry.net", "max_issues_repo_head_hexsha": "47849f0443b890c4a875360f881d2e60d1cba630", "max_issues_repo_licenses": ["Net-SNMP", "Xnet"], "max_issues_count": 208, "max_issues_repo_issues_event_min_datetime": "2015-01-08T20:26:38.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-25T15:21:34.000Z", "max_forks_repo_path": "sdss/test_dr8.py", "max_forks_repo_name": "juandesant/astrometry.net", "max_forks_repo_head_hexsha": "47849f0443b890c4a875360f881d2e60d1cba630", "max_forks_repo_licenses": ["Net-SNMP", "Xnet"], "max_forks_count": 173, "max_forks_repo_forks_event_min_datetime": "2015-01-08T18:01:54.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-27T07:27:04.000Z", "avg_line_length": 25.5050505051, "max_line_length": 84, "alphanum_fraction": 0.5691089109, "include": true, "reason": "import numpy", "num_tokens": 901}
[STATEMENT] lemma test_compl_1 [simp]: "is_test x \<Longrightarrow> x + tc x = 1'" [PROOF STATE] proof (prove) goal (1 subgoal): 1. is_test x \<Longrightarrow> x + tc x = 1' [PROOF STEP] by (metis is_test_def local.aux4 local.inf.absorb_iff1 local.inf_commute tc_def)
{"llama_tokens": 114, "file": "Relation_Algebra_Relation_Algebra_Tests", "length": 1}
""" ゼロから学ぶスパイキングニューラルネットワーク - Spiking Neural Networks from Scratch Copyright (c) 2020 HiroshiARAKI. All Rights Reserved. """ import numpy as np import matplotlib.pyplot as plt if __name__ == '__main__': time = 300 dt = 0.5 # Spike Traceを適当に作る spikes = np.zeros(int(time/dt)) # 5本適当にスパイクを立てる for _ in range(5): spikes[np.random.randint(0, int(time/dt))] = 1 # Firing Traceを作成 firing = [] fire = 0 tc = 20 # 時定数 for t in range(int(time/dt)): if spikes[t]: # 発火していれば1を立てる fire = 1 else: # 発火していなければ時間的減衰 fire -= fire / tc firing.append(fire) t = np.arange(0, time, dt) plt.subplot(2, 1, 1) plt.plot(t, spikes, label='Spike Trace') plt.ylabel('Spike Trace') plt.subplot(2, 1, 2) plt.plot(t, firing) plt.ylabel('Firing Trace') plt.xlabel('time [ms]') plt.show()
{"hexsha": "55fe204f56429307ad0535d33f2bdc461fdcf13d", "size": 906, "ext": "py", "lang": "Python", "max_stars_repo_path": "codes/s5-1-2_Trace.py", "max_stars_repo_name": "HiroshiARAKI/snn_from_scratch", "max_stars_repo_head_hexsha": "e26e7ce2bbebaa35ad3e325c09f05c334d753049", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2021-01-30T16:04:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-22T05:06:21.000Z", "max_issues_repo_path": "codes/s5-1-2_Trace.py", "max_issues_repo_name": "HiroshiARAKI/snn_from_scratch", "max_issues_repo_head_hexsha": "e26e7ce2bbebaa35ad3e325c09f05c334d753049", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "codes/s5-1-2_Trace.py", "max_forks_repo_name": "HiroshiARAKI/snn_from_scratch", "max_forks_repo_head_hexsha": "e26e7ce2bbebaa35ad3e325c09f05c334d753049", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-06-10T08:18:11.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-17T08:25:26.000Z", "avg_line_length": 20.1333333333, "max_line_length": 54, "alphanum_fraction": 0.587196468, "include": true, "reason": "import numpy", "num_tokens": 336}
import json

import numpy as np
from sklearn.linear_model import LinearRegression

trainData = json.load(open("train_data.json", "r"))

trainInput = list()
trainOutput = list()

for row in trainData:
    trainInput.append(row['date'])
    trainOutput.append(row['sea_level'])

ti = np.array(trainInput)
# reshape returns a new array; keep the (n_samples, 1) view that scikit-learn expects
ti = ti.reshape(-1, 1)
print(ti)

predictor = LinearRegression(n_jobs=-1)
predictor.fit(X=ti, y=trainOutput)
{"hexsha": "6520e05e17de35daf4f9806387be925e5881da8f", "size": 409, "ext": "py", "lang": "Python", "max_stars_repo_path": "ml.py", "max_stars_repo_name": "Virmak/IOSea", "max_stars_repo_head_hexsha": "1ecbd5df7119a2dcd89bb97834f6c2fac6a55f8d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ml.py", "max_issues_repo_name": "Virmak/IOSea", "max_issues_repo_head_hexsha": "1ecbd5df7119a2dcd89bb97834f6c2fac6a55f8d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ml.py", "max_forks_repo_name": "Virmak/IOSea", "max_forks_repo_head_hexsha": "1ecbd5df7119a2dcd89bb97834f6c2fac6a55f8d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.4761904762, "max_line_length": 51, "alphanum_fraction": 0.7334963325, "include": true, "reason": "import numpy", "num_tokens": 103}
""" Some device functions for doing complex scalar maths with :mod:`numba.cuda`. """ import math import numpy as np import numba as nb from numba import cuda #@cuda.jit(device = True, inline = True) def conj(z): """ Conjugate of a complex number. .. math:: \\begin{align*} (a + ib)^* &= a - ib\\\\ a, b &\\in \\mathbb{R} \\end{align*} Parameters ---------- z : :class:`numpy.cdouble` The complex number to take the conjugate of. Returns ------- cz : :class:`numpy.cdouble` The conjugate of z. """ return (z.real - 1j*z.imag) #@cuda.jit(device = True, inline = True) def complex_abs(z): """ The absolute value of a complex number. .. math:: \\begin{align*} |a + ib| &= \\sqrt{a^2 + b^2}\\\\ a, b &\\in \\mathbb{R} \\end{align*} Parameters ---------- z : :class:`numpy.cdouble` The complex number to take the absolute value of. Returns ------- az : :class:`numpy.double` The absolute value of z. """ return math.sqrt(z.real**2 + z.imag**2)
{"hexsha": "ef1cdc3d48f24b9392632477427c296ba2013161", "size": 1144, "ext": "py", "lang": "Python", "max_stars_repo_path": "spinsim/utilities_old/scalar.py", "max_stars_repo_name": "rpanderson/spinsim", "max_stars_repo_head_hexsha": "8f93b7dd1964290e2cc85ae1c15e73ca31a34bdc", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-11-09T08:45:42.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-09T22:36:54.000Z", "max_issues_repo_path": "spinsim/utilities_old/scalar.py", "max_issues_repo_name": "rpanderson/spinsim", "max_issues_repo_head_hexsha": "8f93b7dd1964290e2cc85ae1c15e73ca31a34bdc", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "spinsim/utilities_old/scalar.py", "max_forks_repo_name": "rpanderson/spinsim", "max_forks_repo_head_hexsha": "8f93b7dd1964290e2cc85ae1c15e73ca31a34bdc", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-02T10:28:50.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-02T10:28:50.000Z", "avg_line_length": 21.1851851852, "max_line_length": 76, "alphanum_fraction": 0.5227272727, "include": true, "reason": "import numpy,import numba,from numba", "num_tokens": 316}
#!/usr/bin/env python # coding: utf-8 # In[1]: import os, sys, gc import time import glob import pickle import copy import json import random from collections import OrderedDict, namedtuple import multiprocessing import threading import traceback from typing import Tuple, List import h5py from tqdm import tqdm, tqdm_notebook import numpy as np import pandas as pd import matplotlib.pyplot as plt import cv2 from PIL import Image import torch import torchvision import torch.nn.functional as F from torch import nn, optim import torch.optim.lr_scheduler as lr_scheduler from torch.utils.data import Dataset, DataLoader from torch.optim.lr_scheduler import CosineAnnealingLR import torchmetrics import pl_bolts import pytorch_lightning as pl from IPython.display import display, clear_output import faiss from modules.AugsDS_v13 import * from modules.eval_functions import * from modules.eval_metrics import evaluate sys.path.append('./modules') # In[2]: def do_inference(model, args, ckpt_filename): # # Building model # In[3]: # # Inference configuration # In[5]: do_simple_augmentation = False K = 500 faiss_gpu_id = args.faiss_gpu_id # In[6]: while args.DS_INPUT_DIR[-1] in ['/', r'\\']: args.DS_INPUT_DIR = args.DS_INPUT_DIR[:-1] # Path where the rescaled images will be saved args.DS_DIR = f'{args.DS_INPUT_DIR}_jpg_{args.DATASET_WH[0]}x{args.DATASET_WH[1]}' print(args) # # Data Source # In[7]: if any( [not os.path.exists(os.path.join(args.DS_DIR, folder)) for folder in args.ALL_FOLDERS] ): assert os.path.exists(args.DS_INPUT_DIR), f'DS_INPUT_DIR not found: {args.DS_INPUT_DIR}' resize_dataset( ds_input_dir=args.DS_INPUT_DIR, ds_output_dir=args.DS_DIR, output_wh=args.DATASET_WH, output_ext='jpg', num_workers=args.N_WORKERS, ALL_FOLDERS=args.ALL_FOLDERS, verbose=False, ) print('Paths:') print(' - DS_INPUT_DIR:', args.DS_INPUT_DIR) print(' - DS_DIR: ', args.DS_DIR) assert os.path.exists(args.DS_DIR), f'DS_DIR not found: {args.DS_DIR}' try: public_ground_truth_path = os.path.join(args.DS_DIR, 'public_ground_truth.csv') public_gt = pd.read_csv( public_ground_truth_path) except: public_ground_truth_path = os.path.join(args.DS_INPUT_DIR, 'public_ground_truth.csv') public_gt = pd.read_csv( public_ground_truth_path) # # Datasets # In[8]: ds_qry_full = FacebookDataset( samples_id_v=[f'Q{i:05d}' for i in (range(50_000, 100_000) if args.phase_2 else range(0, 50_000))] , do_augmentation=False, ds_dir=args.DS_DIR, output_wh=args.OUTPUT_WH, channel_first=True, norm_type= args.img_norm_type, verbose=True, ) # ds_qry_full.plot_sample(4) ds_ref_full = FacebookDataset( samples_id_v=[f'R{i:06d}' for i in range(1_000_000)], do_augmentation=False, ds_dir=args.DS_DIR, output_wh=args.OUTPUT_WH, channel_first=True, norm_type=args.img_norm_type, verbose=True, ) # ds_ref_full.plot_sample(4) ds_trn_full = FacebookDataset( samples_id_v=[f'T{i:06d}' for i in range(1_000_000)], do_augmentation=False, ds_dir=args.DS_DIR, output_wh=args.OUTPUT_WH, channel_first=True, norm_type=args.img_norm_type, verbose=True, ) # ds_trn_full.plot_sample(4) dl_qry_full = DataLoader( ds_qry_full, batch_size=args.BATCH_SIZE, num_workers=args.N_WORKERS, shuffle=False, ) dl_ref_full = DataLoader( ds_ref_full, batch_size=args.BATCH_SIZE, num_workers=args.N_WORKERS, shuffle=False, ) dl_trn_full = DataLoader( ds_trn_full, batch_size=args.BATCH_SIZE, num_workers=args.N_WORKERS, shuffle=False, ) # In[9]: aug = '_AUG' if do_simple_augmentation else '' submission_path = ckpt_filename.replace('.ckpt', f'_{args.OUTPUT_WH[0]}x{args.OUTPUT_WH[1]}{aug}_REF.h5') 
scores_path = submission_path.replace('.h5', '_match_d.pickle') # ### Query embeddings # In[10]: embed_qry_d = calc_embed_d( model, dataloader=dl_qry_full, do_simple_augmentation=do_simple_augmentation, ) # ### Reference embeddings # In[12]: if not os.path.exists(submission_path): embed_ref_d = calc_embed_d( model, dataloader=dl_ref_full, do_simple_augmentation=do_simple_augmentation ) else: _, embed_ref_d = read_submission(submission_path) save_submission( embed_qry_d, embed_ref_d, save_path=submission_path, ) match_d = calc_match_scores(embed_qry_d, embed_ref_d, k=K, gpu_id=faiss_gpu_id) save_obj(match_d, scores_path) # ### Public GT validation # In[16]: if not args.phase_2: eval_d = evaluate( submission_path=submission_path, gt_path=public_ground_truth_path, is_matching=False, ) # ### Training embeddings # In[17]: aug = '_AUG' if do_simple_augmentation else '' submission_path = ckpt_filename.replace('.ckpt', f'_{args.OUTPUT_WH[0]}x{args.OUTPUT_WH[1]}{aug}_TRN.h5') scores_path = submission_path.replace('.h5', '_match_d.pickle') # In[13]: if not os.path.exists(submission_path): embed_trn_d = calc_embed_d( model, dataloader=dl_trn_full, do_simple_augmentation=do_simple_augmentation ) else: _, embed_trn_d = read_submission(submission_path) save_submission( embed_qry_d, embed_trn_d, save_path=submission_path, ) # In[14]: match_d = calc_match_scores(embed_qry_d, embed_trn_d, k=K, gpu_id=faiss_gpu_id) save_obj(match_d, scores_path) if __name__ == '__main__': from modules.Facebook_model_v20 import ArgsT15_EffNetV2L, FacebookModel ckpt_filename = './checkpoints/sjy_test5/FacebookModel_Eepoch=51_TLtrn_loss_epoch=0.8669_TAtrn_acc_epoch=0.9914_VLval_loss_epoch=0.4390_VAval_acc_epoch=0.9930.ckpt' args = ArgsT15_EffNetV2L() args.BATCH_SIZE = 64 args.N_WORKERS = 7 args.DS_INPUT_DIR = f'./all_datasets/dataset' args.pretrained_bb = False args.arc_classnum = 40 args.ALL_FOLDERS = ['query_images', 'reference_images', 'training_images'] args.faiss_gpu_id = 0 args.phase_2 = True print(args) model = FacebookModel(args) _ = model.restore_checkpoint(ckpt_filename) do_inference(model, args, ckpt_filename)
{"hexsha": "16621515f19326c432b80a74ed7077283fe12327", "size": 6839, "ext": "py", "lang": "Python", "max_stars_repo_path": "phase2_scripts/model_inference.py", "max_stars_repo_name": "socom20/facebook-image-similarity-challenge-2021", "max_stars_repo_head_hexsha": "bf4226241be30cdf99180543f214edf571043e8d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-12-02T04:05:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-23T07:57:22.000Z", "max_issues_repo_path": "phase2_scripts/model_inference.py", "max_issues_repo_name": "socom20/facebook-image-similarity-challenge-2021", "max_issues_repo_head_hexsha": "bf4226241be30cdf99180543f214edf571043e8d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-12-07T07:05:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-08T02:24:17.000Z", "max_forks_repo_path": "phase2_scripts/model_inference.py", "max_forks_repo_name": "socom20/facebook-image-similarity-challenge-2021", "max_forks_repo_head_hexsha": "bf4226241be30cdf99180543f214edf571043e8d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-12-12T09:58:01.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T05:50:57.000Z", "avg_line_length": 22.2768729642, "max_line_length": 168, "alphanum_fraction": 0.6578447141, "include": true, "reason": "import numpy", "num_tokens": 1779}
#include <string> #include <vector> #include <memory> #include <map> #include <iostream> #include <fstream> #include <boost/filesystem.hpp> #include <yaml-cpp/yaml.h> #include "cantera/base/stringUtils.h" #include "cantera/base/ct_defs.h" //#include "cantera/IdealGasMix.h" //#include "cantera/InterfaceLatInt.h" //#include "cantera/Interface.h" //#include "cantera/thermo/SurfLatIntPhase.h" #include "cantera/thermo/SurfPhase.h" //#include "cantera/thermo/LateralInteraction.h" //#include "cantera/thermo/StoichSubstance.h" //#include "cantera/thermo/ThermoPhase.h" //#include "cantera/zeroD/Reactor.h" //#include "cantera/zeroD/ReactorNet.h" //#include "cantera/zeroD/Reservoir.h" //#include "cantera/zeroD/ReactorSurface.h" //#include "cantera/zeroD/flowControllers.h" #include "util.h" #include "run_reactor.h" #include "pfr1d.h" #include "pfr1d_solver.h" #include "io.h" using namespace std; using namespace Cantera; namespace fs = boost::filesystem; namespace OpenMKM { void run_1d_reactor(ReactorParser& rctr_parser, shared_ptr<Solution> gas, vector<shared_ptr<Solution>>& surfaces, ofstream& gen_info) { //Define the reactor based on the input file // Read the reactor dimensions double rctr_xc_area = rctr_parser.getXCArea(); double rctr_len = rctr_parser.getLength(); bool fr_defined = rctr_parser.FlowRateDefined(); bool mfr_defined = rctr_parser.MassFlowRateDefined(); bool rt_defined = rctr_parser.ResidenceTimeDefined(); int no_var_defined = fr_defined + mfr_defined + rt_defined; if (no_var_defined > 1) { cout << "Define only one of 'flow_rate', 'mass_flow_rate', 'residence_time'" << "Only one of the variables is arbitrarily used" << endl; } double velocity{0}; if (mfr_defined) { auto mfr = rctr_parser.getMassFlowRate(); auto flow_rate = mfr / gas->thermo()->density(); velocity = flow_rate / rctr_xc_area; } else if (fr_defined) { auto flow_rate = rctr_parser.getFlowRate(); velocity = flow_rate / rctr_xc_area; } else if (rt_defined) { auto rt = rctr_parser.getResidenceTime(); velocity = rctr_len / rt; } double cat_abyv = 0; if(rctr_parser.catalystAreaDefined()) { cat_abyv = rctr_parser.getCatalystAbyV(); } cout << "Catalyst loading (Area/Volume): " << cat_abyv << endl; if (cat_abyv == 0.0 && surfaces.size() > 0) { cout << "WARNING!!!\nCatalyst loading is zero.\n" << "Ignoring the surface phases given in the YAML file\n" << "--------------\n"; } vector<InterfaceKinetics*> ikin; vector<SurfPhase*> surf_ph; for (const auto surf_soln: surfaces) { ikin.push_back(dynamic_cast<InterfaceKinetics*>(surf_soln->kinetics().get())); surf_ph.push_back(dynamic_cast<SurfPhase*> (surf_soln->thermo().get())); } // Before simulation save the initial coverages of surface species for reuse //vector<vector<double>> surf_init_covs; vector<double> cov; for (const auto surf: surf_ph) { cov.resize(surf->nSpecies()); surf->getCoverages(cov.data()); //surf_init_covs.push_back(cov); } // Start the simulation gen_info << "Solving for equilibirum surface coverages at PFR inlet" << endl; for (size_t i = 0; i < surfaces.size(); i++) { cout << "Surface Site Density " << surf_ph[i]->siteDensity() << endl; ikin[i]->solvePseudoSteadyStateProblem(); vector<double> cov(surf_ph[i]->nSpecies()); surf_ph[i]->getCoverages(cov.data()); gen_info << "Equilibrium surface coverages on Surface: " << surf_ph[i]->name() << endl; for (auto j = 0; j < surf_ph[i]->nSpecies(); j++) gen_info << surf_ph[i]->speciesSPName(j) << " coverage: " << cov[j] << endl; } auto pfr = PFR1d(gas.get(), ikin, surf_ph, rctr_xc_area, cat_abyv, velocity); //string mode = 
rctr_node["mode"].as<string>(); string mode = rctr_parser.getTMode(); cout << "Reactor temperature mode: " << mode << endl; gen_info << "Reactor temperature mode: " << mode << endl; if (mode == "isothermal") { pfr.setEnergy(0); } else if (mode == "tprofile") { pfr.setEnergy(0); pfr.setTProfile(rctr_parser.getTProfile()); } else { pfr.setEnergy(1); //TODO: explicitly check for adiabatic or heat modes if (mode == "heat") { double htc = rctr_parser.getWallHeatTransferCoeff(); // htc double wall_abyv = rctr_parser.getWallSpecificArea(); // wall_abyv double ext_temp = rctr_parser.getExternalTemp(); // Text pfr.setHeatTransfer(htc, ext_temp, wall_abyv); } pfr.reinit(); } pfr.setConstraints(); gen_info << "Energy enabled? " << pfr.energyEnabled() << endl; // Read the sensitivity coefficients bool sens_on = rctr_parser.isSensitivityAnalysisEnabled(); bool full_sens = rctr_parser.isfullSensitivityAnalysis(); vector<std::string> sens_ids; int nquad; if (sens_on) { if (!full_sens){ // Read the sensitivity equations and enable them sens_ids = rctr_parser.getSensitivityReactions(); for (auto& id : sens_ids) { pfr.addSensitivityReaction(id); } auto sp_names = rctr_parser.getSensitivitySpecies(); for (auto& sp : sp_names) { pfr.addSensitivitySpecies(sp); } sens_ids.insert(sens_ids.end(), make_move_iterator(sp_names.begin()), make_move_iterator(sp_names.end())); } else { // Full sens enabled. Here all rxnids are counted and species are ignored nquad = gas->kinetics()->nReactions(); for (int i = 0; i < gas->kinetics()->nReactions(); i++){ sens_ids.push_back(gas->kinetics()->reaction(i)->id); } for (auto kin : ikin){ nquad += kin->nReactions(); for (int i = 0; i < kin->nReactions(); i++){ sens_ids.push_back(kin->reaction(i)->id); } } } } /* vector<double> ydot(25); vector<double> y(25); pfr.getInitialConditions(0, y.data(), ydot.data()); for (size_t i = 0; i < 25; i++){ cout << "i: " << i << " y: " << y[i] << " ydot: " << ydot[i] << endl; } */ PFR1dSolver pfr_solver {make_shared<PFR1d>(pfr)}; //auto simul_node = tube_node["simulation"]; if (rctr_parser.tolerancesDefined()){ auto abs_tol = rctr_parser.get_atol(); auto rel_tol = rctr_parser.get_rtol(); pfr_solver.setTolerances(rel_tol, abs_tol); } // Full sensitivity is set through PFR solver if (sens_on && full_sens){ pfr_solver.setQuadratureSize(nquad); } if (rctr_parser.solverInitStepSizeDefined()){ pfr_solver.setInitialStepSize(rctr_parser.getSolverInitStepSize()); } if (rctr_parser.solverMaxStepsDefined()){ pfr_solver.setMaxNumSteps(rctr_parser.getSolverMaxSteps()); } double simul_init_step = 1e-6; if (rctr_parser.initStepDefined()){ simul_init_step = rctr_parser.getInitStep(); } auto rpa_flag = rctr_parser.RPA(); vector<double> zvals = get_log10_intervals(rctr_len, simul_init_step); vector<double> T_params = rctr_parser.Ts(); vector<double> P_params = rctr_parser.Ps(); vector<double> fr_params = rctr_parser.FRs(); if (!fr_params.size()){ auto fr = velocity * rctr_xc_area; fr_params.push_back(fr); } auto get_vel = [&](double fr) -> double { return fr / rctr_xc_area; }; // Set the output type OutputFormat data_format = rctr_parser.printFormat(); setOutputFormat(data_format); auto surf_init_covs = rctr_parser.getSurfPhaseCompositions(); fs::path curr_dir = "."; for (const auto& T : T_params){ for (const auto& P : P_params){ for (const auto& fr : fr_params){ pfr.setVelocity(get_vel(fr)); string gas_comp = rctr_parser.getGasPhaseComposition(); gas->thermo()->setState_TPX(T, P, gas_comp); for (size_t i = 0; i < surfaces.size(); i++) { 
surf_ph[i]->setState_TP(T, P); //cout << "Initial Surface Coverages: " << i << endl; //for (auto cov : surf_init_covs[i]) // cout << cov << " "; //cout << endl; //cout << "Density " << surf->siteDensity() << endl; //surf->setCoveragesByName(surf_init_covs[i++]); ikin[i]->solvePseudoSteadyStateProblem(); } pfr.reinit(); pfr_solver.reinit(); string new_dir = "T-" + to_string(T) + ",P-" + to_string(P) + ",fr-" + to_string(fr); fs::path out_dir = curr_dir; if (T_params.size() > 1 || P_params.size() > 1 || fr_params.size() > 1){ out_dir /= new_dir; create_directory(out_dir); } string file_ext; if (data_format == OutputFormat::CSV) { file_ext = "csv"; } else { file_ext = "dat"; } ofstream gas_mole_out((out_dir / ("gas_mole_ss." + file_ext)).string(), ios::out); ofstream gas_mass_out((out_dir / ("gas_mass_ss." + file_ext)).string(), ios::out); ofstream gas_sdot_out((out_dir / ("gas_sdot_ss." + file_ext)).string(), ios::out); ofstream surf_cov_out((out_dir / ("surf_cov_ss." + file_ext)).string(), ios::out); ofstream surf_sdot_out((out_dir / ("surf_sdot_ss." + file_ext)).string(), ios::out); ofstream state_var_out((out_dir / ("rctr_state_ss." + file_ext)).string(), ios::out); ofstream rates_out((out_dir / "rates_ss.out").string(), ios::out); print_rxn_rates_hdr(rates_out); if (data_format == OutputFormat::DAT) { gas_mole_out << "# Gas Mole fractions\n"; gas_mass_out << "# Gas Mass fractions\n"; gas_sdot_out << "# Surface Production Rates of Gas Species (units of kmol/s)\n"; surf_cov_out << "# Surace Coverages\n"; surf_sdot_out << "# Production Rates of Surface Species (units of kmol/m2/s) \n"; state_var_out << "# Steady State Reactor State\n"; } print_gas_species_hdr(gas_mole_out, gas->thermo().get(), "z(m)"); print_gas_species_hdr(gas_mass_out, gas->thermo().get(), "z(m)"); print_gas_species_hdr(gas_sdot_out, gas->thermo().get(), "z(m)"); print_surface_species_hdr(surf_cov_out, surfaces, "z(m)"); print_surface_species_hdr(surf_sdot_out, surfaces, "z(m)"); print_pfr_state_hdr(state_var_out); gas_mole_out.precision(6); gas_mass_out.precision(6); gas_sdot_out.precision(6); surf_cov_out.precision(6); surf_sdot_out.precision(6); state_var_out.precision(6); rates_out.precision(6); for (const auto& z : zvals) { pfr_solver.solve(z); print_pfr_rctr_state(z, &pfr, gas_mole_out, gas_mass_out, gas_sdot_out, surf_cov_out, surf_sdot_out, state_var_out); if (rpa_flag) { string rpa_file_name = "rates_z-"; rpa_file_name += to_string(z); rpa_file_name += ".out"; ofstream rates_out ((out_dir / rpa_file_name).string(), ios::out); // Masks the name print_rxn_rates_hdr(//"Rates (mol/s) and Partial Equilibrium Analysis:", rates_out); rates_out.precision(6); print_rxn_rates(gas->kinetics().get(), rates_out); for (auto surf : surfaces) { print_rxn_rates(surf->kinetics().get(), rates_out); } rates_out.close(); } } pfr_solver.writeStateData((out_dir / "1d_pfr_state.out").string()); pfr_solver.writeGasData((out_dir / "1d_pfr_gas.out").string()); pfr_solver.writeSurfaceData((out_dir / "1d_pfr_surface.out").string()); if (sens_on){ string sep = (file_ext == "csv") ? "," : "\t"; if (!full_sens) pfr_solver.writeSensitivityData( (out_dir / ("1d_pfr_sensitivity." + file_ext)).string(), sens_ids, sep); else pfr_solver.writeFisherInformationMatrixDiag( (out_dir / ("1d_pfr_sensitivity." 
+ file_ext)).string(), sens_ids, sep); } // Print final rpa data rates_out.precision(6); print_rxn_rates(gas->kinetics().get(), rates_out); for (auto surf : surfaces) { print_rxn_rates(surf->kinetics().get(), rates_out); } gas_mole_out.close(); gas_mass_out.close(); gas_sdot_out.close(); surf_cov_out.close(); surf_sdot_out.close(); state_var_out.close(); rates_out.close(); } } } } }
{"hexsha": "1bb9142e82231c6c8f77cee1459e1aab2d976898", "size": 14050, "ext": "cpp", "lang": "C++", "max_stars_repo_path": "src/onedReactor.cpp", "max_stars_repo_name": "skasiraj/openmkm", "max_stars_repo_head_hexsha": "ec910ba78f6510647bfe1a2e5e0d7a68a63c0261", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3.0, "max_stars_repo_stars_event_min_datetime": "2019-11-09T14:57:29.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-26T07:37:15.000Z", "max_issues_repo_path": "src/onedReactor.cpp", "max_issues_repo_name": "SINTEF/openmkm", "max_issues_repo_head_hexsha": "5d9136848ddc5268cb43f11b8815645f3b205277", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 39.0, "max_issues_repo_issues_event_min_datetime": "2019-12-09T13:55:34.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-03T00:59:53.000Z", "max_forks_repo_path": "src/onedReactor.cpp", "max_forks_repo_name": "SINTEF/openmkm", "max_forks_repo_head_hexsha": "5d9136848ddc5268cb43f11b8815645f3b205277", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8.0, "max_forks_repo_forks_event_min_datetime": "2019-12-08T18:52:45.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-27T17:49:13.000Z", "avg_line_length": 40.1428571429, "max_line_length": 108, "alphanum_fraction": 0.555658363, "num_tokens": 3387}
# -*- coding: utf-8 -*-
import numpy as np
import cv2

# camera resolution (columns and rows)
cols = 640
rows = 480

# Step through the video and save selected frames as dataset images
def getDataset():
    cap = cv2.VideoCapture('./test.mp4')
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, cols)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, rows)
    i = 0
    while cap.isOpened():
        ret, frame = cap.read()
        cv2.imshow('data', frame)
        key = cv2.waitKey(0)
        if key & 0xff == ord('\n'):
            continue
        if key & 0xff == ord('s'):
            cv2.imwrite('./dataset/' + str(i) + '.png', frame)
            i += 1
        if key & 0xff == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

# Check the camera and record the stream to a video file
def checkTheCamera():
    cap = cv2.VideoCapture('/dev/video10')
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    save_path = input("please input the save_path: ")
    writer = cv2.VideoWriter(save_path, fourcc, 30, (cols, rows))
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, cols)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, rows)
    while cap.isOpened():
        ret, frame = cap.read()
        cv2.imshow('cameraDown', frame)
        writer.write(frame)
        if cv2.waitKey(1) & 0xff == ord('q'):
            break
    writer.release()
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    getDataset()
{"hexsha": "0dacd6697fe04362621538d831435376f1a7f9e6", "size": 1315, "ext": "py", "lang": "Python", "max_stars_repo_path": "calibration_bev_fitcurve-1/get_dataset.py", "max_stars_repo_name": "GuoPingPan/LinearTracking_Huawei", "max_stars_repo_head_hexsha": "499e16448081421766df66614551750c1cb71a1d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "calibration_bev_fitcurve-1/get_dataset.py", "max_issues_repo_name": "GuoPingPan/LinearTracking_Huawei", "max_issues_repo_head_hexsha": "499e16448081421766df66614551750c1cb71a1d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "calibration_bev_fitcurve-1/get_dataset.py", "max_forks_repo_name": "GuoPingPan/LinearTracking_Huawei", "max_forks_repo_head_hexsha": "499e16448081421766df66614551750c1cb71a1d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.0701754386, "max_line_length": 65, "alphanum_fraction": 0.5657794677, "include": true, "reason": "import numpy", "num_tokens": 376}
# EPA_GHGI.py (flowsa) # !/usr/bin/env python3 # coding=utf-8 """ Inventory of US EPA GHG https://www.epa.gov/ghgemissions/inventory-us-greenhouse-gas-emissions-and-sinks-1990-2018 """ import io import zipfile import numpy as np import pandas as pd from flowsa.flowbyfunctions import assign_fips_location_system DEFAULT_YEAR = 9999 # Decided to add tables as a constant in the source code because the YML config isn't available in the ghg_call method. # Only keeping years 2010-2018 for the following tables: TABLES = { "Ch 2 - Trends": ["2-1"], "Ch 3 - Energy": ["3-10", "3-11", "3-14", "3-15", "3-21", "3-37", "3-38", "3-39", "3-57", "3-59", "3-22"], "Ch 4 - Industrial Processes": ["4-48", "4-94", "4-99", "4-101", "4-43", "4-80"], "Ch 5 - Agriculture": ["5-3", "5-7", "5-18", "5-19", "5-30"], "Executive Summary": ["ES-5"] } ANNEX_TABLES = { "Annex": ["A-17", "A-93", "A-94", "A-118"] } A_17_COMMON_HEADERS = ['Res.', 'Comm.', 'Ind.', 'Trans.', 'Elec.', 'Terr.', 'Total'] A_17_TBTU_HEADER = ['Adjusted Consumption (TBtu)a', 'Adjusted Consumption (TBtu)'] A_17_CO2_HEADER = ['Emissionsb (MMT CO2 Eq.) from Energy Use', 'Emissions (MMT CO2 Eq.) from Energy Use'] SPECIAL_FORMAT = ["3-22", "4-43", "4-80", "A-17", "A-93", "A-94", "A-118"] SRC_NAME_SPECIAL_FORMAT = ["T_3_22", "T_4_43", "T_4_80", "T_A_17"] DROP_COLS = ["Unnamed: 0", "1990", "1991", "1992", "1993", "1994", "1995", "1996", "1997", "1998", "1999", "2000", "2001", "2002", "2003", "2004", "2005", "2006", "2007", "2008", "2009"] TBL_META = { "EPA_GHG_Inventory_T_2_1": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "Recent Trends in U.S. Greenhouse Gas Emissions and Sinks", "desc": "Table 2-1: Recent Trends in U.S. Greenhouse Gas Emissions and Sinks (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_3_10": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "CH4 Emissions from Stationary Combustion", "desc": "Table 3-10: CH4 Emissions from Stationary Combustion (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_3_11": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "N2O Emissions from Stationary Combustion", "desc": "Table 3-11: N2O Emissions from Stationary Combustion (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_3_14": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "CH4 Emissions from Mobile Combustion", "desc": "Table 3-14: CH4 Emissions from Mobile Combustion (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_3_15": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "CH4 Emissions from Mobile Combustion", "desc": "Table 3-14: CH4 Emissions from Mobile Combustion (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_3_21": { "class": "Energy", "unit": "TBtu", "compartment": "air", "flow_name": "Adjusted Consumption of Fossil Fuels for Non-Energy Uses", "desc": "Table 3-21: Adjusted Consumption of Fossil Fuels for Non-Energy Uses (TBtu)" }, "EPA_GHG_Inventory_T_3_22": { "class": "Energy", "unit": "Other", "compartment": "air", "flow_name": "2018 Adjusted Non-Energy Use Fossil Fuel - __type__", "desc": "Table 3-22: 2018 Adjusted Non-Energy Use Fossil Fuel Consumption, Storage, and Emissions", "year": "2018" }, "EPA_GHG_Inventory_T_3_37": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "CH4 Emissions from Petroleum Systems", "desc": "Table 3-37: CH4 Emissions from Petroleum Systems (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_3_38": { "class": "Chemicals", "unit": "kt", "compartment": "air", "flow_name": "CH4 Emissions from Petroleum Systems", "desc": "Table 3-38: 
CH4 Emissions from Petroleum Systems (kt CH4)" }, "EPA_GHG_Inventory_T_3_39": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "CO2 Emissions from Petroleum Systems", "desc": "Table 3-39: CO2 Emissions from Petroleum Systems (MMT CO2)" }, "EPA_GHG_Inventory_T_3_57": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "CH4 Emissions from Natural Gas Systems", "desc": "Table 3-57: CH4 Emissions from Natural Gas Systems (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_3_59": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "Non-combustion CO2 Emissions from Natural Gas Systems", "desc": "Table 3-59: Non-combustion CO2 Emissions from Natural Gas Systems (MMT)" }, "EPA_GHG_Inventory_T_4_43": { "class": "Chemicals", "unit": "Other", "compartment": "air", "flow_name": "CO2 Emissions from Soda Ash Production", "desc": "Table 4-43: CO2 Emissions from Soda Ash Production" }, "EPA_GHG_Inventory_T_4_80": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "PFC Emissions from Aluminum Production", "desc": "Table 4-80: PFC Emissions from Aluminum Production (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_4_48": { "class": "Chemicals", "unit": "kt", "compartment": "air", "flow_name": "Production of Selected Petrochemicals", "desc": "Table 4-48: Production of Selected Petrochemicals (kt)" }, "EPA_GHG_Inventory_T_4_94": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "PFC, HFC, SF6, NF3, and N2O Emissions from Electronics Manufacture", "desc": "Table 4-94: PFC, HFC, SF6, NF3, and N2O Emissions from Electronics Manufacture [1] (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_4_99": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "Emissions of HFCs and PFCs from ODS Substitutes", "desc": "Table 4-99: Emissions of HFCs and PFCs from ODS Substitutes (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_4_101": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "Emissions of HFCs and PFCs from ODS Substitutes", "desc": "Table 4-101: Emissions of HFCs and PFCs from ODS Substitutes (MMT CO2 Eq.) 
by Sector" }, "EPA_GHG_Inventory_T_5_3": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "CH4 Emissions from Enteric Fermentation", "desc": "Table 5-3: CH4 Emissions from Enteric Fermentation (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_5_7": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "CH4 and N2O Emissions from Manure Management", "desc": "Table 5-7: CH4 and N2O Emissions from Manure Management (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_5_18": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "Direct N2O Emissions from Agricultural Soils by Land Use Type and N Input Type", "desc": "Table 5-18: Direct N2O Emissions from Agricultural " + "Soils by Land Use Type and N Input Type (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_5_19": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "Indirect N2O Emissions from Agricultural Soils", "desc": "Table 5-19: Indirect N2O Emissions from Agricultural Soils (MMT CO2 Eq.)" }, "EPA_GHG_Inventory_T_5_30": { "class": "Chemicals", "unit": "kt", "compartment": "air", "flow_name": "CH4, N2O, CO, and NOx Emissions from Field Burning of Agricultural Residues", "desc": "Table 5-30: CH4, N2O, CO, and NOx Emissions from Field Burning of Agricultural Residues (kt)" }, "EPA_GHG_Inventory_T_A_17": { "class": "Energy", "unit": "Other", "compartment": "air", "flow_name": "2012 Energy Consumption Data and CO2 Emissions from Fossil Fuel Combustion - __type__", "desc": "2012 Energy Consumption Data and CO2 Emissions from Fossil Fuel Combustion by Fuel Type" }, "EPA_GHG_Inventory_T_A_93": { "class": "Chemicals", "unit": "kt", "compartment": "air", "flow_name": "NOx Emissions from Stationary Combustion", "desc": "NOx Emissions from Stationary Combustion (kt)" }, "EPA_GHG_Inventory_T_A_94": { "class": "Chemicals", "unit": "kt", "compartment": "air", "flow_name": "CO Emissions from Stationary Combustion", "desc": "CO Emissions from Stationary Combustion (kt)" }, "EPA_GHG_Inventory_T_A_118": { "class": "Chemicals", "unit": "kt", "compartment": "air", "flow_name": "NMVOCs Emissions from Mobile Combustion", "desc": "NMVOCs Emissions from Mobile Combustion (kt)" }, "EPA_GHG_Inventory_T_ES_5": { "class": "Chemicals", "unit": "MMT", "compartment": "air", "flow_name": "U.S. Greenhouse Gas Emissions and Removals (Net Flux) " + "from Land Use, Land-Use Change, and Forestry", "desc": "Table ES-5: U.S. Greenhouse Gas Emissions and Removals (Net Flux) " + "from Land Use, Land-Use Change, and Forestry (MMT CO2 Eq.)" }, } YEARS = ["2010", "2011", "2012", "2013", "2014", "2015", "2016", "2017", "2018"] def ghg_url_helper(build_url, config, args): """ Only one URL is needed to retrieve the data for all tables for all years. :param build_url: :param config: :param args: :return: """ annex_url = config['url']['annex_url'] return [build_url, annex_url] def fix_a17_headers(header): """ Fix A-17 headers, trim white spaces, convert shortened words such as Elec., Res., etc. :param header: :return: """ if header == A_17_TBTU_HEADER[0]: header = f' {A_17_TBTU_HEADER[1].strip()}'.replace('') elif header == A_17_CO2_HEADER[0]: header = f' {A_17_CO2_HEADER[1].strip()}' else: header = header.strip() header = header.replace('Res.', 'Residential') header = header.replace('Comm.', 'Commercial') header = header.replace('Ind.', 'Industrial Other') header = header.replace('Trans.', 'Transportation') header = header.replace('Elec.', 'Electricity Power') header = header.replace('Terr.', 'U.S. 
Territory') return header def cell_get_name(value, default_flow_name): """ Given a single string value (cell), separate the name and units. :param value: :param default_flow_name: :return: """ if '(' not in value: return default_flow_name.replace('__type__', value.strip()) spl = value.split(' ') name = '' found_units = False for sub in spl: if '(' not in sub and not found_units: name = f'{name.strip()} {sub}' else: found_units = True return default_flow_name.replace('__type__', name.strip()) def cell_get_units(value, default_units): """ Given a single string value (cell), separate the name and units. :param value: :param default_units: :return: """ if '(' not in value: return default_units spl = value.split(' ') name = '' found_units = False for sub in spl: if ')' in sub: found_units = False if '(' in sub or found_units: name = f'{name} {sub.replace("(", "").replace(")", "")} ' found_units = True return name.strip() def series_separate_name_and_units(series, default_flow_name, default_units): """ Given a series (such as a df column), split the contents' strings into a name and units. An example might be converting "Carbon Stored (MMT C)" into ["Carbon Stored", "MMT C"]. :param series: :param default_flow_name: :param default_units: :return: """ names = series.apply(lambda x: cell_get_name(x, default_flow_name)) units = series.apply(lambda x: cell_get_units(x, default_units)) return {'names': names, 'units': units} def ghg_call(url, response, args): """ Callback function for the US GHG Emissions download. Open the downloaded zip file and read the contained CSV(s) into pandas dataframe(s). :param url: :param response: :param args: :return: """ df = None year = args['year'] with zipfile.ZipFile(io.BytesIO(response.content), "r") as f: frames = [] # TODO: replace this TABLES constant with kwarg['tables'] if 'annex' in url: is_annex = True t_tables = ANNEX_TABLES else: is_annex = False t_tables = TABLES for chapter, tables in t_tables.items(): for table in tables: # path = os.path.join("Chapter Text", chapter, f"Table {table}.csv") if is_annex: path = f"Annex/Table {table}.csv" else: path = f"Chapter Text/{chapter}/Table {table}.csv" data = f.open(path) if table not in SPECIAL_FORMAT: df = pd.read_csv(data, skiprows=2, encoding="ISO-8859-1", thousands=",") elif '3-' in table: # Skip first two rows, as usual, but make headers the next 3 rows: df = pd.read_csv(data, skiprows=2, encoding="ISO-8859-1", header=[0, 1, 2], thousands=",") # The next two rows are headers and the third is units: new_headers = [] for col in df.columns: # unit = col[2] new_header = 'Unnamed: 0' if 'Unnamed' not in col[0]: if 'Unnamed' not in col[1]: new_header = f'{col[0]} {col[1]}' else: new_header = col[0] if 'Unnamed' not in col[2]: new_header += f' {col[2]}' # unit = col[2] elif 'Unnamed' in col[0] and 'Unnamed' not in col[2]: new_header = col[2] new_headers.append(new_header) df.columns = new_headers print('break') elif '4-' in table: df = pd.read_csv(data, skiprows=2, encoding="ISO-8859-1", thousands=",", decimal=".") elif 'A-' in table: if table == 'A-17': # A-17 is similar to T 3-23, the entire table is 2012 and headings are completely different. 
if str(year) == '2012': df = pd.read_csv(data, skiprows=2, encoding="ISO-8859-1", header=[0, 1], thousands=",") new_headers = [] header_grouping = '' for col in df.columns: if 'Unnamed' in col[0]: # new_headers.append(f'{header_grouping}{col[1]}') new_headers.append(f'{fix_a17_headers(col[1])}{header_grouping}') else: if len(col) == 2: # header_grouping = f'{col[0]}__' if col[0] == A_17_TBTU_HEADER[0]: header_grouping = f' {A_17_TBTU_HEADER[1].strip()}' else: header_grouping = f' {A_17_CO2_HEADER[1].strip()}' # new_headers.append(f'{header_grouping}{col[1]}') new_headers.append(f'{fix_a17_headers(col[1])}{header_grouping}') df.columns = new_headers nan_col = 'Electricity Power Emissions (MMT CO2 Eq.) from Energy Use' fill_col = 'Unnamed: 12_level_1 Emissions (MMT CO2 Eq.) from Energy Use' df = df.drop(nan_col, 1) df.columns = [nan_col if x == fill_col else x for x in df.columns] df['Year'] = year else: df = pd.read_csv(data, skiprows=1, encoding="ISO-8859-1", thousands=",", decimal=".") if df is not None and len(df.columns) > 1: years = YEARS.copy() years.remove(str(year)) df = df.drop(columns=(DROP_COLS + years), errors='ignore') # Assign SourceName now while we still have access to the table name: source_name = f"EPA_GHG_Inventory_T_{table.replace('-', '_')}" df["SourceName"] = source_name frames.append(df) # return pd.concat(frames) return frames def get_unnamed_cols(df): """ Get a list of all unnamed columns, used to drop them. :param df: :return: """ return [col for col in df.columns if "Unnamed" in col] def is_consumption(source_name): """ Determine whether the given source contains consumption or production data. :param source_name: :return: """ if 'consum' in TBL_META[source_name]['desc'].lower(): return True return False def ghg_parse(dataframe_list, args): """ Parse the given EPA GHGI data and return multiple dataframes, one per-year per-table. :param dataframe_list: :param args: :return: """ cleaned_list = [] for df in dataframe_list: special_format = False source_name = df["SourceName"][0] print(f'Processing Source Name {source_name}') for src in SRC_NAME_SPECIAL_FORMAT: if src in source_name: special_format = True # Specify to ignore errors in case one of the drop_cols is missing. 
drop_cols = get_unnamed_cols(df) df = df.drop(columns=drop_cols, errors='ignore') is_cons = is_consumption(source_name) if not special_format or "T_4_" not in source_name: # Rename the PK column from data_type to "ActivityProducedBy" or "ActivityConsumedBy": if is_cons: df = df.rename(columns={df.columns[0]: "ActivityConsumedBy"}) df["ActivityProducedBy"] = 'None' else: df = df.rename(columns={df.columns[0]: "ActivityProducedBy"}) df["ActivityConsumedBy"] = 'None' else: df["ActivityConsumedBy"] = 'None' df["ActivityProducedBy"] = 'None' df["FlowType"] = "ELEMENTARY_FLOW" df["Location"] = "00000" id_vars = ["SourceName", "ActivityConsumedBy", "ActivityProducedBy", "FlowType", "Location"] if special_format and "Year" in df.columns: id_vars.append("Year") # Cast Year column to numeric and delete any years != year df = df[pd.to_numeric(df["Year"], errors="coerce") == int(args['year'])] # Set index on the df: df.set_index(id_vars) if special_format: if "T_4_" not in source_name: df = df.melt(id_vars=id_vars, var_name="FlowName", value_name="FlowAmount") else: df = df.melt(id_vars=id_vars, var_name="Units", value_name="FlowAmount") else: df = df.melt(id_vars=id_vars, var_name="Year", value_name="FlowAmount") # Dropping all rows with value "+" try: df = df[~df["FlowAmount"].str.contains("\\+", na=False)] except AttributeError as ex: print(ex) # Dropping all rows with value "NE" try: df = df[~df["FlowAmount"].str.contains("NE", na=False)] except AttributeError as ex: print(ex) # Convert all empty cells to nan cells df["FlowAmount"].replace("", np.nan, inplace=True) # Table 3-10 has some NO values, dropping these. df["FlowAmount"].replace("NO", np.nan, inplace=True) # Table A-118 has some IE values, dropping these. df["FlowAmount"].replace("IE", np.nan, inplace=True) # Drop any nan rows df.dropna(subset=['FlowAmount'], inplace=True) df["Description"] = 'None' df["Unit"] = "Other" # Update classes: # TODO: replace this TBL_META constant with kwargs['tbl_meta'] meta = TBL_META[source_name] df.loc[df["SourceName"] == source_name, "Class"] = meta["class"] df.loc[df["SourceName"] == source_name, "Unit"] = meta["unit"] df.loc[df["SourceName"] == source_name, "Description"] = meta["desc"] df.loc[df["SourceName"] == source_name, "Compartment"] = meta["compartment"] if not special_format or "T_4_" in source_name: df.loc[df["SourceName"] == source_name, "FlowName"] = meta["flow_name"] else: if "T_4_" not in source_name: flow_name_units = series_separate_name_and_units(df["FlowName"], meta["flow_name"], meta["unit"]) df['Unit'] = flow_name_units['units'] df.loc[df["SourceName"] == source_name, "FlowName"] = flow_name_units['names'] # We also need to fix the Activity PRODUCED or CONSUMED, now that we know units. # Any units TBtu will be CONSUMED, all other units will be PRODUCED. if is_cons: df['ActivityProducedBy'] = df['ActivityConsumedBy'] df.loc[df["Unit"] == 'TBtu', 'ActivityProducedBy'] = 'None' df.loc[df["Unit"] != 'TBtu', 'ActivityConsumedBy'] = 'None' else: df['ActivityConsumedBy'] = df['ActivityProducedBy'] df.loc[df["Unit"] == 'TBtu', 'ActivityProducedBy'] = 'None' df.loc[df["Unit"] != 'TBtu', 'ActivityConsumedBy'] = 'None' if 'Year' not in df.columns: df['Year'] = meta.get("year", DEFAULT_YEAR) # Some of the datasets, 4-43 and 4-80, still have years we don't want at this point. 
# Remove rows matching the years we don't want: try: df = df[df['Year'].isin([args['year']])] except AttributeError as ex: print(ex) # Add tmp DQ scores df["DataReliability"] = 5 df["DataCollection"] = 5 # Fill in the rest of the Flow by fields so they show "None" instead of nan.76i df["MeasureofSpread"] = 'None' df["DistributionType"] = 'None' df["LocationSystem"] = 'None' df = assign_fips_location_system(df, str(args['year'])) df = df.loc[:, ~df.columns.duplicated()] cleaned_list.append(df) if cleaned_list: for df in cleaned_list: # Remove commas from numbers again in case any were missed: df["FlowAmount"].replace(',', '', regex=True, inplace=True) return cleaned_list # df = pd.concat(cleaned_list) else: df = pd.DataFrame() return df
{"hexsha": "d7628ee5699c4dbf2ccc54c92fe4ce4a86960059", "size": 23157, "ext": "py", "lang": "Python", "max_stars_repo_path": "flowsa/data_source_scripts/EPA_GHGI.py", "max_stars_repo_name": "JohnAndrewTaylor/flowsa", "max_stars_repo_head_hexsha": "21b14b19f08370db574bdd59219a2773983c6f95", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "flowsa/data_source_scripts/EPA_GHGI.py", "max_issues_repo_name": "JohnAndrewTaylor/flowsa", "max_issues_repo_head_hexsha": "21b14b19f08370db574bdd59219a2773983c6f95", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "flowsa/data_source_scripts/EPA_GHGI.py", "max_forks_repo_name": "JohnAndrewTaylor/flowsa", "max_forks_repo_head_hexsha": "21b14b19f08370db574bdd59219a2773983c6f95", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.6101694915, "max_line_length": 119, "alphanum_fraction": 0.5691583538, "include": true, "reason": "import numpy", "num_tokens": 6285}
import dill import numpy as np from typing import * from abc import abstractmethod from abc import ABCMeta from cftool.misc import register_core from cftool.misc import shallow_copy_dict from ..misc import DataStructure processor_dict: Dict[str, Type["Processor"]] = {} class Processor(DataStructure, metaclass=ABCMeta): def __init__( self, previous_processors: List["Processor"], *, inplace: bool = False, **kwargs: Any, ): self._config = kwargs self._inplace = inplace self._caches: Dict[str, float] = {} self._previous_processors = previous_processors dims = [processor.input_dim for processor in self._previous_processors] start_idx = sum(dims) self._col_indices = [start_idx + i for i in range(self.input_dim)] def __str__(self) -> str: return f"{type(self).__name__}()" __repr__ = __str__ @property @abstractmethod def input_dim(self) -> int: pass @property @abstractmethod def output_dim(self) -> int: pass @abstractmethod def fit(self, columns: np.ndarray) -> "Processor": pass @abstractmethod def _process(self, columns: np.ndarray) -> np.ndarray: pass @abstractmethod def _recover(self, processed_columns: np.ndarray) -> np.ndarray: pass @property def input_indices(self) -> List[int]: return self._col_indices @property def output_indices(self) -> List[int]: dims = [method.output_dim for method in self._previous_processors] previous_dimensions = sum(dims) return list(range(previous_dimensions, previous_dimensions + self.output_dim)) @property def cache_excludes(self) -> Set[str]: return {"_previous_processors"} @property def data_tuple_base(self) -> Optional[Type[NamedTuple]]: return None @property def data_tuple_attributes(self) -> Optional[List[str]]: return None def initialize(self) -> None: pass def process(self, columns: np.ndarray) -> np.ndarray: if not self._inplace: columns = columns.copy() return self._process(columns) def recover(self, columns: np.ndarray, *, inplace: bool = False) -> np.ndarray: if not inplace: columns = columns.copy() return self._recover(columns) identifier_key = "__identifier__" def dumps_(self) -> Any: instance_dict = shallow_copy_dict(self.__dict__) instance_dict[self.identifier_key] = self.__identifier__ return instance_dict @classmethod def loads(cls, instance_dict: Dict[str, Any], **kwargs: Any) -> "Processor": previous_processors = kwargs.get("previous_processors") if previous_processors is None: raise ValueError("`previous_processors` must be provided") identifier = instance_dict.pop(cls.identifier_key) processor = processor_dict[identifier](previous_processors) processor.__dict__.update(instance_dict) return processor @classmethod def make_with( cls, previous_processors: List["Processor"], *, inplace: bool = False, **kwargs: Any, ) -> "Processor": instance = cls(previous_processors, inplace=inplace, **kwargs) instance.initialize() return instance @classmethod def register(cls, name: str) -> Callable[[Type], Type]: global processor_dict def before(cls_: Type) -> None: cls_.__identifier__ = name return register_core(name, processor_dict, before_register=before) __all__ = ["Processor", "processor_dict"]
{"hexsha": "3ee206152e6020a8f51281157ca4082ecc3d0846", "size": 3717, "ext": "py", "lang": "Python", "max_stars_repo_path": "cfdata/tabular/processors/base.py", "max_stars_repo_name": "carefree0910/carefree-data", "max_stars_repo_head_hexsha": "ae0f4ea5724b4efd5d76f2a9d420acf3322c1d19", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2020-10-25T11:52:34.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-23T02:45:41.000Z", "max_issues_repo_path": "cfdata/tabular/processors/base.py", "max_issues_repo_name": "carefree0910/carefree-data", "max_issues_repo_head_hexsha": "ae0f4ea5724b4efd5d76f2a9d420acf3322c1d19", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-08-02T01:58:48.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-26T11:24:19.000Z", "max_forks_repo_path": "cfdata/tabular/processors/base.py", "max_forks_repo_name": "carefree0910/carefree-data", "max_forks_repo_head_hexsha": "ae0f4ea5724b4efd5d76f2a9d420acf3322c1d19", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-04T14:34:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-04T14:34:13.000Z", "avg_line_length": 27.5333333333, "max_line_length": 86, "alphanum_fraction": 0.6467581383, "include": true, "reason": "import numpy", "num_tokens": 810}
from __future__ import annotations import logging from math import floor, sqrt import numpy as np from numpy.linalg import inv, norm from cctbx.array_family import flex from dxtbx import flumpy from scitbx import matrix from dials.algorithms.profile_model.ellipsoid import chisq_quantile from dials.algorithms.statistics.fast_mcd import FastMCD, maha_dist_sq logger = logging.getLogger("dials") def _index(reflection_table, experiment, fail_on_bad_index=False): """Index the strong spots""" # Get some stuff from experiment A = np.array(experiment.crystal.get_A(), dtype=np.float64).reshape(3, 3) s0 = np.array([experiment.beam.get_s0()], dtype=np.float64).reshape(3, 1) s0_length = norm(s0) detector = experiment.detector # Create array if necessary if "miller_index" not in reflection_table: reflection_table["miller_index"] = flex.miller_index(len(reflection_table)) # Index all the reflections miller_index = reflection_table["miller_index"] selection = flex.size_t() num_reindexed = 0 for i, xyz in enumerate(reflection_table["xyzobs.px.value"]): # Get the observed pixel coordinate x, y, _ = xyz # Get the lab coord s1 = np.array( detector[0].get_pixel_lab_coord((x, y)), dtype=np.float64 ).reshape(3, 1) s1_norm = norm(s1) s1 *= s0_length / s1_norm # Get the reciprocal lattice vector r = s1 - s0 # Compute the fractional miller index hf = np.matmul(inv(A), r) # Compute the integer miller index h = np.array([int(floor(j + 0.5)) for j in hf[:, 0]], dtype=int).reshape(3, 1) # Print warning if reindexing if tuple(h) != miller_index[i]: logger.warn( "Reindexing (% 3d, % 3d, % 3d) -> (% 3d, % 3d, % 3d)" % (miller_index[i] + tuple(h)) ) num_reindexed += 1 miller_index[i] = matrix.col(flumpy.from_numpy(h)) if fail_on_bad_index: raise RuntimeError("Bad index") # If its not indexed as 0, 0, 0 then append if h.any() and norm(h - hf) < 0.3: selection.append(i) # Print some info logger.info( "Reindexed %d/%d input reflections" % (num_reindexed, len(reflection_table)) ) logger.info( "Selected %d/%d input reflections" % (len(selection), len(reflection_table)) ) # Select all the indexed reflections reflection_table.set_flags(selection, reflection_table.flags.indexed) reflection_table = reflection_table.select(selection) return reflection_table def _predict(reflection_table, experiment): """ Predict the position of the spots """ # Get some stuff from experiment A = np.array(experiment.crystal.get_A(), dtype=np.float64).reshape((3, 3)) s0 = np.array([experiment.beam.get_s0()], dtype=np.float64).reshape(3, 1) s0_length = norm(s0) # Compute the vector to the reciprocal lattice point # since this is not on the ewald sphere, lets call it s2 s1 = flex.vec3_double(reflection_table.size()) s2 = flex.vec3_double(reflection_table.size()) for i, h in enumerate(reflection_table["miller_index"]): r = np.matmul(A, np.array([h], dtype=np.float64).reshape(3, 1)) s2_i = r + s0 s2[i] = matrix.col(flumpy.from_numpy(s2_i)) s1[i] = matrix.col(flumpy.from_numpy(s2_i * s0_length / norm(s2_i))) reflection_table["s1"] = s1 reflection_table["s2"] = s2 reflection_table["entering"] = flex.bool(reflection_table.size(), False) # Compute the ray intersections xyzpx = flex.vec3_double() xyzmm = flex.vec3_double() for ss in s1: mm = experiment.detector[0].get_ray_intersection(ss) px = experiment.detector[0].millimeter_to_pixel(mm) xyzpx.append(px + (0,)) xyzmm.append(mm + (0,)) reflection_table["xyzcal.mm"] = xyzmm reflection_table["xyzcal.px"] = xyzpx logger.info("Do prediction for %d reflections" % len(reflection_table)) return reflection_table def 
_filter_reflections_based_on_centroid_distance( reflection_table, experiment, outlier_probability=0.975, max_separation=2, ): """ Filter reflections too far from predicted position """ # Compute the x and y residuals Xobs, Yobs, _ = reflection_table["xyzobs.px.value"].parts() Xcal, Ycal, _ = reflection_table["xyzcal.px"].parts() Xres = Xobs - Xcal Yres = Yobs - Ycal # Compute the epsilon residual s0_length = 1.0 / experiment.beam.get_wavelength() s1x, s1y, s1z = reflection_table["s2"].parts() s1_length = flex.sqrt(s1x**2 + s1y**2 + s1z**2) Eres = s1_length - s0_length # Initialise the fast_mcd outlier algorithm # fast_mcd = FastMCD((Xres, Yres, Eres)) fast_mcd = FastMCD((Xres, Yres)) # get location and MCD scatter estimate T, S = fast_mcd.get_corrected_T_and_S() # get squared Mahalanobis distances # d2s = maha_dist_sq((Xres, Yres, Eres), T, S) d2s = maha_dist_sq((Xres, Yres), T, S) # Compute the cutoff mahasq_cutoff = chisq_quantile(2, outlier_probability) # compare to the threshold and select reflections selection1 = d2s < mahasq_cutoff selection2 = flex.sqrt(Xres**2 + Yres**2) < max_separation selection = selection1 & selection2 reflection_table = reflection_table.select(selection) n_refl = reflection_table.size() # Print some stuff logger.info("-" * 80) logger.info("Centroid outlier rejection") logger.info(f" Using MCD algorithm with probability = {outlier_probability}") logger.info(" Max X residual: %f" % flex.max(flex.abs(Xres))) logger.info(" Max Y residual: %f" % flex.max(flex.abs(Yres))) logger.info(" Max E residual: %f" % flex.max(flex.abs(Eres))) logger.info(" Mean X RMSD: %f" % (sqrt(flex.sum(Xres**2) / len(Xres)))) logger.info(" Mean Y RMSD: %f" % (sqrt(flex.sum(Yres**2) / len(Yres)))) logger.info(" Mean E RMSD: %f" % (sqrt(flex.sum(Eres**2) / len(Eres)))) logger.info(" MCD location estimate: %.4f, %.4f" % tuple(T)) logger.info( """ MCD scatter estimate: %.7f, %.7f, %.7f, %.7f""" % tuple(S) ) logger.info(" Number of outliers: %d" % selection1.count(False)) logger.info( " Number of reflections with residual > %0.2f pixels: %d" % (max_separation, selection2.count(False)) ) logger.info(f"Number of reflections selection for refinement: {n_refl}") logger.info("-" * 80) return reflection_table def reindex( reflection_table, experiment, outlier_probability=0.975, max_separation=2, fail_on_bad_index=False, ): """Reindex strong spots and perform filtering""" reflection_table = _index(reflection_table, experiment, fail_on_bad_index) reflection_table = _predict(reflection_table, experiment) reflection_table = _filter_reflections_based_on_centroid_distance( reflection_table, experiment, outlier_probability=outlier_probability, max_separation=max_separation, ) return reflection_table
{"hexsha": "3fd568a09fc928d6ddc1ebbe27bdae27214f7acd", "size": 7181, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/dials/algorithms/profile_model/ellipsoid/indexer.py", "max_stars_repo_name": "dials-src/dials", "max_stars_repo_head_hexsha": "25055c1f6164dc33e672e7c5c6a9c5a35e870660", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-10T17:28:16.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-10T17:28:16.000Z", "max_issues_repo_path": "src/dials/algorithms/profile_model/ellipsoid/indexer.py", "max_issues_repo_name": "dials-src/dials", "max_issues_repo_head_hexsha": "25055c1f6164dc33e672e7c5c6a9c5a35e870660", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/dials/algorithms/profile_model/ellipsoid/indexer.py", "max_forks_repo_name": "dials-src/dials", "max_forks_repo_head_hexsha": "25055c1f6164dc33e672e7c5c6a9c5a35e870660", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-07T12:39:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-07T12:39:04.000Z", "avg_line_length": 34.1952380952, "max_line_length": 86, "alphanum_fraction": 0.6564545328, "include": true, "reason": "import numpy,from numpy", "num_tokens": 1936}
import base64 import json import posixpath import nbformat import numpy import tiledb import tiledb.cloud import tornado.web import traitlets from notebook.services.contents import checkpoints from notebook.services.contents import filecheckpoints from notebook.services.contents import filemanager from notebook.services.contents import manager from . import arrays from . import caching from . import listings from . import models from . import paths NBFORMAT_VERSION = 4 NOTEBOOK_MIME = "application/x-ipynb+json" class TileDBContents(manager.ContentsManager): """ A general class for TileDB Contents, parent of the actual contents class and checkpoints """ def _save_notebook_tiledb(self, path: str, model: models.Model): """ Save a notebook to tiledb array :param model: model notebook :param uri: URI of notebook :return: any messages """ nb_contents = nbformat.from_dict(model["content"]) self.check_and_sign(nb_contents, path) file_contents = numpy.array(bytearray(json.dumps(model["content"]), "utf-8")) final_name = arrays.write_bytes( path, file_contents, mimetype=model.get("mimetype"), format=model.get("format"), type="notebook", s3_prefix=model.get("tiledb:s3_prefix", None), s3_credentials=model.get("tiledb:s3_credentials", None), is_user_defined_name="name" in model, is_new=model.get("tiledb:is_new", False), ) self.validate_notebook_model(model) return final_name, model.get("message") def _notebook_from_array(self, path, content=True): """ Build a notebook model from database record. """ model = models.create(path=path, type="notebook") if content: tiledb_uri = paths.tiledb_uri_from_path(path) try: info = tiledb.cloud.array.info(tiledb_uri) model["last_modified"] = models.to_utc(info.last_accessed) if "write" not in info.allowed_actions: model["writable"] = False arr = caching.Array.from_cache(tiledb_uri) nb_content = [] file_content = arr.read() if file_content is not None: nb_content = nbformat.reads( file_content["contents"].tostring().decode("utf-8", "backslashreplace"), as_version=NBFORMAT_VERSION, ) self.mark_trusted_cells(nb_content, path) model["format"] = "json" model["content"] = nb_content self.validate_notebook_model(model) except tiledb.cloud.tiledb_cloud_error.TileDBCloudError as e: raise tornado.web.HTTPError(400, "Error fetching notebook info: {}".format(str(e))) except tiledb.TileDBError as e: raise tornado.web.HTTPError( 400, "Error fetching notebook: {}".format(str(e)), ) except Exception as e: raise tornado.web.HTTPError( 400, "Error fetching notebook: {}".format(str(e)), ) return model def _file_from_array(self, path, content=True, format=None): """ Build a notebook model from database record. 
""" model = models.create(path=path, type="file") if content: tiledb_uri = paths.tiledb_uri_from_path(path) try: info = tiledb.cloud.array.info(tiledb_uri) model["last_modified"] = models.to_utc(info.last_accessed) if "write" not in info.allowed_actions: model["writable"] = False arr = caching.Array.from_cache(tiledb_uri) # Use cached meta, only file_size is ever updated meta = arr.cached_meta # Get metadata information if "mimetype" in meta: model["mimetype"] = meta["mimetype"] if "format" in meta: model["format"] = meta["format"] else: model["format"] = format if "type" in meta: model["type"] = meta["type"] file_content = arr.read() if file_content is not None: nb_content = file_content["contents"] model["content"] = nb_content else: model["content"] = [] if ( "type" in meta and meta["type"] == "notebook" and file_content is not None ): nb_content = nbformat.reads( file_content["contents"].tostring().decode("utf-8", "backslashreplace"), as_version=NBFORMAT_VERSION, ) self.mark_trusted_cells(nb_content, path) model["format"] = "json" model["content"] = nb_content self.validate_notebook_model(model) except tiledb.cloud.tiledb_cloud_error.TileDBCloudError as e: raise tornado.web.HTTPError(500, "Error fetching file info: {}".format(str(e))) except tiledb.TileDBError as e: raise tornado.web.HTTPError( 500, "Error fetching file: {}".format(str(e)), ) except Exception as e: raise tornado.web.HTTPError( 400, "Error fetching file: {}".format(str(e)), ) return model def guess_type(self, path, allow_directory=True): """ Guess the type of a file. Taken from https://github.com/danielfrg/s3contents/blob/master/s3contents/genericmanager.py If allow_directory is False, don't consider the possibility that the file is a directory. Parameters ---------- obj: s3.Object or string """ path = paths.strip(path) if paths.is_remote(path): if paths.is_remote_dir(path): return "directory" else: if path.endswith(paths.NOTEBOOK_EXT): path = path[: -1 * len(paths.NOTEBOOK_EXT)] try: tiledb_uri = paths.tiledb_uri_from_path(path) return arrays.fetch_type(tiledb_uri) except Exception: return "directory" if path.endswith(".ipynb"): return "notebook" elif allow_directory and self.dir_exists(path): return "directory" else: return "file" class TileDBCheckpoints(filecheckpoints.GenericFileCheckpoints, checkpoints.Checkpoints): """ A wrapper of a class which will in the future support checkpoints by time traveling. It inherits from GenericFileCheckpoints for local notebooks """ # Immutable version of the only model we return ourselves. 
_BASE_MODEL = ( ("id", "checkpoints-not-supported"), ("last_modified", "models._DUMMY_DATE"), ) def create_file_checkpoint(self, content, format, path): """ -> checkpoint model""" if not paths.is_remote(path): return super().create_file_checkpoint(content, format, path) return dict(self._BASE_MODEL) def create_notebook_checkpoint(self, nb, path): """ -> checkpoint model""" if not paths.is_remote(path): return super().create_notebook_checkpoint(nb, path) return dict(self._BASE_MODEL) def get_file_checkpoint(self, checkpoint_id, path): """ -> {'type': 'file', 'content': <str>, 'format': {'text', 'base64'}}""" if not paths.is_remote(path): return super().get_file_checkpoint(checkpoint_id, path) def get_notebook_checkpoint(self, checkpoint_id, path): """ -> {'type': 'notebook', 'content': <output of nbformat.read>}""" if not paths.is_remote(path): return super().get_notebook_checkpoint(checkpoint_id, path) def delete_checkpoint(self, checkpoint_id, path): """deletes a checkpoint for a file""" if not paths.is_remote(path): return super().delete_checkpoint(checkpoint_id, path) def list_checkpoints(self, path): """returns a list of checkpoint models for a given file, default just does one per file """ path = paths.strip(path) if not paths.is_remote(path): return super().list_checkpoints(path) return [] def rename_checkpoint(self, checkpoint_id, old_path, new_path): """renames checkpoint from old path to new path""" if not paths.is_remote(old_path): return super().rename_checkpoint(checkpoint_id, old_path, new_path) class TileDBCloudContentsManager(TileDBContents, filemanager.FileContentsManager, traitlets.HasTraits): # This makes the checkpoints get saved on this directory root_dir = traitlets.Unicode("./", config=True) def _checkpoints_class_default(self): """ Set checkpoint class to custom checkpoint class :return: """ return TileDBCheckpoints def _directory_model_from_path(self, path, *, content: bool = False): # if self.vfs.is_dir(path): # lstat = self.fs.lstat(path) # if "ST_MTIME" in lstat and lstat["ST_MTIME"]: if not paths.is_remote(path) and not paths.is_remote_dir(path): return self._dir_model(path, content=content) if path == "cloud": cloud = models.create(path="cloud", type="directory") if content: cloud["format"] = "json" cloud["content"] = listings.all_notebooks() cloud["last_modified"] = max( models.to_utc(cat["last_modified"]) for cat in cloud["content"]) return cloud category, namespace = paths.category_namespace(path) if category: if namespace: return listings.namespace(category, namespace, content=content) return listings.category(category, content=content) return models.create( path=path, type="directory", last_modified=models.DUMMY_DATE, created=models.DUMMY_DATE, ) def get(self, path, content=True, type=None, format=None): """Get a file or directory model.""" path = paths.strip(path) try: if not paths.is_remote(path): model = super().get(path, content, type, format) if path == "" and content and _is_cloud_enabled(): cloud_content = listings.all_notebooks() model["content"].append( models.create( path="cloud", type="directory", content=content, format="json", last_modified=max( models.to_utc(cat["last_modified"]) for cat in cloud_content ), ) ) return model if path.endswith(paths.NOTEBOOK_EXT): path = path[: -1 * len(paths.NOTEBOOK_EXT)] if type is None: if paths.is_remote_dir(path): type = "directory" else: type = self.guess_type(path, allow_directory=True) if type == "notebook": return self._notebook_from_array(path, content=content) elif type == "file": return 
self._file_from_array(path, content=content, format=format) elif type == "directory": return self._directory_model_from_path(path, content=content) # if model is not None: # model. except Exception as e: raise tornado.web.HTTPError( 500, "Error opening notebook {}: {}".format(path, str(e)) ) def save(self, model, path=""): """ Save a file or directory model to path. Should return the saved model with no content. Save implementations should call self.run_pre_save_hook(model=model, path=path) prior to writing any data. """ path = paths.strip(path) try: model_type = model["type"] except KeyError: raise tornado.web.HTTPError(400, "No file type provided") if "content" not in model and model_type != "directory": raise tornado.web.HTTPError(400, u"No file content provided") if model_type not in ("directory", "file", "notebook"): raise tornado.web.HTTPError(400, "Unhandled contents type: %s" % model["type"]) if not paths.is_remote(path): if model.get("tiledb:is_new"): # Since we don't try to increment the filename in self.new(), # do it here for newly-created files. dir, name = posixpath.split(path) incremented = self.increment_filename(name, dir, insert='-') path = paths.join(dir, incremented) return super().save(model, path) if path.endswith(paths.NOTEBOOK_EXT): path = path[:-len(paths.NOTEBOOK_EXT)] if model["type"] == "file": try: _try_convert_file_to_notebook(model) except ValueError as ve: raise tornado.web.HTTPError(400, f"Cannot parse Jupyter notebook: {ve}") self.run_pre_save_hook(model=model, path=path) validation_message = None try: if model["type"] == "notebook": final_name, validation_message = self._save_notebook_tiledb(path, model) if final_name is not None: parts = paths.split(path) parts_length = len(parts) parts[parts_length - 1] = final_name path = paths.join(*parts) elif model["type"] == "file": raise tornado.web.HTTPError(400, "Only .ipynb files may be created in the cloud.") else: if paths.is_remote(path): raise tornado.web.HTTPError( 400, "Trying to create unsupported type: %s in cloud" % model["type"], ) # else: # return super().save(model, path) # validation_message = self.__create_directory_and_group(path) except Exception as e: self.log.error("Error while saving file: %s %s", path, e, exc_info=True) raise model = self.get(path, type=model["type"], content=False) if validation_message is not None: model["message"] = validation_message return model def delete_file(self, path): """Delete the file or directory at path.""" path = paths.strip(path) if paths.is_remote(path): if path.endswith(paths.NOTEBOOK_EXT): path = path[: -1 * len(paths.NOTEBOOK_EXT)] tiledb_uri = paths.tiledb_uri_from_path(path) try: caching.Array.purge(tiledb_uri) return tiledb.cloud.array.delete_array( tiledb_uri, "application/x-ipynb+json" ) except tiledb.cloud.tiledb_cloud_error.TileDBCloudError as e: raise tornado.web.HTTPError( 500, f"Error deregistering {tiledb_uri!r}: {e}" ) except tiledb.TileDBError as e: raise tornado.web.HTTPError( 500, str(e), ) else: return super().delete_file(path) def rename_file(self, old_path, new_path): """Rename a file or directory.""" old_path = paths.strip(old_path) new_path = paths.strip(new_path) if paths.is_remote(old_path): if old_path.endswith(paths.NOTEBOOK_EXT): old_path = old_path[: -1 * len(paths.NOTEBOOK_EXT)] tiledb_uri = paths.tiledb_uri_from_path(old_path) parts_new = paths.split(new_path) parts_new_length = len(parts_new) array_name_new = parts_new[parts_new_length - 1] try: caching.Array.purge(tiledb_uri) return tiledb.cloud.notebook.rename_notebook( 
uri=tiledb_uri, notebook_name=array_name_new ) except tiledb.cloud.tiledb_cloud_error.TileDBCloudError as e: raise tornado.web.HTTPError(500, f"Error renaming {tiledb_uri!r}: {e}") except tiledb.TileDBError as e: raise tornado.web.HTTPError( 500, str(e), ) else: return super().rename_file(old_path, new_path) # ContentsManager API part 2: methods that have usable default # implementations, but can be overridden in subclasses. def dir_exists(self, path): """Does a directory exist at the given path? Like os.path.isdir Override this method in subclasses. Parameters ---------- path : string The path to check Returns ------- exists : bool Whether the path does indeed exist. """ path = paths.strip(path) if paths.is_remote_dir(path): return True return super().dir_exists(path) def is_hidden(self, path): """Is path a hidden directory or file? Parameters ---------- path : string The path to check. This is an API path (`/` separated, relative to root dir). Returns ------- hidden : bool Whether the path is hidden. """ path = paths.strip(path) if paths.is_remote(path): return False return super().is_hidden(path) def file_exists(self, path=""): """Does a file exist at the given path? Like os.path.isfile Override this method in subclasses. Parameters ---------- path : string The API path of a file to check for. Returns ------- exists : bool Whether the file exists. """ path = paths.strip(path) if paths.is_remote(path): if path.endswith(paths.NOTEBOOK_EXT): path = path[: -1 * len(paths.NOTEBOOK_EXT)] return arrays.exists(path) return super().file_exists(path) def new_untitled(self, path="", type="", ext="", options=""): """Create a new untitled file or directory in path path must be a directory File extension can be specified. Use `new` to create files with a fully specified path (including filename). options is a json string passed by the TileDB Prompt User Contents Jupyterlab notebook extension for additional notebook creation options """ path = paths.strip(path) if not self.dir_exists(path): raise tornado.web.HTTPError(404, "No such directory: %s" % path) model = {} if type: model["type"] = type if ext == ".ipynb": model.setdefault("type", "notebook") else: model.setdefault("type", "file") if options: try: options_json = json.loads(options) model["name"] = options_json["name"] model["tiledb:s3_prefix"] = options_json["s3_prefix"] model["tiledb:s3_credentials"] = options_json["s3_credentials"] except Exception as e: raise tornado.web.HTTPError( 400, u"Could not read TileDB user options: {}".format(e) ) if model["type"] == "directory": prefix = self.untitled_directory elif model["type"] == "notebook": prefix = model.get("name", self.untitled_notebook) ext = ".ipynb" elif model["type"] == "file": prefix = self.untitled_file else: raise tornado.web.HTTPError(400, "Unexpected model type: %r" % model["type"]) # We don't do the "increment" step that the default ContentsManager does # because we generate a random suffix or increment the filename in # _save_notebook_tiledb. full_path = paths.join(path, prefix + ext) return self.new(model, full_path) def new(self, model=None, path=""): if model is None: model = {} model["tiledb:is_new"] = True return super().new(model, path) def copy(self, from_path, to_path=None): from_path = paths.strip(from_path) model = self.get(from_path) model.pop('path', None) if not to_path: # A missing to_path implies that we should create a duplicate # in the same location (with a new name). 
to_path = from_path else: to_path = paths.strip(to_path) if self.dir_exists(to_path): # to_path may be a directory, in which case we copy # the model to an identically-named entry in that directory. from_parts = paths.split(from_path) from_filename = from_parts[-1] to_path = paths.join(to_path, from_filename) # As in new_untitled, we don't increment our filenames because they are # dedup'd in _save_notebook_tiledb. return self.new(model, to_path) def _try_convert_file_to_notebook(model): """Attempts to convert the passed ``model`` from a "file" to a "notebook". Modifies the passed-in model in-place. If the model cannot be converted to a notebook, the model is guaranteed to be unmodified. Raises a ValueError if there is any error converting the notebook. """ try: fmt = model["format"] raw_content = model["content"] except KeyError as ke: raise ValueError(f"missing model key {ke.args[0]}") if fmt == "text": content = raw_content elif fmt == "base64": content = base64.b64decode(raw_content).decode("utf-8") else: raise ValueError(f"unknown content format {fmt!r}") nb = nbformat.reads(content, NBFORMAT_VERSION) model.update( format="json", mimetype=None, content=nb, ) def _is_cloud_enabled(): """ Check if a user is allowed to access notebook sharing """ try: profile = tiledb.cloud.client.user_profile() if "notebook_sharing" in set(profile.enabled_features): return True except tiledb.cloud.tiledb_cloud_error.TileDBCloudError as e: raise tornado.web.HTTPError( 400, "Error fetching user default s3 path for new notebooks {}".format(str(e)), ) return False
{"hexsha": "33b9b26cadbaa18bb6fdbf0f666d4d77e2f50bbf", "size": 23572, "ext": "py", "lang": "Python", "max_stars_repo_path": "tiledbcontents/tiledbcontents.py", "max_stars_repo_name": "TileDB-Inc/TileDB-Cloud-Jupyter-Contents", "max_stars_repo_head_hexsha": "4161772f27befd2ad27c76297266f38794abc3b4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-04-20T00:47:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-19T09:05:15.000Z", "max_issues_repo_path": "tiledbcontents/tiledbcontents.py", "max_issues_repo_name": "TileDB-Inc/TileDB-Cloud-Jupyter-Contents", "max_issues_repo_head_hexsha": "4161772f27befd2ad27c76297266f38794abc3b4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 18, "max_issues_repo_issues_event_min_datetime": "2020-10-06T15:05:07.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-07T15:49:00.000Z", "max_forks_repo_path": "tiledbcontents/tiledbcontents.py", "max_forks_repo_name": "TileDB-Inc/TileDB-Cloud-Jupyter-Contents", "max_forks_repo_head_hexsha": "4161772f27befd2ad27c76297266f38794abc3b4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-04T18:54:58.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-31T00:17:24.000Z", "avg_line_length": 36.602484472, "max_line_length": 145, "alphanum_fraction": 0.5591379603, "include": true, "reason": "import numpy", "num_tokens": 4747}
import ctypes import os import platform import cv2 import json import math import numpy as np dir_path = os.path.dirname(os.path.realpath(__file__)) if platform.system() == "Windows": path = os.path.join(dir_path, "moildev.dll") shared_lib_path = path else: path = os.path.join(dir_path, "moildev.so") shared_lib_path = path try: lib = ctypes.cdll.LoadLibrary(shared_lib_path) except Exception as e: print(e) class Moildev(object): def __init__(self, parameter_path, camera_type): """ This is the initial configuration that you need provide the parameter. The camera parameter is the result from calibration camera by MOIL laboratory. before the successive functions can work correctly,configuration is necessary in the beginning of program. Args: parameter_path (): camera_type (): for more detail, please reference https://github.com/MoilOrg/moildev """ """ This is the initial configuration that you need provide the parameter. The camera parameter is the result from calibration camera by MOIL laboratory. before the successive functions can work correctly,configuration is necessary in the beginning of program. Args: . camera_name - A string to describe this camera . sensor_width - Camera sensor width (cm) . sensor_height - Camera Sensor Height (cm) . Icx - image center X coordinate(pixel) . Icy - image center Y coordinate(pixel) . ratio : Sensor pixel aspect ratio. . imageWidth : Input image width . imageHeight : Input image height . parameter0 .. parameter5 : calibration's parameters . calibrationRatio : input image with/ calibrationRatio image width for more detail, please reference https://github.com/MoilOrg/moildev """ super(Moildev, self).__init__() self.__PI = 3.1415926 self.__alphaToRho_Table = [] self.__rhoToAlpha_Table = [] self.__moildev = None if parameter_path is None: pass else: with open(parameter_path) as f: data = json.load(f) if camera_type in data.keys(): self.__camera = data[camera_type]["cameraName"] self.__sensor_width = data[camera_type]['cameraSensorWidth'] self.__sensor_height = data[camera_type]['cameraSensorHeight'] self.__Icx = data[camera_type]['iCx'] self.__Icy = data[camera_type]['iCy'] self.__ratio = data[camera_type]['ratio'] self.__imageWidth = data[camera_type]['imageWidth'] self.__imageHeight = data[camera_type]['imageHeight'] self.__calibrationRatio = data[camera_type]['calibrationRatio'] self.__parameter0 = data[camera_type]['parameter0'] self.__parameter1 = data[camera_type]['parameter1'] self.__parameter2 = data[camera_type]['parameter2'] self.__parameter3 = data[camera_type]['parameter3'] self.__parameter4 = data[camera_type]['parameter4'] self.__parameter5 = data[camera_type]['parameter5'] self.__import_moildev() self.__initAlphaRho_Table() else: print( "Error 1: camera parameter not available, please check your camera type") def __initAlphaRho_Table(self): """ Create and calculate a list for initial alpha proportionate with rho image (height image). Returns: Initial alpha proportionate with rho image table. 
""" for i in range(1800): alpha = i / 10 * 3.1415926 / 180 self.__alphaToRho_Table.append( (self.__parameter0 * alpha * alpha * alpha * alpha * alpha * alpha + self.__parameter1 * alpha * alpha * alpha * alpha * alpha + self.__parameter2 * alpha * alpha * alpha * alpha + self.__parameter3 * alpha * alpha * alpha + self.__parameter4 * alpha * alpha + self.__parameter5 * alpha) * self.__calibrationRatio) i += 1 i = 0 index = 0 while i < 1800: while index < self.__alphaToRho_Table[i]: self.__rhoToAlpha_Table.append(i) index += 1 i += 1 while index < 3600: self.__rhoToAlpha_Table.append(i) index += 1 def __import_moildev(self): """ Create Moildev instance from Moildev SDK share object library. Returns: """ lib.moildev_new.argtypes = [ ctypes.c_char_p, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double] lib.moildev_new.restype = ctypes.c_void_p self.__moildev = lib.moildev_new( self.__camera.encode('utf-8'), self.__sensor_width, self.__sensor_height, self.__Icx, self.__Icy, self.__ratio, self.__imageWidth, self.__imageHeight, self.__parameter0, self.__parameter1, self.__parameter2, self.__parameter3, self.__parameter4, self.__parameter5, self.__calibrationRatio) self.__map_x = np.zeros( (self.__imageHeight, self.__imageWidth), dtype=np.float32) self.__map_y = np.zeros( (self.__imageHeight, self.__imageWidth), dtype=np.float32) self.__res = self.__create_map_result_image() def __create_map_result_image(self): """ Create Maps image from zeroes matrix for result image Returns: Zeroes matrix. """ size = self.__imageHeight, self.__imageWidth, 3 return np.zeros(size, dtype=np.uint8) def test(self): """ Test to link with Moildev share library Returns: Hello from C++ """ if self.__moildev is not None: if platform.system() == "Windows": lib.test.argtypes = [ctypes.c_void_p] lib.test.restype = ctypes.c_char_p print((lib.test(self.__moildev)).decode()) else: lib.test.argtypes = [ctypes.c_void_p] lib.test.restype = ctypes.c_void_p return lib.test(self.__moildev) else: return None def clean(self): """ clean the memory of pointer. Returns: None """ lib.cleanup_moildev.argtypes = [ctypes.c_void_p] lib.cleanup_moildev.restype = ctypes.c_void_p return lib.cleanup_moildev(self.__moildev) def getIcx(self): """ Get center image from width image (x axis). Returns: """ return self.__Icx def getIcy(self): """ Get center image from height image (y axis). Returns: """ return self.__Icy def getImageWidth(self): """ Get image width. Returns: """ return self.__imageWidth def getImageHeight(self): """Get image height. :return: image height :rtype: int """ return self.__imageHeight def getCalibrationRatio(self): """ Returns: """ return self.__calibrationRatio def getAnypointMaps(self, alpha, beta, zoom, mode=1): """The purpose is to generate a pair of X-Y Maps for the specified alpha, beta and zoom parameters, the result X-Y Maps can be used later to remap the original fisheye image to the target angle image. 
Args: :param alpha: alpha :type alpha: float :param beta: beta :type beta: float :param zoom: decimal zoom factor, normally 1..12 :type zoom: int :param mode: selection anypoint mode(1 or 2) :type mode: int Return: :return: map_x, map_y :rtype: float Examples: please reference: https://github.com/MoilOrg/moildev """ if self.__moildev is not None: if mode == 1: if beta < 0: beta = beta + 360 if alpha < -90 or alpha > 90 or beta < 0 or beta > 360: alpha = 0 beta = 0 else: alpha = -90 if alpha < -90 else alpha alpha = 90 if alpha > 90 else alpha beta = 0 if beta < 0 else beta beta = 360 if beta > 360 else beta lib.moil_anypointM.argtypes = [ ctypes.c_void_p, ctypes.POINTER( ctypes.c_float), ctypes.POINTER( ctypes.c_float), ctypes.c_int, ctypes.c_int, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double] lib.moil_anypointM.restype = None mapX = self.__map_x.ctypes.data_as( ctypes.POINTER(ctypes.c_float)) mapY = self.__map_y.ctypes.data_as( ctypes.POINTER(ctypes.c_float)) lib.moil_anypointM( self.__moildev, mapX, mapY, self.__imageWidth, self.__imageHeight, alpha, beta, zoom, self.__ratio) else: if alpha < - 90 or alpha > 90 or beta < -90 or beta > 90: alpha = 0 beta = 0 else: alpha = -90 if alpha < -90 else alpha alpha = 90 if alpha > 90 else alpha beta = -90 if beta < -90 else beta beta = 90 if beta > 90 else beta lib.moil_anypointM2.argtypes = [ ctypes.c_void_p, ctypes.POINTER( ctypes.c_float), ctypes.POINTER( ctypes.c_float), ctypes.c_int, ctypes.c_int, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double] lib.moil_anypointM2.restype = None mapX = self.__map_x.ctypes.data_as( ctypes.POINTER(ctypes.c_float)) mapY = self.__map_y.ctypes.data_as( ctypes.POINTER(ctypes.c_float)) lib.moil_anypointM2( self.__moildev, mapX, mapY, self.__imageWidth, self.__imageHeight, alpha, beta, zoom, self.__ratio) return self.__map_x, self.__map_y else: return None, None def getPanoramaMaps(self, alpha_min, alpha_max): """ To generate a pair of X-Y Maps for alpha within 0..alpha_max degree, the result X-Y Maps can be used later to generate a panorama image from the original fisheye image.. Args: :param alpha_min: alpha min :type alpha_min: float :param alpha_max: alpha max :type alpha_max: float Return: :return: pair maps x-y :rtype: array Examples: please reference: https://github.com/MoilOrg/moildev """ lib.moil_panoramaX.argtypes = [ ctypes.c_void_p, ctypes.POINTER( ctypes.c_float), ctypes.POINTER( ctypes.c_float), ctypes.c_int, ctypes.c_int, ctypes.c_double, ctypes.c_double, ctypes.c_double] lib.moil_panoramaX.restype = None mapX = self.__map_x.ctypes.data_as(ctypes.POINTER(ctypes.c_float)) mapY = self.__map_y.ctypes.data_as(ctypes.POINTER(ctypes.c_float)) lib.moil_panoramaX( self.__moildev, mapX, mapY, self.__imageWidth, self.__imageHeight, self.__ratio, alpha_max, alpha_min) return self.__map_x, self.__map_y def anypoint(self, image, alpha, beta, zoom, mode=1): """Generate anypoint widget_controller.for mode 1, the result rotation is betaOffset degree rotation around the Z-axis(roll) after alphaOffset degree rotation around the X-axis(pitch). for mode 2, The result rotation is thetaY degree rotation around the Y-axis(yaw) after thetaX degree rotation around the X-axis(pitch). 
Args: :param image: source image :type image: array :param alpha: alpha :type alpha: float :param beta: beta :type beta: float :param zoom: zoom :type zoom: int :param mode: mode anypoint widget_controller :type mode: int Return: :return: anypoint widget_controller :rtype: array Examples: please reference: https://github.com/MoilOrg/moildev """ map_x, map_y = self.getAnypointMaps(alpha, beta, zoom, mode) image = cv2.remap(image, map_x, map_y, cv2.INTER_CUBIC) return image def panorama(self, image, alpha_min, alpha_max): """The panorama image centered at the 3D direction with alpha = iC_alpha_degree and beta = iC_beta_degree Args: :param image: Original image :type image: array :param alpha_min: min of alpha. by default it 10 degree. :type alpha_min: float :param alpha_max: max of alpha. The recommended value is half of camera FOV. For example, use 90 for a 180 degree fisheye images and use 110 for a 220 degree fisheye images. :type alpha_max: float Returns: :return: panorama image :rtype: array Examples: please reference: https://github.com/MoilOrg/moildev """ if alpha_min < 10: print( "Oops! That was no valid number on alpha_min. the value must equal or more than 10") return None else: map_x, map_y = self.getPanoramaMaps(alpha_min, alpha_max) image = cv2.remap(image, map_x, map_y, cv2.INTER_CUBIC) return image def __PanoramaM_Rt(self, alpha_max, iC_alpha_degree, iC_beta_degree): """ To generate a pair of X-Y Maps for alpha within 0..alpha_max degree, the result X-Y Maps can be used later to generate a panorama image from the original fisheye image. The panorama image centered at the 3D direction with alpha = iC_alpha_degree and beta = iC_beta_degree. Args: . mapX : memory pointer of result X-Map . mapY : memory pointer of result Y-Map . w : width of the Map (both mapX and mapY) . h : height of the Map (both mapX and mapY) . magnification : input imageWidth / sensor_width, m_ratio is normally equal to 1. . alpha_max : max of alpha. The recommended value is half of camera FOV. For example, use 90 for a 180 degree fisheye images and use 110 for a 220 degree fisheye images. . iC_alpha_degree : alpha angle of panorama center. . iC_beta_degree : beta angle of panorama center. Returns: New mapX and mapY. Examples: please reference: https://github.com/MoilOrg/moildev """ lib.moil_panoramaM_Rt.argtypes = [ ctypes.c_void_p, ctypes.POINTER( ctypes.c_float), ctypes.POINTER( ctypes.c_float), ctypes.c_int, ctypes.c_int, ctypes.c_double, ctypes.c_double, ctypes.c_double, ctypes.c_double] lib.moil_panoramaM_Rt.restype = None mapX = self.__map_x.ctypes.data_as(ctypes.POINTER(ctypes.c_float)) mapY = self.__map_y.ctypes.data_as(ctypes.POINTER(ctypes.c_float)) lib.moil_panoramaM_Rt( self.__moildev, mapX, mapY, self.__imageWidth, self.__imageHeight, self.__ratio, alpha_max, iC_alpha_degree, iC_beta_degree) return self.__map_x, self.__map_y def reverseImage(self, image, alpha_max, alpha, beta): """To generate the image reverse image from panorama that can change the focus direction from the original images. The panorama reverse image centered at the 3D direction with alpha_max = max of alpha and beta = iC_beta_degree. 
Args: :param image: source image :type image: array :param alpha_max: alpha max :type alpha_max: float :param alpha: alpha :type alpha: float :param beta: beta :type beta: float Return: :return: reverse widget_controller image :rtype: array Examples: please reference: https://github.com/MoilOrg/moildev """ map_x, map_y = self.__PanoramaM_Rt(alpha_max, alpha, beta) panorama_image = cv2.remap(image, map_x, map_y, cv2.INTER_CUBIC) if platform.system() == "Windows": print("revPanorama available at Moildev library version 1.3.0 \n" "make sure you have install OpenCV and Visual Studio code") else: lib.moil_revPanorama.argtypes = [ ctypes.c_void_p, ctypes.POINTER( ctypes.c_void_p), ctypes.POINTER( ctypes.c_void_p), ctypes.c_int, ctypes.c_int, ctypes.c_double, ctypes.c_double, ] lib.moil_revPanorama.restype = None panoramaImage = panorama_image.ctypes.data_as( ctypes.POINTER(ctypes.c_void_p)) res = self.__res.ctypes.data_as(ctypes.POINTER(ctypes.c_void_p)) lib.moil_revPanorama( self.__moildev, panoramaImage, res, self.__imageWidth, self.__imageHeight, alpha_max, beta) return self.__res def getAlphaFromRho(self, rho): """Get the alpha from rho image. Args: :param rho: rho image :type rho: int Return: :return: alpha :rtype: float Examples: please reference: https://github.com/MoilOrg/moildev """ if rho >= 0: return self.__rhoToAlpha_Table[rho] / 10 else: return -self.__rhoToAlpha_Table[-rho] / 10 def getRhoFromAlpha(self, alpha): """Get rho image from alpha given. Args: :param alpha:alpha :type alpha: float Return: :return: rho image :rtype: int Examples: please reference: https://github.com/MoilOrg/moildev """ return self.__alphaToRho_Table[round(alpha * 10)] def getAlphaBeta(self, coordinateX, coordinateY, mode=1): """Get the alpha beta from specific coordinate image. Args: :param mode: the anypoint mode. :type mode: int :param coordinateX: the coordinate point X axis. :type coordinateX: int :param coordinateY: the coordinate point Y axis. :type coordinateY: int Return: :return: alpha, beta :rtype: float Examples: please reference: https://github.com/MoilOrg/moildev """ if self.__moildev is not None: delta_x = round(coordinateX - self.__imageWidth * 0.5) delta_y = round(- (coordinateY - self.__imageHeight * 0.5)) if mode == 1: r = round( math.sqrt( math.pow( delta_x, 2) + math.pow( delta_y, 2))) alpha = self.getAlphaFromRho(r) if delta_x == 0: angle = 0 else: angle = (math.atan2(delta_y, delta_x) * 180) / self.__PI beta = 90 - angle else: alpha = self.getAlphaFromRho(delta_y) beta = self.getAlphaFromRho(delta_x) return alpha, beta else: return None, None
{"hexsha": "d5ee7168bfa76e1781d3fa04d2b1b90f41821bf4", "size": 22880, "ext": "py", "lang": "Python", "max_stars_repo_path": "src/Moildev/Moildev.py", "max_stars_repo_name": "aji-ptn/MoilApp", "max_stars_repo_head_hexsha": "9742a28074add23fda1afa534f25a1b8bea68c93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/Moildev/Moildev.py", "max_issues_repo_name": "aji-ptn/MoilApp", "max_issues_repo_head_hexsha": "9742a28074add23fda1afa534f25a1b8bea68c93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/Moildev/Moildev.py", "max_forks_repo_name": "aji-ptn/MoilApp", "max_forks_repo_head_hexsha": "9742a28074add23fda1afa534f25a1b8bea68c93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.6142208775, "max_line_length": 120, "alphanum_fraction": 0.5082604895, "include": true, "reason": "import numpy", "num_tokens": 4726}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas_datareader import data
import pymc3 as pm

np.random.seed(0)


def main():
    # load data
    returns = data.get_data_google('SPY', start='2008-5-1', end='2009-12-1')['Close'].pct_change()
    returns.plot()
    plt.ylabel('daily returns in %');

    with pm.Model() as sp500_model:
        nu = pm.Exponential('nu', 1./10, testval=5.0)
        sigma = pm.Exponential('sigma', 1./0.02, testval=0.1)
        s = pm.GaussianRandomWalk('s', sigma**-2, shape=len(returns))
        r = pm.StudentT('r', nu, lam=pm.math.exp(-2*s), observed=returns)

    with sp500_model:
        trace = pm.sample(2000)

    pm.traceplot(trace, [nu, sigma]);
    plt.show()

    plt.figure()
    returns.plot()
    plt.plot(returns.index, np.exp(trace['s', ::5].T), 'r', alpha=.03)
    plt.legend(['S&P500', 'stochastic volatility process'])
    plt.show()


if __name__ == "__main__":
    main()
{"hexsha": "5c0d92c128205870ee5175ad61eeb6f6c537ab1b", "size": 1021, "ext": "py", "lang": "Python", "max_stars_repo_path": "stochastic_volatility.py", "max_stars_repo_name": "vsmolyakov/fin", "max_stars_repo_head_hexsha": "901f3c5a9a17e65913fa5a00fc5bf7c3b9de6a5d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 31, "max_stars_repo_stars_event_min_datetime": "2017-07-08T13:34:45.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T14:54:58.000Z", "max_issues_repo_path": "stochastic_volatility.py", "max_issues_repo_name": "vsmolyakov/fin", "max_issues_repo_head_hexsha": "901f3c5a9a17e65913fa5a00fc5bf7c3b9de6a5d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "stochastic_volatility.py", "max_forks_repo_name": "vsmolyakov/fin", "max_forks_repo_head_hexsha": "901f3c5a9a17e65913fa5a00fc5bf7c3b9de6a5d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2017-02-01T07:47:37.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T11:14:13.000Z", "avg_line_length": 24.9024390244, "max_line_length": 98, "alphanum_fraction": 0.5857002938, "include": true, "reason": "import numpy,import pymc3", "num_tokens": 290}
# Gamma is a discrete RandomVariable that represents
# the instantaneous values of a model parameter
# to be embedded into continuous space
#
# parameters:
#
#   stencil : list of values that the parameter takes
#   alphas  : probabilities of taking each value.
#
# For example, stencil = [2, 3] and alphas = [0.2, 0.8]
# means the random variable takes the value 2 with prob 0.2
# and the value 3 with prob 0.8.

import numpy as np
import math
import random


class Gamma():

    def __init__(self, stencil, alphas):
        self.stencil = stencil
        self.alphas = alphas

        assert(len(stencil) > 0)
        assert(len(alphas) == len(stencil))
        assert(sum(alphas) <= 1.0 + 1e-6)  # all probabilities should sum to 1

        # instantaneous and mean values
        self.value = self.stencil[0]
        self.mean_value = sum([(stencil[i] * alphas[i]) for i in range(len(stencil))])

    # update and return the instantaneous value:
    def get(self):
        v = np.random.choice(self.stencil, p=self.alphas)
        self.value = v
        return v


def Test():
    gamma = Gamma([2, 3, 4], [0.4, 0.4, 0.2])
    for i in range(20):
        print(gamma.get())
    print("Mean=", gamma.mean_value)
{"hexsha": "1a3f02f06051179e57c0a6fe730b8ce2e37bc952", "size": 1191, "ext": "py", "lang": "Python", "max_stars_repo_path": "2_SimPy_models/Gamma.py", "max_stars_repo_name": "NehaKaranjkar/Embedding", "max_stars_repo_head_hexsha": "0f6ed608819cdf680a8db9beae939bf8617797a9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2_SimPy_models/Gamma.py", "max_issues_repo_name": "NehaKaranjkar/Embedding", "max_issues_repo_head_hexsha": "0f6ed608819cdf680a8db9beae939bf8617797a9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2_SimPy_models/Gamma.py", "max_forks_repo_name": "NehaKaranjkar/Embedding", "max_forks_repo_head_hexsha": "0f6ed608819cdf680a8db9beae939bf8617797a9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.4666666667, "max_line_length": 82, "alphanum_fraction": 0.6481947943, "include": true, "reason": "import numpy", "num_tokens": 318}
from __future__ import print_function
import sys
import argparse
import os
import torch
import time
import imp
import numpy as np
import datetime
from torch import nn, optim
from PIL import Image
from torch.nn import functional as F
from torch.utils.data import DataLoader
from utils.AverageMeter import AverageMeter
from utils.logger import logger
import utils.utils as utils
from model_1 import *
from dataloader.dataset import *

parser = argparse.ArgumentParser(description='jingwei')
parser.add_argument("--exp", type=str, default="", help="experiment")
parser.add_argument("--num_workers", type=int, default=16, help="num_workers")
parser.add_argument("--checkpoint", type=int, default=0, help="load checkpoint")
parser.add_argument('--gpu', type=str, default="0", help='choose GPU')
args = parser.parse_args()

torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
np.random.seed(0)

exp_config = os.path.join(".", "config", args.exp + ".py")
exp_dir = os.path.join("../data/jingwei", args.exp)
exp_log_dir = os.path.join(exp_dir, "log")
if not os.path.exists(exp_log_dir):
    os.makedirs(exp_log_dir)

# load configuration parameters
config = imp.load_source("", exp_config).config

# tensorboard && logger
now_str = datetime.datetime.now().__str__().replace(' ', '_')
logger_path = os.path.join(exp_log_dir, now_str + ".log")
logger = logger(logger_path).get_logger()

os.environ["CUDA_VISIBLE_DEVICES"] = '1, 2, 3'

train_config = config['train_config']
logger.info('preparing data......')
train_dataset = jingwei_train_dataset(
    csv_root=train_config['csv_root'],
)
trainloader = DataLoader(
    dataset=train_dataset,
    batch_size=train_config['batch_size'],
    shuffle=True,
    num_workers=args.num_workers,
    drop_last=True
)
logger.info('data done!')

net_opt = config['net']
net = DeepLabV3_4(net_opt)
net = net.cuda()
net = nn.DataParallel(net, device_ids=[0, 1, 2])

optim_opt = config["optim"]
optimizer = optim.SGD(
    net.parameters(),
    lr=optim_opt["lr"],
    momentum=optim_opt["momentum"],
    nesterov=optim_opt["nesterov"],
    weight_decay=optim_opt['weight_decay'])

if args.checkpoint > 0:
    checkpoint_name = args.exp + "_epoch" + str(args.checkpoint)
    checkpoint_path = os.path.join(exp_dir, checkpoint_name)
    assert os.path.exists(checkpoint_path)
    try:
        checkpoint = torch.load(checkpoint_path)
        net.load_state_dict(checkpoint["network"])
        logger.info("Load net checkpoint epoch {}".format(args.checkpoint))
        optimizer.load_state_dict(checkpoint["optimizer"])
        logger.info("Load optimizer checkpoint epoch {}".format(args.checkpoint))
    except:
        logger.info("Can not load checkpoint from {}".format(checkpoint_path))

weight = torch.Tensor([0.2, 1.0, 1.0, 1.0]).cuda()

if args.checkpoint > 0:
    iter = args.checkpoint * len(trainloader)
else:
    iter = 0

train_loss_1 = AverageMeter()
train_acc = AverageMeter()
train_IOU = AverageMeter()
train_back_IOU = AverageMeter()
val_loss_1 = AverageMeter()
val_acc = AverageMeter()
val_IOU = AverageMeter()
val_back_IOU = AverageMeter()


def train(epoch):
    global iter
    max_iter = config['num_epochs'] * len(trainloader)
    train_loss_1.reset()
    train_acc.reset()
    train_IOU.reset()
    train_back_IOU.reset()
    net.train()
    for idx, batch in enumerate(trainloader):
        end = time.time()
        new_lr = utils.polynomial_decay(optim_opt['lr'], iter, max_iter,
                                        power=0.9, end_learning_rate=1e-4)
        utils.adjust_learning_rate(optimizer, new_lr)
        image = batch[0].cuda()
        instance_label = batch[1].cuda()
        optimizer.zero_grad()
        prob_output = net(image)
        loss1 = F.cross_entropy(prob_output, instance_label, weight=weight)
        total_loss = loss1
        total_loss.backward()
        optimizer.step()
        ####################################
        train_loss_1.update(loss1.item())
        acc, IOU, back_IOU = utils.compute_accuracy(prob_output, instance_label)
        train_acc.update(acc)
        train_IOU.update(IOU)
        train_back_IOU.update(back_IOU)
        if idx % config['display_step'] == 0:
            logger.info('==> Iteration [{}][{}/{}][{}/{}]: loss1: {:.4f} ({:.4f}) lr:{:.4f} acc: {:.4f} ({:.4f}) IOU: {:.4f} ({:.4f}) back_IOU: {:.4f} ({:.4f}) time: {:.4f}'.format(
                epoch + 1, idx, len(trainloader), iter, max_iter,
                loss1.item(), train_loss_1.avg, new_lr,
                acc, train_acc.avg, IOU, train_IOU.avg,
                back_IOU, train_back_IOU.avg, time.time() - end))
        iter += 1


logger.info("training Status: ")
logger.info(config)
assert args.checkpoint < config['num_epochs']
for epoch in range(args.checkpoint, config['num_epochs']):
    logger.info("Experiment:{}".format(args.exp))
    logger.info("Begin training epoch {}".format(epoch + 1))
    train(epoch)
    checkpoint_name = args.exp + '_epoch' + str(epoch + 1)
    checkpoint_path = os.path.join(exp_dir, checkpoint_name)
    ckpt = {'epoch': epoch + 1,
            'network': net.state_dict(),
            'optimizer': optimizer.state_dict()}
    torch.save(ckpt, checkpoint_path)
{"hexsha": "b86530274286fdeb11de7532ca7db529d5d4fbb4", "size": 7025, "ext": "py", "lang": "Python", "max_stars_repo_path": "code/model_1_main.py", "max_stars_repo_name": "zzz1515151/TianChi-JingWei-Competation-Round1", "max_stars_repo_head_hexsha": "82b26a4cd0e7a6e3c0264c9dbd0100326f9727ad", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-02-13T13:28:56.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-23T01:51:32.000Z", "max_issues_repo_path": "code/model_1_main.py", "max_issues_repo_name": "zzz1515151/TianChi-JingWei-Competation-Round1", "max_issues_repo_head_hexsha": "82b26a4cd0e7a6e3c0264c9dbd0100326f9727ad", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "code/model_1_main.py", "max_forks_repo_name": "zzz1515151/TianChi-JingWei-Competation-Round1", "max_forks_repo_head_hexsha": "82b26a4cd0e7a6e3c0264c9dbd0100326f9727ad", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.6069364162, "max_line_length": 187, "alphanum_fraction": 0.495088968, "include": true, "reason": "import numpy", "num_tokens": 1350}
##~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~## ## ## ## This file forms part of the Badlands surface processes modelling companion. ## ## ## ## For full license and copyright information, please refer to the LICENSE.md file ## ## located at the project root, or contact the authors. ## ## ## ##~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~#~## """ Here we set usefull functions used to analyse morphometrics from Badlands outputs. """ import os import math import h5py import errno import pandas import numpy as np import matplotlib.pyplot as plt from scipy.spatial import cKDTree import xml.etree.ElementTree as ETO from scipy.interpolate import RectBivariateSpline import plotly from plotly.graph_objs import * plotly.offline.init_notebook_mode() import warnings warnings.simplefilter(action = "ignore", category = FutureWarning) class morphoGrid: """ Class for analysing morphometrics from Badlands outputs. """ def __init__(self, folder=None, ncpus=1, bbox=None, dx=None): """ Initialization function which takes the folder path to Badlands outputs and the number of CPUs used to run the simulation. It also takes the bounding box and discretization value at which one wants to interpolate the data. Parameters ---------- variable : folder Folder path to Badlands outputs. variable: ncpus Number of CPUs used to run the simulation. variable: bbox Bounding box extent SW corner and NE corner. variable: dx Discretisation value in metres. """ self.folder = folder if not os.path.isdir(folder): raise RuntimeError('The given folder cannot be found or the path is incomplete.') self.ncpus = ncpus self.x = None self.y = None self.z = None self.discharge = None self.logdischarge = None self.cumchange = None self.dx = None self.grad = None self.aspect = None self.hcurv = None self.vcurv = None self.Zbc = None self.hillshade = None self.nx = None self.ny = None if dx == None: raise RuntimeError('Discretization space value is required.') self.dx = dx self.bbox = bbox return def loadHDF5(self, timestep=0): """ Read the HDF5 file for a given time step. Parameters ---------- variable : timestep Time step to load. 
""" for i in range(0, self.ncpus): df = h5py.File('%s/tin.time%s.p%s.hdf5'%(self.folder, timestep, i), 'r') coords = np.array((df['/coords'])) cumdiff = np.array((df['/cumdiff'])) discharge = np.array((df['/discharge'])) if i == 0: x, y, z = np.hsplit(coords, 3) c = cumdiff d = discharge else: c = np.append(c, cumdiff) d = np.append(d, discharge) x = np.append(x, coords[:,0]) y = np.append(y, coords[:,1]) z = np.append(z, coords[:,2]) if self.bbox == None: self.nx = int((x.max() - x.min())/self.dx+1) self.ny = int((y.max() - y.min())/self.dx+1) self.x = np.linspace(x.min(), x.max(), self.nx) self.y = np.linspace(y.min(), y.max(), self.ny) self.bbox = np.zeros(4,dtype=float) self.bbox[0] = x.min() self.bbox[1] = y.min() self.bbox[2] = x.max() self.bbox[3] = y.max() else: if self.bbox[0] < x.min(): self.bbox[0] = x.min() if self.bbox[2] > x.max(): self.bbox[2] = x.max() if self.bbox[1] < y.min(): self.bbox[1] = y.min() if self.bbox[3] > y.max(): self.bbox[3] = y.max() self.nx = int((self.bbox[2] - self.bbox[0])/self.dx+1) self.ny = int((self.bbox[3] - self.bbox[1])/self.dx+1) self.x = np.linspace(self.bbox[0], self.bbox[2], self.nx) self.y = np.linspace(self.bbox[1], self.bbox[3], self.ny) self.x, self.y = np.meshgrid(self.x, self.y) xyi = np.dstack([self.x.flatten(), self.y.flatten()])[0] XY = np.column_stack((x,y)) tree = cKDTree(XY) distances, indices = tree.query(xyi, k=3) z_vals = z[indices][:,:,0] d_vals = d[indices][:,:,0] c_vals = c[indices][:,:,0] zi = np.zeros(len(xyi)) di = np.zeros(len(xyi)) ci = np.zeros(len(xyi)) onIDs = np.where(distances[:,0] > 0)[0] zi[onIDs] = np.average(z_vals[onIDs,:],weights=(1./distances[onIDs,:]), axis=1) di[onIDs] = np.average(d_vals[onIDs,:],weights=(1./distances[onIDs,:]), axis=1) ci[onIDs] = np.average(c_vals[onIDs,:],weights=(1./distances[onIDs,:]), axis=1) onIDs = np.where(distances[:,0] == 0)[0] if len(onIDs) > 0: zi[onIDs] = z[indices[onIDs,0],0] di[onIDs] = d[indices[onIDs,0],0] ci[onIDs] = c[indices[onIDs,0],0] self.z = np.reshape(zi,(self.ny,self.nx)) self.discharge = np.reshape(di,(self.ny,self.nx)) self.cumchange = np.reshape(ci,(self.ny,self.nx)) logdis = self.discharge IDs = np.where(logdis<1.) logdis[IDs] = 1. self.logdischarge = logdis return def _assignBCs(self): """ Pads the boundaries of a grid. Boundary condition pads the boundaries with equivalent values to the data margins, e.g. x[-1,1] = x[1,1]. It creates a grid 2 rows and 2 columns larger than the input. """ self.Zbc = np.zeros((self.ny + 2, self.nx + 2)) self.Zbc[1:-1,1:-1] = self.z # Assign boundary conditions - sides self.Zbc[0, 1:-1] = self.z[0, :] self.Zbc[-1, 1:-1] = self.z[-1, :] self.Zbc[1:-1, 0] = self.z[:, 0] self.Zbc[1:-1, -1] = self.z[:,-1] # Assign boundary conditions - corners self.Zbc[0, 0] = self.z[0, 0] self.Zbc[0, -1] = self.z[0, -1] self.Zbc[-1, 0] = self.z[-1, 0] self.Zbc[-1, -1] = self.z[-1, 0] return def _calcFiniteSlopes(self): """ Calculate slope with 2nd order/centered difference method. """ self._assignBCs() Sx = (self.Zbc[1:-1, 2:] - self.Zbc[1:-1, :-2]) / (2*self.dx) Sy = (self.Zbc[2:,1:-1] - self.Zbc[:-2, 1:-1]) / (2*self.dx) return Sx, Sy def hillShade(self, az=315, altitude=45): """ Creates a shaded relief from a surface raster by considering the illumination source angle and shadows. Parameters ---------- variable : az Azimuth angle of the light source.The azimuth is expressed in positive degrees from 0 to 360, measured clockwise from north. The default is 315 degrees. variable : altitude Altitude angle of the light source above the horizon. 
The altitude is expressed in positive degrees, with 0 degrees at the horizon and 90 degrees directly overhead. The default is 45 degrees. """ # Convert angular measurements to radians azRad, elevRad = (360 - az + 90)*np.pi/180, (90 - altitude)*np.pi/180 # Calculate slope in X and Y directions Sx, Sy = self._calcFiniteSlopes() #Smag = np.sqrt(Sx**2 + Sy**2) # Angle of aspect AspectRad = np.arctan2(Sy, Sx) # Magnitude of slope in radians SmagRad = np.arctan(np.sqrt(Sx**2 + Sy**2)) self.hillshade = 255.0 * ((np.cos(elevRad) * np.cos(SmagRad)) + \ (np.sin(elevRad)* np.sin(SmagRad) * np.cos(azRad - AspectRad))) return def getParams(self): """ Define aspect, gradient and horizontal/vertical curvature using a quadratic polynomial method. """ # Assign boundary conditions if self.Zbc == None: self.Zbc = self._assignBCs() # Neighborhood definition # z1 z2 z3 # z4 z5 z6 # z7 z8 z9 z1 = self.Zbc[2:, :-2] z2 = self.Zbc[2:,1:-1] z3 = self.Zbc[2:,2:] z4 = self.Zbc[1:-1, :-2] z5 = self.Zbc[1:-1,1:-1] z6 = self.Zbc[1:-1, 2:] z7 = self.Zbc[:-2, :-2] z8 = self.Zbc[:-2, 1:-1] z9 = self.Zbc[:-2, 2:] # Compute coefficient values zz = z2+z5 r = ((z1+z3+z4+z6+z7+z9)-2.*(z2+z5+z8))/(3. * self.dx**2) t = ((z1+z2+z3+z7+z8+z9)-2.*(z4+z5+z6))/(3. * self.dx**2) s = (z3+z7-z1-z9)/(4. * self.dx**2) p = (z3+z6+z9-z1-z4-z7)/(6.*self.dx) q = (z1+z2+z3-z7-z8-z9)/(6.*self.dx) u = (5.*z1+2.*(z2+z4+z6+z8)-z1-z3-z7-z9)/9. # with np.errstate(invalid='ignore',divide='ignore'): self.grad = np.arctan(np.sqrt(p**2+q**2)) self.aspect = np.arctan(q/p) self.hcurv = -(r*q**2-2.*p*q*s+t*p**2) / \ ((p**2+q**2)*np.sqrt(1+p**2+q**2)) self.vcurv = -(r*p**2+2.*p*q*s+t*q**2) / \ ((p**2+q**2)*np.sqrt(1+p**2+q**2)) return def _cross_section(self, xo, yo, xm, ym, pts): """ Compute cross section coordinates. """ if xm == xo: ysec = np.linspace(yo, ym, pts) xsec = np.zeros(pts) xsec.fill(xo) elif ym == yo: xsec = np.linspace(xo, xm, pts) ysec = np.zeros(pts) ysec.fill(yo) else: a = (ym-yo)/(xm-xo) b = yo - a * xo xsec = np.linspace(xo, xm, pts) ysec = a * xsec + b return xsec,ysec def viewSection(self, xo = None, yo = None, xm = None, ym = None, pts = 100, vData = None, width = 800, height = 400, color = 'green', linesize = 3, markersize = 5, title = 'Cross section'): """ Extract a slice from the 3D data set and plot required data on a graph. Parameters ---------- variable: xo, yo Lower X,Y coordinates of the cross-section variable: xm, ym Upper X,Y coordinates of the cross-section variable: pts Number of points to discretise the cross-section variable: vData Dataset to plot. variable: width Figure width. variable: height Figure height. variable: color Color scale. variable: linesize, markersize Requested size for the line and markers. variable: title Title of the graph. 
""" if xm > self.x.max(): xm = self.x.max() if ym > self.y.max(): ym = self.y.max() if xo < self.x.min(): xo = self.x.min() if yo < self.y.min(): yo = self.y.min() xsec, ysec = self._cross_section(xo, yo, xm, ym, pts) rect_B_spline = RectBivariateSpline(self.y[:,0], self.x[0,:], vData) datasec = rect_B_spline.ev(ysec, xsec) dist = np.sqrt(( xsec - xo )**2 + ( ysec - yo )**2) data = Data([ Scatter( x=dist, y=datasec, mode='lines+markers', name="'spline'", line=dict( shape='spline', color = color, width = linesize ), marker = dict( symbol='circle', size = markersize, color = 'white', line = dict( width = 1, color = 'black' ) ) ) ]) layout = dict( title=title, width=width, height=height ) fig = Figure(data=data, layout=layout) plotly.offline.iplot(fig) return def extractSection(self, xo = None, yo = None, xm = None, ym = None, pts = 100, vData = None, view = True, width = 800, height = 400, color = 'green', linesize = 3, markersize = 5, title = 'Cross section'): """ Extract a slice from the 3D data set and plot required data on a graph. Parameters ---------- variable: xo, yo Lower X,Y coordinates of the cross-section variable: xm, ym Upper X,Y coordinates of the cross-section variable: pts Number of points to discretise the cross-section variable: vData Dataset to plot. variable: view Show the section plot. variable: width Figure width. variable: height Figure height. variable: color Color scale. variable: linesize, markersize Requested size for the line and markers. variable: title Title of the graph. Return: variable: dist, datasec X, Y values for the profile """ if xm > self.x.max(): xm = self.x.max() if ym > self.y.max(): ym = self.y.max() if xo < self.x.min(): xo = self.x.min() if yo < self.y.min(): yo = self.y.min() xsec, ysec = self._cross_section(xo, yo, xm, ym, pts) rect_B_spline = RectBivariateSpline(self.y[:,0], self.x[0,:], vData) datasec = rect_B_spline.ev(ysec, xsec) dist = np.sqrt(( xsec - xo )**2 + ( ysec - yo )**2) if view: data = Data([ Scatter( x=dist, y=datasec, mode='lines+markers', name="'spline'", line=dict( shape='spline', color = color, width = linesize ), marker = dict( symbol='circle', size = markersize, color = 'white', line = dict( width = 1, color = 'black' ) ) ) ]) layout = dict( title=title, width=width, height=height ) fig = Figure(data=data, layout=layout) plotly.offline.iplot(fig) return dist, datasec def profile_mean(self,a): return sum(a) / len(a) def profile_min(self,a): return min(a) def profile_max(self,a): return max(a) def statProfiles(self, pData = None, pDist = None, width = 800, height = 400, color = 'green', linesize = 2, title = 'Section Min, Mean, Max '): """ Plot profile mean, max and min. Parameters ---------- variable: pData Dataset to plot along Y axis. variable: pDist Dataset to plot along X axis. variable: width Figure width. variable: height Figure height. variable: color Color scale. variable: linesize, markersize Requested size for the line and markers. variable: title Title of the graph. 
Return: variable: minZ, meanZ, maxZ Y values for the profile (minZ, meanZ, maxZ) """ meanZ = map(self.profile_mean, zip(*pData)) minZ = map(self.profile_min, zip(*pData)) maxZ = map(self.profile_max, zip(*pData)) trace0 = Scatter( x=pDist, y=maxZ, mode='lines', line=dict( shape='spline', width = 0.5, color = 'rgb(0, 0, 0)' ), name='max' ) trace1 = Scatter( x=pDist, y=minZ, mode='lines', line=dict( shape='spline', width = 0.5, color = 'rgb(0, 0, 0)' ), opacity=0.5, fill='tonexty', fillcolor=color, name='min' ) trace2 = Scatter( x=pDist, y=meanZ, mode='lines', line=dict( shape='spline', width = linesize, color = 'rgb(0, 0, 0)' ), name='mean' ) data = [trace0,trace1,trace2] layout = dict( title=title, width=width, height=height ) fig = Figure(data=data, layout=layout) plotly.offline.iplot(fig) return minZ, meanZ, maxZ def timeProfiles(self, pData = None, pDist = None, width = 800, height = 400, linesize = 2, title = 'Profile evolution with time'): """ Plot profile mean, max and min. Parameters ---------- variable: pData Dataset to plot along Y axis. variable: pDist Dataset to plot along X axis. variable: width Figure width. variable: height Figure height. variable: color Color scale. variable: linesize, markersize Requested size for the line and markers. variable: title Title of the graph. Return: variable: minZ, meanZ, maxZ Y values for the profile (minZ, meanZ, maxZ) """ trace = {} data = [] for i in range(0,len(pData)): trace[i] = Scatter( x=pDist, y=pData[i,:], mode='lines', line=dict( shape='spline', width = linesize, #color = color ), ) data.append(trace[i]) layout = dict( title=title, width=width, height=height ) fig = Figure(data=data, layout=layout) plotly.offline.iplot(fig) def viewGrid(self, width = 800, height = 800, Dmin = None, Dmax = None, color = None, reverse=False, Data = None, title='Grid'): """ Use Plotly library to visualise a dataset in 2D. Parameters ---------- variable: width Figure width. variable: height Figure height. variable: Dmin Colorbar minimal value. variable: Dmax Colorbar maximal value. variable: color Color scale. variable: reverse Reverse color scale. variable: Data Dataset to plot. variable: title Title of the graph. """ if color == None: color = 'Picnic' data = [ Heatmap( z = Data, colorscale = color,\ zmin = Dmin, zmax = Dmax, reversescale=reverse ) ] dy = self.bbox[3]-self.bbox[1] dx = self.bbox[2]-self.bbox[0] if dx>=dy: dr = 0.5 * (dx-dy) rangeX = [self.bbox[0],self.bbox[2]] rangeY = [self.bbox[1]-dr,self.bbox[3]+dr] else: dr = 0.5 * (dy-dx) rangeX = [self.bbox[0]-dr,self.bbox[2]+dr] rangeY = [self.bbox[1],self.bbox[3]] layout = Layout( title=title, autosize=True, width=width, height=height, scene=Scene( xaxis=XAxis(autorange=False, range=rangeX, nticks=10, \ gridcolor='rgb(255, 255, 255)', \ gridwidth=2,zerolinecolor='rgb(255, 255, 255)', \ zerolinewidth=2), yaxis=YAxis(autorange=False, range=rangeY, nticks=10, \ gridcolor='rgb(255, 255, 255)', \ gridwidth=2,zerolinecolor='rgb(255, 255, 255)', \ zerolinewidth=2), bgcolor="rgb(244, 244, 248)" ) ) fig = Figure(data=data, layout=layout) plotly.offline.iplot(fig) return def viewScatter3D(self, width = 800, height = 800, colors='Viridis', dataX = None, dataY = None, dataZ = None, title='Scatter plot'): """ Use Plotly library to visualise a dataset in 3D. Parameters ---------- variable: width Figure width. variable: height Figure height. variable: colors Color scale. variable: dataX Data for X-axis. variable: dataY Data for Y-axis. variable: dataZ Data for Z-axis. variable: title Title of the graph. 
""" #trace = {} data = [] #A = np.asarray(dataX) / np.asarray(dataY) #A[np.isnan(A)] = 0 #A[np.isinf(A)] = max(A[A<1000])+1 trace = Scatter3d( x=dataX, y=dataY, z=dataZ, mode='markers', marker=dict( size=8, #color=A, #colorscale=colors, opacity=0.8 ) ) data.append(trace) layout = dict( title=title, width=width, height=height, margin=dict( l=0, r=0, b=0, t=0 ), scene=Scene( xaxis=XAxis(title='dip'), yaxis=YAxis(title='slip'), zaxis=ZAxis(title='sed') ) ) fig = Figure(data=data, layout=layout) plotly.offline.iplot(fig) return def viewScatter(self, width = 800, height = 800, dataX = None, dataY = None, title='Scatter plot'): """ Use Plotly library to visualise a dataset in 2D. Parameters ---------- variable: width Figure width. variable: height Figure height. variable: dataX Data for X-axis. variable: dataY Data for Y-axis. variable: title Title of the graph. """ #trace = {} data = [] trace = Scatter( x=dataX, y=dataY, mode='markers', ) data.append(trace) layout = dict( title=title, width=width, height=height ) fig = Figure(data=data, layout=layout) plotly.offline.iplot(fig) return def viewSurf(self, width = 800, height = 800, zmin = None, zmax = None, color = None, reverse=False, vData = None, subsample = 1, title='Surface'): """ Use Plotly library to visualise a dataset over a surface in 3D. Parameters ---------- variable: width Figure width. variable: height Figure height. variable: zmin Minimal Z-axis value. variable: zmax Maximal Z-axis value. variable: color Color scale. variable: reverse Reverse color scale. variable: vData Dataset to plot. variable: subsample Subsampling data everythin nth value. variable: title Title of the graph. """ if color == None: color = 'YIGnBu' if zmin == None: zmin = vData.min() if zmax == None: zmax = vData.max() data = Data([ Surface( x = self.x[::subsample,::subsample], y = self.y[::subsample,::subsample], z = vData[::subsample,::subsample], colorscale = color, reversescale=reverse ) ]) dy = self.bbox[3]-self.bbox[1] dx = self.bbox[2]-self.bbox[0] if dx>=dy: dr = 0.5 * (dx-dy) rangeX = [self.bbox[0],self.bbox[2]] rangeY = [self.bbox[1]-dr,self.bbox[3]+dr] else: dr = 0.5 * (dy-dx) rangeX = [self.bbox[0]-dr,self.bbox[2]+dr] rangeY = [self.bbox[1],self.bbox[3]] layout = Layout( title=title, autosize=True, width=width, height=height, scene=Scene( zaxis=ZAxis(range=[zmin, zmax], \ autorange=False,nticks=10, \ gridcolor='rgb(255, 255, 255)', \ gridwidth=2,zerolinecolor='rgb(255, 255, 255)', \ zerolinewidth=2), xaxis=XAxis(autorange=False, range=rangeX, \ nticks=10, gridcolor='rgb(255, 255, 255)', \ gridwidth=2,zerolinecolor='rgb(255, 255, 255)', \ zerolinewidth=2), yaxis=YAxis(autorange=False, range=rangeY, nticks=10, \ gridcolor='rgb(255, 255, 255)', \ gridwidth=2,zerolinecolor='rgb(255, 255, 255)', \ zerolinewidth=2), bgcolor="rgb(244, 244, 248)" ) ) fig = Figure(data=data, layout=layout) plotly.offline.iplot(fig) return
{"hexsha": "6858f80339a4f3d81cac710176f9180bb9b713f4", "size": 26546, "ext": "py", "lang": "Python", "max_stars_repo_path": "Examples/mountain/morphoGrid.py", "max_stars_repo_name": "intelligentEarth/surrogateBayeslands", "max_stars_repo_head_hexsha": "24462cafed05ac0c377865d8fe039cafa0aa59d4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-12T13:54:01.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-12T13:54:01.000Z", "max_issues_repo_path": "Examples/mountain_data/morphoGrid.py", "max_issues_repo_name": "intelligentEarth/surrogateBayeslands", "max_issues_repo_head_hexsha": "24462cafed05ac0c377865d8fe039cafa0aa59d4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Examples/mountain_data/morphoGrid.py", "max_forks_repo_name": "intelligentEarth/surrogateBayeslands", "max_forks_repo_head_hexsha": "24462cafed05ac0c377865d8fe039cafa0aa59d4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.0316789863, "max_line_length": 112, "alphanum_fraction": 0.4600693136, "include": true, "reason": "import numpy,from scipy", "num_tokens": 6690}
using TensorFlowBuilder
using Base.Test

include("test_apigen.jl")
{"hexsha": "26f2fee83184899d7e24e86b747590cb67f13891", "size": 67, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "test/runtests.jl", "max_stars_repo_name": "benmoran/TensorFlowBuilder.jl", "max_stars_repo_head_hexsha": "fbe31778f65e7ac45319b9a53d1cbd4afdc7c7ab", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2016-04-18T09:33:04.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-21T18:58:24.000Z", "max_issues_repo_path": "test/runtests.jl", "max_issues_repo_name": "benmoran/TensorFlowBuilder.jl", "max_issues_repo_head_hexsha": "fbe31778f65e7ac45319b9a53d1cbd4afdc7c7ab", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2016-04-24T15:38:58.000Z", "max_issues_repo_issues_event_max_datetime": "2016-04-24T15:38:58.000Z", "max_forks_repo_path": "test/runtests.jl", "max_forks_repo_name": "benmoran/TensorFlowBuilder.jl", "max_forks_repo_head_hexsha": "fbe31778f65e7ac45319b9a53d1cbd4afdc7c7ab", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 13.4, "max_line_length": 25, "alphanum_fraction": 0.8208955224, "num_tokens": 16}
module Braking

export dist, speed

const EARTH_GRAVITY = 9.81

function dist(v::Number, μ::Number)::Real
    # convert from km/h to m/s
    v /= 3.6
    # Reaction time: 1
    v + v^2/2/μ/EARTH_GRAVITY
end

# Reaction time: 1
function speed(d::Number, μ::Number)::Real
    μg = μ * EARTH_GRAVITY
    v = sqrt( μg^2 + 2d*μg ) - μg
    # Convert m/s in km/h
    3.6v
end

end
{"hexsha": "e1f4ae1f94f6bee889128498ec75110edae646e0", "size": 440, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "6_kyu/Braking_well.jl", "max_stars_repo_name": "UlrichBerntien/Codewars-Katas", "max_stars_repo_head_hexsha": "bbd025e67aa352d313564d3862db19fffa39f552", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6_kyu/Braking_well.jl", "max_issues_repo_name": "UlrichBerntien/Codewars-Katas", "max_issues_repo_head_hexsha": "bbd025e67aa352d313564d3862db19fffa39f552", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6_kyu/Braking_well.jl", "max_forks_repo_name": "UlrichBerntien/Codewars-Katas", "max_forks_repo_head_hexsha": "bbd025e67aa352d313564d3862db19fffa39f552", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.0, "max_line_length": 46, "alphanum_fraction": 0.525, "num_tokens": 155}
import cv2
import numpy as np
### pip install cutil
from cutils.cv.bwutils import remove_spirious_blobs, fill_hole


def gen_thickness_map(segmap, layer_index, axial_resolution=255, exclude_disc=True,
                      pvars=None, pvar_thresh=None):
    """
    :param segmap: segmentation map
    :param layer_index: index of the layer of which the thickness map is to be computed
    :param axial_resolution: axial resolution of the original oct image
    :param exclude_disc:
    :param pvars: (optional) variance associated with the prediction of segmap
    :param pvar_thresh: (optional) voxels where pvars > pvar_thresh will not be included when computing the map
    :return: the thickness map of the given layer
    """
    layer_map = segmap == layer_index
    if pvars is not None:
        assert pvar_thresh is not None, 'threshold should be provided'
        mask_ignore = pvars > pvar_thresh
        layer_map = layer_map * ~mask_ignore

    rnfl_tmap = np.sum(layer_map, axis=1)
    rnfl_tmap = (rnfl_tmap / rnfl_tmap.max()) * axial_resolution

    disc_mask = compute_disc(segmap)
    non_disc = 1 - disc_mask / 255.0
    # disc_mask = cup_mask
    if exclude_disc:
        rnfl_tmap = non_disc * rnfl_tmap

    return rnfl_tmap, disc_mask


def compute_disc(segmap):
    # disc_mask = (np.sum(segmap == 7, axis=1) <= 4).astype(np.uint8)*255
    # disc_mask = remove_spirious_blobs(disc_mask, 50)
    # disc_mask = fill_hole(disc_mask)

    # do not include gcl to compute the mask, as gcl can vanish sometimes
    non_disc = (np.sum(segmap == 2, axis=1) > 0) * \
               (np.sum(segmap == 3, axis=1) > 0) * \
               (np.sum(segmap == 4, axis=1) > 0)
    disc_mask = (1 - non_disc.astype(np.uint8)) * 255
    disc_mask = remove_spirious_blobs(disc_mask, 50)
    disc_mask = fill_hole(disc_mask)
    return disc_mask.astype(np.uint8)


def gen_enface_projection(cube):
    # octma = np.ma.array(oct, mask=segmap != 7)
    # proj_max = np.max(octma, axis=1).astype(np.uint8)
    proj = np.max(cube, axis=1).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=4.0, tileGridSize=(8, 8))
    proj = clahe.apply(proj)
    return proj.astype(np.uint8)


def gen_enface_projection_rpe(oct_cube, segmap=None):
    octma = np.ma.array(oct_cube, mask=segmap != 7)
    proj_mean = np.mean(octma, axis=1).astype(np.uint8)
    return proj_mean.astype(np.uint8)


import cv2
from glob import glob
import numpy as np
import os


def save_thicknesses(layer_index=[0, 1], layer_name=['RNFL', 'GCIPL'],
                     load_dir='/Users/gyasmeen/Downloads/'):
    files_list = glob(load_dir + 'layerSeg/*.npy')
    for i in range(len(layer_index)):
        save_dir = load_dir + layer_name[i] + '/'
        if not os.path.exists(save_dir):
            os.makedirs(save_dir)
        for fpath in files_list:
            fname = fpath.split('/')[-1]
            segmap = np.load(fpath)
            tmap, disc_mask = gen_thickness_map(segmap, layer_index[i],
                                                axial_resolution=255, exclude_disc=True,
                                                pvars=None, pvar_thresh=None)
            cv2.imwrite(save_dir + fname.split('.')[0] + '_thickness.png', tmap)
            cv2.imwrite(save_dir + fname.split('.')[0] + '_disc.png', disc_mask)


if __name__ == "__main__":
    save_thicknesses(layer_index=[0, 1], layer_name=['RNFL', 'GCIPL'],
                     load_dir='/Users/gyasmeen/Downloads/')
{"hexsha": "489c1156d93a6da70febd2d49d0e4426c44e9bd7", "size": 3363, "ext": "py", "lang": "Python", "max_stars_repo_path": "python_code/thickness_maps.py", "max_stars_repo_name": "IBM/oct-glaucoma-vf-estimate", "max_stars_repo_head_hexsha": "ea79352547f33fe05ee532ab9faad6a5e4811a76", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "python_code/thickness_maps.py", "max_issues_repo_name": "IBM/oct-glaucoma-vf-estimate", "max_issues_repo_head_hexsha": "ea79352547f33fe05ee532ab9faad6a5e4811a76", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "python_code/thickness_maps.py", "max_forks_repo_name": "IBM/oct-glaucoma-vf-estimate", "max_forks_repo_head_hexsha": "ea79352547f33fe05ee532ab9faad6a5e4811a76", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.3365384615, "max_line_length": 125, "alphanum_fraction": 0.6633957776, "include": true, "reason": "import numpy", "num_tokens": 945}
# -*- coding: UTF-8 -*- # ask_yes_no.py from EmotionDetection import TrainOption from EmotionDetection import TestOption from EmotionDetection import WordFilter from EmotionDetection import EvaluateText from math import log10 from Tkinter import * try: import Tkinter as tk import tkMessageBox, tkFileDialog, ttk except ImportError: import tkinter as tk import numpy as np import matplotlib matplotlib.use('TkAgg') from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg import matplotlib.pyplot as plt root=tk.Tk() root.title("Emotion Detection Analysis") root.state("zoomed") root.configure(background="lightsteelblue") root.grid_rowconfigure(0, weight=1) root.grid_columnconfigure(0, weight=1) topFrame=Frame(root) topFrame.configure(background="lightsteelblue") topFrame.grid(row=0, column=0, sticky="nwe") middleFrame=Frame(root) middleFrame.configure(background="lightsteelblue") middleFrame.grid(row=1, column=0, sticky="nw") bottomFrame=Frame(root) bottomFrame.configure(background="lightsteelblue") bottomFrame.grid(row=2, columnspan=2, sticky="nsew") trainBox=LabelFrame(middleFrame, relief=GROOVE, borderwidth=2, text="Training") trainBox.configure(background="lightsteelblue") trainBox.grid(row=0, sticky="nw") testBox=LabelFrame(middleFrame, relief=GROOVE, borderwidth=2, text="Testing") testBox.configure(background="lightsteelblue") testBox.grid(row=0, sticky="nw") guiBox=LabelFrame(middleFrame, relief=GROOVE, borderwidth=2, text="Evaluate Input") guiBox.configure(background="lightsteelblue") def runTrain(): clearAll() answer = tkMessageBox.askyesno("Reset Data", "Reset training data?") if answer: trainBox.grid(row=0, column=1, rowspan=5, columnspan=5, padx=15, pady=15) btnTweetText=Button(trainBox, text='Tweet Text', command=selectTweetText, height=1, width=10, borderwidth=3) btnTweetText.configure(background="steelblue", fg="white") btnTweetText.grid(row=0, column=0, padx=15, pady=15) entryTweetText=Entry(trainBox, textvariable=varTweetText, width=80) entryTweetText.grid(row=0, column=2, padx=15, pady=15) btnTweetValues=Button(trainBox, text='Tweet Values', command=selectTweetValues, height=1, width=10, borderwidth=3) btnTweetValues.configure(background="steelblue", fg="white") btnTweetValues.grid(row=2, column=0, padx=15, pady=15) entryTweetValues=Entry(trainBox, textvariable=varTweetValues, width=80) entryTweetValues.grid(row=2, column=2, padx=15, pady=15) btnTrain=Button(trainBox, text='Train', command=lambda:TrainOption.startTraining(varStatusBar, varTweetText, varTweetValues), height=1, width=10, borderwidth=3) btnTrain.configure(background="steelblue", fg="white") btnTrain.grid(row=3, column=3, padx=15, pady=15) def runTest(): clearAll() testBox.grid(row=0, column=1, rowspan=5, columnspan=5, padx=15, pady=15) btnTestText=Button(testBox, text='Tweet Text', command=selectTestText, height=1, width=10, borderwidth=3) btnTestText.configure(background="steelblue", fg="white") btnTestText.grid(row=0, column=0, padx=15, pady=15) entryTestText=Entry(testBox, textvariable=varTestText, width=80) entryTestText.grid(row=0, column=2, padx=15, pady=15) btnTestValues=Button(testBox, text='Tweet Values', command=selectTestValues, height=1, width=10, borderwidth=3) btnTestValues.configure(background="steelblue", fg="white") btnTestValues.grid(row=2, column=0, padx=15, pady=15) entryTestValues=Entry(testBox, textvariable=varTestValues, width=80) entryTestValues.grid(row=2, column=2, padx=15, pady=15) btnTest=Button(testBox, text='Test', command=lambda:TestOption.startTesting(varStatusBar, varOutput, 
varCmOutput, varTestText, varTestValues), height=1, width=10, borderwidth=3) btnTest.configure(background="steelblue", fg="white") btnTest.grid(row=3, column=3, padx=15, pady=15) lblOutput=Label(testBox, textvariable=varOutput, font=('Consolas', 10), anchor="w") lblOutput.configure(background="lightsteelblue") lblOutput.grid(row=4, column=0, sticky='w') lblCmOutput=Label(testBox, textvariable=varCmOutput, font=('Consolas', 10), anchor="w") lblCmOutput.configure(background="lightsteelblue") lblCmOutput.grid(row=5, column=0, sticky='w') def evaluateInput(): clearAll() guiBox.grid(row=1, column=1, rowspan=5, columnspan=5, padx=15, pady=15) lblGuiInput=Label(guiBox, text="Input text:", font=(None, 15)) lblGuiInput.configure(background="lightsteelblue") lblGuiInput.grid(row=0, sticky="nsew", pady=5) lblGuiPred=Label(guiBox, text="Predicted:", font=(None, 15)) lblGuiPred.configure(background="lightsteelblue") lblGuiPred.grid(row=1) entryGuiInput = Entry(guiBox, textvariable=varGuiInput, font=(None, 15), width=50) entryGuiInput.grid(row=0, column=1, sticky="nsew", pady=10, padx=(0, 10)) lblGuiOutput = Label(guiBox, textvariable=varGuiOutput, font=(None, 15)) lblGuiOutput.configure(background="lightsteelblue") lblGuiOutput.grid(row=1, column=1, sticky="W") btnClear=Button(guiBox, text='Clear', command=clearEvaluateGUI, font=(None, 10)) btnClear.configure(background="steelblue", fg="white") btnClear.grid(row=2, column=0, stick="nsew", pady=(8, 2), padx=10) btnPredict=Button(guiBox, text='Predict', command=predictEmotion, font=(None, 15)) btnPredict.configure(background="steelblue", fg="white") btnPredict.grid(row=2, column=1, rowspan=2, sticky="nsew", pady=9, padx=20) def selectTweetText(): varTweetText.set(tkFileDialog.askopenfilename(filetypes=[('.csvfiles', '.csv')], title='Select file [tweet text]')) def selectTweetValues(): varTweetValues.set(tkFileDialog.askopenfilename(filetypes=[('.csvfiles', '.csv')], title='Select file [tweet values]')) def selectTestText(): varTestText.set(tkFileDialog.askopenfilename(filetypes=[('.csvfiles', '.csv')], title='Select file [test text]')) def selectTestValues(): varTestValues.set(tkFileDialog.askopenfilename(filetypes=[('.csvfiles', '.csv')], title='Select file [test values]')) def clearEvaluateGUI(): varStatusBar.set("") varGuiInput.set("") varGuiOutput.set("") def predictEmotion(): varStatusBar.set("") with open("./data/Priors.csv", "r") as priorFile: priors = priorFile.readline().strip().split(',')[1:] priors = [log10(float(x)) for x in priors] predValues = [] unfound = [] wf = WordFilter.WordFilter() words = varGuiInput.get() #print "Input:", words words = wf.filterWords(words) #print "Tokens:", words for word in words: try: values = EvaluateText.evaluateWord(word) except IOError: varStatusBar.set("WordMap not found. 
Please train system first.") raise if values is not None: predValues.append(values) else: unfound.append(word) predValues = map(sum, zip(*predValues)) predProb = map(sum, zip(priors, predValues)) predEmotion = EvaluateText.guessEmotion(predProb) varGuiOutput.set(predEmotion) #print "Unfound:", unfound print "Prob:",', '.join([('%.2f') %x for x in predProb]) max=10 max=getBarScale(predProb) str="Input:", words, "Tokens:", words, "Unfound:", unfound, " ", "Prob:",', '.join([('%.2f') %abs(float("{0:.2f}".format(x))+max) for x in predProb]) varStatusBar.set(str) iterable = ([abs(float("{0:.2f}".format(x))+max) for x in predProb]) emotions=np.fromiter(iterable, float) objects = ("Empty", "Sadness", "Enthusiasm", "Neutral", "Worry", "Surprise", "Love", "Fun", "Hate", "Happiness", "Boredom", "Relief", "Anger") y_pos = np.arange(len(objects)) fig = plt.figure(figsize=(13, 6)) plt.bar(np.asarray(y_pos, dtype='float'), emotions, align='center', alpha=0.5, color="blue") plt.xticks(y_pos, objects) plt.yticks([]) canvas = FigureCanvasTkAgg(fig, master=guiBox) canvas.get_tk_widget().grid(row=4, columnspan=2) canvas.draw() def getBarScale(predProb): max=10 for x in predProb: if abs(x)<80: max=80 elif abs(x)<70: max=70 elif abs(x)<60: max=60 elif abs(x)<50: max=50 elif abs(x)<40: max=40 elif abs(x)<30: max=30 elif abs(x)<20: max=20 elif abs(x)<10: max=10 return max def exitEmotionDetection(): root.destroy() def clearAll(): varStatusBar.set("") trainBox.grid_forget() varTweetText.set("") varTweetValues.set("") testBox.grid_forget() varTestText.set("") varTestValues.set("") guiBox.grid_forget() varGuiInput.set("") varGuiOutput.set("") varCmOutput.set("") varOutput.set("") trainButton=Button(topFrame, text='Train', command=runTrain, height=3, width=55, borderwidth=3) trainButton.configure(background="steelblue", fg="white") trainButton.grid(row=0, column=0) testButton=Button(topFrame, text='Test', command=runTest, height=3, width=55, borderwidth=3) testButton.configure(background="steelblue", fg="white") testButton.grid(row=0, column=1) evaluateUserInputButton=Button(topFrame, text='Evaluate User Input', command=evaluateInput, height=3, width=55, borderwidth=3, wraplength=80) evaluateUserInputButton.configure(background="steelblue", fg="white") evaluateUserInputButton.grid(row=0, column=2) exitButton=Button(topFrame, text='Exit', command=exitEmotionDetection, height=3, width=55, borderwidth=3) exitButton.configure(background="steelblue", fg="white") exitButton.grid(row=0, column=3) varStatusBar=StringVar() varStatusBar.set("") statusBar = Label(bottomFrame, textvariable=varStatusBar, relief=SUNKEN, anchor=W, background="lightsteelblue", foreground="navy", borderwidth=1) statusBar.grid(row=0, sticky="new") varTweetText=StringVar() varTweetText.set("") varTweetValues=StringVar() varTweetValues.set("") varTestText=StringVar() varTestText.set("") varTestValues=StringVar() varTestValues.set("") varGuiInput=StringVar() varGuiInput.set("") varGuiOutput=StringVar() varGuiOutput.set("") varCmOutput=StringVar() varCmOutput.set("") varOutput=StringVar() varOutput.set("") fig = plt.figure(figsize=(13, 6)) canvas = FigureCanvasTkAgg(fig, master=guiBox) root.mainloop()
{"hexsha": "144be0a1ba4a265df79f2fdebb9d304e95c69037", "size": 10545, "ext": "py", "lang": "Python", "max_stars_repo_path": "EmotionDetectionGUI.py", "max_stars_repo_name": "emotion-detection-analysis/Emotion-detection", "max_stars_repo_head_hexsha": "e21ec0817ff86d8ce8d58534053aff6d731407c0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "EmotionDetectionGUI.py", "max_issues_repo_name": "emotion-detection-analysis/Emotion-detection", "max_issues_repo_head_hexsha": "e21ec0817ff86d8ce8d58534053aff6d731407c0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EmotionDetectionGUI.py", "max_forks_repo_name": "emotion-detection-analysis/Emotion-detection", "max_forks_repo_head_hexsha": "e21ec0817ff86d8ce8d58534053aff6d731407c0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.3859060403, "max_line_length": 181, "alphanum_fraction": 0.6931247037, "include": true, "reason": "import numpy", "num_tokens": 2864}
#
# Copyright (c) 2017 Intel Corporation
# SPDX-License-Identifier: BSD-2-Clause
#

from numba import njit
import numpy as np
from math import sqrt
import argparse
import time


@njit
def kmeans(A, numCenter, numIter, N, D, init_centroids):
    centroids = init_centroids

    for l in range(numIter):
        # distance of every point to every centroid
        dist = np.array([[sqrt(np.sum((A[i, :] - centroids[j, :])**2))
                          for j in range(numCenter)] for i in range(N)])
        # assign each point to its nearest centroid
        labels = np.array([dist[i, :].argmin() for i in range(N)])
        # recompute centroids as the mean of the assigned points
        centroids = np.array([[np.sum(A[labels == i, j]) / np.sum(labels == i)
                               for j in range(D)] for i in range(numCenter)])

    return centroids


def main():
    parser = argparse.ArgumentParser(description='K-Means')
    parser.add_argument('--size', dest='size', type=int, default=1000000)
    parser.add_argument('--features', dest='features', type=int, default=10)
    parser.add_argument('--centers', dest='centers', type=int, default=5)
    parser.add_argument('--iterations', dest='iterations', type=int, default=20)
    args = parser.parse_args()
    size = args.size
    features = args.features
    centers = args.centers
    iterations = args.iterations

    np.random.seed(0)
    init_centroids = np.random.ranf((centers, features))
    # warm up the JIT compiler on a small problem before timing
    kmeans(np.random.ranf((3000, features)), centers, 1, 3000, features, init_centroids)
    print("size:", size)
    A = np.random.ranf((size, features))

    t1 = time.time()
    res = kmeans(A, centers, iterations, size, features, init_centroids)
    t = time.time() - t1
    print("checksum:", res.sum())
    print("SELFTIMED:", t)


if __name__ == '__main__':
    main()
{"hexsha": "b4c22cd9d5af2a940fe3e368d509ae64375ab401", "size": 1662, "ext": "py", "lang": "Python", "max_stars_repo_path": "examples/k-means/k-means_numba.py", "max_stars_repo_name": "uw-ipd/numba", "max_stars_repo_head_hexsha": "26dde2b28cadda403a5549a84dc1698900b23f74", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 140, "max_stars_repo_stars_event_min_datetime": "2017-07-15T21:17:44.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-19T00:56:05.000Z", "max_issues_repo_path": "examples/k-means/k-means_numba.py", "max_issues_repo_name": "uw-ipd/numba", "max_issues_repo_head_hexsha": "26dde2b28cadda403a5549a84dc1698900b23f74", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 24, "max_issues_repo_issues_event_min_datetime": "2017-07-24T16:25:35.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-08T17:54:38.000Z", "max_forks_repo_path": "examples/k-means/k-means_numba.py", "max_forks_repo_name": "uw-ipd/numba", "max_forks_repo_head_hexsha": "26dde2b28cadda403a5549a84dc1698900b23f74", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 50, "max_forks_repo_forks_event_min_datetime": "2017-07-15T21:15:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-12T15:27:05.000Z", "avg_line_length": 31.9615384615, "max_line_length": 88, "alphanum_fraction": 0.6377858002, "include": true, "reason": "import numpy,from numba", "num_tokens": 435}
/** * (c) Author: Woongkyu Jee, woong.jee.16@ucl.ac.uk, wldndrb1@gmail.com * Created: 02.06.2019 ~ * * University College London, Department of Chemistry **/ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <gsl/gsl_eigen.h> #include <gsl/gsl_matrix.h> #include <gsl/gsl_math.h> #include <gsl/gsl_blas.h> #include <gsl/gsl_linalg.h> #include"sp_cluster_type.h" #define SP_SUPPORT_TRUE 1 #define SP_SUPPORT_FALSE -1 #define DEBUG_SUPPORT /* Get the Lowest energy state */ int sp_cluster_support_get_lowest_state( gsl_vector* v ) { int Return = 0; double min; min = gsl_vector_get(v,0); for(int i=0;i<4;i++) { if( gsl_vector_get(v,i) < min ) { min = gsl_vector_get(v,i); Return = i; } } return Return; // Returns the index of element in gsl_vector* eval } void sp_cluster_support_sign_gs_eigenvector( void* sp_sys_void ) { sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; int low_idx; for(int i=0;i<sp_sys->number_of_sp_ion;i++) { low_idx = sp_cluster_support_get_lowest_state(sp_sys->sp_ion[i].eigen_value); if( gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,0,low_idx) < 0. ) // eigen_vector : matrix(4,4) // eigen_vector_gs : vector(4) { gsl_matrix_set(sp_sys->sp_ion[i].eigen_vector,0,low_idx,-1.*gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,0,low_idx)); gsl_matrix_set(sp_sys->sp_ion[i].eigen_vector,1,low_idx,-1.*gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,1,low_idx)); gsl_matrix_set(sp_sys->sp_ion[i].eigen_vector,2,low_idx,-1.*gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,2,low_idx)); gsl_matrix_set(sp_sys->sp_ion[i].eigen_vector,3,low_idx,-1.*gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,3,low_idx)); } } return; } void sp_cluster_support_load_gs_eigenvector( void* sp_sys_void ) { sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; int low_idx; for(int i=0;i<sp_sys->number_of_sp_ion;i++) { low_idx = sp_cluster_support_get_lowest_state(sp_sys->sp_ion[i].eigen_value); gsl_vector_set(sp_sys->sp_ion[i].eigen_vector_gs,0,gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,0,low_idx)); gsl_vector_set(sp_sys->sp_ion[i].eigen_vector_gs,1,gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,1,low_idx)); gsl_vector_set(sp_sys->sp_ion[i].eigen_vector_gs,2,gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,2,low_idx)); gsl_vector_set(sp_sys->sp_ion[i].eigen_vector_gs,3,gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,3,low_idx)); } return; } /* Matrix Viewer: Print out a N x N matrix on console */ void sp_cluster_support_matrix_view( const gsl_matrix* m ) { if( m != NULL ) { for(int i=0;i<m->size1;i++) { for(int j=0;j<m->size2;j++) { //printf("%s%12.4lf",gsl_matrix_get(m,i,j)>0.?"+":"",gsl_matrix_get(m,i,j)); printf("%12.4e",gsl_matrix_get(m,i,j)); } puts(""); } } else puts("sp_cluster_support_matrix_view input 'm' (gsl_matrix*) is a null pointer ... in SP_Support.c or SP_Support.h"); return; } /* Vector Viewer: Print out a vector on console */ void sp_cluster_support_vector_view( const gsl_vector* v ) { if( v != NULL ) { for(int i=0;i<v->size;i++) { //printf("%s%12.4f",gsl_vector_get(v,i)>0.?"+":"",gsl_vector_get(v,i)); printf("%12.4e",gsl_vector_get(v,i)); } puts(""); } else puts("sp_cluster_support_vector_view input 'v' (gsl_vector*) is a null pointer ... 
in SP_Support.c or SP_Support.h"); return; } /* Matrix Viewer: Print out a N x N matrix on file */ void sp_cluster_support_matrix_view_f( FILE* fp, const gsl_matrix* m ) { if( m != NULL ) { for(int i=0;i<m->size1;i++) { for(int j=0;j<m->size2;j++) fprintf(fp,"%s%.18lf\t",gsl_matrix_get(m,i,j)>0.?"+":"",gsl_matrix_get(m,i,j)); fprintf(fp,"\n"); } } else fputs("sp_cluster_support_matrix_view input 'm' (gsl_matrix*) is a null pointer ... in SP_Support.c or SP_Support.h",fp); fflush(fp); return; } /* Vector Viewer: Print out a vector on file */ void sp_cluster_support_vector_view_f( FILE* fp, const gsl_vector* v ) { if( v != NULL ) { for(int i=0;i<v->size;i++) fprintf(fp,"%s%.18f\t",gsl_vector_get(v,i)>0.?"+":"",gsl_vector_get(v,i)); fprintf(fp,"\n"); } else fputs("sp_cluster_support_vector_view input 'v' (gsl_vector*) is a null pointer ... in SP_Support.c or SP_Support.h",fp); fflush(fp); return; } /* Transformation Matrix calculator */ /* * this function takes a vector, and the vector is transformed along transformed z-axis * the final return is the transformation matrix (rank 2 tensor), and the this data type is * gsl_matrix* */ gsl_matrix* sp_cluster_support_get_transformation_matrix( const gsl_vector* v ) { // gsl_vector_get(v,0) == 'x' // gsl_vector_get(v,1) == 'y' // gsl_vector_get(v,2) == 'z' gsl_matrix* pReturn = NULL; const double rxy = sqrt(pow(gsl_vector_get(v,0),2.)+pow(gsl_vector_get(v,1),2.)); const double R = sqrt(pow(gsl_vector_get(v,0),2.)+pow(gsl_vector_get(v,1),2.)+pow(gsl_vector_get(v,2),2.)); double n1, n2, tmp; // dummy variables for workspace pReturn = gsl_matrix_calloc(4,4); // Normally a rotation matrix is 3x3 // For the 1st column or row has an element of '1' at T_11. if( pReturn != NULL ) { gsl_matrix_set(pReturn,0,0,1.); if( gsl_vector_get(v,0) == 0. && gsl_vector_get(v,1) == 0. && gsl_vector_get(v,2) > 0. ) // if vector 'v' is on z-axis { gsl_matrix_set(pReturn,1,1,1.); gsl_matrix_set(pReturn,2,2,1.); gsl_matrix_set(pReturn,3,3,1.); // set the matrix as I } else if( gsl_vector_get(v,0) == 0. && gsl_vector_get(v,1) == 0. && gsl_vector_get(v,2) < 0. ) // if vector 'v' is on negative z-axis { gsl_matrix_set(pReturn,1,1,1.); gsl_matrix_set(pReturn,2,2,1.); gsl_matrix_set(pReturn,3,3,-1.); // set the matrix has xy-plane reflection } else { gsl_matrix_set(pReturn,3,1,gsl_vector_get(v,0)/R); gsl_matrix_set(pReturn,3,2,gsl_vector_get(v,1)/R); gsl_matrix_set(pReturn,3,3,gsl_vector_get(v,2)/R); // set k' in the transformed (local) symmetry gsl_matrix_set(pReturn,2,1,gsl_vector_get(v,2)*gsl_vector_get(v,0)/rxy); gsl_matrix_set(pReturn,2,2,gsl_vector_get(v,2)*gsl_vector_get(v,1)/rxy); gsl_matrix_set(pReturn,2,3,-R*sqrt(1.-gsl_vector_get(v,2)*gsl_vector_get(v,2)/R/R)); n1 = 1./sqrt(pow(gsl_matrix_get(pReturn,2,1),2.) + pow(gsl_matrix_get(pReturn,2,2),2.) + pow(gsl_matrix_get(pReturn,2,3),2.)); for(int i=0;i<3;i++) { tmp = gsl_matrix_get(pReturn,2,i+1); tmp = tmp*n1; gsl_matrix_set(pReturn,2,i+1,tmp); // set j' in the transformed (local) symmetry } gsl_matrix_set(pReturn,1,1,n1/R*(pow(gsl_vector_get(v,2),2.)*gsl_vector_get(v,1)/rxy + R*gsl_vector_get(v,1)*sqrt(1.-pow(gsl_vector_get(v,2),2.)/R/R))); gsl_matrix_set(pReturn,1,2,n1/R*(-pow(gsl_vector_get(v,2),2.)*gsl_vector_get(v,0)/rxy - R*gsl_vector_get(v,0)*sqrt(1.-pow(gsl_vector_get(v,2),2.)/R/R))); gsl_matrix_set(pReturn,1,3,0.); n2 = 1./sqrt(pow(gsl_matrix_get(pReturn,1,1),2.) + pow(gsl_matrix_get(pReturn,1,2),2.) 
+ pow(gsl_matrix_get(pReturn,1,3),2.)); for(int i=0;i<3;i++) { tmp = gsl_matrix_get(pReturn,1,i+1); tmp = tmp/n2; gsl_matrix_set(pReturn,1,i+1,tmp); // set i' in the transformed (local) symmetry } } } else puts("sp_cluster_get_trans_mat 'pReturn' alloc error ... in SP_support.c or SP_support.h"); return pReturn; } /* SEE THE DETAILS IN THE 'N1' RING BINDER */ double sp_cluster_support_kronecker_delta( int a, int b ) { double Return = 0.; if( a == b ) Return = 1.; return Return; } // norm of vector double sp_cluster_support_get_norm( double v1, double v2, double v3 ) { return pow(v1*v1+v2*v2+v3*v3,0.5); } // SPLINER double** sp_cluster_support_get_spline( const double** data, const int knot_stride ) { double** pReturn = (double**)malloc((knot_stride-1)*sizeof(double*)); for(int i=0;i<knot_stride-1;i++) pReturn[i] = (double*)calloc(4,sizeof(double)); // Work Space const int n = knot_stride; double f[n]; double t[n]; double a[n]; double b[n]; double c[n]; double d[n]; double z[n]; double L[n]; double h[n]; double alpha[n]; double mu[n]; // Measuring slopes const double rslope = 0.; const double lslope = (data[3][1]-data[0][1])/(data[3][0]-data[0][0]); // Memset memset(f,0,n*sizeof(double)); memset(t,0,n*sizeof(double)); memset(a,0,n*sizeof(double)); memset(b,0,n*sizeof(double)); memset(c,0,n*sizeof(double)); memset(d,0,n*sizeof(double)); memset(z,0,n*sizeof(double)); memset(L,0,n*sizeof(double)); memset(h,0,n*sizeof(double)); memset(alpha,0,n*sizeof(double)); memset(mu,0,n*sizeof(double)); // Make Spline for(int i=0;i<n;i++) { f[i] = data[i][1]; t[i] = data[i][0]; } h[0] = t[1] - t[0]; alpha[0] = 3.*(f[1]-f[0])/h[0]-3.*lslope; L[0] = 2.*h[0]; mu[0] = 0.5; z[0] = alpha[0]/L[0]; b[0] = lslope; // End Left Init for(int k=1;k<n-1;k++) { h[k] = t[k+1] - t[k]; alpha[k] = (3./(h[k]*h[k-1]))*(f[k+1]*h[k-1]-f[k]*(h[k]+h[k-1])+f[k-1]*h[k]); L[k] = 2.*(h[k]+h[k-1])-h[k-1]*mu[k-1]; mu[k] = h[k]/L[k]; z[k] = (alpha[k]-h[k-1]*z[k-1])/L[k]; } // Right End init alpha[n-1] = 3.*rslope - 3.*(f[n-1]-f[n-2])/h[n-2]; L[n-1] = h[n-2]*(2.-mu[n-2]); z[n-1] = (alpha[n-1]-h[n-2]*z[n-2])/L[n-1]; c[n-1] = z[n-1]; for(int j=n-2;j>=0;j--) { c[j] = z[j]-mu[j]*c[j+1]; b[j] = (f[j+1]-f[j])/h[j] - h[j]*(c[j+1]+2.*c[j])/3.; d[j] = (c[j+1]-c[j])/(3.*h[j]); a[j] = f[j]; } // R-E END // SAVE DATA for(int i=0;i<knot_stride-1;i++) { pReturn[i][0] = d[i]; pReturn[i][1] = c[i] - 3.*t[i]*d[i]; pReturn[i][2] = b[i] - 2.*c[i]*t[i] + 3.*t[i]*t[i]*d[i]; pReturn[i][3] = a[i] - b[i]*t[i] + c[i]*t[i]*t[i] - d[i]*t[i]*t[i]*t[i]; } return pReturn; } //int sp_cluster_support_get_atom_number( void sp_cluster_support_print_xyz( void* sp_sys_void, const double cur_energy, const int rank, const int numtasks ) { int offset; // cast the type into "sp_cluster_system*" sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; int atom_number=0; for(int i=0;i<sp_sys->number_of_classic_ion;i++) { if( sp_sys->classic_ion[i].if_shell == SP_SUPPORT_FALSE ) atom_number++; // CNT ONLY WHEN ITS CORE } if( cur_energy != 0. ) { //printf("\t%d\n",sp_sys->number_of_classic_ion+sp_sys->number_of_sp_ion); printf("\t%d\n",atom_number+sp_sys->number_of_sp_ion); printf(" SCF DONE %.6lf\n",cur_energy); } // CORE POSITION ONLY ... 
"xyz" Format Compatible for(int n=0;n<sp_sys->number_of_sp_ion+sp_sys->number_of_classic_ion;n++) { offset = n-sp_sys->number_of_classic_ion; if( n < sp_sys->number_of_classic_ion ) { if( sp_sys->classic_ion[n].if_shell == SP_SUPPORT_FALSE ) // if it is core { fprintf(stdout,"%3s%12.6lf%12.6lf%12.6lf\n", sp_sys->classic_ion[n].atom_name, gsl_vector_get(sp_sys->classic_ion[n].core_position,0), gsl_vector_get(sp_sys->classic_ion[n].core_position,1), gsl_vector_get(sp_sys->classic_ion[n].core_position,2)); } } else // this is for printing sp-ions { offset = n - sp_sys->number_of_classic_ion; fprintf(stdout,"%3s%12.6lf%12.6lf%12.6lf\n", sp_sys->sp_ion[offset].atom_name, gsl_vector_get(sp_sys->sp_ion[offset].core_position,0), gsl_vector_get(sp_sys->sp_ion[offset].core_position,1), gsl_vector_get(sp_sys->sp_ion[offset].core_position,2)); } } printf("\n"); printf(" CONFIGURATION_XYZ_SC_INFO\n"); printf(" %d\t%d\n",sp_sys->number_of_classic_ion,sp_sys->number_of_sp_ion); // SHOW SHELL CORE POSITION BOTH for(int n=0;n<sp_sys->number_of_sp_ion+sp_sys->number_of_classic_ion;n++) { offset = n-sp_sys->number_of_classic_ion; if( n < sp_sys->number_of_classic_ion ) { if( sp_sys->classic_ion[n].if_shell == SP_SUPPORT_FALSE ) { fprintf(stdout,"%3s%3s%12.6lf%12.6lf%12.6lf\n", sp_sys->classic_ion[n].atom_name,"c", gsl_vector_get(sp_sys->classic_ion[n].core_position,0), gsl_vector_get(sp_sys->classic_ion[n].core_position,1), gsl_vector_get(sp_sys->classic_ion[n].core_position,2)); } else if( sp_sys->classic_ion[n].if_shell == SP_SUPPORT_TRUE ) { fprintf(stdout,"%3s%3s%12.6lf%12.6lf%12.6lf\n", sp_sys->classic_ion[n].atom_name,"s", gsl_vector_get(sp_sys->classic_ion[n].core_position,0), gsl_vector_get(sp_sys->classic_ion[n].core_position,1), gsl_vector_get(sp_sys->classic_ion[n].core_position,2)); } } else // this is for printing sp-ions { offset = n - sp_sys->number_of_classic_ion; fprintf(stdout,"%3s%15.6lf%12.6lf%12.6lf\n", sp_sys->sp_ion[offset].atom_name, gsl_vector_get(sp_sys->sp_ion[offset].core_position,0), gsl_vector_get(sp_sys->sp_ion[offset].core_position,1), gsl_vector_get(sp_sys->sp_ion[offset].core_position,2)); } } return; } /// DIIS_SUPPORT TOOLS /* int if_diis; int diis_max_depth; int diis_cur_depth; gsl_vector** diis_error_vector; // SAVE AS CIRCULAR QUEUE gsl_vector* diis_coefficient_vector; gsl_permutation* diis_ws_p; gsl_matrix* diis_error_matrix; gsl_matrix* diis_error_matrix_inv; */ void sp_cluster_support_diis_error_vector_queue_init( void* sp_sys_void ) { sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; sp_sys->diis_error_vector_queue_front = 0; sp_sys->diis_error_vector_queue_rear = 0; return; } int sp_cluster_support_diis_error_vector_queue_isfull( void* sp_sys_void ) { sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; if( (sp_sys->diis_error_vector_queue_rear+1)%sp_sys->diis_max_depth == sp_sys->diis_error_vector_queue_front ) return SP_SUPPORT_TRUE; else return SP_SUPPORT_FALSE; } int sp_cluster_support_diis_error_vector_queue_isempty( void* sp_sys_void ) { sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; if( sp_sys->diis_error_vector_queue_front == sp_sys->diis_error_vector_queue_rear ) return SP_SUPPORT_TRUE; else return SP_SUPPORT_FALSE; } void sp_cluster_support_diis_error_vector_enqueue( void* sp_sys_void ) { sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; double delta_evec[4]; int low_idx; double delta_evec_sum = 0.; //if( !((sp_sys->diis_error_vector_queue_rear+1)%sp_sys->diis_max_depth == sp_sys->diis_error_vector_queue_front) 
) // check if queue is full if( sp_cluster_support_diis_error_vector_queue_isfull( sp_sys ) == SP_SUPPORT_FALSE ) { sp_sys->diis_error_vector_queue_rear = (sp_sys->diis_error_vector_queue_rear+1)%sp_sys->diis_max_depth; // rear index ++ sp_sys->diis_cur_depth++; for(int i=0;i<sp_sys->number_of_sp_ion;i++) { low_idx = sp_cluster_support_get_lowest_state(sp_sys->sp_ion[i].eigen_value); // current eigenvector - (loaded) previous eigenvector_gs delta_evec[0] = gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,0,low_idx) - gsl_vector_get(sp_sys->sp_ion[i].eigen_vector_gs,0); delta_evec[1] = gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,1,low_idx) - gsl_vector_get(sp_sys->sp_ion[i].eigen_vector_gs,1); delta_evec[2] = gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,2,low_idx) - gsl_vector_get(sp_sys->sp_ion[i].eigen_vector_gs,2); delta_evec[3] = gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,3,low_idx) - gsl_vector_get(sp_sys->sp_ion[i].eigen_vector_gs,3); delta_evec_sum += delta_evec[0]*delta_evec[0] + delta_evec[1]*delta_evec[1] + delta_evec[2]*delta_evec[2] + delta_evec[3]*delta_evec[3]; gsl_vector_set(sp_sys->diis_error_vector[ sp_sys->diis_error_vector_queue_rear ],i*4+0,delta_evec[0]); gsl_vector_set(sp_sys->diis_error_vector[ sp_sys->diis_error_vector_queue_rear ],i*4+1,delta_evec[0]); gsl_vector_set(sp_sys->diis_error_vector[ sp_sys->diis_error_vector_queue_rear ],i*4+2,delta_evec[0]); gsl_vector_set(sp_sys->diis_error_vector[ sp_sys->diis_error_vector_queue_rear ],i*4+3,delta_evec[0]); // loading .. previous eigen_vector_gs (note that the function 'sp_cluster_support_load_gs_eigenvector()' must be called before enqueue) gsl_vector_set(sp_sys->diis_prev_eigen_vector[ sp_sys->diis_error_vector_queue_rear ],i*4+0, gsl_vector_get(sp_sys->sp_ion[i].eigen_vector_gs,0)); gsl_vector_set(sp_sys->diis_prev_eigen_vector[ sp_sys->diis_error_vector_queue_rear ],i*4+1, gsl_vector_get(sp_sys->sp_ion[i].eigen_vector_gs,1)); gsl_vector_set(sp_sys->diis_prev_eigen_vector[ sp_sys->diis_error_vector_queue_rear ],i*4+2, gsl_vector_get(sp_sys->sp_ion[i].eigen_vector_gs,2)); gsl_vector_set(sp_sys->diis_prev_eigen_vector[ sp_sys->diis_error_vector_queue_rear ],i*4+3, gsl_vector_get(sp_sys->sp_ion[i].eigen_vector_gs,3)); #ifdef DEBUG_SUPPORT printf("\ndelta evec\n"); printf("%12.6lf%12.6lf%12.6lf%12.6lf\n",delta_evec[0],delta_evec[1],delta_evec[2],delta_evec[3]); printf("%12.6lf%12.6lf%12.6lf%12.6lf\n", gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,0,low_idx), gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,1,low_idx), gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,2,low_idx), gsl_matrix_get(sp_sys->sp_ion[i].eigen_vector,3,low_idx)); #endif /* printf("%12.6lf%12.6lf%12.6lf%12.6lf\n",gsl_vector_get(sp_sys->diis_prev_eigen_vector[ sp_sys->diis_error_vector_queue_rear ], i*4 + 0 ), gsl_vector_get(sp_sys->diis_prev_eigen_vector[ sp_sys->diis_error_vector_queue_rear ], i*4 + 1 ), gsl_vector_get(sp_sys->diis_prev_eigen_vector[ sp_sys->diis_error_vector_queue_rear ], i*4 + 2 ), gsl_vector_get(sp_sys->diis_prev_eigen_vector[ sp_sys->diis_error_vector_queue_rear ], i*4 + 3 )); */ /* Note that the both, Error vector ( sp_sys->diis_error_vector ) Prev evecs ( sp_sys->diis_prev_eigen_vector ) Follows the form of (gsl_vector)v[i][j] v[i] ... 'i' -> index in the queue // queue is circular type, max dept is 'sp_sys->diis_max_depth' v[i][j] ... 'j' -> DATA, s_sp1, px_sp1, py_sp1, pz_sp1 // s_sp2, px_sp2, py_sp2, pz_sp2 // .... 
i.e., length of number_of_sp_ions * 4 (each lone pair ground-state MO saved with stride of 4) */ } printf("delta_evec_sum: %12.8lf\n",sqrt(delta_evec_sum)/sp_sys->number_of_sp_ion); } return; // if queue is full do nothing } void sp_cluster_support_diis_error_vector_dequeue( void* sp_sys_void ) { sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; if( sp_cluster_support_diis_error_vector_queue_isempty( sp_sys ) == SP_SUPPORT_FALSE ) { sp_sys->diis_error_vector_queue_front = (sp_sys->diis_error_vector_queue_front+1)%sp_sys->diis_max_depth; sp_sys->diis_cur_depth--; } #ifdef DEBUG_SUPPORT const int front = sp_sys->diis_error_vector_queue_front; const int rear = sp_sys->diis_error_vector_queue_rear; printf("%d\t%d\n",front,rear); #endif return; // if queue is empty do nothing } double sp_cluster_support_diis_get_error_vector_gnorm( void* sp_sys_void ) // returns latest .. i.e., the one at rear { sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; double ret; if( sp_cluster_support_diis_error_vector_queue_isempty( sp_sys ) == SP_SUPPORT_FALSE ) { ret = gsl_blas_dnrm2( sp_sys->diis_error_vector[ sp_sys->diis_error_vector_queue_rear ] ); return ret/sp_sys->number_of_sp_ion; } return 0.; } void sp_cluster_support_diis_solve_least_square_problem( void* sp_sys_void ) { sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; // int count = front > rear ? (MAX - front + rear) : (rear - front); const int queue_capacity = sp_sys->diis_max_depth; const int front = sp_sys->diis_error_vector_queue_front; const int rear = sp_sys->diis_error_vector_queue_rear; const int cur_size = front > rear ? (queue_capacity - front + rear) : (rear - front); // number of elements in the queue (error_vector) /* gsl_vector* diis_coefficient_vector; gsl_permutation* diis_ws_p; gsl_matrix* diis_error_matrix; gsl_matrix* diis_error_matrix_inv; */ size_t len = cur_size + 1; printf("\n#### IN LSP SOLVER / len : %zu\n",len); printf("ws size : ErrorMat / %zu %zu\n",sp_sys->diis_error_matrix->size1,sp_sys->diis_error_matrix->size2); printf("cursize/max_depth/cur_depth : %4d%4d%4d\n", cur_size, sp_sys->diis_max_depth - 1, sp_sys->diis_cur_depth ); //if( !(cur_size == (sp_sys->diis_max_depth - 1)) ) //if( !(cur_size == (sp_sys->diis_max_depth)) ) //if( !(len == (sp_sys->diis_max_depth)) ) if( !(len == sp_sys->diis_error_matrix->size1) ) { printf("Resizing!!\n"); // if current size is not same with actual max_depth in use ... i.e., max_depth - 1, since circular queue in use // have to resize the workspaces ... diis_ws_p (gsl_permutation*) / diis_error_matrix (gsl_matrix*) / diis_error_matrix_inv (gsl_matrix*) gsl_vector_free(sp_sys->diis_coefficient_vector); gsl_vector_free(sp_sys->diis_least_square_condition); gsl_permutation_free(sp_sys->diis_ws_p); gsl_matrix_free(sp_sys->diis_error_matrix); gsl_matrix_free(sp_sys->diis_error_matrix_inv); sp_sys->diis_coefficient_vector = gsl_vector_calloc(len); sp_sys->diis_least_square_condition = gsl_vector_calloc(len); sp_sys->diis_ws_p = gsl_permutation_calloc(len); sp_sys->diis_error_matrix = gsl_matrix_calloc(len,len); sp_sys->diis_error_matrix_inv = gsl_matrix_calloc(len,len); // len = cur_size (error_vector length) + 1 // 1 is to represent least square sense of coeff } int ii=0; int jj=0; // tags to get actual matrix indices ! 
double elem_tmp; for(int i=(front+1)%queue_capacity; i!=(rear+1)%queue_capacity ; i=(i+1)%queue_capacity) // for loop circulate over queue elements { for(int j=(front+1)%queue_capacity; j!=(rear+1)%queue_capacity ; j=(j+1)%queue_capacity) { #ifdef DEBUG_SUPPORT printf("i,j / tag_i,tag_j : %d\t%d / %d\t%d\n",i,j,ii,jj); //printf("size1 / size2 : %d\t%d\n", sp_sys->diis_error_matrix->size1, sp_sys->diis_error_matrix->size2); #endif gsl_blas_ddot(sp_sys->diis_error_vector[j],sp_sys->diis_error_vector[i],&elem_tmp); gsl_matrix_set(sp_sys->diis_error_matrix,ii,jj,elem_tmp); jj++; } jj=0; ii++; } printf("ERROR - 0 ?\n"); ///// SET RESTS for(int i=0;i<len;i++) { gsl_matrix_set(sp_sys->diis_error_matrix,i,len-1,-1.); gsl_matrix_set(sp_sys->diis_error_matrix,len-1,i,-1.); } printf("ERROR?\n"); gsl_matrix_set(sp_sys->diis_error_matrix,len-1,len-1,0.); printf("ERROR?\n"); ///// SET least Square Condition vector gsl_vector_set(sp_sys->diis_least_square_condition,len-1,-1.); //// calculate inverse int signum; // variable for LU_decomp #ifdef DEBUG_SUPPORT printf("ErrorMatrix\n"); sp_cluster_support_matrix_view( sp_sys->diis_error_matrix ); #endif /* Say, Error matrix E Coefficient Vector C RHS Least Square Condition L Need to find 'C' vector Solve : EC = L, i.e., C = E_Inverse L */ gsl_linalg_LU_decomp(sp_sys->diis_error_matrix,sp_sys->diis_ws_p,&signum); gsl_linalg_LU_invert(sp_sys->diis_error_matrix,sp_sys->diis_ws_p,sp_sys->diis_error_matrix_inv); // invert is saved in 'sp_sys->diis_error_matrix_inv' // Inverse Found // int gsl_blas_dgemv(CBLAS_TRANSPOSE_t TransA, double alpha, const gsl_matrix *A, const gsl_vector *x, double beta, gsl_vector *y) gsl_blas_dgemv(CblasNoTrans,1.,sp_sys->diis_error_matrix_inv,sp_sys->diis_least_square_condition,0.,sp_sys->diis_coefficient_vector); // Solve LeaseSquare Problem Answer saved in 'sp_sys->diis_coefficient_vector' #ifdef DEBUG_SUPPORT //sp_cluster_support_vector_view( sp_sys->diis_least_square_condition ); printf("coefficient vector, ignore the last dummy lambda value\n"); sp_cluster_support_vector_view( sp_sys->diis_coefficient_vector ); //sp_cluster_support_matrix_view( sp_sys->diis_error_matrix ); printf("InverseMatrix\n"); sp_cluster_support_matrix_view( sp_sys->diis_error_matrix_inv ); printf("cursize/max_depth/cur_depth : %d\t%d\t%d\n", cur_size, sp_sys->diis_max_depth - 1, sp_sys->diis_cur_depth ); printf(" ---- FINALISE LEAST SQUARE SOLVER \n\n"); #endif return; } void sp_cluster_support_diis_least_square_result_update( void* sp_sys_void ) { sp_cluster_system* sp_sys = (sp_cluster_system*)sp_sys_void; int low_idx; int jj = 0; double cs, cx, cy, cz, norm_factor; const int queue_capacity = sp_sys->diis_max_depth; const int front = sp_sys->diis_error_vector_queue_front; const int rear = sp_sys->diis_error_vector_queue_rear; for(int i=0;i<sp_sys->number_of_sp_ion;i++) { low_idx = sp_cluster_support_get_lowest_state(sp_sys->sp_ion[i].eigen_value); // results are in 'sp_sys->diis_coefficient_vector' ... data structure sp_sys->diis_coefficient_vector->size or 'stride'? 
// size can also be obtained by 'sp_sys->diis_cur_depth' jj = 0; cs = 0.; cx = 0.; cy = 0.; cz = 0.; for(int j=(front+1)%queue_capacity; j!=(rear+1)%queue_capacity ; j=(j+1)%queue_capacity) { cs += gsl_vector_get( sp_sys->diis_coefficient_vector,jj) * gsl_vector_get(sp_sys->diis_prev_eigen_vector[j],i*4+0); cx += gsl_vector_get( sp_sys->diis_coefficient_vector,jj) * gsl_vector_get(sp_sys->diis_prev_eigen_vector[j],i*4+1); cy += gsl_vector_get( sp_sys->diis_coefficient_vector,jj) * gsl_vector_get(sp_sys->diis_prev_eigen_vector[j],i*4+2); cz += gsl_vector_get( sp_sys->diis_coefficient_vector,jj) * gsl_vector_get(sp_sys->diis_prev_eigen_vector[j],i*4+3); jj++; printf("index j / tag jj: %d\t%d\n",j,jj); } #ifdef DEBUG_SUPPORT printf("%dth\t%12.4lf%12.4lf%12.4lf%12.4lf%20.4lf\n",i+1,cs,cx,cy,cz,sqrt(cs*cs+cx*cx+cy*cy+cz*cz)); #endif norm_factor = sqrt(cs*cs+cx*cx+cy*cy+cz*cz); norm_factor = 1.; gsl_matrix_set(sp_sys->sp_ion[i].eigen_vector,0,low_idx,cs/norm_factor); gsl_matrix_set(sp_sys->sp_ion[i].eigen_vector,1,low_idx,cx/norm_factor); gsl_matrix_set(sp_sys->sp_ion[i].eigen_vector,2,low_idx,cy/norm_factor); gsl_matrix_set(sp_sys->sp_ion[i].eigen_vector,3,low_idx,cz/norm_factor); // set eigenvector ... updated!!! // finish up the rest ... } return; }
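The comments inside `sp_cluster_support_diis_solve_least_square_problem` above describe the linear algebra as solving $EC = L$, where $E$ holds the dot products of the queued error vectors bordered by a row and column of $-1$ entries (with a zero in the corner) and $L$ is zero except for a $-1$ in the last slot. The following is a minimal NumPy sketch of that same bordered solve; the error vectors and names here are purely illustrative and are not taken from the code above.

```python
import numpy as np

# two made-up error vectors (in the real code these are queued
# ground-state eigenvector differences of length 4 * number_of_sp_ion)
error_vectors = [np.array([0.10, -0.02, 0.05, 0.01]),
                 np.array([0.03, 0.01, -0.02, 0.00])]

m = len(error_vectors)
E = np.zeros((m + 1, m + 1))
for i, ei in enumerate(error_vectors):
    for j, ej in enumerate(error_vectors):
        E[i, j] = ei @ ej            # <e_i | e_j> overlaps
E[-1, :m] = E[:m, -1] = -1.0         # bordering -1 entries
E[-1, -1] = 0.0

L = np.zeros(m + 1)
L[-1] = -1.0                         # least-squares (Lagrange) condition

coeffs = np.linalg.solve(E, L)       # one solve instead of LU invert + matvec
print(coeffs[:m])                    # DIIS mixing coefficients
print(coeffs[:m].sum())              # the constraint forces these to sum to 1
```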
{"hexsha": "f860481510256732fa35f546e67ba1a996d7642b", "size": 28009, "ext": "c", "lang": "C", "max_stars_repo_path": "src/sp_cluster_support.c", "max_stars_repo_name": "sweetmixture/SLAM_2.2.1_snapshot", "max_stars_repo_head_hexsha": "60335c37ce75b82f6589c67f3a1c1be37decfd71", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1.0, "max_stars_repo_stars_event_min_datetime": "2022-02-02T07:01:42.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-02T07:01:42.000Z", "max_issues_repo_path": "src/sp_cluster_support.c", "max_issues_repo_name": "sweetmixture/SLAM_2.2.1_snapshot", "max_issues_repo_head_hexsha": "60335c37ce75b82f6589c67f3a1c1be37decfd71", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/sp_cluster_support.c", "max_forks_repo_name": "sweetmixture/SLAM_2.2.1_snapshot", "max_forks_repo_head_hexsha": "60335c37ce75b82f6589c67f3a1c1be37decfd71", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.2282913165, "max_line_length": 175, "alphanum_fraction": 0.6490770824, "num_tokens": 8568}
/* * Author: Johannes M Dieterich */ #ifndef CARTESIANGRID_HPP #define CARTESIANGRID_HPP #include <armadillo> #include <cmath> #include <memory> #include <complex.h> #include <tgmath.h> #include "BasicGridComputer.hpp" #include "FourierGrid.hpp" #include "GVectorBuilder.hpp" using namespace std; using namespace arma; class CartesianGrid: public FourierGrid { public: virtual ~CartesianGrid(){ #ifdef LIBKEDF_DEBUG cout << "DEBUG: destructor for Cartesian grid called." << endl; #endif _gNorms.reset(); } void multiplyGNorms(){ #ifdef LIBKEDF_DEBUG cout << "DEBUG: multiplying gNorms in Cartesian grid." << endl; #endif // do a Fourier operation cx_cube* rec = this->getReciprocalGrid(); const cube* gNorms = this->getGNorms(); const uword elems = rec->n_elem; #pragma omp parallel for default(none) shared(rec,gNorms) for(uword x = 0; x < elems; ++x){ const double norm = gNorms->at(x); rec->at(x) *= -norm*norm; } this->completeReciprocal(rec); #ifdef LIBKEDF_DEBUG cout << "DEBUG: Done multiplying gNorms in Cartesian grid." << endl; #endif } void multiplyGVectorsX(){ cx_cube* rec = getReciprocalGrid(); const cube* gVecs = getGVectorsX(); const uword elems = rec->n_elem; const complex<double> imag(0,1); #pragma omp parallel for default(none) shared(rec,gVecs) for(uword x = 0; x < elems; ++x){ rec->at(x) *= imag*gVecs->at(x);; } completeReciprocal(rec); } void multiplyGVectorsY(){ cx_cube* rec = getReciprocalGrid(); const cube* gVecs = getGVectorsY(); const uword elems = rec->n_elem; const complex<double> imag(0,1); #pragma omp parallel for default(none) shared(rec,gVecs) for(uword x = 0; x < elems; ++x){ rec->at(x) *= imag*gVecs->at(x); } completeReciprocal(rec); } void multiplyGVectorsZ(){ cx_cube* rec = getReciprocalGrid(); const cube* gVecs = getGVectorsZ(); const uword elems = rec->n_elem; const complex<double> imag(0,1); #pragma omp parallel for default(none) shared(rec,gVecs) for(uword x = 0; x < elems; ++x){ rec->at(x) *= imag*gVecs->at(x); } completeReciprocal(rec); } double integrate() { const double sum = sumOver(); return sum*_cellVolume/_noGridPoints; } double sumOver() { const cube* grid = this->readRealGrid(); #ifdef _OPENMP const size_t nSlices = grid->n_slices; const size_t nRows = grid->n_rows; const size_t nCols = grid->n_cols; double sum = 0.0; #pragma omp parallel for default(none) shared(grid,sum) for(size_t x = 0; x < nSlices; ++x){ double tmpSum = 0.0; for(size_t col = 0; col < nCols; ++col){ for(size_t row = 0; row < nRows; ++row){ tmpSum += grid->at(row,col,x); } } #pragma omp atomic sum += tmpSum; } #else const double sum = accu(*grid); #endif return sum; } void minMax(double& min, double& max){ const cube* grid = this->readRealGrid(); const size_t nSlices = grid->n_slices; const size_t nRows = grid->n_rows; const size_t nCols = grid->n_cols; #ifdef _OPENMP // parallelize this double minVals[nSlices]; double maxVals[nSlices]; #pragma omp parallel for default(none) shared(grid,minVals,maxVals) for(size_t x = 0; x < nSlices; ++x){ minVals[x] = 1e42; maxVals[x] = 0.0; for(size_t col = 0; col < nCols; ++col){ for(size_t row = 0; row < nRows; ++row){ const double d = grid->at(row,col,x); if(d < minVals[x]){ minVals[x] = d; } else if(d > maxVals[x]){ maxVals[x] = d; } } } } // do the aggregation double myMin = 1e42; double myMax = 0.0; for(size_t x = 0; x < nSlices; ++x){ if(minVals[x] < myMin){ myMin = minVals[x]; } if(maxVals[x] > myMax){ myMax = maxVals[x]; } } #else double myMin = 1e42; double myMax = 0.0; for(size_t x = 0; x < nSlices; ++x){ for(size_t col = 0; col < nCols; 
++col){ for(size_t row = 0; row < nRows; ++row){ const double d = grid->at(row,col,x); myMin = std::min(myMin,d); myMax = std::max(myMax,d); } } } #endif min = myMin; max = myMax; } const cube* getGNorms() { // lazy init if(_gNorms == NULL){ setupGVectors(_cellVectors); } return _gNorms.get(); } const cube* getGVectorsX(){ // lazy init if(_gVectorsX == NULL){ setupGVectors(_cellVectors); } return _gVectorsX.get(); } const cube* getGVectorsY(){ // lazy init if(_gVectorsY == NULL){ setupGVectors(_cellVectors); } return _gVectorsY.get(); } const cube* getGVectorsZ(){ // lazy init if(_gVectorsZ == NULL){ setupGVectors(_cellVectors); } return _gVectorsZ.get(); } double stressNorm() const { return 3*_cellVolume; } size_t getGridPointsX() const { return _xDim; } size_t getGridPointsY() const { return _yDim; } size_t getGridPointsZ() const { return _zDim; } size_t getReciGridPointsX() const { return _xRecDim; } size_t getReciGridPointsY() const { return _yRecDim; } size_t getReciGridPointsZ() const { return _zRecDim; } unsigned long long getTotalGridPoints() const { const unsigned long long gridX = getGridPointsX(); const unsigned long long gridY = getGridPointsY(); const unsigned long long gridZ = getGridPointsZ(); return gridX*gridY*gridZ; } double getCellVolume(){ return _cellVolume; } double getCellX(){ return _cellX; } double getCellY(){ return _cellY; } double getCellZ(){ return _cellZ; } void updateCellVectors(const double *vecX, const double *vecY, const double *vecZ){ _cellVectors->at(0,0) = vecX[0]; _cellVectors->at(0,1) = vecX[1]; _cellVectors->at(0,2) = vecX[2]; _cellVectors->at(1,0) = vecY[0]; _cellVectors->at(1,1) = vecY[1]; _cellVectors->at(1,2) = vecY[2]; _cellVectors->at(2,0) = vecZ[0]; _cellVectors->at(2,1) = vecZ[1]; _cellVectors->at(2,2) = vecZ[2]; // following http://mathworld.wolfram.com/Parallelepiped.html vec::fixed<3> vecA; vec::fixed<3> vecB; vec::fixed<3> vecC; vecA.at(0) = _cellVectors->at(0,0); vecA.at(1) = _cellVectors->at(0,1); vecA.at(2) = _cellVectors->at(0,2); vecB.at(0) = _cellVectors->at(1,0); vecB.at(1) = _cellVectors->at(1,1); vecB.at(2) = _cellVectors->at(1,2); vecC.at(0) = _cellVectors->at(2,0); vecC.at(1) = _cellVectors->at(2,1); vecC.at(2) = _cellVectors->at(2,2); const vec vecBC = cross(vecB, vecC); _cellVolume = abs(dot(vecA, vecBC)); _cellX = sqrt(vecA.at(0)*vecA.at(0) + vecA.at(1)*vecA.at(1) + vecA.at(2)*vecA.at(2)); _cellY = sqrt(vecB.at(0)*vecB.at(0) + vecB.at(1)*vecB.at(1) + vecB.at(2)*vecB.at(2)); _cellZ = sqrt(vecC.at(0)*vecC.at(0) + vecC.at(1)*vecC.at(1) + vecC.at(2)*vecC.at(2)); this->_gNorms.reset(); this->_gVectorsX.reset(); this->_gVectorsY.reset(); this->_gVectorsZ.reset(); this->_gNorms = NULL; this->_gVectorsX = NULL; this->_gVectorsY = NULL; this->_gVectorsZ = NULL; } protected: CartesianGrid(const size_t xDim, const size_t yDim, const size_t zDim, const shared_ptr<mat> cellVectors) : _xDim(xDim), _yDim(yDim), _zDim(zDim), _xRecDim(floor(_xDim/2)+1), // this is due to the arma data format (which we use as basis: col/row/slice in contiguous -> least contiguous order) _yRecDim(yDim), _zRecDim(zDim), _noGridPoints(xDim*yDim*zDim), _norm((double) _xDim*_yDim*_zDim), _invNorm(1.0/_norm){ _cellVectors = cellVectors; // following http://mathworld.wolfram.com/Parallelepiped.html vec::fixed<3> vecA; vec::fixed<3> vecB; vec::fixed<3> vecC; vecA.at(0) = _cellVectors->at(0,0); vecA.at(1) = _cellVectors->at(0,1); vecA.at(2) = _cellVectors->at(0,2); vecB.at(0) = _cellVectors->at(1,0); vecB.at(1) = _cellVectors->at(1,1); vecB.at(2) = 
_cellVectors->at(1,2); vecC.at(0) = _cellVectors->at(2,0); vecC.at(1) = _cellVectors->at(2,1); vecC.at(2) = _cellVectors->at(2,2); const vec vecBC = cross(vecB, vecC); _cellVolume = abs(dot(vecA, vecBC)); _cellX = sqrt(vecA.at(0)*vecA.at(0) + vecA.at(1)*vecA.at(1) + vecA.at(2)*vecA.at(2)); _cellY = sqrt(vecB.at(0)*vecB.at(0) + vecB.at(1)*vecB.at(1) + vecB.at(2)*vecB.at(2)); _cellZ = sqrt(vecC.at(0)*vecC.at(0) + vecC.at(1)*vecC.at(1) + vecC.at(2)*vecC.at(2)); // initialize to NULL for lazy init _gNorms = NULL; _gVectorsX = NULL; _gVectorsY = NULL; _gVectorsZ = NULL; } CartesianGrid(const CartesianGrid& orig) : _xDim(orig._xDim), _yDim(orig._yDim), _zDim(orig._zDim), _xRecDim(orig._xRecDim), _yRecDim(orig._yRecDim), _zRecDim(orig._zRecDim), _noGridPoints(orig._noGridPoints), _norm(orig._norm), _invNorm(orig._invNorm), _cellVolume(orig._cellVolume), _cellX(orig._cellX), _cellY(orig._cellY), _cellZ(orig._cellZ){ _cellVectors = orig._cellVectors; _gNorms = orig._gNorms; _gVectorsX = orig._gVectorsX; _gVectorsY = orig._gVectorsY; _gVectorsZ = orig._gVectorsZ; } void setupGVectors(const shared_ptr<mat> cellVectors){ setupSimpleGVectors(cellVectors); } void setupSimpleGVectors(const shared_ptr<mat> cellVectors){ _gNorms = make_shared<cube>(_xRecDim,_yRecDim,_zRecDim); _gVectorsX = make_shared<cube>(_xRecDim,_yRecDim,_zRecDim); _gVectorsY = make_shared<cube>(_xRecDim,_yRecDim,_zRecDim); _gVectorsZ = make_shared<cube>(_xRecDim,_yRecDim,_zRecDim); // construct g norms and vectors GVectorBuilder::buildGVectors(cellVectors,_xRecDim,_yRecDim,_zRecDim,_gNorms,_gVectorsX,_gVectorsY,_gVectorsZ); } shared_ptr<mat> _cellVectors; shared_ptr<cube> _gNorms; shared_ptr<cube> _gVectorsX; shared_ptr<cube> _gVectorsY; shared_ptr<cube> _gVectorsZ; const size_t _xDim; const size_t _yDim; const size_t _zDim; const size_t _xRecDim; const size_t _yRecDim; const size_t _zRecDim; const size_t _noGridPoints; const double _norm; const double _invNorm; double _cellVolume; double _cellX; double _cellY; double _cellZ; }; #endif /* CARTESIANGRID_HPP */
{"hexsha": "64b8e11feae802851f6615a98352b5c1b1e3fc90", "size": 12185, "ext": "hpp", "lang": "C++", "max_stars_repo_path": "include/CartesianGrid.hpp", "max_stars_repo_name": "EACcodes/libKEDF", "max_stars_repo_head_hexsha": "3dff53318ce7be52be5f45242ea8daf08a032866", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3.0, "max_stars_repo_stars_event_min_datetime": "2017-04-21T12:13:26.000Z", "max_stars_repo_stars_event_max_datetime": "2019-03-29T01:13:25.000Z", "max_issues_repo_path": "include/CartesianGrid.hpp", "max_issues_repo_name": "EACcodes/libKEDF", "max_issues_repo_head_hexsha": "3dff53318ce7be52be5f45242ea8daf08a032866", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "include/CartesianGrid.hpp", "max_forks_repo_name": "EACcodes/libKEDF", "max_forks_repo_head_hexsha": "3dff53318ce7be52be5f45242ea8daf08a032866", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.1408775982, "max_line_length": 194, "alphanum_fraction": 0.5358227329, "num_tokens": 3530}
""" This is the script that is used for the implementation of HoloNet. The class HoloNet(nn.module) is described in the following. This code and data is released under the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC.) In a nutshell: # The license is only for non-commercial use (commercial licenses can be obtained from Stanford). # The material is provided as-is, with no warranties whatsoever. # If you publish any code, data, or scientific work based on this, please cite our work. Technical Paper: Y. Peng, S. Choi, N. Padmanaban, G. Wetzstein. Neural Holography with Camera-in-the-loop Training. ACM TOG (SIGGRAPH Asia), 2020. """ import math import numpy as np import torch import torch.nn as nn import utils.utils as utils from algorithms import double_phase from propagation_ASM import propagation_ASM, compute_zernike_basis, combine_zernike_basis from utils.pytorch_prototyping.pytorch_prototyping import Conv2dSame, Unet class HoloNet(nn.Module): """Generates phase for the final non-iterative model Class initialization parameters ------------------------------- distance: propagation dist between SLM and target, in meters, default 0.1. Note: distance is negated internally, so the PhaseGenerator and ProcessAndPropagate get the same input wavelength: the wavelength of interest, in meters, default 520e-9 feature_size: the SLM pixel pitch, in meters, default 6.4e-6 zernike_coeffs: a torch tensor that corresponds to process_phase.py, ProcessAndPropagate.coeffs, after training is completed. Default None, which disables passing zernike coeffs to the final network source_amplitude: a process_phase.SourceAmplitude module, after training. Default None, which disables passing source amp to the final network target_field: a torch tensor that corresponds to propagation_model.py, citl_calibrated_model.target_field, after training is completed. Default None, which disables passing target_field to the final network latent_codes: a citl_calibrated_model.latent_codes parameter, after training. Default None, which disables passing latent_codes to the final network initial_phase: a module that returns an initial phase given the target amp. Default None, which assumes all zeros initial phase final_phase_only: a module that processes the post-propagation amp+phase to a phase-only output that works as well as iterative results. Default None, which switches to double phase coding proptype: chooses the propagation operator ('ASM': propagation_ASM, 'fresnel': propagation_fresnel). Default ASM. linear_conv: if True, pads for linear conv for propagation. Default True Usage ----- Functions as a pytorch module: >>> phase_generator = HoloNet(...) >>> slm_amp, slm_phase = phase_generator(target_amp) target_amp: amplitude at the target plane, with dimensions [batch, 1, height, width] slm_amp: amplitude to be encoded in the phase pattern at the SLM plane. Used to enforce uniformity, if desired. 
Same as target dimensions slm_phase: encoded phase-only representation at SLM plane, same dimensions """ def __init__(self, distance=0.1, wavelength=520e-9, feature_size=6.4e-6, zernike_coeffs=None, source_amplitude=None, target_field=None, latent_codes=None, initial_phase=None, final_phase_only=None, proptype='ASM', linear_conv=True, manual_aberr_corr=False): super(HoloNet, self).__init__() # submodules self.source_amplitude = source_amplitude self.initial_phase = initial_phase self.final_phase_only = final_phase_only if target_field is not None: self.target_field = target_field.detach() else: self.target_field = None if latent_codes is not None: self.latent_codes = latent_codes.detach() else: self.latent_codes = None # propagation parameters self.wavelength = wavelength self.feature_size = (feature_size if hasattr(feature_size, '__len__') else [feature_size] * 2) self.distance = -distance self.zernike_coeffs = (None if zernike_coeffs is None else -zernike_coeffs.clone().detach()) # objects to precompute self.zernike = None self.precomped_H = None self.precomped_H_zernike = None self.source_amp = None # whether to pass zernike/source amp as layers or divide out manually self.manual_aberr_corr = manual_aberr_corr # make sure parameters from the model training phase don't update if self.zernike_coeffs is not None: self.zernike_coeffs.requires_grad = False if self.source_amplitude is not None: for p in self.source_amplitude.parameters(): p.requires_grad = False # change out the propagation operator if proptype == 'ASM': self.prop = propagation_ASM else: ValueError(f'Unsupported prop type {proptype}') self.linear_conv = linear_conv # set a device for initializing the precomputed objects try: self.dev = next(self.parameters()).device except StopIteration: # no parameters self.dev = torch.device('cpu') def forward(self, target_amp): # compute some initial phase, convert to real+imag representation if self.initial_phase is not None: init_phase = self.initial_phase(target_amp) real, imag = utils.polar_to_rect(target_amp, init_phase) target_complex = torch.complex(real, imag) else: init_phase = torch.zeros_like(target_amp) # no need to convert, zero phase implies amplitude = real part target_complex = torch.complex(target_amp, init_phase) # subtract the additional target field if self.target_field is not None: target_complex_diff = target_complex - self.target_field else: target_complex_diff = target_complex # precompute the propagation kernel only once if self.precomped_H is None: self.precomped_H = self.prop(target_complex_diff, self.feature_size, self.wavelength, self.distance, return_H=True, linear_conv=self.linear_conv) self.precomped_H = self.precomped_H.to(self.dev).detach() self.precomped_H.requires_grad = False if self.precomped_H_zernike is None: if self.zernike is None and self.zernike_coeffs is not None: self.zernike_basis = compute_zernike_basis(self.zernike_coeffs.size()[0], [i * 2 for i in target_amp.size()[-2:]], wo_piston=True) self.zernike_basis = self.zernike_basis.to(self.dev).detach() self.zernike = combine_zernike_basis(self.zernike_coeffs, self.zernike_basis) self.zernike = utils.ifftshift(self.zernike) self.zernike = self.zernike.to(self.dev).detach() self.zernike.requires_grad = False self.precomped_H_zernike = self.zernike * self.precomped_H self.precomped_H_zernike = self.precomped_H_zernike.to(self.dev).detach() self.precomped_H_zernike.requires_grad = False else: self.precomped_H_zernike = self.precomped_H # precompute the source amplitude, only once if self.source_amp 
is None and self.source_amplitude is not None: self.source_amp = self.source_amplitude(target_amp) self.source_amp = self.source_amp.to(self.dev).detach() self.source_amp.requires_grad = False # implement the basic propagation to the SLM plane slm_naive = self.prop(target_complex_diff, self.feature_size, self.wavelength, self.distance, precomped_H=self.precomped_H_zernike, linear_conv=self.linear_conv) # switch to amplitude+phase and apply source amplitude adjustment amp, ang = utils.rect_to_polar(slm_naive.real, slm_naive.imag) # amp, ang = slm_naive.abs(), slm_naive.angle() # PyTorch 1.7.0 Complex tensor doesn't support # the gradient of angle() currently. if self.source_amp is not None and self.manual_aberr_corr: amp = amp / self.source_amp if self.final_phase_only is None: return amp, double_phase(amp, ang, three_pi=False) else: # note the change to usual complex number stacking! # We're making this the channel dim via cat instead of stack if (self.zernike is None and self.source_amp is None or self.manual_aberr_corr): if self.latent_codes is not None: slm_amp_phase = torch.cat((amp, ang, self.latent_codes.repeat(amp.shape[0], 1, 1, 1)), -3) else: slm_amp_phase = torch.cat((amp, ang), -3) elif self.zernike is None: slm_amp_phase = torch.cat((amp, ang, self.source_amp), -3) elif self.source_amp is None: slm_amp_phase = torch.cat((amp, ang, self.zernike), -3) else: slm_amp_phase = torch.cat((amp, ang, self.zernike, self.source_amp), -3) return amp, self.final_phase_only(slm_amp_phase) def to(self, *args, **kwargs): slf = super().to(*args, **kwargs) if slf.zernike is not None: slf.zernike = slf.zernike.to(*args, **kwargs) if slf.precomped_H is not None: slf.precomped_H = slf.precomped_H.to(*args, **kwargs) if slf.source_amp is not None: slf.source_amp = slf.source_amp.to(*args, **kwargs) if slf.target_field is not None: slf.target_field = slf.target_field.to(*args, **kwargs) if slf.latent_codes is not None: slf.latent_codes = slf.latent_codes.to(*args, **kwargs) # try setting dev based on some parameter, default to cpu try: slf.dev = next(slf.parameters()).device except StopIteration: # no parameters device_arg = torch._C._nn._parse_to(*args, **kwargs)[0] if device_arg is not None: slf.dev = device_arg return slf class InitialPhaseUnet(nn.Module): """computes the initial input phase given a target amplitude""" def __init__(self, num_down=8, num_features_init=32, max_features=256, norm=nn.BatchNorm2d): super(InitialPhaseUnet, self).__init__() net = [Unet(1, 1, num_features_init, num_down, max_features, use_dropout=False, upsampling_mode='transpose', norm=norm, outermost_linear=True), nn.Hardtanh(-math.pi, math.pi)] self.net = nn.Sequential(*net) def forward(self, amp): out_phase = self.net(amp) return out_phase class FinalPhaseOnlyUnet(nn.Module): """computes the final SLM phase given a naive SLM amplitude and phase""" def __init__(self, num_down=8, num_features_init=32, max_features=256, norm=nn.BatchNorm2d, num_in=4): super(FinalPhaseOnlyUnet, self).__init__() net = [Unet(num_in, 1, num_features_init, num_down, max_features, use_dropout=False, upsampling_mode='transpose', norm=norm, outermost_linear=True), nn.Hardtanh(-math.pi, math.pi)] self.net = nn.Sequential(*net) def forward(self, amp_phase): out_phase = self.net(amp_phase) return out_phase class PhaseOnlyUnet(nn.Module): """computes the final SLM phase given a target amplitude""" def __init__(self, num_down=10, num_features_init=16, norm=nn.BatchNorm2d): super(PhaseOnlyUnet, self).__init__() net = [Unet(1, 1, num_features_init, 
num_down, 1024, use_dropout=False, upsampling_mode='transpose', norm=norm, outermost_linear=True), nn.Hardtanh(-math.pi, math.pi)] self.net = nn.Sequential(*net) def forward(self, target_amp): out_phase = self.net(target_amp) return (torch.ones(1), out_phase)
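When `final_phase_only` is `None`, the forward pass above falls back to double phase coding. The underlying identity is that a normalized amplitude can be written as the average of two pure phase terms, $a\,e^{i\varphi} = \tfrac{1}{2}\big(e^{i(\varphi - \arccos a)} + e^{i(\varphi + \arccos a)}\big)$. The exact convention used by `algorithms.double_phase` (for example checkerboard interleaving or the `three_pi` wrapping) may differ; the sketch below only verifies the identity itself with NumPy.

```python
import numpy as np

amp = np.random.uniform(0.0, 1.0, size = 8)        # normalized target amplitude
ang = np.random.uniform(-np.pi, np.pi, size = 8)   # target phase

theta = np.arccos(amp)
phase_1 = ang - theta
phase_2 = ang + theta

# averaging the two phase-only terms reproduces amp * exp(i * ang)
recon = 0.5 * (np.exp(1j * phase_1) + np.exp(1j * phase_2))
print(np.allclose(recon, amp * np.exp(1j * ang)))  # True
```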
{"hexsha": "8ea528d7f7008a6149a2600d5c7244bc2f25abf3", "size": 12940, "ext": "py", "lang": "Python", "max_stars_repo_path": "holonet.py", "max_stars_repo_name": "Ter-hash/holography_test", "max_stars_repo_head_hexsha": "372e5192cd1355cb565159f2a96fd2f7370095ce", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "holonet.py", "max_issues_repo_name": "Ter-hash/holography_test", "max_issues_repo_head_hexsha": "372e5192cd1355cb565159f2a96fd2f7370095ce", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "holonet.py", "max_forks_repo_name": "Ter-hash/holography_test", "max_forks_repo_head_hexsha": "372e5192cd1355cb565159f2a96fd2f7370095ce", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.0136054422, "max_line_length": 136, "alphanum_fraction": 0.6315301391, "include": true, "reason": "import numpy", "num_tokens": 2858}
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Warming-up" data-toc-modified-id="Warming-up-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Warming-up</a></span><ul class="toc-item"><li><span><a href="#Point-Estimate" data-toc-modified-id="Point-Estimate-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Point Estimate</a></span></li><li><span><a href="#Sampling-Distributions-and-The-Central-Limit-Theorem" data-toc-modified-id="Sampling-Distributions-and-The-Central-Limit-Theorem-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Sampling Distributions and The Central Limit Theorem</a></span></li><li><span><a href="#Confidence-Interval" data-toc-modified-id="Confidence-Interval-1.3"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Confidence Interval</a></span></li><li><span><a href="#Hypothesis-Testing" data-toc-modified-id="Hypothesis-Testing-1.4"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Hypothesis Testing</a></span></li><li><span><a href="#Simulation" data-toc-modified-id="Simulation-1.5"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Simulation</a></span></li></ul></li><li><span><a href="#Frequentist-A/B-testing" data-toc-modified-id="Frequentist-A/B-testing-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Frequentist A/B testing</a></span><ul class="toc-item"><li><span><a href="#Comparing-Two-Proportions" data-toc-modified-id="Comparing-Two-Proportions-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Comparing Two Proportions</a></span></li><li><span><a href="#Introducing-Power" data-toc-modified-id="Introducing-Power-2.2"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>Introducing Power</a></span></li><li><span><a href="#Determining-Sample-Size" data-toc-modified-id="Determining-Sample-Size-2.3"><span class="toc-item-num">2.3&nbsp;&nbsp;</span>Determining Sample Size</a></span></li><li><span><a href="#Alternative-View-of-the-Test-Statistic" data-toc-modified-id="Alternative-View-of-the-Test-Statistic-2.4"><span class="toc-item-num">2.4&nbsp;&nbsp;</span>Alternative View of the Test Statistic</a></span></li></ul></li><li><span><a href="#Frequentist-A/B-Testing-Workflow" data-toc-modified-id="Frequentist-A/B-Testing-Workflow-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Frequentist A/B Testing Workflow</a></span><ul class="toc-item"><li><span><a href="#Formulate-Business-Goals-&amp;-Hypothesis-Test" data-toc-modified-id="Formulate-Business-Goals-&amp;-Hypothesis-Test-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Formulate Business Goals &amp; Hypothesis Test</a></span><ul class="toc-item"><li><span><a href="#Result" data-toc-modified-id="Result-3.1.1"><span class="toc-item-num">3.1.1&nbsp;&nbsp;</span>Result</a></span></li><li><span><a href="#Rationale" data-toc-modified-id="Rationale-3.1.2"><span class="toc-item-num">3.1.2&nbsp;&nbsp;</span>Rationale</a></span></li><li><span><a href="#Variable" data-toc-modified-id="Variable-3.1.3"><span class="toc-item-num">3.1.3&nbsp;&nbsp;</span>Variable</a></span></li></ul></li><li><span><a href="#Quantitative-A/B-testing" data-toc-modified-id="Quantitative-A/B-testing-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Quantitative A/B testing</a></span><ul class="toc-item"><li><span><a href="#Define-the-Size-and-Duration" data-toc-modified-id="Define-the-Size-and-Duration-3.2.1"><span class="toc-item-num">3.2.1&nbsp;&nbsp;</span>Define the Size and Duration</a></span></li></ul></li><li><span><a href="#Define-the-Population" 
data-toc-modified-id="Define-the-Population-3.3"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>Define the Population</a></span></li><li><span><a href="#Evaluating-Result" data-toc-modified-id="Evaluating-Result-3.4"><span class="toc-item-num">3.4&nbsp;&nbsp;</span>Evaluating Result</a></span></li><li><span><a href="#Sanity-Check" data-toc-modified-id="Sanity-Check-3.5"><span class="toc-item-num">3.5&nbsp;&nbsp;</span>Sanity Check</a></span></li></ul></li><li><span><a href="#A/B-Test-Caveats-&amp;-Advices" data-toc-modified-id="A/B-Test-Caveats-&amp;-Advices-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>A/B Test Caveats &amp; Advices</a></span><ul class="toc-item"><li><span><a href="#Avoid-Biased-Stopping-Times" data-toc-modified-id="Avoid-Biased-Stopping-Times-4.1"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Avoid Biased Stopping Times</a></span></li><li><span><a href="#Do-Follow-Up-Tests-and-Watch-your-Overall-Success-Rate" data-toc-modified-id="Do-Follow-Up-Tests-and-Watch-your-Overall-Success-Rate-4.2"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>Do Follow Up Tests and Watch your Overall Success Rate</a></span></li><li><span><a href="#False-Reporting" data-toc-modified-id="False-Reporting-4.3"><span class="toc-item-num">4.3&nbsp;&nbsp;</span>False Reporting</a></span></li><li><span><a href="#Seasonality-/-Not-Running-it-Against-the-Correct-Target" data-toc-modified-id="Seasonality-/-Not-Running-it-Against-the-Correct-Target-4.4"><span class="toc-item-num">4.4&nbsp;&nbsp;</span>Seasonality / Not Running it Against the Correct Target</a></span></li><li><span><a href="#Non-Randomized-Bucketing" data-toc-modified-id="Non-Randomized-Bucketing-4.5"><span class="toc-item-num">4.5&nbsp;&nbsp;</span>Non-Randomized Bucketing</a></span></li><li><span><a href="#Others" data-toc-modified-id="Others-4.6"><span class="toc-item-num">4.6&nbsp;&nbsp;</span>Others</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Reference</a></span></li></ul></div> ```python # code for loading the format for the notebook import os # path : store the current path to convert back to it later path = os.getcwd() os.chdir(os.path.join('..', 'notebook_format')) from formats import load_style load_style(plot_style = False) ``` <style> @import url('http://fonts.googleapis.com/css?family=Source+Code+Pro'); @import url('http://fonts.googleapis.com/css?family=Vollkorn'); @import url('http://fonts.googleapis.com/css?family=Arimo'); @import url('http://fonts.googleapis.com/css?family=Fira_sans'); div.cell { width: 1000px; margin-left: 0% !important; margin-right: auto; } div.text_cell code { background: transparent; color: #000000; font-weight: 600; font-size: 12pt; font-style: bold; font-family: 'Source Code Pro', Consolas, monocco, monospace; } h1 { font-family: 'Open sans',verdana,arial,sans-serif; } div.input_area { background: #F6F6F9; border: 1px solid #586e75; } .text_cell_render h1 { font-weight: 200; font-size: 30pt; line-height: 100%; color:#c76c0c; margin-bottom: 0.5em; margin-top: 1em; display: block; white-space: wrap; text-align: left; } h2 { font-family: 'Open sans',verdana,arial,sans-serif; text-align: left; } .text_cell_render h2 { font-weight: 200; font-size: 16pt; font-style: italic; line-height: 100%; color:#c76c0c; margin-bottom: 0.5em; margin-top: 1.5em; display: block; white-space: wrap; text-align: left; } h3 { font-family: 'Open sans',verdana,arial,sans-serif; } .text_cell_render h3 { font-weight: 200; 
# Warming-up

```python
os.chdir(path)

# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'

import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import matplotlib.pyplot as plt
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.proportion import proportions_chisquare

%watermark -a 'Ethen' -d -t -v -p numpy,scipy,pandas,matplotlib,statsmodels
```

    Ethen 2018-02-08 09:09:17
    CPython 3.6.3
    IPython 6.1.0
    numpy 1.14.0
    scipy 1.0.0
    pandas 0.22.0
    matplotlib 2.1.0
    statsmodels 0.8.0

```python
# setup the look and feel of the notebook
plt.rcParams['figure.figsize'] = 8, 6
sns.set_context('notebook', font_scale = 1.5, rc = {'lines.linewidth': 2.5})
sns.set_style('whitegrid')
sns.set_palette('deep')

# Create a couple of colors to use throughout the notebook
red = sns.xkcd_rgb['vermillion']
blue = sns.xkcd_rgb['dark sky blue']
```

Ideally, the reader should already understand or vaguely remember statistical concepts such as z-score, p-value, hypothesis test and confidence interval. The warming-up section is a quick review of these concepts, so feel free to skip it if you are already acquainted with them.
Statistical inference is the process of analyzing sample data to gain insight into the population from which the data was collected and to investigate differences between data samples. In data analysis, we are often interested in the characteristics of some large population, but collecting data on the entire population may be infeasible. For example, leading up to U.S. presidential elections it could be very useful to know the political leanings of every single eligible voter, but surveying every voter is not feasible. Instead, we could poll some subset of the population, such as a thousand registered voters, and use that data to make inferences about the population as a whole.

## Point Estimate

Point estimates are estimates of population parameters based on sample data. For instance, if we wanted to know the average age of registered voters in the U.S., we could take a survey of registered voters and then use the average age of the respondents as a point estimate of the average age of the population as a whole. The average of a sample is known as the sample mean. The sample mean is usually not exactly the same as the population mean. This difference can be caused by many factors including poor survey design, biased sampling methods and the randomness inherent to drawing a sample from a population. Let's investigate point estimates by generating a population of random age data and then drawing a sample from it to estimate the mean:

```python
# generate some random number to serve as our population
np.random.seed(10)
population_ages1 = stats.poisson.rvs(loc = 18, mu = 35, size = 150000)
population_ages2 = stats.poisson.rvs(loc = 18, mu = 10, size = 100000)
population_ages = np.concatenate((population_ages1, population_ages2))
print('population mean:', np.mean(population_ages))
```

    population mean: 43.002372

```python
np.random.seed(6)
sample_ages = np.random.choice(population_ages, size = 500)
print('sample mean:', np.mean(sample_ages))
```

    sample mean: 42.388

The experiment tells us that we'd expect the distribution of the population to be a similar shape to that of the sample, so we can assume that the mean of the sample and the population should have the same value. Note that we can't say that they exactly match, but it's the best estimate we can make. The population mean is often denoted as $\mu$, the estimated population mean as $\hat{\mu}$, and the mean of the sample as $\bar{x}$. So here we're basically saying $\hat{\mu} = \bar{x}$, where we're using the sample mean to estimate the mean of the population, and usually the larger the size of our sample, the more accurate our point estimator of the population mean is going to be.

## Sampling Distributions and The Central Limit Theorem

Many statistical procedures assume that data follows a normal distribution, because the normal distribution has nice properties like being symmetric and having the majority of the data clustered within a few standard deviations of the mean. Unfortunately, real world data is often not normally distributed and the distribution of a sample tends to mirror the distribution of the population. This means a sample taken from a population with a skewed distribution will also tend to be skewed.
```python
fig = plt.figure(figsize = (12, 6))
plt.subplot(1, 2, 1)
plt.hist(population_ages)
plt.title('Population')

plt.subplot(1, 2, 2)
plt.hist(sample_ages)
plt.title('Sample')
plt.show()
```

The plot reveals the data is clearly not normal: instead of one symmetric bell curve, it has a bimodal distribution with two high density peaks. Because of this, the sample we drew from this population should have roughly the same shape and skew. The sample has roughly the same shape as the underlying population. This suggests that we can't apply techniques that assume a normal distribution to this data set, since it is not normal. This leads to our next topic, the **central limit theorem**.

The central limit theorem is one of the most important results of probability theory and serves as the foundation of many methods of statistical analysis. At a high level, the theorem states that the distribution of many sample means, known as a sampling distribution, will be normally distributed. This rule holds even if the underlying distribution itself is not normally distributed. As a result we can treat the sample mean as if it were drawn from a normal distribution. To illustrate, let's create a sampling distribution by taking 200 samples from our population and then making 200 point estimates of the mean:

```python
np.random.seed(10)
samples = 200
point_estimates = [np.random.choice(population_ages, size = 500).mean()
                   for _ in range(samples)]

plt.hist(point_estimates)
plt.show()
```

The sampling distribution appears to be roughly normal, despite the bimodal population distribution that the samples were drawn from. In addition, the mean of the sampling distribution approaches the true population mean:

```python
population_ages.mean() - np.mean(point_estimates)
```

    -0.084407999999996264

To hit the notion home, the Central Limit Theorem states that if we collect a large number of different sample means from the population, the sampling distribution (the distribution of the sample means we collected) will approximately take the shape of a normal distribution around the population mean, no matter what the original population distribution is. Knowing that the sampling distribution will take the shape of a normal distribution is what makes the theorem so powerful, as it is the foundation of concepts such as confidence intervals and margins of error in frequentist statistics.

## Confidence Interval

A point estimate can give us a rough idea of a population parameter like the mean, but estimates are prone to error. A confidence interval is a range of values above and below a point estimate that captures the true population parameter at some predetermined confidence level. For example, if we want to have a 95% chance of capturing the true population parameter with a point estimate and a corresponding confidence interval, we'd set our confidence level to 95%. Higher confidence levels result in wider confidence intervals. The interval is computed using the formula:

$$
\begin{align}
\text{point estimate} \pm z * SE
\end{align}
$$

Where

- $z$ is called the **critical value** and it corresponds to the **confidence level** that we chose. Critical value is the number of standard deviations we'd have to go from the mean of the normal distribution to capture the proportion of the data associated with the desired confidence level.
  For instance, we know that roughly 95% of the data in a normal distribution lies within 2 standard deviations from the mean, so we could use 2 as the z-critical value for a 95% confidence interval (although it is more exact to get z-critical values with `stats.norm.ppf()`).
- $SE$ represents the **standard error**. Generally the standard error for a point estimate is estimated from the data and computed using a formula. For example, the standard error for the sample mean is $\frac{s}{ \sqrt{n} }$, where $s$ is the standard deviation and $n$ is the number of samples.
- The value $z * SE$ is called the **margin of error**.
- Note that this framework for constructing confidence intervals can be easily adapted for any estimator that has a nearly normal sampling distribution, e.g. sample mean, two sample means, sample proportion and two sample proportions (as we'll see later). All we have to do is change the way we calculate the standard error.

```python
np.random.seed(10)
sample_size = 1000
sample = np.random.choice(population_ages, size = sample_size)
sample_mean = sample.mean()

confidence = 0.95
z_critical = stats.norm.ppf(q = confidence + (1 - confidence) / 2)
print('z-critical value:', z_critical)

pop_stdev = population_ages.std()
margin_of_error = z_critical * (pop_stdev / np.sqrt(sample_size))
confint = sample_mean - margin_of_error, sample_mean + margin_of_error
print('point estimate:', sample_mean)
print('Confidence interval:', confint)
```

    z-critical value: 1.95996398454
    point estimate: 42.523
    Confidence interval: (41.703064068826833, 43.342935931173173)

Notice that the confidence interval we calculated captures the true population mean of 43.0023. Let's create several confidence intervals and plot them to get a better sense of what it means to "capture" the true mean:

```python
np.random.seed(12)
confidence = 0.95
sample_size = 1000

intervals = []
sample_means = []
for sample in range(25):
    sample = np.random.choice(population_ages, size = sample_size)
    sample_mean = sample.mean()
    sample_means.append(sample_mean)

    z_critical = stats.norm.ppf(q = confidence + (1 - confidence) / 2)
    pop_std = population_ages.std()
    margin_error = z_critical * (pop_stdev / np.sqrt(sample_size))
    confint = sample_mean - margin_error, sample_mean + margin_error
    intervals.append(confint)

plt.figure(figsize = (10, 8))
plt.errorbar(x = np.arange(0.1, 25, 1), y = sample_means,
             yerr = [(top - bot) / 2 for top, bot in intervals], fmt = 'o')
plt.hlines(xmin = 0, xmax = 25, y = population_ages.mean(),
           linewidth = 2.0, color = red)
plt.show()
```

Notice that in the plot above, all but one of the 95% confidence intervals overlap the red line marking the true mean. This is to be expected: since a 95% confidence interval captures the true mean 95% of the time, we'd expect our interval to miss the true mean 5% of the time. More formally, the definition of a 95% confidence interval means that **95% of confidence intervals, created based on random samples of the same size from the same population, will contain the true population parameter**.

## Hypothesis Testing

Let's start off with a motivating example that asks the question "If you toss a coin 30 times and see 22 heads, is it a fair coin?"

We all know that a fair coin should come up heads roughly 15 out of 30 tosses, give or take, so it does seem unlikely to see so many heads. However, the skeptic might argue that even a fair coin could show 22 heads in 30 tosses from time to time. This could just be a chance event.
So, the question would then be "how can we determine if we're tossing a fair coin?" Let's start by first considering the probability of a single coin flip coming up heads and work our way up to 22 out of 30.

$$
\begin{align}
P(H) = \frac{1}{2}
\end{align}
$$

As our equation shows, the probability of a single coin toss turning up heads is exactly 50%, since there is an equal chance of either heads or tails turning up. Taking this one step further, to determine the probability of getting 2 heads in a row with 2 coin tosses, we would need to multiply the probability of getting heads by the probability of getting heads again, since the two events are independent of one another.

$$
\begin{align}
P(HH) = P(H) \cdot P(H) = P(H)^2 = \left(\frac{1}{2}\right)^2 = \frac{1}{4}
\end{align}
$$

Let's now take a look at a slightly different scenario and calculate the probability of getting 2 heads and 1 tail with 3 coin tosses. To get the actual probability of tossing 2 heads and 1 tail we will have to add the probabilities for all of the possible permutations, of which there are exactly three: HHT, HTH, and THH.

$$
\begin{align}
P(2H,1T) = P(HHT) + P(HTH) + P(THH) = \frac{1}{8} + \frac{1}{8} + \frac{1}{8} = \frac{3}{8}
\end{align}
$$

Another way we could do this is to use the binomial distribution:

$$
\begin{align}
P(N_H,N_T) = \binom{n}{k} p^{k} \left( 1 - p \right)^{n - k}
\end{align}
$$

Where

- $n$ is the number of coin flips
- $p$ is the probability of getting heads on each flip

The $\binom{n}{k}$ answers the question "how many ways are there to get $k$ heads out of $n$ total coin flips?" and the $p^k(1-p)^{n-k}$ answers the question "how likely is any given arrangement of $k$ heads and $n-k$ tails?". Multiply them together and we get the probability of getting exactly $k$ heads.

Now that we understand the classic method, let's use it to test whether we are actually tossing a fair coin.

```python
# Calculate the probability for every possible outcome
# of tossing a fair coin 30 times
k_range = range(1, 31)  # number of heads appearing
n = 30  # number of times tossing the coin
p = 0.5  # probability of the coin turning up heads
prob = stats.binom(n = n, p = p).pmf(k = k_range)

# Plot the probability distribution using the probabilities list
# we created above.
plt.step(k_range, prob, where = 'mid', color = blue)
plt.xlabel('Number of heads')
plt.ylabel('Probability')
plt.plot((22, 22), (0, 0.1599), color = red)
plt.annotate('0.8%', xytext = (25, 0.08), xy = (22, 0.08),
             va = 'center', color = red, size = 'large',
             arrowprops = {'arrowstyle': '<|-', 'lw': 2,
                           'color': red, 'shrinkA': 10})
plt.show()
```

The visualization above shows the probability distribution for flipping a fair coin 30 times. Using this visualization we can now determine the probability of getting, say for example, 12 heads in 30 flips, which looks to be about 8%. Notice that we've labeled our example of 22 heads as 0.8%. If we look at the probability of flipping exactly 22 heads, it looks to be a little less than 0.8%; in fact, if we calculate it using the function from above, we get 0.5%.

```python
prob = stats.binom(n = n, p = p).pmf(k = 22)
print('Probability of flipping 22 heads: {:0.1f}%'.format(prob * 100))
```

Probability of flipping 22 heads: 0.5%

So, then why do we have 0.8% labeled in our probability distribution above? Well, that's because we are showing the probability of getting at least 22 heads, which is also known as the **p-value**. Let's pull back from our example and discuss hypothesis testing more formally.
In standard frequentist hypothesis testing, we start with a null hypothesis that we usually call $H_0$ (pronounced as H naught), which represents our status quo. On the other hand, we also have an alternative hypothesis, $H_1$, that represents the question that we wish to answer, i.e. what we're testing for. After setting up our null and alternative hypothesis, we conduct a hypothesis test under the assumption that the null hypothesis is true. If the test results suggest that the data do not provide convincing evidence for the alternative hypothesis, we stick with the null hypothesis. If they do, then we reject the null hypothesis in favor of the alternative.

Frequentist hypothesis testing uses a p-value to weigh the strength of the evidence (what the data is telling you about the population). The p-value is defined as **the probability of obtaining the observed or a more extreme outcome, given that the null hypothesis is true (not the probability that the alternative hypothesis is true)**. It is a number between 0 and 1 and is interpreted in the following way:

- A small p-value (typically <= 0.05; 0.05 is a commonly used threshold, often denoted as $\alpha$) indicates strong evidence against the null hypothesis, so we reject the null hypothesis. This means that something interesting is going on and it's not just noise!
- A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so we fail to reject the null hypothesis. Even if the observed effect points in the direction we hoped for, we cannot conclusively say that it was not due to random noise.
- p-values very close to the cutoff (0.05) are considered to be marginal (could go either way). If you carefully read good papers on these kinds of topics, you will always see the p-values being reported so that the readers can draw their own conclusions.

**Example:** Let's say that a pizza place claims their delivery times are 30 minutes or less on average, but we suspect it actually takes more than 30 minutes. We conduct a hypothesis test because we believe the null hypothesis, that the mean delivery time is 30 minutes maximum, is incorrect. This means that our alternative hypothesis is that the mean time is greater than 30 minutes. We randomly sample some delivery times and run the data through the hypothesis test, and our p-value turns out to be 0.01, which is much less than 0.05. In real terms, if the pizza place's claim that their delivery time is less than or equal to 30 minutes were true, there would only be a 1% chance of observing delivery times as extreme as the ones we sampled. Since typically we are willing to reject the null hypothesis when this probability is less than 0.05, we conclude that the pizza place is wrong; their delivery times are in fact more than 30 minutes on average.

Back to our coin toss example: the null hypothesis assumes we have a fair coin, and the way we determine if this hypothesis is true or not is by calculating how often flipping this fair coin 30 times would result in 22 or more heads. If we then take the number of times that we got 22 or more heads and divide that number by the total of all possible permutations of 30 coin tosses, we get the probability of tossing 22 or more heads with a fair coin. This probability is essentially our p-value.
```python def compute_pvalue(n, k, p): """Returns the p-value for binomial distribution""" k_range = range(k, n + 1) pvalue = stats.binom(n = n, p = p).pmf(k = k_range).sum() return pvalue pvalue = compute_pvalue(n = 30, k = 22, p = 0.5) print('P-value: {:0.1f}%'.format(pvalue * 100)) ``` P-value: 0.8% The role of p-value is used to check the validity of the null hypothesis. The way this is done is by agreeing upon some predetermined upper limit for our p-value, below which we will assume that our null hypothesis is false. In other words, if our null hypothesis were true, and 22 heads in 30 flips could happen often enough by chance, we would expect to see it happen more often than the given threshold percentage of times. So, for example, if we chose 10% as our p-value threshold, then we would expect to see 22 or more heads show up at least 10% of the time to determine that this is a chance occurrence and not due to some bias in the coin. Historically, the generally accepted threshold has been 5%, and so if our p-value is less than 5%, we can then make the assumption that our coin may not be fair. Running the code above gives us a p-value of roughly 0.8%, which matches the value in our probability distribution above and is also less than the 5% threshold needed to reject our null hypothesis, so it does look like we may have a biased coin. ```python # we can also use the binom_test function from scipy to # perform the hypothesis testing pvalue = stats.binom_test(x = 22, n = 30, p = 0.5, alternative = 'greater') print('P-value: {:0.1f}%'.format(pvalue * 100)) ``` P-value: 0.8% ## Simulation Instead of using the statistical approach, the code below seeks to answer the same question of whether or not our coin is fair by running a large number of simulated coin flips and calculating the proportion of these experiments that resulted in at least 22 heads or more. ```python def coin_toss(n_simulation = 100000): """ computing a fair coin resulting in at least 22 heads or more through simulation """ pvalue = 0 for i in range(n_simulation): # trials: 1 denotes head, 0 denotes tail trials = np.random.randint(2, size = 30) if trials.sum() >= 22: pvalue += 1 pvalue /= n_simulation return pvalue pvalue = coin_toss() print('Simulated P-value: {:0.1f}%'.format(pvalue * 100)) ``` Simulated P-value: 0.8% The result of our simulations is 0.8%, the exact same result we got earlier when we calculated the p-value using the classical method above. # Frequentist A/B testing A/B testing is essentially a simple randomized trial. Randomized trials are (usually) considered the gold standard study design for evaluating the efficacy of new medical treatments, but they are also used much more widely in experimental research. For example, when someone visits a website, the site sends them to one of two (or possibly more) different landing or home pages, and which one they are sent to is chosen at random. The purpose is to determine which page version generates a superior outcome, e.g. which page generates more advertising revenue, or which which page leads a greater proportion of visitors to continue visiting the site. The key idea is that because we randomize which landing page (or treatment in the case of a randomized clinical trial) someone goes to, after a large number of visitors, the groups of people who visited the two pages are completely comparable in respect of all characteristics (e.g. age, gender, location, and anything else you can think of!). 
Because the two groups are comparable, we can compare the outcomes (e.g. amount of advertising revenue) between the two groups to obtain an unbiased and fair assessment of the relative effectiveness (in terms of our defined outcome) of the two designs.

Suppose for the moment that we've had two visitors to our site, and one visitor has been randomized to page A, and the other visitor to page B (note that it is entirely possible, with simple randomization, that both visitors could have been sent to page A). Suppose next that the visitor to page A generated revenue, but the visitor to page B generated no revenue. Should we conclude that page A is superior to page B, in terms of revenue generation? Of course not. Because we have only sampled two visitors, it is entirely possible that the visitor to page A would have generated revenue even if they had been sent to page B, perhaps because they are very interested in the site's content, whereas perhaps the visitor to page B was not particularly interested in the site content, and was never going to generate revenue. We can overcome this problem by running the A/B test for a sufficiently large number of visitors, such that the probability of the scenario described above is sufficiently small.

Scenario: We ran an A/B test with two different versions of a web page, a and b, for which we count the number of visitors and whether they convert or not. We can summarize this in a contingency table showing the frequency distribution of the events:

```python
data = pd.DataFrame({
    'version': ['A', 'B'],
    'not_converted': [4514, 4473],
    'converted': [486, 527]
})[['version', 'not_converted', 'converted']]
data
```

|   | version | not_converted | converted |
|---|---------|---------------|-----------|
| 0 | A       | 4514          | 486       |
| 1 | B       | 4473          | 527       |

It is trivial to compute the conversion rate of each version: 486 / (486 + 4514) = 9.72% for a and 10.54% for b. With such a relatively small difference, however, can we convincingly say that version b converts better? To test the statistical significance of a result like this, a hypothesis test can be used.

## Comparing Two Proportions

Let's formalize our thought process a little bit. Suppose that we have obtained data from n visitors, $n_A$ of which have been (randomly) sent to page A, and $n_B$ of which have been sent to page B. Further, let $X_A$ and $X_B$ denote the number of visitors for whom we obtained a 'successful' outcome in the two groups. The proportion of successes in the two groups is then given by $\hat{p_A} = X_A/n_A$ and $\hat{p_B} = X_B/n_B$ respectively, and the estimated difference in success rates is given by the difference in proportions, $\hat{p_A} - \hat{p_B}$.

To assess whether we have statistical evidence that the two pages' success rates truly differ, we can perform a hypothesis test. The null hypothesis that we want to test is that the two pages' true success rates are equal, whereas the alternative is that they differ (one is higher than the other).
If $p_A$ = the proportion of the page A population for whom we obtained a successful outcome and $p_B$ = the proportion of the page B population for whom we obtained a successful outcome, then we are interested in testing the following hypothesis:

$$
\begin{align}
H_0:p_A = p_B \text{ versus } H_A: p_A \neq p_B
\end{align}
$$

Or to put it another way, the null hypothesis says that the factors 'page type' and 'outcome' are statistically independent of each other. In words, this means knowing which page someone is sent to tells you nothing about the chance that they will have a successful outcome.

Now that we know what hypothesis test we're interested in, we'll have to derive the appropriate test statistic. A test statistic is a single metric that can be used to evaluate the null hypothesis, and the standard way to obtain this metric is to compute the z-score, which measures how many standard deviations below or above the population mean a raw score is:

$$
\begin{align}
z = \frac{x - \mu}{SE}
\end{align}
$$

Where:

- $\mu$ denotes the mean
- $SE$ (sometimes written with the symbol $\sigma$) denotes the standard error, computed by $\frac{s}{\sqrt{n}}$, where $s$ denotes the standard deviation and $n$ denotes the number of samples

The following link contains an example of where this is applied in proportion hypothesis testing, for those who feel uncomfortable with this concept. [Notes: Eberly College of Science STAT 414/415: Test About Proportions](https://onlinecourses.science.psu.edu/stat414/node/265)

For our test the underlying metric is a binary yes/no variable (event), which means the appropriate test statistic is a test for differences in proportions:

$$
\begin{align}
Z = \frac{ (\hat{p_A} - \hat{p_B}) - (p_A - p_B) }{SE(p_A - p_B)}
\end{align}
$$

The test statistic makes sense, as it measures the difference between the observed proportions, standardized by an estimate of the standard error of this quantity.

To compute the test statistic, we first need to find the standard deviation/variance of $p_A - p_B$:

$$
\begin{align}
Var(p_A - p_B)
&= Var(p_A) + Var(p_B) \\
&= \frac{p_A (1 - p_A)}{n_A} + \frac{p_B (1 - p_B)}{n_B} \\
&= p (1 - p) \left( \frac{1}{n_A} + \frac{1}{n_B} \right)
\end{align}
$$

- The first step stems from the following facts:
    - The variance of a random variable X is defined as $Var(X) = E[X^2] - E[X]^2$
    - The covariance between two random variables X and Y is defined as $Cov(X, Y) = E[(X - u_x)(Y - u_y)] = E[XY] - E[X]E[Y]$
    - When conducting the hypothesis test, we know that the two groups should be independent of each other, i.e. the covariance between the two should be 0

$$
\begin{align}
Var(X - Y)
&= E[(X - Y)(X - Y)] - E[X - Y]^2 \\
&= E[X^2 - 2XY + Y^2] - (u_x - u_y)^2 \\
&= E[X^2 - 2XY + Y^2] - u_x^2 + 2u_xu_y - u_y^2 \\
&= (E[X^2] - u_x^2) + (E[Y^2] - u_y^2) - 2(E[XY] - u_xu_y) \\
&= Var(X) + Var(Y) - 2 Cov(X, Y)
\end{align}
$$

- In the second step, we're using the property that the variance of a binomial proportion is given by $Var(p_A) = p_A (1 - p_A) / n_A$; the same can be applied for group B
- The third step comes from the fact that if we assume that the null hypothesis, $p_A = p_B$, is true, then the population proportions equal some common value $p$, that is, $p_A = p_B = p$.
Since we don't know the assumed common population proportion $p$ any more than we know the proportions $p_A$ and $p_B$ of each population, we can estimate $p$ using the proportion of "successes" in the two samples combined, $\hat{p} = (X_A + X_B)/(n_A + n_B)$, which is commonly referred to as the **pooled probability**.

We also utilize the fact that if we assume the null hypothesis is true, then $p_A = p_B$, which means $p_A - p_B = 0$. Given all of this information, the formula for our test statistic now becomes:

$$
\begin{align}
Z &= \frac{ (\hat{p_A} - \hat{p_B}) - (p_A - p_B) }{SE(p_A - p_B)} \\
&= \frac{ (\hat{p_A} - \hat{p_B}) - 0 }{\sqrt{\hat{p} (1 - \hat{p}) \left( \frac{1}{n_A} + \frac{1}{n_B} \right)}}
\end{align}
$$

Where $\hat{p} = (X_A + X_B)/(n_A + n_B)$

```python
def two_proprotions_test(success_a, size_a, success_b, size_b):
    """
    A/B test for two proportions;
    given the number of successes and the trial size of groups A and B,
    compute the zscore and pvalue

    Parameters
    ----------
    success_a, success_b : int
        Number of successes in each group

    size_a, size_b : int
        Size, or number of observations in each group

    Returns
    -------
    zscore : float
        test statistic for the two proportion z-test

    pvalue : float
        p-value for the two proportion z-test
    """
    prop_a = success_a / size_a
    prop_b = success_b / size_b
    prop_pooled = (success_a + success_b) / (size_a + size_b)
    var = prop_pooled * (1 - prop_pooled) * (1 / size_a + 1 / size_b)
    zscore = np.abs(prop_b - prop_a) / np.sqrt(var)
    one_side = 1 - stats.norm(loc = 0, scale = 1).cdf(zscore)
    pvalue = one_side * 2
    return zscore, pvalue
```

```python
success_a = 486
size_a = 5000
success_b = 527
size_b = 5000

zscore, pvalue = two_proprotions_test(success_a, size_a, success_b, size_b)
print('zscore = {:.3f}, pvalue = {:.3f}'.format(zscore, pvalue))
```

zscore = 1.359, pvalue = 0.174

```python
# or we can use the implementation from statsmodels
# where we pass in the successes (they call the argument counts)
# and the total number for each group (they call the argument nobs,
# number of observations)
counts = np.array([486, 527])
nobs = np.array([5000, 5000])
zscore, pvalue = proportions_ztest(counts, nobs, alternative = 'two-sided')
print('zscore = {:.3f}, pvalue = {:.3f}'.format(zscore, pvalue))
```

zscore = -1.359, pvalue = 0.174

Based on the fact that our p-value is not smaller than the commonly used 0.05 threshold, the test statistic tells us we do not have strong evidence against our null hypothesis, i.e. we do not have strong evidence that the two pages are not equally effective.

Apart from spitting out the p-value, we will also look at forming a confidence interval for $\hat{p_A} - \hat{p_B}$. If the number of trials in both groups is large, and the observed numbers of successes are not too small, we can calculate a 95% confidence interval using the formula:

$$
\begin{align}
\text{point estimate} \pm z * SE &= (\hat{p_A} - \hat{p_B}) \pm z * \sqrt{\frac{\hat{p_A} (1 - \hat{p_A})}{n_A} + \frac{\hat{p_B} (1 - \hat{p_B})}{n_B}}
\end{align}
$$

Note that when calculating the confidence interval, we no longer have the null hypothesis assumption that $p_A = p_B$, thus we can't leverage this property and use the pooled probability; each group's own estimated proportion goes into the standard error.
```python def two_proprotions_confint(success_a, size_a, success_b, size_b, significance = 0.05): """ A/B test for two proportions; given a success a trial size of group A and B compute its confidence interval; resulting confidence interval matches R's prop.test function Parameters ---------- success_a, success_b : int Number of successes in each group size_a, size_b : int Size, or number of observations in each group significance : float, default 0.05 Often denoted as alpha. Governs the chance of a false positive. A significance level of 0.05 means that there is a 5% chance of a false positive. In other words, our confidence level is 1 - 0.05 = 0.95 Returns ------- prop_diff : float Difference between the two proportion confint : 1d ndarray Confidence interval of the two proportion test """ prop_a = success_a / size_a prop_b = success_b / size_b var = prop_a * (1 - prop_a) / size_a + prop_b * (1 - prop_b) / size_b se = np.sqrt(var) # z critical value confidence = 1 - significance z = stats.norm(loc = 0, scale = 1).ppf(confidence + significance / 2) # standard formula for the confidence interval # point-estimtate +- z * standard-error prop_diff = prop_b - prop_a confint = prop_diff + np.array([-1, 1]) * z * se return prop_diff, confint ``` ```python prop_diff, confint = two_proprotions_confint(success_a, size_a, success_b, size_b) print('estimate difference:', prop_diff) print('confidence interval:', confint) ``` estimate difference: 0.008199999999999999 confidence interval: [-0.00362633 0.02002633] Up till this point, we've been using the 5000 as the total number of observations/samples that are involved in the A/B testing process. The next question that we'll address is, in real world scenarios, how many obeservations do we need in order to draw a valid verdict on the test result. This leads us to our next topic **power**. ## Introducing Power In the world of hypothesis testing, rejecting the null hypothesis when it is actually true is called a type 1 error, often denoted as $\alpha$. Committing a type 1 error is a false positive because we end up recommending something that does not work. Conversely, a type 2 error, often denoted as $\beta$, occurs when we do not reject the null hypothesis when it is actually false. This is a false negative because we end up sitting on our hands when we should have taken action. We need to consider both types of errors when choosing the sample size. Two important probabilities related to type 1 and type 2 error are: - **Significance level:** Governs the chance of a false positive. A significance level of 0.05 means that there is a 5% chance of a false positive. Choosing level of significance is an arbitrary task, but for many applications, a level of 5% is chosen, for no better reason than that it is conventional - **Statistical power** Power of 0.80 means that there is an 80% chance that if there was an effect, we would detect it (or a 20% chance that we'd miss the effect). In other words, power is equivalent to $1 - \beta$. There are no formal standards for power, most researchers assess the power of their tests using 0.80 for adequacy | Scenario | $H_0$ is true | $H_0$ is false | |:--------------:|:----------------------------------:|:-------------------------:| | Accept $H_0$ | Correct Decision | Type 2 Error (1 - power) | | Reject $H_0$ | Type 1 Error (significance level) | Correct decision | The concepts of power and significance level can seem somewhat convoluted at first glance. 
A good way to get a feel for the underlying mechanics is to plot the probability distribution of $Z$ assuming that the null hypothesis is true. Then do the same assuming that the alternative hypothesis is true, and overlay the two plots. Consider the following example: $H_0: p_A = p_B, H_1: p_A > p_B$. A one-sided test was chosen here for charting-simplicity. - Total sample size, N=5,000 (assume equal sample sizes for the control and experiment groups, meaning exactly 2,500 in each group) - Say we've decided we need to observe a difference of 0.02 in order to for us to be satisfied the intervention worked (i.e., assuming that our original baseline, $p_B$ was 0.08, then we want $p_A = 0.10$). We will discuss how to make this decision later in the post ```python def plot_power(min_diff, prob_b, size_a, size_b, significance = 0.05): """illustrating power through a one-tailed hypothesis test""" # obtain the z-score for the minimum detectable # difference using proportion_ztest prob_a = prob_b + min_diff count_a = size_a * prob_a count_b = size_b * prob_b counts = np.array([count_a, count_b]) nobs = np.array([size_a, size_b]) zscore, _ = proportions_ztest(counts, nobs, alternative = 'larger') # distribution for the null hypothesis, h0 # and alternative hypothesis, h1 h0 = stats.norm(loc = 0, scale = 1) h1 = stats.norm(loc = zscore, scale = 1) # points that are greater than the zscore for the # specified significance level x = np.linspace(-5, 6, num = 100) threshold = h0.ppf(1 - significance) mask = x > threshold # power is the area after the threshold, i.e. # 1 - the cumulative distribution function of that point power = np.round(1 - h1.cdf(threshold), 2) hypotheses = [h1, h0] labels = ['$H_1$ is true', '$H_0$ is true'] for hypothesis, label in zip(hypotheses, labels): y = hypothesis.pdf(x) line = plt.plot(x, y, label = label) plt.fill_between(x = x[mask], y1 = 0.0, y2 = y[mask], alpha = 0.2, color = line[0].get_color()) title = 'p1: {}, p2: {}, size1: {}, size2: {}, power: {}' plt.title(title.format(prob_a, prob_b, size_a, size_b, power)) plt.legend() plt.tight_layout() plt.show() ``` ```python prob_b = 0.08 min_diff = 0.02 size_a = 2500 size_b = 2500 plot_power(min_diff, prob_b, size_a, size_b) ``` The shaded green area denotes the significance region, while the shaded blue area denotes the power (note that it includes the shaded green area). Note that if we pick a smaller N, or a smaller probability difference between the control and experiment group, the power drops (the shaded blue area decreases), meaning that if there’s is in fact a change, there’s lesser percent chance that we’ll detect it. ```python # smaller N prob_b = 0.08 min_diff = 0.02 size_a = 1250 size_b = 1250 plot_power(min_diff, prob_b, size_a, size_b) ``` ```python # smaller probability difference prob_b = 0.08 min_diff = 0.001 size_a = 2500 size_b = 2500 plot_power(min_diff, prob_b, size_a, size_b) ``` The following link illustrates power for a two-sided hypothesis test for those interested. [Youtube: Calculating Power and the Probability of a Type II Error (A Two-Tailed Example)](https://www.youtube.com/watch?v=NbeHZp23ubs) ## Determining Sample Size Say we've followed the rule of thumb and required the significance level to be 5% and the power to be 80%. This means we have now specified the two key components of a power analysis. - A decision rule of when to reject the null hypothesis. We reject the null when the p-value is less than 5%. - Our tolerance for committing type 2 error (1−80%=20%). 
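Before solving for the sample size itself, it can help to connect these two components back to the power charts above. The following is a minimal sketch (not part of the original analysis) that cross-checks the power annotated in the first chart's title using statsmodels, assuming the same 0.08 versus 0.10 conversion rates and 2,500 users per group:

```python
# a minimal sketch cross-checking the power value from the chart above;
# the 0.08 vs. 0.10 rates and 2500 users per group mirror the plot_power example
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect_size = proportion_effectsize(0.10, 0.08)  # Cohen's h for the two rates
power = NormalIndPower().power(effect_size = effect_size, nobs1 = 2500,
                               alpha = 0.05, ratio = 1, alternative = 'larger')
print('power: {:.2f}'.format(power))
```

Under these assumptions this should print a power of roughly 0.8, in line with the value shown in the title of the first power chart.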
To actually solve for the suitable sample size, we also need to specify the detectable difference, i.e. the level of impact we want to be able to detect with our test. In order to explain the dynamics behind this, we'll return to the definition of power: the power is the probability of rejecting the null hypothesis when it is false. Hence for us to calculate the power, we need to define what "false" means to us in the context of the study. In other words, how much impact, i.e. difference between test and control, do we need to observe in order to reject the null hypothesis and conclude that the action worked?

Let's consider two illustrative examples: if we think that an event rate reduction of, say, $10^{-10}$ is enough to reject the null hypothesis, then we need a very large sample size to get a power of 80%. This is pretty easy to deduce from the charts above: if the difference in event rates between test and control is a small number like $10^{-10}$, the null and alternative probability distributions will be nearly indistinguishable. Hence we will need to increase the sample size in order to move the alternative distribution to the right and gain power. Conversely, if we only require a reduction of 0.02 in order to claim success, we can make do with a much smaller sample size.

> The smaller the detectable difference, the larger the required sample size

Here's how we could conduct a power test in Python:

```python
import statsmodels.stats.api as sms


def compute_sample_size(prop1, min_diff, significance = 0.05, power = 0.8):
    """
    Computes the sample size required for a two-proportion A/B test;
    result matches R's pwr.2p.test from the pwr package

    Parameters
    ----------
    prop1 : float
        The baseline proportion, e.g. conversion rate

    min_diff : float
        Minimum detectable difference

    significance : float, default 0.05
        Often denoted as alpha. Governs the chance of a false positive.
        A significance level of 0.05 means that there is a 5% chance of
        a false positive. In other words, our confidence level is
        1 - 0.05 = 0.95

    power : float, default 0.8
        Often denoted as 1 - beta. Power of 0.80 means that there is an 80%
        chance that if there was an effect, we would detect it
        (or a 20% chance that we'd miss the effect)

    Returns
    -------
    sample_size : float
        Required sample size for each group of the experiment

    References
    ----------
    R pwr package's vignette
    - https://cran.r-project.org/web/packages/pwr/vignettes/pwr-vignette.html

    Stackoverflow: Is there a python (scipy) function to determine parameters
    needed to obtain a target power?
    - https://stackoverflow.com/questions/15204070/is-there-a-python-scipy-function-to-determine-parameters-needed-to-obtain-a-ta
    """
    prop2 = prop1 + min_diff
    effect_size = sms.proportion_effectsize(prop1, prop2)
    sample_size = sms.NormalIndPower().solve_power(
        effect_size, power = power, alpha = significance, ratio = 1)
    return sample_size
```

```python
sample_size = compute_sample_size(prop1 = 0.1, min_diff = 0.02)
print('sample size required per group:', sample_size)
```

sample size required per group: 3834.5957398840183

Note that the printed result is the sample size needed for each group! Unlike the significance level and the power, there are no plug-and-play values we can use for the detectable difference. The key is to define what "pay off" means for the study at hand, which depends on what the adverse event is as well as the cost of the action.
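As a side note, the per-group sample size for a two-proportion test can also be approximated with the classic normal-approximation formula, which makes the role of the detectable difference explicit (it enters the denominator squared). This is only a sketch for intuition; `compute_sample_size` above relies on statsmodels' arcsine-transformed effect size, so the two numbers differ slightly:

```python
# a rough closed-form approximation of the per-group sample size;
# prop1 / min_diff / significance / power mirror compute_sample_size above
def approx_sample_size(prop1, min_diff, significance = 0.05, power = 0.8):
    prop2 = prop1 + min_diff
    z_alpha = stats.norm.ppf(1 - significance / 2)  # two-sided critical value
    z_power = stats.norm.ppf(power)
    variance = prop1 * (1 - prop1) + prop2 * (1 - prop2)
    return (z_alpha + z_power) ** 2 * variance / min_diff ** 2


print('approximate sample size per group:',
      approx_sample_size(prop1 = 0.1, min_diff = 0.02))
```

This prints roughly 3,838 per group, close to the statsmodels result above, and the $1 / \text{min\_diff}^2$ term is what drives the rapid growth in sample size as the detectable difference shrinks.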
Two guiding principles:

- **Avoid wasteful sampling.** Let's say it takes an absolute difference of 0.02 between test and control in order for the treatment to pay off. In this case, aiming for a 0.01 detectable difference would just lead to more precision than we really need. Why have the ability to detect 0.01 if we don't really care about a 0.01 difference? In many cases, sampling for unnecessary precision can be costly and a waste of time.
- **Avoid missed opportunities.** Conversely, if we are analyzing a sensitive metric where small changes can have a large impact, e.g. email campaigns, we have to aim for a small detectable difference. If we choose an insufficient sample size, we may end up sitting on our hands and missing an opportunity (type 2 error).

Hence, choosing the minimum detectable difference should be a cross-functional analysis/discussion between the data scientist and the business stakeholder. Once there is a viable range for the detectable difference, we can evaluate the sample size required for each option. For example, let's say that $p1=0.10$ and we want the detectable difference to be between 0.01 and 0.03. Clearly, we'd rather be able to detect a difference of 0.01, but it may be too costly and hence we want to evaluate more conservative options as well.

```python
# calculate the required sample size
# for a range of minimum detectable differences
sample_sizes = []
min_diffs = np.arange(0.01, 0.03, 0.001)
for min_diff in min_diffs:
    sample_size = compute_sample_size(prop1 = 0.1, min_diff = min_diff)
    sample_sizes.append(sample_size)

plt.plot(min_diffs, sample_sizes)
plt.title('Sample Size Required for the Minimum Detectable Difference')
plt.ylabel('Sample Size')
plt.xlabel('Minimum Detectable Difference')
plt.tight_layout()
plt.show()
```

From the graph, we can see that we need roughly 8 times more observations to detect a difference of 0.01 compared to 0.03.

The following section is an alternative way of deriving the test statistic for the proportion-based A/B test; feel free to skip it, as it will not affect the understanding of later sections.

## Alternative View of the Test Statistic

There are two types of chi-squared tests, goodness of fit and test of independence, where the latter is the one applicable to our use case here. We start off by converting the contingency table into a probability matrix by dividing each element by the total frequency:

```python
cols = ['not_converted', 'converted']
data[cols] = data[cols] / data[cols].values.sum()
data
```

|   | version | not_converted | converted |
|---|---------|---------------|-----------|
| 0 | A       | 0.4514        | 0.0486    |
| 1 | B       | 0.4473        | 0.0527    |

We will denote $V$ as the version of the web page ($a$ or $b$) and $C$ as the conversion result, $f$ (false, did not convert) or $t$ (true, did in fact convert).
The table that we computed above, which this the data that we observed can then be translated into this form: | Version (V) | $f$ (false did not convert) | $t$ (true did in fact convert) | |:-----------:|:----------------------------:|:------------------------------:| | A | $P(V = a, C = f)$ | $P(V = a, C = t)$ | | B | $P(V = b, C = f)$ | $P(V = b, C = t)$ | Now, our interest is whether the conversion $C$ depends on the page version $V$, and if it does, to learn which version converts better. In probability theory, the events $C$ and $V$ are said to be independent if the joint probability can be computed by $P(V, C) = P(V) \cdot P(C)$, where $P(V)$ and $P(C)$ are marginal probabilities of $V$ and $C$, respectively. It is straightforward to compute the marginal probabilities from row and column marginals: $$P(V = a) = \frac{4514 + 486}{10000} \hspace{1cm} P(V = b) = \frac{4473 + 527}{10000}$$ $$P(C = f) = \frac{4514 + 4473}{10000} \hspace{1cm} P(C = t) = \frac{486 + 527}{10000}$$ Our null hypothesis is $V$ and $C$ are independent, in which case the elements of the matrix, a.k.a the distribution that we're expecting is equivalent to: | Version (V) | $f$ (false did not convert) | $t$ (true did in fact convert) | |:-----------:|:----------------------------:|:------------------------------:| | A | $P(V = a)P(C = f)$ | $P(V = a)P(C = t)$ | | B | $P(V = b)P(C = f)$ | $P(V = b)P(C = t)$ | The conversion $C$ is said to be dependent on the version $V$ of the web site if this null hypothesis is rejected. Hence rejecting the null hypothesis means that one version is better at converting than the other. When dealing with counts and investigating how far the observed counts are from the expected counts, we use a test statistic called the **chi-square test**. The chi-squared test compares an observed distribution $O_{ij}$ to an expected distribution $E_{ij}$: $$ \begin{align} \chi^2 = \sum_{i,j} \frac{(O_{ij} - E_{ij})^2}{E_{ij}} \end{align} $$ It's calculated as the observed minus the expected for each cell squared divided by the expected counts, the division with the expected counts makes final result proportional to our expected frequency. After performing the computation for each cell, we want to sum this over all the cells (levels of the categorical variable). This $\chi^2$ probability distribution has only one parameter, the degrees of freedom. It influences the shape, the center and the spread of the chi-square distribution. ```python # chi square distribution with varying degrees of freedom fig = plt.figure(figsize = (8, 6)) x = np.linspace(0, 5, 1000) deg_of_freedom = [1, 2, 3, 4] for df in deg_of_freedom: plt.plot(x, stats.chi2.pdf(x, df), label = '$df={}$'.format(df)) plt.xlim(0, 5) plt.ylim(0, 0.5) plt.xlabel('$\chi^2$') plt.ylabel('$f(\chi^2)$') plt.title('$\chi^2\ \mathrm{Distribution}$') plt.legend() plt.show() ``` chi-square distribution gives a way of measuring the difference between the frequencies we observe and the frequencies we expect. The smaller the value of $\chi^2$, the smaller the difference overall between the observed and expected frequencies. The way to compute the degree of freedom for the test of independence using a $r \times c$ contingency matrix is: $$ \begin{align} df = (r - 1)(c - 1) \end{align} $$ Where $r$ denotes the number of rows and $c$ denotes the number of columns. The rationale behind this calculation is because degrees of freedom is the number of expected frequencies we have to calculate independently after taking into account any restrictions. 
The restrictions come from the row and column sum constraints, but decreased by one because the last entry in the table/matrix is determined by either the row or column sum on that row/column.

Fortunately, it is very straightforward to carry out this hypothesis test using packages. All we need to do is supply the function with a contingency matrix and it will return the $\chi^2$ statistic and the corresponding p-value:

```python
# we can use the proportions_chisquare function,
# where we pass in the number of successes and
# the total number of trials/observations
count = np.array([486, 527])
nobs = np.array([5000, 5000])

# note that in this case (a two sample case with two sided
# alternative), the test produces the same value as proportions_ztest
# since the chi-square distribution is the square of a normal distribution
chisq, pvalue, table = proportions_chisquare(count, nobs)
print('chisq = {}, pvalue = {}'.format(chisq, pvalue))
```

chisq = 1.8464754013996965, pvalue = 0.17419388311716985

```python
# or the chi2_contingency function where we pass
# in the observed contingency table
observed = np.array([[4514, 486], [4473, 527]])

# more about the correction = False parameter later
result = stats.chi2_contingency(observed, correction = False)
chisq, pvalue = result[:2]
print('chisq = {}, pvalue = {}'.format(chisq, pvalue))
```

chisq = 1.8464754013996965, pvalue = 0.17419388311716985

The result for our experiment is $\chi^2 \approx 1.85$ and $p \approx 0.174$. Since the p-value is greater than the standard threshold of 0.05, we cannot reject the null hypothesis that the page version and the conversion are independent. Therefore the difference in the conversion rates is not statistically significant.

For a 2 x 2 contingency table, Yates's chi-squared test is commonly used. This applies a correction of the form:

$$
\begin{align}
\chi^2_{Yates} = \sum_{i,j} \frac{(\big|O_{ij} - E_{ij}\big| - 0.5)^2}{E_{ij}}
\end{align}
$$

to account for the error between the observed discrete distribution and the continuous chi-squared distribution (the subtraction of 0.5 is often referred to as the continuity correction).

```python
# we can use the corrected form by specifying
# correction = True
result = stats.chi2_contingency(observed, correction = True)
chisq, pvalue = result[:2]
print('chisq = {}, pvalue = {}'.format(chisq, pvalue))
```

chisq = 1.7575018692680038, pvalue = 0.18493641552090323

Again, our p-value is greater than the 0.05 threshold, hence we still do not reject the null hypothesis (that there is no relationship between the categorical variables).

> Side note: in practice, we want to make sure that each particular scenario or cell has at least five expected counts before employing the chi-square test.

# Frequentist A/B Testing Workflow

After diving into the technical details of conducting frequentist A/B tests, we will now introduce one possible template/workflow/thought-process for conducting A/B testing.

## Formulate Business Goals & Hypothesis Test

**Define Business Goal**

Every project or plan or test always starts with a goal, e.g. a business objective for an online flower store is to "Increase our sales by receiving online orders for our bouquets."

**Formulate A/B Test**

The crux of A/B testing can be summarized into one sentence:

> If **[Variable]**, then **[Result]**, because **[Rationale]**

- **[Variable]** is the element, such as the call to action or media, that we've modified
- **[Result]** is basically what we expect to see, such as more clicks, more sign-ups.
The effect size of [Result] will be determined by the data - **[Rationale]** what assumptions will be proven right/wrong after the experiment ### Result We start by asking ourselves, what result are we expecting out of this test? To do this, we need to: - **Define our Key Performance Indicators.** e.g. Our flower store's business objective is to sell bouquets. Our KPI could be number of bouquets sold online. - **Define our target metrics.** e.g. For our imaginary flower store, we can define a monthly target of 175 bouquets sold. ### Rationale A lot of times, people have the idea that A/B testing is panacea, too many people think they'll just guess their way to great conversion and revenue, when truly successful tests are typically much more complicated than that. After defining the high level goal and knowing the result that we're aiming for, find out (not guess) which parts of our business are underperforming or trending and why. Ways to perform this step are: **Quantitative methods** We can start by looking at quantitative data if we have any. These methods do a much better job answering how many and how much types of questions. Say we're a website, we can take a look at our conversion funnel and examine the flow from the persuasive end (top of the funnel) and the transactional end (bottom of the funnel). e.g. We can identify problems by starting from the top 5 highest bounce rate pages. During the examination, segment to spot underlying underperformance or trends. - **Segment by source:** Separate people who arrive on your website from e-mail campaigns, google, twitter, youtube, etc. Find answers to questions like: Is there a difference between bounce rates for those segments? Is there a difference in Visitor Loyalty between those who came from Youtube versus those who came from Twitter? What products do people who come from Youtube care about more than people who come from Google? - **Segment by behavior:** Focus on groups of people who have similar behaviors For example, we can separate out people who visit more than ten times a month versus those that visit only twice. Do these people look for products in different price ranges? Are they from different regions? Or separate people out by the products they purchase, by order size, by people who have signed up. e.g. We're looking at our metric of total active users over time and we see a spike in one of the timelines. After confirming that this is not caused by seasonal variation, we can look at different segment of our visitors to see if one of the segment is causing the spike. Suppose we have chosen segment to be geographic, it might just happen that we’ve identify a large proportion of the traffic is generated by a specific region During the process we should ask ourselves: 1) Why is it happening? 2) How can we spread the success of other areas of the site. And it might be best for us to use qualitative methods to dig deeper and understand why, i.e. the rationale that behind the hypothesis test. **Qualitative methods:** Ideas for gathering qualitative data to understand the why a problem exists and how to potentially fix it: - Add an exit survey on our site, asking why our visitors did/didn't complete the goal - Track what customers are saying in social media and on review sites - User Experience Group (this is the preferred way as it is going really deep with a few users and ask qualitative questions such as what's holding them back from doing what we hope they'll do, e.g. 
converting) ### Variable Upon identifying the overall business goal and the possible issue, it's time to determine the variable, which is the element that we'll be testing for. e.g. we've identified through quantitative method that less than one percent of visitors sign up for our newsletter and after conducting qualitative studies it's because the call to action wording does not resonate with the audience, then our variable will be changing the call to action's wording. Note that we may have multiple ideas for our variable, in that case we can collate all the ideas, prioritize them based on three simple metrics: - **Potential** How much potential for a conversion rate increase? We can check to see if this kind of idea worked before. - **Importance** How many visitors will be impacted from the test? - **Ease** How easy is it to implement the test? Go for the low-hanging fruit first. Every test that's developed should documented so that we can review and prioritize ideas that are inspired by winning tests. Some ideas worth experimenting are: Headlines, CTA (call to actions), check-out pages, forms and the elements include: - Wording. e.g. Call to action or value proposition. - Image. e.g. Replacing a general logistics image with the image of an actual employee. - Layout. e.g. Increased the size of the contact form or amount of content on the page. --- So given all of that a strong A/B test hypothesis may be: - If the call to action text is changed to "Complete My Order", the conversion rates in the checkout will increase, because the copy is more specific and personalized - If the navigation link is removed from checkout pages, the conversation rate will increase because our website analytics shows portions of our traffic drop out of the funnel by clicking on those links ## Quantitative A/B testing Suppose we're running an educational platform and your A/B testing hypothesis is : Will changing the "Start Now" button from orange to pink increase how many students explore the platform's courses. So in this case the metric that's use to evaluate the change's performance is the click through probability (unique visitors who click the button / unique visitors to page). Note that it is often times impractical to use metrices such as total number of students that completed the course as it often takes weeks or months before a student can do that. Next we will jot down the hypothesis that we wish to test out, in our case the null and alternative hypothesis would be: - $H_0$: The experimental and control groups have the same probability of clicking the button. Or equivalent to saying that the differences of the two groups' probability is 0 - $H_1$: The two groups have different probability of completing a clicking the button ### Define the Size and Duration After we've defined our hypothesis, the first question that comes into mind is how many tests do we need to run, or in a sense how long should the test last in order for us to make our decisions. To do that we can use a power analysis for two independent samples: Now suppose that our current baseline is 0.1, i.e. there's a 10 percent chance that people who saw the button will click it and we wish to detect a change of 2 percent in the click through rate (This change is quite high for online experiment). 
```python
sample_size = compute_sample_size(prop1 = 0.1, min_diff = 0.02)
print('sample size required per group:', sample_size)
```

sample size required per group: 3834.5957398840183

The result shows that we need at least 3,835 samples in each group to detect a 2 percent increase over the baseline click-through probability. Note that this is only telling us the minimum sample size required per group; we still need to decide when we want to run the experiment and for how long.

e.g. Suppose we've chosen the goal to increase click-through rates, which is defined by the unique number of people who click the button versus the number of users who visited the page where the button is located. But to actually use the definition, we'll also have to address some other questions. Such as, if the same user visits the page once and comes back a week or two later, do we still only want to count that once? Thus we'll also need to specify a time period. To account for this, if 99% of our visitors convert within 1 week, then we should do the following:

- Run our test for two weeks
- Include in the test only users who show up in the first week. If a user shows up on day 13, we have not given them enough time to convert (click through)
- At the end of the test, if a user who showed up on day 2 converts more than 7 days after they first arrived, they must be counted as a non-conversion

There will be more discussion about this in the A/B Test Caveats & Advice section. For this step, there is also an online calculator that a non-technical audience could use. [Online Calculator: Sample Size Calculator](http://www.evanmiller.org/ab-testing/sample-size.html)

## Define the Population

Another consideration is what fraction of the traffic we are going to send through the experiment. The key is to identify which population of our users will be affected by our experiment; we might want to target our experiment to that traffic (e.g. changing features specific to one language's users) so that the rest of the population won't dilute the effect.

Next, depending on the problem we're looking at, we might want to use a cohort instead of a population. A cohort makes much more sense than looking at the entire population when testing out learning effects, examining user retention or anything else that requires the users to be established for some reason.

A quick note on cohorts. The gist of cohort analysis is basically putting our customers into buckets so we can track their behaviours over a period of time. The term cohort stands for a group of customers grouped by the timeline (can be week, month) in which they first made a purchase (or a different action that's valuable to the business). Having similar traits makes the two groups more comparable.

e.g. Say you're an educational platform with an existing course that's already up and running. Some of the students have completed the course, some of them are midway through and there are students who have not yet started. Suppose you want to change the structure of one of the lessons to see if it improves the completion rate of the entire course, and you start the experiment at time X. Students who started before the experiment was initiated may have already finished the lesson, so they may never even see the change. So taking the whole population of students and running the experiment on them isn't what you want.
Instead, you want to segment out the cohort, the group of customers that started the lesson as the experiment was launched, and split that into an experiment and a control group.

## Evaluating Result

Suppose we have run the test and obtained the total sample size and the total number of successes for both groups. Given these numbers we can calculate whether the observed change in proportions was due to random variation or not.

```python
# made-up results
success_a = 386
size_a = 3834
success_b = 530
size_b = 3842

prob_diff, confint = two_proprotions_confint(success_a, size_a, success_b, size_b)
print('estimate difference:', prob_diff)
print('confidence interval:', confint)
```

estimate difference: 0.03727084197203194
confidence interval: [ 0.02279256  0.05174912]

In order to launch the change, the observed difference should be larger than the minimum detectable difference we set, which in our case was 0.02. Based on the result above, since even the lower bound of the confidence interval is larger than that value, we would definitely launch the newer version of the button. There is also an online calculator that we can use to perform the proportion test. [Online Calculator: AB Testguide](https://abtestguide.com/calc/)

## Sanity Check

When running experiments, especially online experiments, it's a good idea to check whether the experiments were set up properly, i.e. whether the users are being split equally between the two groups. For instance, after running your experiment for a week, you've discovered that the total number of users assigned to the control group is 64454 and the total number of users assigned to the experiment group is 61818. How would you figure out whether the difference is within expectation, given that each user is randomly assigned to the control or experiment group with a probability of 0.5? It's usually a good idea to check this.

This is equivalent to asking: out of a total of 126272 (64454 + 61818) users, is it surprising to see 64454 users assigned to the control group? The number of users assigned to the control group essentially follows a binomial distribution, thus, knowing this information, we can construct a confidence interval and test whether the observed number lies within it. The confidence interval can be calculated as the mean plus and minus the z-score times the standard error.

$$
\begin{align}
\text{mean} \pm z * \sqrt{np(1 - p)}
\end{align}
$$

Where the mean is the expected number of users in the control/experiment group, which is simply the total number of users across the two groups times 0.5, since the probability of a user falling into either group is 50%. And the standard error of a binomial count is $\sqrt{np(1-p)}$.

```python
def sanity_check(size1, size2, significance = 0.05):
    """Confidence interval for the number of users we expect in one
    group under a fair 50/50 split"""
    n = size1 + size2
    confidence = 1 - significance
    z = stats.norm.ppf(confidence + significance / 2)
    confint = n * 0.5 + np.array([-1, 1]) * z * np.sqrt(n * 0.5 * 0.5)
    return confint
```

```python
size1 = 64454
size2 = 61818
sanity_check(size1, size2)
```

array([ 62787.76563631,  63484.23436369])

The result shows that 64454 does not lie within the range of the computed 95 percent confidence interval, which indicates that the two groups may not have been split equally. When this kind of situation happens, it's usually best to go back to the day-by-day data to get a better idea of what could be going wrong. One good thing to check is whether any particular day stands out, or whether it is an overall pattern.
If it is an overall pattern, then it is suggested that we check whether something went wrong with the experiment setup before proceeding to analyze the result.

# A/B Test Caveats & Advice

## Avoid Biased Stopping Times

NO PEEKING. When running an A/B test, we should avoid stopping the experiment as soon as the results "look" significant. Using a stopping time that is dependent upon the results of the experiment can inflate our false-positive rate substantially.

Recall that in many experiments, we set the significance threshold to be 5% (or a p-value threshold of 0.05). This means that we'll accept that Variation A is better than Variation B if A beats B by a margin large enough that a false positive would only happen 5% of the time. If we, however, were to check the experiment with the intent of stopping it as soon as it shows significance, then every time we perform the significance test, we're essentially inflating our false-positive rate. To be more explicit: every time we perform the test there's a 5% chance of a false positive, in other words a 95% chance of drawing the right conclusion. If we perform it again, we need both tests to be correct to draw the right conclusion, i.e. the probability of both tests giving us the correct result becomes (1 - 5%)(1 - 5%), and the probability of committing a false positive error is now 1 - (1 - 5%)(1 - 5%).

```python
# the false positive rate of conducting the test n times
significance = 0.05
print('conducting the test 2 times', 1 - (1 - significance) ** 2)
print('conducting the test 10 times', 1 - (1 - significance) ** 10)
```

conducting the test 2 times 0.09750000000000003
conducting the test 10 times 0.4012630607616213

The easiest way to avoid this problem is to **choose a stopping time that's independent of the test results**. We could, for example, decide in advance to run the test for a fixed amount of time, no matter the results we observe during the test's tenure. Thus, just like in the template above, if 99% of our visitors convert within 1 week, then we should do the following:

- Run the test for two weeks.
- Include in the test only users who show up in the first week. If a user shows up on day 13, we have not given them enough time to convert.
- At the end of the test, if a user who showed up on day 2 converts more than 7 days after they first arrived, they must be counted as a non-conversion.

Or we could decide to run the test until each bucket has received more than 10,000 visitors, again ignoring the test results until that condition is met. There are methods, such as power analysis, that let us determine how many observations to collect before drawing a conclusion about the result. We should still be careful with this, because the truth is: it's not really the number of conversions that matters; it's whether the time frame of the test is long enough to capture the variations on the site. For instance, the website traffic may behave one way during the day and another way at night (the same holds for weekdays and weekends).

It's also worth noting that there are two effects that can occur when new features are introduced: the **primacy** and **novelty** effects.

- Primacy effect: occurs when we change something and experienced users may be less efficient until they get used to the new feature, thus giving an inherent advantage to the control (the original version)
- Novelty effect: when users are switched to a new experience, their initial reactions may not be their long-term reactions.
To return to the novelty effect: if we are testing a new color for a button, the user may initially love the button and click it more often, just because it's novel, but eventually he/she would get used to the new color and behave as he/she did before. It's important to run the trial long enough to get past this period of the "shock of the new".

In summary, setting a results-independent stopping time (e.g. a week) is the easiest and most reliable way to avoid biased stopping times. Running the test for at least a week is advised, since it makes sure that the experiment captures the different user behavior of weekdays and weekends; also try to avoid holidays.

## Do Follow Up Tests and Watch your Overall Success Rate

If we're running a lot of A/B tests, we should run follow-up tests and pay attention to our base success rate. Let's talk about these in reverse order.

Imagine that we've done everything right. We set our stopping time in advance, and keep it independent from the test results. We set a relatively high success criterion: a probability of at least 95% that the variant is better than the control (formally, $p \leq 0.05$). We do all of that, then we run 100 tests, each with all the rigor just described. In the end, of those 100 tests, 5 of them claim that the variant beats the control. How many of those variants do we think are really better than the control, though?

If we run 20 tests in a row in which the "best" variant is worse than or statistically indistinguishable from the control, then we should be suspicious when our 21st test comes out positive. If a button-color test failed to elicit a winner six months ago, but did produce one today, we should be skeptical. Why now but not then?

Here's an intuitive way of thinking about this problem. Let's say we have a class of students who each took a 100-item true/false test on a certain subject, and suppose each student chooses randomly on all questions. Each student would achieve a random score between 0 and 100, with an average of 50. Now take only the top-scoring 10% of the class, declare them "winners", and give them a second test on which they again choose randomly. They will most likely score less on the second test than on the first. That's because, no matter what they scored on the first test, they will still average 50 correct answers on the second test. This is what's called **regression to the mean**: tests that seem to be successful at first can lose their uplift over time (a small simulation of this effect is sketched below).

It can therefore be wise to run our A/B tests twice (a validation test). You'll find that doing so helps to eliminate illusory results. If the results of the first test aren't robust, you'll see noticeable decay with the second. But if the uplift is real, you should still see uplift during the second test. This approach isn't fail-safe, but it will help check whether your results are robust. For example, in a multi-variant test you tried out three variants, B, C, and D, against the control A, and variant C won. Don't deploy it fully yet. Drive 50% of your traffic to variant C and 50% to variant A (or some modification of this; the exact split is not important as long as you will have reasonable statistical power within an acceptable time period), as this will give you more information about C's true performance relative to A.

Given the situation above, it's better to keep a record of previous tests: when they were run, the variants that were tried, and so on. This historical record gives you an idea of what's reasonable.
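Before moving on, here is a small simulation to make the regression-to-the-mean intuition above concrete. It is only a sketch; the class size of 1000 students is an assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 students each guess randomly on a 100-item true/false test
first = rng.binomial(n=100, p=0.5, size=1000)

# take the top-scoring ~10% ("winners") and give them a second, equally random test
winners = first >= np.quantile(first, 0.9)
second = rng.binomial(n=100, p=0.5, size=winners.sum())

print(first[winners].mean())  # well above 50, by construction
print(second.mean())          # back to roughly 50: regression to the mean
```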
The record is not directly informative of the rates you should expect from future tests (the absolute numbers are extremely time-dependent, so the raw numbers you get today will be completely different from the ones you would have gotten six months later), but it does tell you what's plausible in terms of each test's relative performance. Also, by keeping a record of previous tests, we can avoid:

- Falling into the trap of "we already tried that". A hypothesis can be implemented in many different ways; if you do just one headline test and say "we tried that", you're really selling yourself short.
- Not testing continually or not retesting after months or years. Just because you tested a variation in the past doesn't necessarily mean that those results are still valid a year or two from now (and because we have a record of what we did, we can easily reproduce the test).

## False Reporting

Let's say we deploy a new feature to our product and wish to see if it increases the product's activation rate (or any other metric or KPI that's relevant to us). Currently the baseline of the product's activation rate is somewhere around 40%. After running the test, we see that it WORKED: activation went up to 50%. So we're like, YES! We just raised activation by 25%! And we send this info to the head of product and ask for a raise.

After two months, the head of product comes back and says, "You told me you raised the activation rate by 25%. Shouldn't this mean that I should see a big jump in the overall activation? What's going on?" Well, what's going on is: we did raise activation by 25%, but only for users who use that feature. So if only 10 percent of our users use that feature, then the overall increase in activation rate will probably only be around 2.5% (25% * 10%). Which is still probably very good, but the expectation that we've set by mis-reporting can get us into trouble.

## Seasonality / Not Running it Against the Correct Target

Suppose you have different types of users (or users with different usage patterns) using your product, e.g. business users and students. Then your A/B test can produce different results in July versus October. The reason may be that in July all your student users are out on vacation (not using your product), and in October, after school starts, they start using it again. This is simply saying that the weighting of your user population may be different at different times of the year (seasonality). Thus, you should be clear with yourself about who you're targeting.

## Non-Randomized Bucketing

Double check that we're actually randomly splitting our users; this will most likely burn us if our system assigns user ids in a systematic way, e.g. user ids whose last two digits are 70 all come from a specific region.

A more quintessential example of non-randomized bucketing: say we have two types of users, casual users who spend on average 10 dollars per month and power users who spend on average 100 dollars. When we randomly assign users to treatment or control in our A/B test, there's a chance that power users and casual users will not be evenly split between the two groups. As the sample size for our experiment grows, the likelihood of these imbalances between treatment and control shrinks, but it never completely goes away. Let's say that during our A/B test we had 100 total users.
50 ended up in Group A (Treatment) and 50 in Group B (Control), but Group A had 20 power users and 30 casual users, while Group B had 30 power users and 20 casual users. This means that even before there was any experiment, the treatment group's average spending was lower than the control group's. The pre-experiment numbers would look like this:

- Treatment average = (20 power users x 100 + 30 casual users x 10) / 50 = 2300 / 50 = 46 per user.
- Control average = (30 power users x 100 + 20 casual users x 10) / 50 = 3200 / 50 = 64 per user.
- Group difference = 46 - 64 = -18.

The upshot is that our treatment has to have an effect greater than 18 per user just to show that it had a positive impact. But what if it actually caused users to spend 12 more? We'd compare our two groups, it would look like our treatment had an effect of -6, and we'd likely make the wrong call not to launch the recommended items feature (a tiny numeric sketch of this masking effect is included just before the references).

## Others

Despite its useful functionality, there are still situations where A/B testing isn't as useful. For example:

- A/B testing can't tell us if we're missing something. It can tell us whether A performs better than B or vice versa, but it can't tell us that if we used C, it would actually perform better than the former two.
- Testing products that people rarely buy, e.g. cars or apartments. It may take too long before the user actually decides to take action after seeing the information, and we might be unaware of the actual motivation.
- Optimizing for the funnel rather than the product. What matters in the end is understanding what customers want so that we can make the product better; we can't simply test our headlines and get people to like our product more.
- Conflicting tests: two different product teams both deploy new features on the landing page and run their A/B tests during the same period of time. This is more of an organizational problem. You should probably require product teams to register their tests, and make sure that multiple tests on the same thing are not running at the same time, or else you might be tracking the effect of the other test.
- Optimizing the wrong metric. The best example is probably that a higher click-through rate doesn't necessarily mean higher relevance. To be explicit, poor search results mean people perform more searches, and thereby click on more ads. While this seems good in the short term, it's terrible in the long term, as users get more and more frustrated with the search engine. A search engine's goal should be to help users find what they want as quickly as possible, and sessions per user (increasing sessions per user means users are satisfied and returning) should probably be the key metric to showcase instead.
- The following article also contains some nice reads regarding A/B testing: [Blog: Never start with a hypothesis](https://towardsdatascience.com/hypothesis-testing-decoded-for-movers-and-shakers-bfc2bc34da41). The crux of the article is that when formulating hypotheses in the real world, our default action should be the one that we find palatable under ignorance. Hence the idea of incorrectly leaving our cozy comfort zone (the default action) should be more painful than the idea of incorrectly sticking to it, i.e. a Type I error should feel worse than a Type II error.
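To close, here is a tiny numeric sketch of the non-randomized bucketing caveat above, showing how a pre-existing group imbalance can mask a true uplift of 12 per user. The spend levels are the made-up numbers from the example:

```python
import numpy as np

# hypothetical spend levels from the example: power users ~100, casual users ~10
treatment = np.array([100] * 20 + [10] * 30) + 12   # true effect of +12 per user
control   = np.array([100] * 30 + [10] * 20)

# prints -6.0: the imbalance between buckets hides the real +12 uplift
print(treatment.mean() - control.mean())
```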
# Reference - [Youtube: Beautiful A/B Testing](https://www.youtube.com/watch?v=EvDg7ssY0M8) - [Notebook: Statistics for Hackers](http://nbviewer.jupyter.org/github/croach/statistics-for-hackers/blob/master/statistics-for-hackers.ipynb) - [Blog: What Are P-Values?](https://prateekvjoshi.com/2013/12/07/what-are-p-values/) - [Blog: How to work with A/B test data](https://medium.com/@carsonforter/how-to-work-with-a-b-test-data-96121b89d1a4) - [Blog: Interpreting A/B Test using Python](http://okomestudio.net/biboroku/?p=2375) - [Blog: So, You Need a Statistically Significant Sample?](http://multithreaded.stitchfix.com/blog/2015/05/26/significant-sample/) - [Blog: How to Build a Strong A/B Testing Plan That Gets Results](https://conversionxl.com/how-to-build-a-strong-ab-testing-plan-that-gets-results/) - [Blog: A/B testing and Pearson's chi-squared test of independence](http://thestatsgeek.com/2013/07/22/ab-testing/) - [Blog: A/B testing - confidence interval for the difference in proportions using R](http://thestatsgeek.com/2014/02/15/ab-testing-confidence-interval-for-the-difference-in-proportions-using-r/) - [Blog: Python for Data Analysis Part 23: Point Estimates and Confidence Intervals](http://hamelg.blogspot.com/2015/11/python-for-data-analysis-part-23-point.html) - [Notes: MOST winning A/B test results are illusory](http://www.qubit.com/sites/default/files/pdf/mostwinningabtestresultsareillusory_0.pdf) - [Notes: Eberly College of Science STAT 414/415 Comparing Two Proportions](https://onlinecourses.science.psu.edu/stat414/node/268) - [Quora: When should A/B testing not be trusted to make decisions?](https://www.quora.com/When-should-A-B-testing-not-be-trusted-to-make-decisions) - [Forbes: How To Do A/B Testing Right And Avoid The Most Common Mistakes Marketers Make](https://www.forbes.com/sites/sujanpatel/2015/10/29/how-to-do-ab-testing-right-and-avoid-the-most-common-mistakes-marketers-make/) - [Paper: R. Kohavi, A. Deng, B. Frasca, R. Longbotham, T. Walker, Y. Xu (2012) Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained](http://notes.stephenholiday.com/Five-Puzzling-Outcomes.pdf) - [Slideshare: 4 Steps Toward Scientific A/B Testing](https://www.slideshare.net/RJMetrics/4-steps-toward-scientific-ab-testing)
{"hexsha": "cab3802fffb1fe6b12c89367334e401d8af81615", "size": 888138, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ab_tests/frequentist_ab_test.ipynb", "max_stars_repo_name": "anhnongdan/machine_learning", "max_stars_repo_head_hexsha": "ad247554026b53f285ea96491c4834c8f3057435", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ab_tests/frequentist_ab_test.ipynb", "max_issues_repo_name": "anhnongdan/machine_learning", "max_issues_repo_head_hexsha": "ad247554026b53f285ea96491c4834c8f3057435", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ab_tests/frequentist_ab_test.ipynb", "max_forks_repo_name": "anhnongdan/machine_learning", "max_forks_repo_head_hexsha": "ad247554026b53f285ea96491c4834c8f3057435", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-05-09T13:07:51.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-09T13:07:51.000Z", "avg_line_length": 351.8771790808, "max_line_length": 130426, "alphanum_fraction": 0.9075796779, "converted": true, "num_tokens": 23584}
function [m, ssmp] = cellmean(x, dim)

% [M] = CELLMEAN(X, DIM) computes the mean, across all cells in x along
% the dimension dim.
%
% X should be a linear cell-array of matrices for which the size in at
% least one of the dimensions should be the same for all cells

nx = size(x);
if ~iscell(x) || length(nx)>2 || all(nx>1),
  error('incorrect input for cellmean');
end

if nargin==1,
  scx1 = cellfun('size', x, 1);
  scx2 = cellfun('size', x, 2);
  if all(scx2==scx2(1)),
    dim = 2; % let second dimension prevail
  elseif all(scx1==scx1(1)),
    dim = 1;
  else
    error('no dimension to compute mean for');
  end
end

nx   = max(nx);
nsmp = cellfun(@nansum, isfinite(x), repmat({dim},1,nx), 'UniformOutput', 0);
ssmp = cellfun(@nansum, x,           repmat({dim},1,nx), 'UniformOutput', 0);
m    = nansum(cell2mat(ssmp), dim)./nansum(nsmp);
{"author": "fieldtrip", "repo": "fieldtrip", "sha": "c2039be598a02d86b39aae76bfa7aaa720f9801c", "save_path": "github-repos/MATLAB/fieldtrip-fieldtrip", "path": "github-repos/MATLAB/fieldtrip-fieldtrip/fieldtrip-c2039be598a02d86b39aae76bfa7aaa720f9801c/external/cellfunction/cellmean.m"}
using Statistics, SpecialFunctions

"""
    tnmom2(a, b)

Second moment of the truncated standard normal distribution.
"""
function tnmom2(a::Real, b::Real)
    #return tnmom2c(0, a, b)
    if !(a ≤ b)
        return oftype(middle(a, b), NaN)
    elseif a == b
        return middle(a, b)^2
    elseif abs(a) > abs(b)
        return tnmom2(-b, -a)
    elseif isinf(a) && isinf(b)
        return one(middle(a, b))
    elseif isinf(b)
        return 1 + √(2 / π) * a / erfcx(a / √2)
    end

    @assert a < b < Inf && abs(a) ≤ abs(b)
    @assert a ≤ 0 ≤ b || 0 ≤ a ≤ b

    if a ≤ 0 ≤ b
        ea = √(π/2) * erf(a / √2)
        eb = √(π/2) * erf(b / √2)
        fa = ea - a * exp(-a^2 / 2)
        fb = eb - b * exp(-b^2 / 2)
        m2 = (fb - fa) / (eb - ea)
        fb ≥ fa && eb ≥ ea || error("error: a=$a, b=$b")
        0 ≤ m2 ≤ 1 || error("error: a=$a, b=$b")
        return m2
    else # 0 ≤ a ≤ b
        exΔ = exp((a - b)middle(a, b))
        ea = √(π/2) * erfcx(a / √2)
        eb = √(π/2) * erfcx(b / √2)
        fa = ea + a
        fb = eb + b
        m2 = (fa - fb * exΔ) / (ea - eb * exΔ)
        a^2 ≤ m2 ≤ b^2 || error("error: a=$a, b=$b")
        return m2
    end
end

"""
    tnmom2(a, b, μ, σ)

Second moment of the truncated normal distribution, where μ, σ are the mean
and standard deviation of the untruncated distribution.
"""
function tnmom2(a, b, μ, σ)
    if σ > 0
        α = (a - μ) / σ
        β = (b - μ) / σ
        #return σ^2 * tnmom2c(-μ / σ, α, β)
        return μ^2 + σ^2 * tnmom2(α, β) + 2μ * σ * tnmean(α, β)
    elseif iszero(σ) && a ≤ b
        return clamp(μ^2 / one(μ), a, b)
    else
        return oftype(middle(a, b), NaN)
    end
end

"""
    tnmom2i(a, b)

Second moment of the normal distribution with variance -1 and mean 0, truncated to [a,b].
"""
function tnmom2i(a::Real, b::Real)
    if !(-Inf < a ≤ b < Inf)
        return oftype(middle(a, b), NaN)
    elseif a == b
        return middle(a, b)^2
    elseif abs(a) > abs(b)
        return tnmom2i(-b, -a)
    end

    @assert -Inf < a < b < Inf && abs(a) ≤ abs(b)
    @assert a ≤ 0 ≤ b || 0 < a < b

    Δ = (b - a) * middle(a, b)
    exΔ = exp(-Δ)
    da = dawson(a/√2)
    db = dawson(b/√2)
    m = ((da - a/√2)exΔ - (db - b/√2)) / (db - da * exΔ)
    return m
end

"""
    tnmom2i(a, b, μ, σ)

Second moment of the normal distribution with variance -σ^2 and mean μ, truncated to [a,b].
"""
function tnmom2i(a, b, μ, σ)
    α = (a - μ) / σ
    β = (b - μ) / σ
    return μ^2 + σ^2 * tnmom2i(α, β) + 2μ * σ * tnmom1i(α, β)
end
{"hexsha": "f68e77c5164de0c61f044b41dbd62789b2dbff29", "size": 2535, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/tnmom2.jl", "max_stars_repo_name": "suzannastep/TruncatedNormal.jl", "max_stars_repo_head_hexsha": "3c16866c3afa3920e787513d492689e9e81192ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-06-14T11:01:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-04T10:39:02.000Z", "max_issues_repo_path": "src/tnmom2.jl", "max_issues_repo_name": "suzannastep/TruncatedNormal.jl", "max_issues_repo_head_hexsha": "3c16866c3afa3920e787513d492689e9e81192ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2022-03-01T13:44:51.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-04T20:39:24.000Z", "max_forks_repo_path": "src/tnmom2.jl", "max_forks_repo_name": "suzannastep/TruncatedNormal.jl", "max_forks_repo_head_hexsha": "3c16866c3afa3920e787513d492689e9e81192ca", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-12-10T23:34:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-04T13:59:39.000Z", "avg_line_length": 24.6116504854, "max_line_length": 79, "alphanum_fraction": 0.4785009862, "num_tokens": 1008}
# # Training interface
{"hexsha": "c826d4c1a6ee24929d17bf321df0493ac40789eb", "size": 23, "ext": "jl", "lang": "Julia", "max_stars_repo_path": "src/training.jl", "max_stars_repo_name": "lorenzoh/DeepLearningTasks.jl", "max_stars_repo_head_hexsha": "9bf0eb19c1bc5dbefa8cefe9266eee03d2bfa8c4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-12-20T03:54:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-26T20:58:44.000Z", "max_issues_repo_path": "src/training.jl", "max_issues_repo_name": "lorenzoh/DeepLearningTasks.jl", "max_issues_repo_head_hexsha": "9bf0eb19c1bc5dbefa8cefe9266eee03d2bfa8c4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-12-20T04:03:38.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-09T12:20:10.000Z", "max_forks_repo_path": "src/training.jl", "max_forks_repo_name": "lorenzoh/DeepLearningTasks.jl", "max_forks_repo_head_hexsha": "9bf0eb19c1bc5dbefa8cefe9266eee03d2bfa8c4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-01-11T07:03:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-09T23:31:54.000Z", "avg_line_length": 11.5, "max_line_length": 22, "alphanum_fraction": 0.7391304348, "num_tokens": 5}
'''
Features for the phasespace motiontracker
'''

import time
import tempfile
import random
import traceback
import numpy as np
import fnmatch
import os
import subprocess

from riglib import calibrations
from riglib.experiment import traits


########################################################################################################
# Phasespace datasources
########################################################################################################
class MotionData(traits.HasTraits):
    '''
    Enable reading of raw motiontracker data from Phasespace system
    '''
    marker_count = traits.Int(8, desc="Number of markers to return")

    def init(self):
        '''
        Secondary init function. See riglib.experiment.Experiment.init()
        Prior to starting the task, this 'init' sets up the DataSource for interacting with the
        motion tracker system and registers the source with the SinkRegister so that the data
        gets saved to file as it is collected.
        '''
        from riglib import source
        src, mkw = self.source_class
        self.motiondata = source.DataSource(src, **mkw)

        from riglib import sink
        sink_manager = sink.SinkManager.get_instance()
        sink_manager.register(self.motiondata)
        super(MotionData, self).init()

    @property
    def source_class(self):
        '''
        Specify the source class as a function in case future descendant classes want to use a
        different type of source
        '''
        from riglib import motiontracker
        return motiontracker.make(self.marker_count), dict()

    def run(self):
        '''
        Code to execute immediately prior to the beginning of the task FSM executing, or after
        the FSM has finished running. See riglib.experiment.Experiment.run(). This 'run' method
        starts the motiontracker source prior to starting the experiment's main thread/process,
        and handles any errors by stopping the source
        '''
        self.motiondata.start()
        try:
            super(MotionData, self).run()
        finally:
            self.motiondata.stop()

    def join(self):
        '''
        See riglib.experiment.Experiment.join(). Re-join the 'motiondata' source process before
        cleaning up the experiment thread
        '''
        self.motiondata.join()
        super(MotionData, self).join()

    def _start_None(self):
        '''
        Code to run before the 'None' state starts (i.e., the task stops)
        '''
        self.motiondata.stop()
        super(MotionData, self)._start_None()


class MotionSimulate(MotionData):
    '''
    Simulate presence of raw motiontracking system using a randomized spatial function
    '''

    @property
    def source_class(self):
        '''
        Specify the source class as a function in case future descendant classes want to use a
        different type of source
        '''
        from riglib import motiontracker
        cls = motiontracker.make(self.marker_count, cls=motiontracker.Simulate)
        return cls, dict(radius=(100, 100, 50), offset=(-150, 0, 0))


class MotionAutoAlign(MotionData):
    '''Creates an auto-aligning motion tracker, for use with the 6-point alignment system'''
    autoalign = traits.Instance(calibrations.AutoAlign)

    def init(self):
        '''
        Secondary init function. See riglib.experiment.Experiment.init()
        Prior to starting the task, this 'init' adds a filter onto the motiondata source.
        See MotionData for further details.
        '''
        super(MotionAutoAlign, self).init()
        self.motiondata.filter = self.autoalign

    @property
    def source_class(self):
        '''
        Specify the source class as a function in case future descendant classes want to use a
        different type of source
        '''
        from riglib import motiontracker
        cls = motiontracker.make(self.marker_count, cls=motiontracker.AligningSystem)
        return cls, dict()
{"hexsha": "42949b1c9449129df93d9333f00b612a08905a7f", "size": 3991, "ext": "py", "lang": "Python", "max_stars_repo_path": "features/phasespace_features.py", "max_stars_repo_name": "sgowda/brain-python-interface", "max_stars_repo_head_hexsha": "708e2a5229d0496a8ce9de32bda66f0925d366d9", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2015-08-25T00:28:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-14T22:58:51.000Z", "max_issues_repo_path": "features/phasespace_features.py", "max_issues_repo_name": "sgowda/brain-python-interface", "max_issues_repo_head_hexsha": "708e2a5229d0496a8ce9de32bda66f0925d366d9", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 35, "max_issues_repo_issues_event_min_datetime": "2015-07-14T19:57:50.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-10T09:38:27.000Z", "max_forks_repo_path": "features/phasespace_features.py", "max_forks_repo_name": "sgowda/brain-python-interface", "max_forks_repo_head_hexsha": "708e2a5229d0496a8ce9de32bda66f0925d366d9", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2016-10-05T17:54:26.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-06T15:37:09.000Z", "avg_line_length": 35.3185840708, "max_line_length": 133, "alphanum_fraction": 0.6334252067, "include": true, "reason": "import numpy", "num_tokens": 799}