# Guide for Authors

```
print('Welcome to "The Fuzzing Book"!')
```

This notebook compiles the most important conventions for all chapters (notebooks) of "The Fuzzing Book".

## Organization of this Book

### Chapters as Notebooks

Each chapter comes in its own _Jupyter notebook_. A single notebook (= a chapter) should cover the material (text and code, possibly slides) for a 90-minute lecture.

A chapter notebook should be named `Topic.ipynb`, where `Topic` is the topic. `Topic` must be usable as a Python module and should characterize the main contribution. If the main contribution of your chapter is a class `FooFuzzer`, for instance, then your topic (and notebook name) should be `FooFuzzer`, such that users can state

```python
from FooFuzzer import FooFuzzer
```

Since class and module names should start with uppercase letters, all non-notebook files and folders start with lowercase letters; this may make it easier to differentiate them.

The special notebook `index.ipynb` gets converted into the home pages `index.html` (on fuzzingbook.org) and `README.md` (on GitHub). Notebooks are stored in the `notebooks` folder.

### Output Formats

The notebooks by themselves can be used by instructors and students to toy around with. They can edit code (and text) as they like and even run them as a slide show.

The notebooks can be _exported_ to multiple (non-interactive) formats:

* HTML – for placing this material online
* PDF – for printing
* Python – for coding
* Slides – for presenting

The included Makefile can generate all of these automatically (and a few more). At this point, we mostly focus on HTML and Python, as we want to get these out quickly; but you should also occasionally ensure that your notebooks can (still) be exported into PDF. Other formats (Word, Markdown) are experimental.

## Sites

All sources for the book end up on the [Github project page](https://github.com/uds-se/fuzzingbook).
This holds the sources (notebooks), utilities (Makefiles), as well as an issue tracker.

The derived material for the book ends up in the `docs/` folder, from where it is eventually pushed to the [fuzzingbook website](http://www.fuzzingbook.org/). This site allows you to read the chapters online, can launch Jupyter notebooks using the binder service, and provides access to code and slide formats. Use `make publish` to create and update the site.

### The Book PDF

The book PDF is compiled automatically from the individual notebooks. Each notebook becomes a chapter; references are compiled in the final chapter. Use `make book` to create the book.

## Creating and Building

### Tools you will need

To work on the notebook files, you need the following:

1. Jupyter notebook. The easiest way to install this is via the [Anaconda distribution](https://www.anaconda.com/download/).

2. Once you have the Jupyter notebook installed, you can start editing and coding right away by starting `jupyter notebook` (or `jupyter lab`) in the topmost project folder.

3. If (like me) you don't like the Jupyter Notebook interface, I recommend [Jupyter Lab](https://jupyterlab.readthedocs.io/en/stable/), the designated successor to Jupyter Notebook. Invoke it as `jupyter lab`. It comes with a much more modern interface, but misses autocompletion and a couple of extensions. I am running it [as a Desktop application](http://christopherroach.com/articles/jupyterlab-desktop-app/), which gets rid of all the browser toolbars. On the Mac, there is also the [Pineapple app](https://nwhitehead.github.io/pineapple/), which integrates a nice editor with a local server. This is easy to use, but misses a few features; also, it hasn't seen updates since 2015.

4. To create the entire book (with citations, references, and all), you also need the [ipypublish](https://github.com/chrisjsewell/ipypublish) package.
This allows you to create the HTML files, merge multiple chapters into a single PDF or HTML file, create slides, and more. The Makefile provides the essential tools for creation.

### Version Control

We use git in a single strand of revisions. Feel free to branch for features, but eventually merge back into the main "master" branch. Sync early; sync often. Only push if everything (`make all`) builds and passes.

The Github repo thus will typically reflect work in progress. If you reach a stable milestone, you can push things to the fuzzingbook.org web site, using `make publish`.

#### nbdime

The [nbdime](https://github.com/jupyter/nbdime) package gives you tools such as `nbdiff` (and even better, `nbdiff-web`) to compare notebooks against each other; this ensures that cell _contents_ are compared rather than the binary format. `nbdime config-git --enable` integrates nbdime with git such that `git diff` runs the above tools; merging should also be notebook-specific.

#### nbstripout

Notebooks in version control _should not contain output cells,_ as these tend to change a lot. (Hey, we're talking random output generation here!) To have output cells automatically stripped during commit, install the [nbstripout](https://github.com/kynan/nbstripout) package and use

```
nbstripout --install
```

to set it up as a git filter. The `notebooks/` folder comes with a `.gitattributes` file already set up for `nbstripout`, so you should be all set.

Note that _published_ notebooks (in short, anything under the `docs/` tree) _should_ have their output cells included, such that users can download and edit notebooks with pre-rendered output. This folder contains a `.gitattributes` file that should explicitly disable `nbstripout`, but it can't hurt to check.

As an example, the following cell

1. _should_ have its output included in the [HTML version of this guide](https://www.fuzzingbook.org/beta/html/Guide_for_Authors.html);
2.
_should not_ have its output included in [the git repo](https://github.com/uds-se/fuzzingbook/blob/master/notebooks/Guide_for_Authors.ipynb) (`notebooks/`);
3. _should_ have its output included in [downloadable and editable notebooks](https://github.com/uds-se/fuzzingbook/blob/master/docs/beta/notebooks/Guide_for_Authors.ipynb) (`docs/notebooks/` and `docs/beta/notebooks/`).

```
import random

random.random()
```

### Inkscape and GraphViz

Creating derived files uses [Inkscape](https://inkscape.org/en/) and [Graphviz](https://www.graphviz.org/) – through its [Python wrapper](https://pypi.org/project/graphviz/) – to process SVG images. These tools are not automatically installed, but are available via pip, _brew_, and _apt-get_ for all major distributions.

### LaTeX Fonts

By default, creating PDF uses XeLaTeX with a couple of special fonts, which you can find in the `fonts/` folder; install these fonts system-wide to make them accessible to XeLaTeX. You can also run `make LATEX=pdflatex` to use `pdflatex` and standard LaTeX fonts instead.

### Creating Derived Formats (HTML, PDF, code, ...)

The [Makefile](../Makefile) provides rules for all targets. Type `make help` for instructions.

The Makefile should work with GNU make and a standard Jupyter Notebook installation. To create the multi-chapter book and BibTeX citation support, you need to install the [iPyPublish](https://github.com/chrisjsewell/ipypublish) package (which includes the `nbpublish` command).

### Creating a New Chapter

To create a new chapter for the book,

1. Set up a new `.ipynb` notebook file as a copy of [Template.ipynb](Template.ipynb).
2. Include it in the `CHAPTERS` list in the `Makefile`.
3. Add it to the git repository.

## Teaching a Topic

Each chapter should be devoted to a central concept and a small set of lessons to be learned.
I recommend the following structure:

* Introduce the problem ("We want to parse inputs")
* Illustrate it with some code examples ("Here's some input I'd like to parse")
* Develop a first (possibly quick and dirty) solution ("A PEG parser is short and often does the job")
* Show that it works and how it works ("Here's a neat derivation tree. Look how we can use this to mutate and combine expressions!")
* Develop a second, more elaborate solution, which should then become the main contribution. ("Here's a general LR(1) parser that does not require a special grammar format. (You can skip it if you're not interested)")
* Offload non-essential extensions to later sections or to exercises. ("Implement a universal parser, using the Dragon Book")

The key idea is that readers should be able to grasp the essentials of the problem and the solution at the beginning of the chapter, and get further into details as they progress through it. Make it easy for readers to be drawn in, providing insights of value quickly. If they are interested in understanding how things work, they will get deeper into the topic. If they just want to use the technique (because they may be more interested in later chapters), having them read only the first few examples should be fine for them, too.

Whatever you introduce should be motivated first, and illustrated afterwards. Motivate the code you'll be writing, and use plenty of examples to show what the code just introduced is doing. Remember that readers should have fun interacting with your code and your examples. Show and tell again and again and again.

### Special Sections

#### Quizzes

You can have _quizzes_ as part of the notebook. These are created using the `quiz()` function.
Its arguments are

* the question,
* a list of options, and
* the correct answer(s) – either
  * the single number of the one correct answer (starting with 1), or
  * a list of numbers of correct answers (multiple choice).

To make the answer less obvious, you can specify it as a string containing an arithmetic expression evaluating to the desired number(s). The expression will remain in the code (and possibly be shown as a hint in the quiz).

```
from bookutils import quiz

# A single-choice quiz
quiz("The color of the sky is",
     ['blue', 'red', 'black'],
     '5 - 4')

# A multiple-choice quiz
quiz("What is this book?",
     ['Novel', 'Friendly', 'Useful'],
     ['5 - 4', '1 + 1', '27 / 9'])
```

Cells that contain only the `quiz()` call will not be rendered (but the quiz will).

#### Synopsis

Each chapter should have a section named "Synopsis" at the very end:

```
## Synopsis

This is the text of the synopsis.
```

This section is evaluated at the very end of the notebook. It should summarize the most important functionality (classes, methods, etc.) together with examples. In the derived HTML and PDF files, it is rendered at the beginning, such that it can serve as a quick reference.

#### Excursions

There may be longer stretches of text (and code!) that are too special, too boring, or too repetitive to read. You can mark such stretches as "Excursions" by enclosing them in Markdown cells that state:

```
#### Excursion: TITLE
```

and

```
#### End of Excursion
```

Stretches between these two markers get special treatment when rendering:

* In the resulting HTML output, these blocks are set up such that they are shown on demand only.
* In printed (PDF) versions, they will be replaced by a pointer to the online version.
* In the resulting slides, they will be omitted right away.

Here is an example of an excursion:

#### Excursion: Fine points on Excursion Cells

Note that the `Excursion` and `End of Excursion` cells must be separate cells; they cannot be merged with others.
#### End of Excursion

### Ignored Code

If a code cell starts with

```python
# ignore
```

then the code will not show up in the rendered input. Its _output_ will, however. This is useful for cells that create drawings, for instance – the focus should be on the result, not the code. This also applies to cells that start with a call to `display()` or `quiz()`.

### Ignored Cells

You can have _any_ cell not show up at all (including its output) in any rendered input by adding the following metadata to the cell:

```json
{
  "ipub": {
    "ignore": true
  }
}
```

*This* text, for instance, does not show up in the rendered version.

## Coding

### Set up

The first code block in each notebook should be

```
import bookutils
```

This sets up stuff such that notebooks can import each other's code (see below). This import statement is removed in the exported Python code, as the .py files would import each other directly.

Importing `bookutils` also sets a fixed _seed_ for random number generation. This way, whenever you execute a notebook from scratch (restarting the kernel), you get the exact same results; these results will also end up in the derived HTML and PDF files. (If you run a notebook or a cell for the second time, you will get more random results.)

### Coding Style and Consistency

Here are a few rules regarding coding style.

#### Use Python 3

We use Python 3 (specifically, Python 3.6) for all code. If you can, try to write code that can be easily backported to Python 2.

#### Follow Python Coding Conventions

We use _standard Python coding conventions_ according to [PEP 8](https://www.python.org/dev/peps/pep-0008/). Your code must pass the `pycodestyle` style checks, which you get by invoking `make style`. A very easy way to meet this goal is to invoke `make reformat`, which reformats all code accordingly. The `code prettify` notebook extension also allows you to automatically make your code (mostly) adhere to PEP 8.
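As a small (and entirely hypothetical) illustration of the kind of issues `pycodestyle` flags and `make reformat` fixes:

```python
# Before: pycodestyle complains about missing whitespace around operators
# and after commas; the camelCase name is not PEP 8 either
def charRange(a,b): return list(range(a,b+1))

# After reformatting (plus a manual rename): PEP 8-conformant
def char_range(a, b):
    return list(range(a, b + 1))

assert char_range(1, 3) == [1, 2, 3]
```

Note that `make reformat` fixes whitespace and layout only; naming is up to you.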
#### One Cell per Definition

Use one cell for each definition or example. During importing, this makes it easier to decide which cells to import (see below).

#### Identifiers

In the book, this is how we denote `variables`, `functions()` and `methods()`, `Classes`, `Notebooks`, `variables_and_constants`, `EXPORTED_CONSTANTS`, `files`, `folders/`, and `<grammar-elements>`.

#### Quotes

If you have the choice between quoting styles, prefer

* double quotes (`"strings"`) around strings that are used for interpolation or that are natural language messages, and
* single quotes (`'characters'`) for single characters and formal language symbols that an end user would not see.

#### Read More

Beyond simple syntactical things, here's a [very nice guide](https://docs.python-guide.org/writing/style/) to get you started writing "pythonic" code.

### Importing Code from Notebooks

To import the code of individual notebooks, you can import directly from .ipynb notebook files.

```
from Fuzzer import fuzzer

fuzzer(100, ord('0'), 10)
```

**Important**: When importing a notebook, the module loader will **only** load cells that start with

* a function definition (`def`),
* a class definition (`class`),
* a variable definition if all uppercase (`ABC = 123`), or
* `import` and `from` statements.

All other cells are _ignored_ to avoid recomputation of notebooks and clutter of `print()` output. Exported Python code will import from the respective .py file instead.

The exported Python code is set up such that only the above items will be imported. If importing a module prints out something (or has other side effects), that is an error. Use `make check-imports` to check whether your modules import without output.

Import modules only as you need them, such that you can motivate them well in the text.

### Imports and Dependencies

Try to depend on as few other notebooks as possible.
This will not only ease construction and reconstruction of the code, but also reduce requirements for readers, giving them more flexibility in navigating through the book.

When you import a notebook, this will show up as a dependency in the [Sitemap](00_Table_of_Contents.ipynb). If the imported module is not critical for understanding, and thus should not appear as a dependency in the sitemap, mark the import as a "minor dependency" as follows:

```
from Reducer import DeltaDebuggingReducer  # minor dependency
```

### Design and Architecture

Stick to simple functions and data types. We want our readers to focus on functionality, not Python. You are encouraged to write in a "pythonic" style, making use of elegant Python features such as list comprehensions, sets, and more; however, if you do so, be sure to explain the code such that readers familiar with, say, C or Java can still understand things.

### Incomplete Examples

When introducing examples for students to complete, use the ellipsis `...` to indicate where students should add code, as in here:

```
def student_example():
    x = some_value()
    # Now, do something with x
    ...
```

The ellipsis is legal code in Python 3. (Actually, it is an `Ellipsis` object.)

### Introducing Classes

Defining _classes_ can be a bit tricky, since all of a class must fit into a single cell. This defeats the incremental style preferred for notebooks. By defining a class _as a subclass of itself_, though, you can avoid this problem.

Here's an example. We introduce a class `Foo`:

```
class Foo:
    def __init__(self):
        pass

    def bar(self):
        pass
```

Now we could discuss what `__init__()` and `bar()` do, or give an example of how to use them:

```
f = Foo()
f.bar()
```

We now can introduce a new `Foo` method by subclassing from `Foo` into a class which is _also_ called `Foo`:

```
class Foo(Foo):
    def baz(self):
        pass
```

This is the same as if we had subclassed `Foo` into `Foo_1`, with `Foo` then becoming an alias for `Foo_1`.
The original `Foo` class is overshadowed by the new one:

```
new_f = Foo()
new_f.baz()
```

Note, though, that _existing_ objects keep their original class:

```
from ExpectError import ExpectError

with ExpectError():
    f.baz()
```

## Helpers

There are a couple of notebooks with helpful functions, including [Timer](Timer.ipynb) and [ExpectError and ExpectTimeout](ExpectError.ipynb). Also check out the [Coverage](Coverage.ipynb) class.

### Quality Assurance

In your code, make use of plenty of assertions that allow you to catch errors quickly. These assertions also help your readers understand the code.

### Issue Tracker

The [Github project page](https://github.com/uds-se/fuzzingbook) allows you to enter and track issues.

## Writing Text

Text blocks use Markdown syntax. [Here is a handy guide](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet).

### Sections

Any chapter notebook must begin with `# TITLE`; sections and subsections are then marked by `## SECTION` and `### SUBSECTION`. Sections should start with their own block, to facilitate cross-referencing.

### Highlighting

Use

* _emphasis_ (`_emphasis_`) for highlighting,
* *emphasis* (`*emphasis*`) for highlighting terms that will go into the index, and
* `backticks` for code and other verbatim elements.

### Hyphens and Dashes

Use "–" for em-dashes, "-" for hyphens, and "$-$" for minus.

### Quotes

Use standard typewriter quotes (`"quoted string"`) for quoted text. The PDF version will automatically convert these to "smart" (i.e. left and right) quotes.

### Lists and Enumerations

You can use bulleted lists:

* Item A
* Item B

and enumerations:

1. item 1
1. item 2

For description lists, use a combination of bulleted lists and highlights:

* **PDF** is great for reading offline
* **HTML** is great for reading online

### Math

LaTeX math formatting works, too. `$x = \sum_{n = 1}^{\infty}\frac{1}{n}$` gets you $x = \sum_{n = 1}^{\infty}\frac{1}{n}$.
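As an example of the assertion style recommended under "Quality Assurance" above (the function and its invariant are hypothetical, not taken from a book chapter):

```python
def delta(costs):
    """Return the difference between the highest and lowest cost."""
    # Assertions document expectations for readers and catch misuse early
    assert len(costs) > 0, "need at least one cost"
    result = max(costs) - min(costs)
    assert result >= 0
    return result

assert delta([3, 1, 4]) == 3
```

The assertion messages double as inline documentation; readers see at a glance what inputs the function expects.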
### Inline Code

Python code normally goes into its own cells, but you can also have it in the text:

```python
s = "Python syntax highlighting"
print(s)
```

### Images

To insert images, use Markdown syntax: `![Word cloud](PICS/wordcloud.png){width=100%}` inserts a picture from the `PICS` folder.

![Word cloud](PICS/wordcloud.png){width=100%}

All pictures go to `PICS/`, both in source as well as derived formats; both are stored in git, too. (Not all of us have all the tools to recreate diagrams, etc.)

### Footnotes

Markdown supports footnotes, as in [^footnote]. These are rendered as footnotes in HTML and PDF, _but not within Jupyter_; hence, readers may find them confusing. So far, the book makes no use of footnotes, and uses parenthesized text instead.

[^footnote]: Test, [Link](https://www.fuzzingbook.org).

### Floating Elements and References

\todo[inline]{I haven't gotten this to work yet -- AZ}

To produce floating elements in LaTeX and PDF, edit the metadata of the cell which contains them. (In the Jupyter Notebook toolbar, go to View -> Cell Toolbar -> Edit Metadata and a button will appear above each cell.) This allows you to control placement and create labels.

#### Floating Figures

Edit metadata as follows:

```json
{
  "ipub": {
    "figure": {
      "caption": "Figure caption.",
      "label": "fig:flabel",
      "placement": "H",
      "height": 0.4,
      "widefigure": false
    }
  }
}
```

- All tags are optional.
- `height`/`width` correspond to the fraction of the page height/width; only one should be used (the aspect ratio will be maintained automatically).
- `placement` is optional and specifies a placement argument for the figure (e.g. `\begin{figure}[H]`). See [Positioning_images_and_tables](https://www.sharelatex.com/learn/Positioning_images_and_tables).
- `widefigure` is optional and expands the figure to the page width (i.e. `\begin{figure*}`); placement arguments will then be ignored.

#### Floating Tables

For **tables** (e.g.
those output by `pandas`), enter in the cell metadata:

```json
{
  "ipub": {
    "table": {
      "caption": "Table caption.",
      "label": "tbl:tlabel",
      "placement": "H",
      "alternate": "gray!20"
    }
  }
}
```

- `caption` and `label` are optional.
- `placement` is optional and specifies a placement argument for the table (e.g. `\begin{table}[H]`). See [Positioning_images_and_tables](https://www.sharelatex.com/learn/Positioning_images_and_tables).
- `alternate` is optional and switches on alternating colors for the table rows (e.g. `\rowcolors{2}{gray!25}{white}`). See [this TeX.SE answer](https://tex.stackexchange.com/a/5365/107738).
- If tables exceed the text width, they will be shrunk to fit in LaTeX.

#### Floating Equations

For **equations** (e.g. those output by `sympy`), enter in the cell metadata:

```json
{
  "ipub": {
    "equation": {
      "environment": "equation",
      "label": "eqn:elabel"
    }
  }
}
```

- `environment` is optional and can be 'none' or any of those available in [amsmath](https://www.sharelatex.com/learn/Aligning_equations_with_amsmath): 'equation', 'align', 'multline', 'gather', or their \* variants. Additionally, 'breqn' or 'breqn\*' will select the experimental [breqn](https://ctan.org/pkg/breqn) environment to *smart*-wrap long equations.
- `label` is optional and will only be used if the equation is in an environment.

#### References

To reference a floating object, use `\cref`, e.g. `\cref{eq:texdemo}`.

### Cross-Referencing

#### Section References

* To refer to sections in the same notebook, use the header name as anchor, e.g. `[Code](#Code)` gives you [Code](#Code). For multi-word titles, replace spaces by hyphens (`-`), as in [Using Notebooks as Modules](#Using-Notebooks-as-Modules).
* To refer to cells (e.g. equations or figures), you can define a label as cell metadata. See [Floating Elements and References](#Floating-Elements-and-References) for details.
* To refer to other notebooks, use a Markdown cross-reference to the notebook file, e.g.
[the "Fuzzing" chapter](Fuzzer.ipynb). A special script will be run to take care of these links. Reference chapters by name, not by number.

### Citations

To cite papers, cite in LaTeX style. The text

```
print(r"\cite{Purdom1972}")
```

is expanded to \cite{Purdom1972}, which in HTML and PDF should be a nice reference. The keys refer to BibTeX entries in [fuzzingbook.bib](fuzzingbook.bib).

* LaTeX/PDF output will have a "References" section appended.
* HTML output will link to the URL field from the BibTeX entry. Be sure it points to the DOI.

### Todo's

* To mark todo's, use `\todo{Thing to be done}.` \todo{Expand this}

### Tables

Tables with fixed contents can be produced using Markdown syntax:

| Tables | Are | Cool |
| ------ | ---:| ----:|
| Zebra  |   2 |   30 |
| Gnu    |  20 |  400 |

If you want to produce tables from Python data, the `PrettyTable` package (included in the book) allows you to [produce tables with LaTeX-style formatting](http://blog.juliusschulz.de/blog/ultimate-ipython-notebook).

```
from bookutils import PrettyTable as pt
import numpy as np

data = np.array([[1, 2, 30], [2, 3, 400]])
pt.PrettyTable(data, [r"$\frac{a}{b}$", r"$b$", r"$c$"],
               print_latex_longtable=False)
```

### Plots and Data

It is possible to include plots in notebooks. Here is an example of plotting a function:

```
%matplotlib inline

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 3 * np.pi, 500)
plt.plot(x, np.sin(x ** 2))
plt.title('A simple chirp');
```

And here's an example of plotting data:

```
%matplotlib inline

import matplotlib.pyplot as plt

data = [25, 36, 57]
plt.plot(data)
plt.title('Increase in data');
```

Plots are available in all derived versions (HTML, PDF, etc.). Plots with `plotly` are even nicer (and interactive, even in HTML); however, at this point, we cannot export them to PDF, so `matplotlib` it is.

## Slides

You can set up the notebooks such that they also can be presented as slides. In the browser, select View -> Cell Toolbar -> Slideshow.
You can then select a slide type for each cell:

* `New slide` starts a new slide with the cell (typically, every `## SECTION` in the chapter)
* `Sub-slide` starts a new sub-slide which you navigate "down" to (anything in the section)
* `Fragment` is a cell that gets revealed after a click (on the same slide)
* `Skip` is skipped during the slide show (e.g. `import` statements; navigation guides)
* `Notes` goes into presenter notes

To create slides, do `make slides`; to view them, change into the `slides/` folder and open the created HTML files. (The `reveal.js` package has to be in the same folder as the slide to be presented.)

The ability to use slide shows is a compelling argument for teachers and instructors in our audience.

(Hint: In a slide presentation, type `s` to see presenter notes.)

## Writing Tools

When you're editing in the browser, you may find these extensions helpful:

### Jupyter Notebook

[Jupyter Notebook Extensions](https://github.com/ipython-contrib/jupyter_contrib_nbextensions) is a collection of productivity-enhancing tools (including spellcheckers). I found these extensions to be particularly useful:

* Spell Checker (while you're editing)
* Table of contents (for quick navigation)
* Code prettify (to produce "nice" syntax)
* Codefolding
* Live Markdown Preview (while you're editing)

### Jupyter Lab

Extensions for _Jupyter Lab_ are much less varied and less supported, but things are getting better. I am running:

* [Spell Checker](https://github.com/ijmbarr/jupyterlab_spellchecker)
* [Table of Contents](https://github.com/jupyterlab/jupyterlab-toc)
* [JupyterLab-LSP](https://towardsdatascience.com/jupyterlab-2-0-edd4155ab897), providing code completion, signatures, style checkers, and more.
## Interaction

It is possible to include interactive elements in a notebook, as in the following example:

```python
try:
    from ipywidgets import interact, interactive, fixed, interact_manual
    x = interact(fuzzer, char_start=(32, 128), char_range=(0, 96))
except ImportError:
    pass
```

Note that such elements will be present in the notebook versions only, but not in the HTML and PDF versions, so use them sparingly (if at all). To avoid errors during production of derived files, protect against `ImportError` exceptions as in the above example.

## Read More

Here is some documentation on the tools we use:

1. [Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) - general introduction to Markdown
1. [iPyPublish](https://github.com/chrisjsewell/ipypublish) - rich set of tools to create documents with citations and references

## Alternative Tool Sets

We don't currently use these, but they are worth learning:

1. [Making Publication-Ready Python Notebooks](http://blog.juliusschulz.de/blog/ultimate-ipython-notebook) - another tool set on how to produce book chapters from notebooks
1. [Writing academic papers in plain text with Markdown and Jupyter notebook](https://sylvaindeville.net/2015/07/17/writing-academic-papers-in-plain-text-with-markdown-and-jupyter-notebook/) - alternate ways to generate citations
1. [A Jupyter LaTeX template](https://gist.github.com/goerz/d5019bedacf5956bcf03ca8683dc5217#file-revtex-tplx) - how to define a LaTeX template
1. [Boost Your Jupyter Notebook Productivity](https://towardsdatascience.com/jupyter-notebook-hints-1f26b08429ad) - a collection of hints for debugging and profiling Jupyter notebooks
# Appendix A. Creating a custom list of stop words

```
# Basic data science packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import joblib

from sklearn.feature_extraction import text

# Load data
data = pd.read_pickle('data/cleaned_data.pkl')
data.head()
```

***

## Adding adjectives and proper nouns to the list of stop words

Some adjectives are positive or negative by definition. These words are probably highly predictive of a high or low rating, but do not tell us anything else about the whisky specifically. Therefore, to learn more about whisky-specific language, these words should be ignored when tokenizing. Likewise, the name of the distiller in a review does not tell us anything about the whisky. These names too will be added to the list of stop words.

We will start by creating a list of adjectives.

```
adj_to_remove = [
    'best', 'good', 'love', 'incredible', 'remarkable', 'excellent',
    'stunning', 'great', 'fantastic', 'wonderful', 'outstanding', 'superb',
    'magnificent', 'exceptional', 'marvelous', 'superior', 'awesome',
    'bad', 'terrible', 'worst', 'poor', 'unpleasant', 'inferior',
    'unsatisfactory', 'inadequate', 'lousy', 'atrocious', 'deficient',
    'awful'
]
```

We can inspect the names of the whiskies to see the name of each distiller, and then manually add these names to a list.

```
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
    print(data['name'].sort_values())

# Actual proper nouns to remove - manually inspect the proper nouns list
# and the names of the whiskeys
proper_nouns_list = [
    'aberfeldy', 'aberlour', 'alltabhainne', 'ardbeg', 'auchentoshan',
    'auchentoshans', 'asyla', 'alexander', 'ardmore', 'arran', 'auchroisk',
    'aultmore', 'askaig', 'antiquary', 'adelphi', 'ballechin', 'balvenie',
    'benachie', 'benriachs', 'bladnoch', 'bladnocha', 'brora', 'broras',
    'bowmore', 'bull', 'bruichladdich', 'bruichladdichit', 'bruichladdichs',
    'bunnahabhain', 'balblair', 'ballantine', 'nevis', 'benriach', 'benrinnes',
    'benromach', 'balmenach', 'blackadder', 'blair', 'boutique', 'box',
    'binnys', "binny's", 'cardhu', 'chivas', 'clynelishs', 'clynelish',
    'craigellachie', 'cragganmore', 'cadenhead', 'caol', 'ila', 'chieftain',
    'compass', 'cuatro', 'cutty', 'collection', 'deanston', 'dailuaine',
    'dalmore', 'dalwhinnie', 'dewars', 'dewar', 'deveron', 'douglas',
    'duncan', 'dufftown', 'edradour', 'edradours', 'ellen', 'editors',
    'farclas', 'garioch', 'gariochs', 'glenallechie', 'glenburgie',
    'glencadam', 'glencraig', 'glendronach', 'glendronachs', 'glenfiddich',
    'glenfiddichs', 'glengoyne', 'glenisla', 'glenkeir', 'glenrothes',
    'glenlivet', 'glenturret', 'glenfarclas', 'glenglassaugh', 'glenkinchie',
    'glenmorangie', 'glenugie', 'gordon', 'hart', 'hazelburn', 'highland',
    'hazelwood', 'hunter', 'inchmurrin', 'johnnie', 'jura', 'juras', 'keith',
    'kensington', 'kilchomanfree', 'kilchomans', 'kildalton', 'kinchie',
    'kininvie', 'kirkland', 'lochnager', 'lochranza', 'lagavulin',
    'littlemill', 'linkwood', 'longmorn', 'linlithgow', 'laphroig', 'ledaig',
    'lomand', 'lombard', 'lonach', 'longrow', 'macduff', 'macmillans',
    'magdelene', 'macallan', 'mortlach', 'monnochmore', 'macdougall',
    'mossman', 'mackillops', 'mackinlays', 'master', 'murray', 'mcdavid',
    'oban', 'park', 'pulteney', 'peerless', 'scapa', 'shackleton',
    'shieldaig', 'skye', 'springbank', 'springbanks', 'strathclyde',
    'strathisla', 'scotia', 'signatory', 'scotts', 'singleton', 'speyburn',
    'strathmill', 'sovereign', 'talisker', 'tomintoul', 'turasmara',
    'teaninich', 'taylor', 'tobermory', 'tomatin', 'tormore', 'tullibardine',
    'uigeadail', 'usquaebach', 'valinch', 'walker', 'wemyss'
]

# Adding adjectives and names of distillers to the list of stop words
my_stop_words = text.ENGLISH_STOP_WORDS.union(proper_nouns_list, adj_to_remove)

# Saving the custom stop words to a file
joblib.dump(my_stop_words, 'data/my_stop_words.pkl')
```
***
``` import os import json import re import ast import copy import itertools import sys from graphviz import Digraph import pandas as pd import numpy as np import matplotlib import matplotlib.colors as mcolors import matplotlib.pyplot as plt import matplotlib.ticker as ticker # color the graph import graph_tool.all as gt import seaborn as sns sns.set_style("whitegrid") worker_color = {'10.255.23.108': '#e41a1c', '10.255.23.109': '#984ea3', '10.255.23.110': '#ff7f00', '10.255.23.115': '#4daf4a'} results_dir = '/local0/serverless/serverless/ipythons/plots' stats_dir = '/opt/dask-distributed/benchmark/stats' # get all benchmarks def get_benchmarks(): benchmarks = {} for _file in os.listdir(stats_dir): try: app = _file.split('.', 1)[0] assert os.path.isfile(os.path.join(stats_dir, f'{app}.g')) \ and os.path.isfile(os.path.join(stats_dir, f'{app}.json')) \ and os.path.isfile(os.path.join(stats_dir, f'{app}.colors')) bnch, scheduler, _ = app.split('_', 2) #scheduler = 'vanilla' benchmarks[app] = {'app': app, 'scheduler': scheduler, 'benchmark': bnch} except AssertionError: pass return benchmarks def color_assignment(benchmark, task_style): cfile = f'/opt/dask-distributed/benchmark/stats/{benchmark}.colors' with open(cfile, 'r') as cfd: raw = cfd.read().split('\n') for ln in raw: if not ln: continue task_name, actual = ln.split(',') if task_name not in task_style: task_style[task_name] = {} task_style[task_name]['actual'] = actual def build_graph(benchmark, task_style): css_colors = list(mcolors.CSS4_COLORS.keys()) gfile = f'/opt/dask-distributed/benchmark/stats/{benchmark}.g' with open(gfile, 'r') as fd: raw = fd.read().split('\n') g = gt.Graph(directed=True) vid_to_vx = {} name_to_vid = {} g.vertex_properties['name'] = g.new_vertex_property("string")
g.vertex_properties['color'] = g.new_vertex_property("string") g.vertex_properties['worker'] = g.new_vertex_property("string") g.vertex_properties['icolor'] = g.new_vertex_property("int") g.vertex_properties['simcolor'] = g.new_vertex_property("string") g.vertex_properties['isimcolor'] = g.new_vertex_property("string") for ln in raw: if ln.startswith('v'): _, vid, name = ln.split(',', 2) v = g.add_vertex() vid_to_vx[vid] = v name_to_vid[name] = vid g.vp.name[v] = name try: g.vp.icolor[v] = int(task_style[name]['actual']) #if g.vp.icolor[v] >= len(css_colors): #g.vp.color[v] = mcolors.CSS4_COLORS[css_colors[0]] #else: g.vp.color[v] = mcolors.CSS4_COLORS[css_colors[int(task_style[name]['actual'])]] except KeyError: print(f'Keyerror for {name}') raise NameError('Error') #g.vp.color[v] = 'yellow' #g.vp.icolor[v] = 2 for ln in raw: if ln.startswith('e'): _, vsrc, vdst, _ = ln.split(',', 3) g.add_edge(vid_to_vx[vsrc], vid_to_vx[vdst]) return g keys = list(mcolors.CSS4_COLORS.keys()) def update_runtime_state(benchmark, g, task_style): print('process', benchmark) tasks = [] jfile = f'/opt/dask-distributed/benchmark/stats/{benchmark}.json' with open(jfile, 'r') as fd: stats = ast.literal_eval(fd.read()) #print(json.dumps(stats, indent=4)) print('stat size is', len(stats)) min_ts = sys.maxsize for s in stats: task_style[s]['output_size'] = stats[s]['msg']['nbytes'] task_style[s]['input_size'] = 0 task_style[s]['remote_read'] = 0 task_style[s]['local_read'] = 0 task_style[s]['worker'] = stats[s]['worker'].split(':')[1].replace('/', '') startsstops = stats[s]['msg']['startstops'] for ss in startsstops: if ss['action'] == 'inputsize': continue if ss['action'] == 'compute': task_style[s]['compute_end'] = ss['stop'] task_style[s]['compute_start'] = ss['start'] task_style[s]['runtime'] = ss['stop'] - ss['start'] if ss['start'] < min_ts: min_ts = ss['start'] if ss['stop'] < min_ts: min_ts = ss['stop'] print(min_ts) for s in stats: startsstops = stats[s]['msg']['startstops'] 
min_start = sys.maxsize max_end = 0 transfer_stop = 0 transfer_start = sys.maxsize for ss in startsstops: if ss['action'] == 'inputsize': continue if ss['start'] < min_start: min_start = ss['start'] if ss['stop'] > max_end: max_end = ss['stop'] if ss['action'] == 'compute': compute_stop = ss['stop'] compute_start = ss['start'] run_time = ss['stop'] - ss['start'] if ss['action'] == 'transfer': #print(ss['start'] - min_ts, ss['stop'] - min_ts) if ss['start'] < transfer_start: transfer_start = ss['start'] if ss['stop'] > transfer_stop: transfer_stop = ss['stop'] #print('transfer start', ss['start'] - min_ts, # 'transfer stop', ss['stop'] - min_ts) if transfer_stop == 0: transfer_stop = compute_start transfer_start = compute_start #print('***transfer start', transfer_start - min_ts, # '****transfer stop', transfer_stop - min_ts) tasks.append({'name': s, 'start_ts': min_start - min_ts, 'end_ts': max_end - min_ts, 'compute_start': compute_start - min_ts, 'compute_stop': compute_stop - min_ts, 'transfer_start': transfer_start - min_ts, 'transfer_stop': transfer_stop - min_ts, 'worker': stats[s]['worker'].split(':')[1].replace('/', '')}) #print('\n') #total amount of data accessed, data accessed remotely, data accessed locally for v in g.vertices(): for vi in v.in_neighbors(): task_style[g.vp.name[v]]['input_size'] += task_style[g.vp.name[vi]]['output_size'] if task_style[g.vp.name[v]]['worker'] == task_style[g.vp.name[vi]]['worker']: task_style[g.vp.name[v]]['local_read'] += task_style[g.vp.name[vi]]['output_size'] else: task_style[g.vp.name[v]]['remote_read'] += task_style[g.vp.name[vi]]['output_size'] for v in g.vertices(): g.vp.worker[v] = task_style[g.vp.name[v]]['worker'] #g.vp.color[v] = colors[task_style[g.vp.name[v]]['worker']] #Check the slack for the prefetching bw = 10*(1<<27) # 10 Gbps (1<<30)/(1<<3) not_from_remote = 0 for v in g.vertices(): parents_end = [] for vi in v.in_neighbors(): parents_end.append(task_style[g.vp.name[vi]]['compute_end']) if 
len(parents_end): max_end = max(parents_end) for vi in v.in_neighbors(): if max_end == task_style[g.vp.name[vi]]['compute_end'] and task_style[g.vp.name[vi]]['worker'] != task_style[g.vp.name[v]]['worker']: #print(f'Slack come from local chain') not_from_remote += 1 #print(f'slack for {g.vp.name[v]}: {round(1000*(max(parents_end) - min(parents_end)), 2)}msec', # '\t runtime:', round(1000*task_style[g.vp.name[vi]]['runtime'], 4), 'msec', # '\t remote read', task_style[g.vp.name[v]]['remote_read']/bw) #print(not_from_remote) return tasks def plot_graph(g): policy = b.split('_')[1] print('policy is', policy) dg = Digraph('G', filename=f'{b}.gv', format='png') for v in g.vertices(): #print(g.vp.name[v]) vname = g.vp.name[v].split('-', 1) vname = vname[1] if len(vname) > 1 else vname[0] #dg.attr('node', shape='ellipse', style='filled', color=g.vp.color[v]) dg.attr('node', shape='ellipse', style="filled,solid", penwidth="3", fillcolor=g.vp.color[v] if "chaincolor" in policy else "#f0f0f0", color=worker_color[g.vp.worker[v]]) color = '-' if 'chaincolor' in policy: color = g.vp.icolor[v] dg.node(f'{vname}, color({color})') #print(g.vp.name[v], g.vp.icolor[v], g.vp.worker[v]) for e in g.edges(): vname = g.vp.name[e.source()].split('-', 1) sname = vname[1] if len(vname) > 1 else vname[0] vname = g.vp.name[e.target()].split('-', 1) tname = vname[1] if len(vname) > 1 else vname[0] dg.edge(f'{sname}, color({g.vp.icolor[e.source()] if "chaincolor" in policy else "-"})', f'{tname}, color({g.vp.icolor[e.target()] if "chaincolor" in policy else "-"})') dg.view(f'/local0/serverless/serverless/ipythons/plots/{b}', quiet=False) def plot_graph2(g): policy = b.split('_')[1] dg = Digraph('G', filename=f'{b.split("_")[0]}.gv', format='png') for v in g.vertices(): vname = g.vp.name[v].split('-', 1) vname = vname[1] if len(vname) > 1 else vname[0] dg.attr('node', shape='ellipse', style='filled', color='#252525') dg.attr('node', shape='ellipse', style="filled,solid", penwidth="3", fillcolor= 
"#f0f0f0", color='#252525') dg.node(f'{vname}') for e in g.edges(): vname = g.vp.name[e.source()].split('-', 1) sname = vname[1] if len(vname) > 1 else vname[0] vname = g.vp.name[e.target()].split('-', 1) tname = vname[1] if len(vname) > 1 else vname[0] dg.edge(f'{sname}', f'{tname}') dg.view(f'/local0/serverless/serverless/ipythons/plots/{b.split("_")[0]}', quiet=False) def format_xticks(x, pos=None): return x #return str(int(x*1000)) def plot_gannt_chart(tasks, benchmark): #_max = df['runtime'].max() #print(benchmark, df['runtime'].max()) sns.set_style("ticks") sns.set_context("paper", font_scale=1) sns.set_context(rc = {'patch.linewidth': 1.5, 'patch.color': 'black'}) plt.rc('font', family='serif') fig, ax = plt.subplots(figsize=(10,8)) ax.set_xlabel('Time (second)', fontsize=16) sns.despine() ax.yaxis.grid(color='#99999910', linestyle=(0, (5, 10)), linewidth=0.4) ax.set_axisbelow(True) ax.tick_params(axis='both', which='major', labelsize=14) ax.yaxis.set_ticks_position('both') ax.xaxis.set_major_formatter(ticker.FuncFormatter(format_xticks)) ax.set_yticklabels([]) #ax.set_xlim([0, _max]) # Setting graph attribute ax.grid(True) base = 0 size = 8 margin = 3 workers_load={} for ts in tasks: #print(json.dumps(ts, indent=4)) #print(ts['name'], ts['start_ts'], ts['end_ts'], ts['worker']) if ts['worker'] not in workers_load: workers_load[ts['worker']] = [] #workers_load[ts['worker']].append((ts['start_ts'], ts['end_ts'] - ts['start_ts'])) ax.broken_barh([(ts['transfer_start'], ts['transfer_stop'] - ts['transfer_start'])], (base, size), edgecolors =worker_color[ts['worker']], facecolors = worker_color[ts['worker']]) ax.broken_barh([(ts['compute_start'], ts['compute_stop'] - ts['compute_start'])], (base, size), edgecolors = worker_color[ts['worker']], facecolors='#f0f0f0') vname = ts['name'].split('-', 1) sname = vname[1] if len(vname) > 1 else vname[0] ax.text(x=ts['compute_start'] + (ts['compute_stop'] - ts['compute_start'])/2, y=base + size/2, s=sname, ha='center', 
va='center', color='black') base += (size + margin) jobname, policy, _ = benchmark.split('_') #ax.axes.yaxis.set_visible(False) ax.set_ylabel(f'{benchmark}', fontsize=14) ax.set_title(f'{jobname}, {policy}', fontsize=16) leg1 = ax.legend(['10.255.23.108', '10.255.23.109', '10.255.23.110', '10.255.23.115'], title='Communication cost', ncol = 1, loc='lower center') leg1.legendHandles[0].set_color(worker_color['10.255.23.108']) leg1.legendHandles[1].set_color(worker_color['10.255.23.109']) leg1.legendHandles[2].set_color(worker_color['10.255.23.110']) leg1.legendHandles[3].set_color(worker_color['10.255.23.115']) leg2 = ax.legend(['10.255.23.108', '10.255.23.109', '10.255.23.110', '10.255.23.115'], title='Computation cost',loc='lower right') leg2.legendHandles[0].set_edgecolor(worker_color['10.255.23.108']) leg2.legendHandles[1].set_edgecolor(worker_color['10.255.23.109']) leg2.legendHandles[2].set_edgecolor(worker_color['10.255.23.110']) leg2.legendHandles[3].set_edgecolor(worker_color['10.255.23.115']) leg2.legendHandles[0].set_facecolor('#f0f0f0') leg2.legendHandles[1].set_facecolor('#f0f0f0') leg2.legendHandles[2].set_facecolor('#f0f0f0') leg2.legendHandles[3].set_facecolor('#f0f0f0') #end_2_end = df[df['exp'] == benchmark]['runtime'] #ax.vlines(end_2_end, ymin=0, ymax=base, label = end_2_end, linestyles='--', color='#bdbdbd') ax.add_artist(leg1) fig.savefig(f'{os.path.join("/local0/serverless/serverless/ipythons/plots",benchmark)}.gannt.png', format='png', dpi=200) plt.show() def plot_runtime(name, df): sns.set_style("ticks") sns.set_context("paper", font_scale=1) sns.set_context(rc = {'patch.linewidth': 1.5, 'patch.color': 'black'}) plt.rc('font', family='serif') fig, ax = plt.subplots(figsize=(8,5)) sns.barplot(x='scheduler', y='runtime', #, hue = 'scheduler', order = ['optimal', 'vanilla', 'optplacement', 'chaincolorrr', 'chaincolorch', 'random'], palette = ['#fbb4ae','#fdb462', '#8dd3c7', '#80b1d3', '#bebada' , '#fb8072'], data=df, ax=ax) sns.despine()
ax.yaxis.grid(color='#99999910', linestyle=(0, (5, 10)), linewidth=0.4) ax.set_axisbelow(True) ax.tick_params(axis='x', which='major', labelsize=14, rotation=15) ax.tick_params(axis='y', which='major', labelsize=14) ax.yaxis.set_ticks_position('both') ax.set_ylabel('Runtime (sec)', fontsize=16) ax.set_xlabel(f'Benchmark: {df["benchmark"].unique()[0]}', fontsize=16) #ax.legend(fontsize=14, ncol=3) plt.tight_layout() fig.savefig(f'/local0/serverless/serverless-sim/results/runtime_{name}.png', format='png', dpi=200) plt.show() #dt = pd.read_csv('/local0/serverless/task-bench/dask/stats.csv') #dt.head(5) #gdf = dt.groupby('name') benchmarks = ['tree.w10.s8.1G1B_vanilla_3d70b39c'] #get_benchmarks() #for name, df in gdf: #plot_runtime(name, df) #continue #if 'spread' not in name: continue for b in benchmarks: #if name not in b: continue task_style = {} color_assignment(b, task_style) g = build_graph(b, task_style) tasks = update_runtime_state(b, g, task_style) plot_graph(g) plot_graph2(g) plot_gannt_chart(tasks, b) #break #break ```
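The Gantt-style view produced by `plot_gannt_chart` above boils down to one `broken_barh` call per task, stacked vertically. A minimal standalone sketch of that idiom (the task names and timings below are made up for illustration) looks like this:

```python
# Minimal sketch of the broken_barh Gantt idiom used in plot_gannt_chart.
# Task names and timings are made up for illustration.
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

tasks = [('load', 0.0, 1.2), ('compute', 1.2, 2.5), ('store', 3.7, 0.6)]

fig, ax = plt.subplots()
base, size, margin = 0, 8, 3
for name, start, duration in tasks:
    # each task is one horizontal bar: ([(xmin, width)], (ymin, height))
    ax.broken_barh([(start, duration)], (base, size))
    ax.text(start + duration / 2, base + size / 2, name, ha='center', va='center')
    base += size + margin  # stack the next task's bar above this one
ax.set_xlabel('Time (second)')
fig.savefig('gantt_sketch.png', dpi=100)
```

The `(base, size, margin)` bookkeeping is exactly what gives the full function its one-row-per-task layout.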
***
### How To Break Into the Field Now you have had a closer look at the data, and you saw how I approached looking at how the survey respondents think you should break into the field. Let's recreate those results, as well as take a look at another question. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import HowToBreakIntoTheField as t %matplotlib inline df = pd.read_csv('./survey_results_public.csv') schema = pd.read_csv('./survey_results_schema.csv') df.head() schema.head() ``` #### Question 1 **1.** In order to understand how to break into the field, we will look at the **CousinEducation** field. Use the **schema** dataset to answer this question. Write a function called **get_description** that takes the **schema dataframe** and the **column** as a string, and returns a string of the description for that column. ``` def get_description(column_name, schema=schema): ''' INPUT - schema - pandas dataframe with the schema of the developers survey column_name - string - the name of the column you would like to know about OUTPUT - desc - string - the description of the column ''' desc = list(schema[schema['Column'] == column_name]['Question'])[0] return desc #test your code #Check your function against solution - you shouldn't need to change any of the below code get_description(df.columns[0]) # This should return a string of the first column description #Check your function against solution - you shouldn't need to change any of the below code descrips = set(get_description(col) for col in df.columns) t.check_description(descrips) ``` The question we have been focused on has been around how to break into the field. Use your **get_description** function below to take a closer look at the **CousinEducation** column. ``` get_description('CousinEducation') ``` #### Question 2 **2.** Provide a pandas series of the different **CousinEducation** status values in the dataset. Store this pandas series in **cous_ed_vals**. 
If you are correct, you should see a bar chart of the proportion of individuals in each status. If it looks terrible, and you get no information from it, then you followed directions. However, we should clean this up! ``` cous_ed_vals = df.CousinEducation.value_counts()#Provide a pandas series of the counts for each CousinEducation status cous_ed_vals # assure this looks right # The below should be a bar chart of the proportion of individuals in your ed_vals # if it is set up correctly. (cous_ed_vals/df.shape[0]).plot(kind="bar"); plt.title("Formal Education"); ``` We definitely need to clean this. Above is an example of what happens when you do not clean your data. Below I am using the same code you saw in the earlier video to take a look at the data after it has been cleaned. ``` possible_vals = ["Take online courses", "Buy books and work through the exercises", "None of these", "Part-time/evening courses", "Return to college", "Contribute to open source", "Conferences/meet-ups", "Bootcamp", "Get a job as a QA tester", "Participate in online coding competitions", "Master's degree", "Participate in hackathons", "Other"] def clean_and_plot(df, title='Method of Educating Suggested', plot=True): ''' INPUT df - a dataframe holding the CousinEducation column title - string the title of your plot plot - bool providing whether or not you want a plot back OUTPUT props_study_df - a dataframe with the proportion of individuals who suggested each method Displays a plot of pretty things related to the CousinEducation column.
''' study = df['CousinEducation'].value_counts().reset_index() study.rename(columns={'index': 'method', 'CousinEducation': 'count'}, inplace=True) study_df = t.total_count(study, 'method', 'count', possible_vals) study_df.set_index('method', inplace=True) if plot: (study_df/study_df.sum()).plot(kind='bar', legend=None); plt.title(title); plt.show() props_study_df = study_df/study_df.sum() return props_study_df props_df = clean_and_plot(df) ``` #### Question 4 **4.** I wonder if some of the individuals might be biased towards their own degrees. Complete the function below, which will be applied to the elements of the **FormalEducation** column in **df**. ``` def higher_ed(formal_ed_str): ''' INPUT formal_ed_str - a string of one of the values from the Formal Education column OUTPUT return 1 if the string is in ("Master's degree", "Doctoral", "Professional degree") return 0 otherwise ''' if formal_ed_str in ("Master's degree", "Doctoral", "Professional degree"): return 1 else: return 0 df["FormalEducation"].apply(higher_ed)[:5] #Test your function to assure it provides 1 and 0 values for the df # Check your code here df['HigherEd'] = df["FormalEducation"].apply(higher_ed) higher_ed_perc = df['HigherEd'].mean() t.higher_ed_test(higher_ed_perc) ``` #### Question 5 **5.** Now we would like to find out whether individuals who completed one of these three programs feel differently than those who did not. Store a dataframe of only the individuals who had **HigherEd** equal to 1 in **ed_1**. Similarly, store a dataframe of only the rows with **HigherEd** equal to 0 in **ed_0**. Notice, you have already created the **HigherEd** column using the check code portion above, so here you only need to subset the dataframe using this newly created column.
``` ed_1 = df[df['HigherEd'] == 1] # Subset df to only those with HigherEd of 1 ed_0 = df[df['HigherEd'] == 0] # Subset df to only those with HigherEd of 0 print(ed_1['HigherEd'][:5]) #Assure it looks like what you would expect print(ed_0['HigherEd'][:5]) #Assure it looks like what you would expect #Check your subset is correct - you should get a plot that was created using pandas styling #which you can learn more about here: https://pandas.pydata.org/pandas-docs/stable/style.html ed_1_perc = clean_and_plot(ed_1, 'Higher Formal Education', plot=False) ed_0_perc = clean_and_plot(ed_0, 'Max of Bachelors Higher Ed', plot=False) comp_df = pd.merge(ed_1_perc, ed_0_perc, left_index=True, right_index=True) comp_df.columns = ['ed_1_perc', 'ed_0_perc'] comp_df['Diff_HigherEd_Vals'] = comp_df['ed_1_perc'] - comp_df['ed_0_perc'] comp_df.style.bar(subset=['Diff_HigherEd_Vals'], align='mid', color=['#d65f5f', '#5fba7d']) ``` #### Question 6 **6.** What can you conclude from the above plot? Change the dictionary to mark **True** for the keys of any statements you can conclude, and **False** for any of the statements you cannot conclude. ``` sol = {'Everyone should get a higher level of formal education': False, 'Regardless of formal education, online courses are the top suggested form of education': True, 'There is less than a 1% difference between suggestions of the two groups for all forms of education': False, 'Those with higher formal education suggest it more than those who do not have it': True} t.conclusions(sol) ``` This concludes another look at the way we could compare education methods by those currently writing code in industry.
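The helper `t.total_count` used in `clean_and_plot` comes from the course's `HowToBreakIntoTheField` solution module and is not shown in this notebook. A plausible minimal re-implementation, assuming it sums the counts of every row whose (semicolon-separated) answer string mentions each candidate method, would be:

```python
# Hypothetical sketch of t.total_count (the real helper lives in
# HowToBreakIntoTheField.py and is not shown here).
import pandas as pd

def total_count(df, col1, col2, look_for):
    '''For each value in look_for, sum df[col2] over the rows whose
    df[col1] string mentions that value.'''
    rows = []
    for val in look_for:
        mask = df[col1].astype(str).str.contains(val, regex=False)
        rows.append({col1: val, col2: df.loc[mask, col2].sum()})
    return pd.DataFrame(rows)

study = pd.DataFrame({'method': ['Take online courses; Bootcamp', 'Bootcamp'],
                      'count': [2, 3]})
out = total_count(study, 'method', 'count', ['Take online courses', 'Bootcamp'])
print(out)
```

Because each respondent can list several methods in one string, the per-method totals can sum to more than the number of respondents, which is why `clean_and_plot` normalizes by `study_df.sum()`.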
***
The notebook is meant to help the user experiment with different models and features. This notebook assumes that there is a saved csv called 'filteredAggregateData.csv' somewhere on your local hard drive. The location must be specified below. The cell imports all of the relevant packages. ``` ############## imports # general import statistics import datetime import random # data manipulation and exploration import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib ## machine learning stuff # preprocessing from sklearn import preprocessing # feature selection from sklearn.feature_selection import SelectKBest, SelectPercentile from sklearn.feature_selection import f_regression # pipeline from sklearn.pipeline import Pipeline # train/testing from sklearn.model_selection import train_test_split, KFold, GridSearchCV, cross_val_score # error calculations from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score # models from sklearn.linear_model import LinearRegression # linear regression from sklearn.linear_model import BayesianRidge # bayesian ridge regression from sklearn.svm import SVR # support vector machines regression from sklearn.gaussian_process import GaussianProcessRegressor # gaussian process regression from sklearn.neighbors import KNeighborsRegressor # k-nearest neighbors for regression from sklearn.neural_network import MLPRegressor # neural network for regression from sklearn.tree import DecisionTreeRegressor # decision tree regressor from sklearn.ensemble import RandomForestRegressor # random forest regression from sklearn.ensemble import AdaBoostRegressor # adaboost for regression # saving and loading models # from sklearn.externals import joblib import joblib ``` Imports the API. 'APILoc' is the location of 'API.py' on your local hard drive.
``` # import the API APILoc = r"C:\Users\thejo\Documents\school\AI in AG research\API" import sys sys.path.insert(0, APILoc) from API import * ``` Load the dataset. Note that the location of the dataset must be specified. ``` # get aggregate data aggDataLoc = r'C:\Users\thejo\Documents\school\AI in AG research\experiment\aggregateData_MS_KY_GA.csv' #aggDataLoc = r'C:\Users\thejo\Documents\school\AI in AG research\experiment\aggregateDataWithVariety.csv' #targetDataLoc = r'C:\Users\thejo\Documents\school\AI in AG research\experiment\aggregateData_GAonly_Annual_final.csv' aggDf = pd.read_csv(aggDataLoc) #aggDf = aggDf.drop("Unnamed: 0",axis=1) #targetDf = pd.read_csv(targetDataLoc) #targetDf = targetDf.drop("Unnamed: 0",axis=1) ``` Test to see if the dataset was loaded properly. A table of the first 5 datapoints should appear. ``` aggDf.head() #targetDf.head() ``` Filter out features that will not be made available for feature selection. All of the features in the list 'xColumnsToKeep' will be made available for feature selection.
The features to include are: <br> "Julian Day" <br> "Time Since Sown (Days)" <br> "Time Since Last Harvest (Days)" <br> "Total Radiation (MJ/m^2)" <br> "Total Rainfall (mm)" <br> "Avg Air Temp (C)" <br> "Avg Min Temp (C)" <br> "Avg Max Temp (C)"<br> "Avg Soil Moisture (%)"<br> "Day Length (hrs)"<br> "Percent Cover (%)"<br> ``` # filter out the features that will not be used by the machine learning models # the features to keep: # xColumnsToKeep = ["Julian Day", "Time Since Sown (Days)", "Time Since Last Harvest (Days)", "Total Radiation (MJ/m^2)", # "Total Rainfall (mm)", "Avg Air Temp (C)", "Avg Min Temp (C)", "Avg Max Temp (C)", # "Avg Soil Moisture (%)", "Day Length (hrs)", "Percent Cover (%)"] xColumnsToKeep = ["Julian Day", "Time Since Sown (Days)", "Total Radiation (MJ/m^2)", "Total Rainfall (mm)", "Avg Air Temp (C)", "Avg Min Temp (C)", "Avg Max Temp (C)", "Avg Soil Moisture (%)"] #xColumnsToKeep = ["Julian Day", "Time Since Sown (Days)", "Total Radiation (MJ/m^2)", "Total Rainfall (mm)"] # the target to keep yColumnsToKeep = ["Yield (tons/acre)"] # get a dataframe containing the features and the targets xDf = aggDf[xColumnsToKeep] #yDf = targetDf[yColumnsToKeep] yDf = aggDf[yColumnsToKeep] # reset the index xDf = xDf.reset_index(drop=True) yDf = yDf.reset_index(drop=True) pd.set_option('display.max_rows', 2500) pd.set_option('display.max_columns', 500) xCols = list(xDf) ``` Test to see if the features dataframe and the target dataframe were successfully made. ``` xDf.head() yDf.head() ``` Let's now define the parameters that will be used to run the machine learning experiments. Note that parameter grids could be made that will allow scikit-learn to use a 5-fold grid search to find the model's best hyperparameters. The parameter grids that are defined here will specify the possible values for the grid search. <br> <br> Once the parameter grids are defined, a list of tuples must also be defined.
The tuples must take the form of: <br> (scikit-learn model, appropriate parameter grid, name of the file to be saved). <br> <br> Then the number of iterations should be specified. This is represented by the variable 'N'. Each model will be evaluated N times (via N-fold cross validation), and the average results of the models over those N iterations will be returned. <br> <br> 'workingDir' is the directory in which all of the results will be saved. <br> <br> 'numFeatures' is the number of features that will be selected (via feature selection). ``` # hide the warnings because training the neural network causes lots of warnings. import warnings warnings.filterwarnings('ignore') # make the parameter grids for sklearn's GridSearchCV rfParamGrid = { 'model__n_estimators': [5, 10, 25, 50, 100], # Number of estimators 'model__max_depth': [5, 10, 15, 20], # Maximum depth of the tree 'model__criterion': ["mae"] } knnParamGrid ={ 'model__n_neighbors':[2,5,10], 'model__weights': ['uniform', 'distance'], 'model__leaf_size': [5, 10, 30, 50] } svrParamGrid = { 'model__kernel': ['linear', 'poly', 'rbf', 'sigmoid'], 'model__C': [0.1, 1.0, 5.0, 10.0], 'model__gamma': ["scale", "auto"], 'model__degree': [2,3,4,5] } nnParamGrid = { 'model__hidden_layer_sizes':[(3), (5), (10), (3,3), (5,5), (7,7)], 'model__solver': ['sgd', 'adam'], 'model__learning_rate' : ['constant', 'invscaling', 'adaptive'], 'model__learning_rate_init': [0.1, 0.01, 0.001] } linRegParamGrid = {} bayesParamGrid={ 'model__n_iter':[100,300,500] } dtParamGrid = { 'model__criterion': ['mae'], 'model__max_depth': [5,10,25,50,100] } aModelList = [(RandomForestRegressor(), rfParamGrid, "rfTup.pkl"), (KNeighborsRegressor(), knnParamGrid, "knnTup.pkl"), (SVR(), svrParamGrid, "svrTup.pkl"), #(MLPRegressor(), nnParamGrid, "nnTup.pkl"), (LinearRegression(), linRegParamGrid, "linRegTup.pkl"), (BayesianRidge(), bayesParamGrid, "bayesTup.pkl"), (DecisionTreeRegressor(), dtParamGrid, "dtTup.pkl")] N = 10 workingDir = r"C:\Users\thejo\Documents\school\AI in AG research\experiment" numFeatures = 8 # 11 ``` This cell will run the tests and save the results. ``` saveMLResults(N, xDf, yDf, aModelList, workingDir, numFeatures, printResults=True) ```
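What `saveMLResults` presumably does for each (model, grid, filename) tuple can be sketched with scikit-learn directly: wrap feature selection and the model in a `Pipeline` and tune it with `GridSearchCV`. The real implementation lives in `API.py` and is not shown, so the data and grid below are synthetic placeholders:

```python
# Sketch of one evaluation round, assuming saveMLResults pairs SelectKBest
# feature selection with each model inside a Pipeline. Data here is synthetic.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.rand(80, 8)
y = 2 * X[:, 0] + 0.1 * rng.rand(80)  # strong signal in the first feature

pipe = Pipeline([('select', SelectKBest(f_regression, k=4)),
                 ('model', RandomForestRegressor(random_state=0))])
# keys use the 'model__' prefix, matching the parameter grids above
grid = {'model__n_estimators': [5, 10]}

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
search = GridSearchCV(pipe, grid, cv=5)
search.fit(X_tr, y_tr)
print(search.best_params_, round(search.score(X_te, y_te), 3))
```

The `'model__'` prefix in the grid keys is what lets `GridSearchCV` route each hyperparameter to the `'model'` step of the pipeline.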
***
``` import pandas as pd df = pd.read_csv('queryset_CNN.csv') print(df.shape) print(df.dtypes) preds = [] pred = [] for index, row in df.iterrows(): doc_id = row.doc_id author_id = row.author_id import ast authorList = ast.literal_eval(row.authorList) candidate = len(authorList) algo = "tfidf_svc" test = algo # change before run level = "word" iterations = 30 dropout = 0.5 samples = 3200 dimensions = 200 loc = authorList.index(author_id) printstate = (("doc_id = %s, candidate = %s, ") % (str(doc_id), str(candidate))) printstate += (("samples = %s, ") % (str(samples))) printstate += (("test = %s") % (str(test))) print("Current test: %s" % (str(printstate))) from sshtunnel import SSHTunnelForwarder with SSHTunnelForwarder(('144.214.121.15', 22), ssh_username='ninadt', ssh_password='Ninad123', remote_bind_address=('localhost', 3306), local_bind_address=('localhost', 3300)): import UpdateDB as db case = db.checkOldML(doc_id = doc_id, candidate = candidate, samples = samples, test = test, port = 3300) if case == False: print("Running: %12s" % (str(printstate))) import StyloML as Stylo (labels_index, train_acc, val_acc, samples) = Stylo.getResults( algo, doc_id = doc_id, authorList = authorList[:], samples = samples) (labels_index, testY, predY, samples) = Stylo.getTestResults( algo, labels_index = labels_index, doc_id = doc_id, authorList = authorList[:], samples = samples) loc = testY test_acc = predY[loc] test_bin = 0 if(predY.tolist().index(max(predY)) == testY): test_bin = 1 from sshtunnel import SSHTunnelForwarder with SSHTunnelForwarder(('144.214.121.15', 22), ssh_username='ninadt', ssh_password='Ninad123', remote_bind_address=('localhost', 3306), local_bind_address=('localhost', 3300)): import UpdateDB as db case = db.updateresultOldML(doc_id = doc_id, candidate = candidate, samples = samples, train_acc = train_acc, val_acc = val_acc, test_acc = test_acc, test_bin = test_bin, test = test, port = 3300) del Stylo import time time.sleep(10) from IPython.display 
import clear_output clear_output() else: print("Skipped: %12s" % (str(printstate))) # import matplotlib.pyplot as plt # # summarize history for accuracy # plt.plot(history.history['acc']) # plt.plot(history.history['val_acc']) # plt.title('model accuracy') # plt.ylabel('accuracy') # plt.xlabel('epoch') # plt.legend(['train', 'test'], loc='upper left') # plt.show() # # summarize history for loss # plt.plot(history.history['loss']) # plt.plot(history.history['val_loss']) # plt.title('model loss') # plt.ylabel('loss') # plt.xlabel('epoch') # plt.legend(['train', 'test'], loc='upper left') # plt.show() %tb ```
***
# Introduction to NumPy This notebook is the first half of a special session on NumPy and PyTorch for CS 224U. Why should we care about NumPy? - It allows you to perform tons of operations on vectors and matrices. - It makes things run faster than naive for-loop implementations (a.k.a. vectorization). - We use it in our class (see files prefixed with `np_` in your cs224u directory). - It's used a ton in machine learning / AI. - Its arrays are often inputs into other important Python packages' functions. In Jupyter notebooks, NumPy documentation is two clicks away: Help -> NumPy reference. ``` __author__ = 'Will Monroe, Chris Potts, and Lucy Li' import numpy as np ``` # Vectors ## Vector Initialization ``` np.zeros(5) np.ones(5) # convert list to numpy array np.array([1,2,3,4,5]) # convert numpy array to list np.ones(5).tolist() # one float => all floats np.array([1.0,2,3,4,5]) # same as above np.array([1,2,3,4,5], dtype='float') # spaced values in interval np.array([x for x in range(20) if x % 2 == 0]) # same as above np.arange(0,20,2) # random floats in [0, 1) np.random.random(10) # random integers np.random.randint(5, 15, size=10) ``` ## Vector indexing ``` x = np.array([10,20,30,40,50]) x[0] # slice x[0:2] x[0:1000] # last value x[-1] # last value as array x[[-1]] # last 3 values x[-3:] # pick indices x[[0,2,4]] ``` ## Vector assignment Be careful when assigning arrays to new variables! ``` #x2 = x # try this line instead x2 = x.copy() x2[0] = 10 x2 x2[[1,2]] = 10 x2 x2[[3,4]] = [0, 1] x2 # check if the original vector changed x ``` ## Vectorized operations ``` x.sum() x.mean() x.max() x.argmax() np.log(x) np.exp(x) x + x # Try also with *, -, /, etc. x + 1 ``` ## Comparison with Python lists Vectorizing your mathematical expressions can lead to __huge__ performance gains. The following example is meant to give you a sense for this. It compares applying `np.log` to each element of a list with 10 million values with the same operation done on a vector. 
``` # log every value as list, one by one def listlog(vals): return [np.log(y) for y in vals] # get random vector samp = np.random.random_sample(int(1e7))+1 samp %time _ = np.log(samp) %time _ = listlog(samp) ``` # Matrices The matrix is the core object of machine learning implementations. ## Matrix initialization ``` np.array([[1,2,3], [4,5,6]]) np.array([[1,2,3], [4,5,6]], dtype='float') np.zeros((3,5)) np.ones((3,5)) np.identity(3) np.diag([1,2,3]) ``` ## Matrix indexing ``` X = np.array([[1,2,3], [4,5,6]]) X X[0] X[0,0] # get row X[0, : ] # get column X[ : , 0] # get multiple columns X[ : , [0,2]] ``` ## Matrix assignment ``` # X2 = X # try this line instead X2 = X.copy() X2 X2[0,0] = 20 X2 X2[0] = 3 X2 X2[: , -1] = [5, 6] X2 # check if original matrix changed X ``` ## Matrix reshaping ``` z = np.arange(1, 7) z z.shape Z = z.reshape(2,3) Z Z.shape Z.reshape(6) # same as above Z.flatten() # transpose Z.T ``` ## Numeric operations ``` A = np.array(range(1,7), dtype='float').reshape(2,3) A B = np.array([1, 2, 3]) B # not the same as A.dot(B) A * B A + B A / B # matrix multiplication A.dot(B) B.dot(A.T) A.dot(A.T) # outer product # multiplying each element of first vector by each element of the second np.outer(B, B) ``` The following is a practical example of numerical operations on NumPy matrices. In our class, we have a shallow neural network implemented in `np_shallow_neural_network.py`. See how the forward and backward passes use no for loops, and instead takes advantage of NumPy's ability to vectorize manipulations of data. 
```python
def forward_propagation(self, x):
    h = self.hidden_activation(x.dot(self.W_xh) + self.b_xh)
    y = softmax(h.dot(self.W_hy) + self.b_hy)
    return h, y

def backward_propagation(self, h, predictions, x, labels):
    y_err = predictions.copy()
    y_err[np.argmax(labels)] -= 1  # backprop for cross-entropy error: -log(prediction-for-correct-label)
    d_b_hy = y_err
    h_err = y_err.dot(self.W_hy.T) * self.d_hidden_activation(h)
    d_W_hy = np.outer(h, y_err)
    d_W_xh = np.outer(x, h_err)
    d_b_xh = h_err
    return d_W_hy, d_b_hy, d_W_xh, d_b_xh
```

The forward pass essentially computes the following:

$$h = f(xW_{xh} + b_{xh})$$
$$y = \text{softmax}(hW_{hy} + b_{hy}),$$

where $f$ is `self.hidden_activation`.

The backward pass propagates error by computing local gradients and chaining them. Feel free to learn more about backprop [here](http://cs231n.github.io/optimization-2/), though it is not necessary for our class. Also look at this [neural networks case study](http://cs231n.github.io/neural-networks-case-study/) to see another example of how NumPy can be used to implement the forward and backward passes of a simple neural network.

## Going beyond NumPy alone

These are examples of how NumPy can be used with other Python packages.

### Pandas

We can convert NumPy matrices to Pandas dataframes. In the following example, this is useful because it allows us to label each row. You may have noticed this being done in our first unit on distributed representations.

```
import pandas as pd
count_df = pd.DataFrame(
    np.array([
    [1,0,1,0,0,0],
    [0,1,0,1,0,0],
    [1,1,1,1,0,0],
    [0,0,0,0,1,1],
    [0,0,0,0,0,1]], dtype='float64'),
    index=['gnarly', 'wicked', 'awesome', 'lame', 'terrible'])
count_df
```

### Scikit-learn

In `sklearn`, NumPy matrices are the most common input and output, and are thus key to how the library's numerous methods can work together. Many of the cs224u models built by Chris operate just like `sklearn` ones, such as the classifiers we used for our sentiment analysis unit.
``` from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report from sklearn import datasets iris = datasets.load_iris() X = iris.data y = iris.target print(type(X)) print("Dimensions of X:", X.shape) print(type(y)) print("Dimensions of y:", y.shape) # split data into train/test X_iris_train, X_iris_test, y_iris_train, y_iris_test = train_test_split( X, y, train_size=0.7, test_size=0.3) print("X_iris_train:", type(X_iris_train)) print("y_iris_train:", type(y_iris_train)) print() # start up model maxent = LogisticRegression(fit_intercept=True, solver='liblinear', multi_class='auto') # train on train set maxent.fit(X_iris_train, y_iris_train) # predict on test set iris_predictions = maxent.predict(X_iris_test) fnames_iris = iris['feature_names'] tnames_iris = iris['target_names'] # how well did our model do? print(classification_report(y_iris_test, iris_predictions, target_names=tnames_iris)) ``` ### SciPy SciPy contains what may seem like an endless treasure trove of operations for linear algebra, optimization, and more. It is built so that everything can work with NumPy arrays. ``` from scipy.spatial.distance import cosine from scipy.stats import pearsonr from scipy import linalg # cosine distance a = np.random.random(10) b = np.random.random(10) cosine(a, b) # pearson correlation (coeff, p-value) pearsonr(a, b) # inverse of matrix A = np.array([[1,3,5],[2,5,1],[2,3,8]]) linalg.inv(A) ``` To learn more about how NumPy can be combined with SciPy and Scikit-learn for machine learning, check out this [notebook tutorial](https://github.com/cgpotts/csli-summer/blob/master/advanced_python/intro_to_python_ml.ipynb) by Chris Potts and Will Monroe. (You may notice that over half of this current notebook is modified from theirs.) Their tutorial also has some interesting exercises in it! 
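As a quick sanity check on inverses like the one above: multiplying a matrix by its inverse should recover the identity. NumPy's own `np.linalg.inv` computes the same inverse as `scipy.linalg.inv`, so the check can be done with NumPy alone (a minimal sketch using the same matrix):

```python
import numpy as np

# Same matrix as in the scipy.linalg.inv example above.
A = np.array([[1, 3, 5], [2, 5, 1], [2, 3, 8]], dtype=float)
A_inv = np.linalg.inv(A)

# A times its inverse should be (numerically) the 3x3 identity matrix.
identity_check = A.dot(A_inv)
print(np.allclose(identity_check, np.eye(3)))  # True
```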
### Matplotlib ``` import matplotlib.pyplot as plt a = np.sort(np.random.random(30)) b = a**2 c = np.log(a) plt.plot(a, b, label='y = x^2') plt.plot(a, c, label='y = log(x)') plt.legend() plt.title("Some functions") plt.show() ```
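One NumPy mechanism this notebook has leaned on without naming it is *broadcasting*: in the "Numeric operations" section, `A * B` and `A + B` combined a (2, 3) matrix with a length-3 vector, and NumPy silently applied `B` to every row of `A` because their trailing dimensions match. A minimal sketch of what that shorthand expands to:

```python
import numpy as np

A = np.array(range(1, 7), dtype='float').reshape(2, 3)  # shape (2, 3)
B = np.array([1, 2, 3], dtype='float')                  # shape (3,)

# Broadcasting: B is applied elementwise to each row of A.
elementwise = A * B
manual = np.array([A[0] * B, A[1] * B])  # the explicit row-by-row version
print(np.array_equal(elementwise, manual))  # True
```

A length-2 vector would not broadcast against `A` here, since the trailing dimensions (3 vs. 2) disagree; NumPy raises an error rather than guessing.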
# Latent Dirichlet Allocation for Text Data In this assignment you will * apply standard preprocessing techniques on Wikipedia text data * use GraphLab Create to fit a Latent Dirichlet allocation (LDA) model * explore and interpret the results, including topic keywords and topic assignments for documents Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of *mixed membership*. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one. With this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document. **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. ## Text Data Preprocessing We'll start by importing our familiar Wikipedia dataset. The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read [this page](https://turi.com/download/upgrade-graphlab-create.html). 
``` import graphlab as gl import numpy as np import matplotlib.pyplot as plt %matplotlib inline '''Check GraphLab Create version''' from distutils.version import StrictVersion assert (StrictVersion(gl.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.' # import wiki data wiki = gl.SFrame('people_wiki.gl/') wiki ``` In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a _bag of words_, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be. Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. For both of these tasks we can use pre-implemented tools from GraphLab Create: ``` wiki_docs = gl.text_analytics.count_words(wiki['text']) wiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True) wiki_docs ``` ## Model fitting and interpretation In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module. Note: This may take several minutes to run. 
```
topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200)
```

GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.

```
topic_model
```

It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will

* get the top words in each topic and use these to identify topic themes
* predict topic distributions for some example documents
* compare the quality of LDA "nearest neighbors" to the NN output from the first assignment
* understand the role of model hyperparameters alpha and gamma

## Load a fitted topic model

The method used to fit the LDA model is a _randomized algorithm_, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization. It is important to understand that variation in the results is a fundamental feature of randomized methods.
However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model. We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above.

```
topic_model = gl.load_model('lda_assignment_topic_model')
```

# Identifying topic themes by top words

We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA.

In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme _and_ that all the topics are relatively distinct.

We can use the GraphLab Create function get_topics() to view the top words (along with their associated probabilities) from each topic.

__Quiz Question:__ Identify the top 3 most probable words for the first topic.

```
topic_model
topic_model.get_topics()
```

__Quiz Question:__ What is the sum of the probabilities assigned to the top 50 words in the 3rd topic?
```
topic_model.get_topics(topic_ids=[2], num_words=50)['score'].sum()
```

Let's look at the top 10 words for each topic to see if we can identify any themes:

```
[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)]
```

We propose the following themes for each topic:

- topic 0: Science and research
- topic 1: Team sports
- topic 2: Music, TV, and film
- topic 3: American college and politics
- topic 4: General politics
- topic 5: Art and publishing
- topic 6: Business
- topic 7: International athletics
- topic 8: Great Britain and Australia
- topic 9: International music

We'll save these themes for later:

```
themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \
          'art and publishing','Business','international athletics','Great Britain and Australia','international music']
```

### Measuring the importance of top words

We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words. We'll do this with two visualizations of the weights for the top words in each topic:

- the weights of the top 100 words, sorted by size
- the total weight of the top 10 words

Here's a plot for the top 100 words by weight in each topic:

```
for i in range(10):
    plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score'])
plt.xlabel('Word rank')
plt.ylabel('Probability')
plt.title('Probabilities of Top 100 Words in each Topic')
```

In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!
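The sharp drop-off in word weights has the same character as a Zipf-like distribution, and the effect is easy to reproduce with invented numbers (the toy vocabulary and 1/rank weights below are illustrations, not values from the fitted model):

```python
import numpy as np

# Zipf-like toy distribution over a 10,000-word vocabulary:
# weight of the rank-r word proportional to 1/r, normalized to sum to 1.
ranks = np.arange(1, 10001)
probs = (1.0 / ranks) / np.sum(1.0 / ranks)

# The few top-ranked words carry far more mass than a much larger tail slice.
top10_mass = probs[:10].sum()
bottom1000_mass = probs[-1000:].sum()
print(top10_mass > 10 * bottom1000_mass)  # True
```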
Next we plot the total weight assigned by each topic to its top 10 words: ``` top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)] ind = np.arange(10) width = 0.5 fig, ax = plt.subplots() ax.bar(ind-(width/2),top_probs,width) ax.set_xticks(ind) plt.xlabel('Topic') plt.ylabel('Probability') plt.title('Total Probability of Top 10 Words in each Topic') plt.xlim(-0.5,9.5) plt.ylim(0,0.15) plt.show() ``` Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary. Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all. # Topic distributions for some example documents As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic. We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition. 
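Before querying the model, it may help to see concretely what such a topic-membership vector looks like. The weights below are invented for illustration (not taken from the fitted model): a length-10 probability vector whose dominant topics fall out of a simple argsort:

```python
import numpy as np

# Hypothetical topic weights for one document (illustration only).
doc_topics = np.array([0.02, 0.05, 0.03, 0.45, 0.30,
                       0.04, 0.03, 0.02, 0.04, 0.02])
print(np.isclose(doc_topics.sum(), 1.0))  # True -- the weights form a distribution

# Topic indices sorted from most to least probable for this document.
top_topics = np.argsort(doc_topics)[::-1]
print(top_topics[:2])  # topics 3 and 4 dominate
```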
Topic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignment variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a _distribution_ over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama:

```
obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]])
pred1 = topic_model.predict(obama, output_type='probability')
pred2 = topic_model.predict(obama, output_type='probability')
print(gl.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]}))
```

To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document:

```
def average_predictions(model, test_document, num_trials=100):
    avg_preds = np.zeros((model.num_topics))
    for i in range(num_trials):
        avg_preds += model.predict(test_document, output_type='probability')[0]
    avg_preds = avg_preds/num_trials
    result = gl.SFrame({'topics':themes, 'average predictions':avg_preds})
    result = result.sort('average predictions', ascending=False)
    return result

print average_predictions(topic_model, obama, 100)
```

__Quiz Question:__ What is the topic most closely associated with the article about former US President George W. Bush? Use the average results from 100 topic predictions.

```
bush = gl.SArray([wiki_docs[int(np.where(wiki['name']=='George W. Bush')[0])]])
print average_predictions(topic_model, bush, 100)
```

__Quiz Question:__ What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results from 100 topic predictions.

```
gerrard = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Steven Gerrard')[0])]])
print average_predictions(topic_model, gerrard, 100)
```

# Comparing LDA to nearest neighbors for document retrieval

So far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations.

In this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment.
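The nearest-neighbor models in this section both use cosine distance. As a reminder of what that metric computes, here is a minimal NumPy sketch on invented three-topic vectors (illustrative numbers only):

```python
import numpy as np

def cosine_distance(u, v):
    # 1 minus cosine similarity, matching the 'cosine' distance option.
    return 1.0 - u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Two toy topic vectors with similar themes are close...
doc_a = np.array([0.7, 0.2, 0.1])
doc_b = np.array([0.6, 0.3, 0.1])
# ...while one concentrated on a different topic is far away.
doc_c = np.array([0.1, 0.1, 0.8])

print(cosine_distance(doc_a, doc_b) < cosine_distance(doc_a, doc_c))  # True
```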
We'll start by creating the LDA topic distribution representation for each document:

```
wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')
```

Next we add the TF-IDF document representations:

```
wiki['word_count'] = gl.text_analytics.count_words(wiki['text'])
wiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count'])
```

For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model:

```
model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
                                           method='brute_force', distance='cosine')
model_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'],
                                            method='brute_force', distance='cosine')
```

Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist:

```
model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
model_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
```

Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents.

With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada.

Our LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. This means that documents can be "close" if they share similar themes, even though they may not share many of the same keywords.
For the article on Paul Krugman, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies. __Quiz Question:__ Using the TF-IDF representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use `mylist.index(value)` to find the index of the first instance of `value` in `mylist`.) __Quiz Question:__ Using the LDA representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use `mylist.index(value)` to find the index of the first instance of `value` in `mylist`.) ``` nn_tfidf = model_tf_idf.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000) nn_lda = model_lda_rep.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000) nn_tfidf[nn_tfidf['reference_label'] == 'Mariano Rivera'] nn_lda[nn_lda['reference_label'] == 'Mariano Rivera'] ``` # Understanding the role of LDA model hyperparameters Finally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic. In the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document "likes" a topic (in the case of alpha) or how much each topic "likes" a word (in the case of gamma). 
In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences "smoother" over topics, and gamma makes the topic preferences "smoother" over words. Our goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model. __Quiz Question:__ What was the value of alpha used to fit our original topic model? ``` topic_model ``` __Quiz Question:__ What was the value of gamma used to fit our original topic model? Remember that GraphLab Create uses "beta" instead of "gamma" to refer to the hyperparameter that influences topic distributions over words. We'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model: - tpm_low_alpha, a model trained with alpha = 1 and default gamma - tpm_high_alpha, a model trained with alpha = 50 and default gamma ``` tpm_low_alpha = gl.load_model('lda_low_alpha') tpm_high_alpha = gl.load_model('lda_high_alpha') ``` ### Changing the hyperparameter alpha Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha. 
``` a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1] b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1] c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1] ind = np.arange(len(a)) width = 0.3 def param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab): fig = plt.figure() ax = fig.add_subplot(111) b1 = ax.bar(ind, a, width, color='lightskyblue') b2 = ax.bar(ind+width, b, width, color='lightcoral') b3 = ax.bar(ind+(2*width), c, width, color='gold') ax.set_xticks(ind+width) ax.set_xticklabels(range(10)) ax.set_ylabel(ylab) ax.set_xlabel(xlab) ax.set_ylim(0,ylim) ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param]) plt.tight_layout() param_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha', xlab='Topics (sorted by weight of top 100 words)',ylab='Topic Probability for Obama Article') ``` Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics. __Quiz Question:__ How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the **low alpha** model? Use the average results from 100 topic predictions. ``` paul = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]]) paul_nn_low = np.sort(tpm_low_alpha.predict(paul,output_type='probability')[0]) paul_nn_low ``` __Quiz Question:__ How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the **high alpha** model? Use the average results from 100 topic predictions. 
```
paul_nn_high = np.sort(tpm_high_alpha.predict(paul,output_type='probability')[0])
paul_nn_high
```

### Changing the hyperparameter gamma

Just as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic.

Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models.

Now we will consider the following two models:

- tpm_low_gamma, a model trained with gamma = 0.02 and default alpha
- tpm_high_gamma, a model trained with gamma = 0.5 and default alpha

```
del tpm_low_alpha
del tpm_high_alpha
tpm_low_gamma = gl.load_model('lda_low_gamma')
tpm_high_gamma = gl.load_model('lda_high_gamma')

a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
b_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
c_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]

a_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
b_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
c_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]

ind = np.arange(len(a))
width = 0.3

param_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma',
               xlab='Topics (sorted by weight of top 100 words)', ylab='Total Probability of Top 100 Words')
param_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma',
               xlab='Topics (sorted by weight of bottom 1000 words)', ylab='Total Probability of Bottom 1000 Words')
```

From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary.

__Quiz Question:__ For each topic of the **low gamma model**, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get\_topics() function from GraphLab Create with the cdf\_cutoff argument).

```
tpm_low_gamma

count = 0
for i, topic in enumerate(themes):
    c = len(tpm_low_gamma.get_topics(topic_ids=[i], cdf_cutoff=0.5, num_words=547462))
    count += c
    print "# of words required for topic - " + topic + " = " + str(c)
count / float(len(themes))
```

__Quiz Question:__ For each topic of the **high gamma model**, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get\_topics() function from GraphLab Create with the cdf\_cutoff argument).

```
count = 0
for i, topic in enumerate(themes):
    c = len(tpm_high_gamma.get_topics(topic_ids=[i], cdf_cutoff=0.5, num_words=547462))
    count += c
    print "# of words required for topic - " + topic + " = " + str(c)
count / float(len(themes))
```

We have now seen how the hyperparameters alpha and gamma influence the characteristics of our LDA topic model, but we haven't said anything about what settings of alpha or gamma are best.
We know that these parameters are responsible for controlling the smoothness of the topic distributions for documents and word distributions for topics, but there's no simple conversion between smoothness of these distributions and quality of the topic model. In reality, there is no universally "best" choice for these parameters. Instead, finding a good topic model requires that we be able to both explore the output (as we did by looking at the topics and checking some topic predictions for documents) and understand the impact of hyperparameter settings (as we have in this section).
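The smoothing role of alpha can also be seen directly, without GraphLab, by sampling document-topic vectors from a symmetric Dirichlet prior - the distribution LDA places over topic proportions. This NumPy-only sketch (alpha values chosen for illustration) shows that high-alpha draws spread their weight much more evenly:

```python
import numpy as np

rng = np.random.RandomState(0)
num_topics = 10

# 2,000 topic-proportion vectors from each symmetric Dirichlet prior.
draws_low = rng.dirichlet([1.0] * num_topics, size=2000)    # low alpha
draws_high = rng.dirichlet([50.0] * num_topics, size=2000)  # high alpha

# Every draw is a valid distribution over topics...
print(np.allclose(draws_low.sum(axis=1), 1.0))  # True

# ...but low alpha lets one topic dominate each draw, while high alpha
# keeps the largest weight close to the uniform value 1/num_topics.
print(draws_low.max(axis=1).mean() > draws_high.max(axis=1).mean())  # True
```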
```
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
```

# Natural Language Processing (NLP) with machine learning (ML)

**Preprocessing of textual data**

### Download NLTK data - we only need to do this once

Downloading everything via the GUI can take a while, since the full collection of data packages is about 3.3 GB. Uncomment the *nltk.download()* line to download all packages; it opens a download window in which you have to confirm the download with a click.

```
import nltk

# nltk.download()
# nltk.download('punkt')
# nltk.download('stopwords')
# nltk.download('averaged_perceptron_tagger')  # Part-of-Speech Tagging (POS)
# nltk.download('tagsets')
# nltk.download('maxent_ne_chunker')  # Named Entity Recognition (NER)
# nltk.download('words')
```

### Tokenization

```
import string
import re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

# load data
filename = 'data/metamorphosis_clean.txt'
file = open(filename, 'rt')
text = file.read()
file.close()

# split into words
tokens = word_tokenize(text)

# convert to lower case
tokens = [w.lower() for w in tokens]

# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(string.punctuation))

# remove punctuation from each word
stripped = [re_punc.sub('', w) for w in tokens]

# remove remaining tokens that are not alphabetic
words = [word for word in stripped if word.isalpha()]

# filter out stop words
stop_words = set(stopwords.words('english'))
words = [w for w in words if not w in stop_words]

print(words[:100])
```
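The same cleaning steps can be sketched without NLTK or the data file, using only the standard library. The sentence and the four-word stop list below are toy stand-ins for the Kafka text and `stopwords.words('english')`, and plain `split()` stands in for `word_tokenize` (which handles punctuation more carefully):

```python
import re
import string

text = "One morning, when Gregor Samsa woke from troubled dreams, he found himself transformed."

# naive whitespace tokenization + lowercasing
tokens = [w.lower() for w in text.split()]

# strip punctuation, keep only alphabetic tokens
re_punc = re.compile('[%s]' % re.escape(string.punctuation))
stripped = [re_punc.sub('', w) for w in tokens]
words = [w for w in stripped if w.isalpha()]

# filter out stop words (toy subset of NLTK's English list)
stop_words = {'when', 'from', 'he', 'himself'}
words = [w for w in words if w not in stop_words]

print(words)
```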
### TF-IDF with TfidfVectorizer

```
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

dataset = [
    "I enjoy reading about Machine Learning and Machine Learning is my PhD subject",
    "I would enjoy a walk in the park",
    "I was reading in the library"
]

vectorizer = TfidfVectorizer(use_idf=True)
tfIdf = vectorizer.fit_transform(dataset)

# get_feature_names() was removed in scikit-learn 1.2; use get_feature_names_out()
df = pd.DataFrame(tfIdf[0].T.todense(), index=vectorizer.get_feature_names_out(), columns=["TF-IDF"])
df = df.sort_values('TF-IDF', ascending=False)
print(df.head(25))
```

### TF-IDF with TfidfTransformer

```
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer

transformer = TfidfTransformer(use_idf=True)
countVectorizer = CountVectorizer()
wordCount = countVectorizer.fit_transform(dataset)
newTfIdf = transformer.fit_transform(wordCount)

df = pd.DataFrame(newTfIdf[0].T.todense(), index=countVectorizer.get_feature_names_out(), columns=["TF-IDF"])
df = df.sort_values('TF-IDF', ascending=False)
print(df.head(25))
```

### Cosine similarity

URL: https://stackoverflow.com/questions/12118720/python-tf-idf-cosine-to-find-document-similarity

```
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

# twenty dataset
twenty = fetch_20newsgroups()
tfidf = TfidfVectorizer().fit_transform(twenty.data)

# cosine similarity of document 0 to all documents
cosine_similarities = linear_kernel(tfidf[0:1], tfidf).flatten()

# top related documents
related_docs_indices = cosine_similarities.argsort()[:-5:-1]
print(related_docs_indices)
print(cosine_similarities[related_docs_indices])

# print the query document and one match to check
print(twenty.data[0])
print(twenty.data[958])
```

### Text classification

URL: https://towardsdatascience.com/machine-learning-nlp-text-classification-using-scikit-learn-python-and-nltk-c52b92a7c73a

```
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import Pipeline

# twenty dataset
twenty_train = fetch_20newsgroups(subset='train', shuffle=True)
twenty_test = fetch_20newsgroups(subset='test', shuffle=True)
print(twenty_train.target_names)
# print("\n".join(twenty_train.data[0].split("\n")[:3]))
```

### Multinomial Naive Bayes

```
from sklearn.naive_bayes import MultinomialNB

# Bag-of-words
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
X_test_counts = count_vect.transform(twenty_test.data)

# TF-IDF
transformer = TfidfTransformer()
X_train_tfidf = transformer.fit_transform(X_train_counts)
X_test_tfidf = transformer.transform(X_test_counts)

# Naive Bayes (NB) for text classification
clf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)

# Performance of the model
predicted = clf.predict(X_test_tfidf)
np.mean(predicted == twenty_test.target)
```

### Pipeline

The Multinomial Naive Bayes code above can be written more elegantly with a scikit-learn pipeline. The code becomes shorter and less error-prone.

```
text_clf = Pipeline([('vect', CountVectorizer(stop_words='english')),
                     ('tfidf', TfidfTransformer()),
                     ('clf', MultinomialNB()),
                     ])
text_clf = text_clf.fit(twenty_train.data, twenty_train.target)

# Performance of the model
predicted = text_clf.predict(twenty_test.data)
np.mean(predicted == twenty_test.target)
```

### GridSearchCV with Naive Bayes

We can optimize the pipeline further with hyper-parameter tuning; this may yield better classification results.
```
from sklearn.model_selection import GridSearchCV

parameters = {'vect__ngram_range': [(1, 1), (1, 2)],
              'tfidf__use_idf': (True, False),
              'clf__alpha': (1e-2, 1e-3),
              }
gs_clf = GridSearchCV(text_clf, parameters, n_jobs=-1)
gs_clf = gs_clf.fit(twenty_train.data, twenty_train.target)

print(gs_clf.best_score_)
print(gs_clf.best_params_)
```

### SGDClassifier

We now try another classifier, SGDClassifier, instead of the previous Multinomial Naive Bayes. Let us see whether this classifier performs better, both with and without optimization.

```
from sklearn.linear_model import SGDClassifier

text_clf_svm = Pipeline([('vect', CountVectorizer()),
                         ('tfidf', TfidfTransformer()),
                         ('clf-svm', SGDClassifier(loss='hinge', penalty='l2',
                                                   alpha=1e-3, random_state=42)),
                         ])
text_clf_svm = text_clf_svm.fit(twenty_train.data, twenty_train.target)

# Performance of the model
predicted_svm = text_clf_svm.predict(twenty_test.data)
np.mean(predicted_svm == twenty_test.target)
```

### GridSearchCV with SVM

Here we try a further classifier, a linear SVM (SGDClassifier with hinge loss), combined with grid-search optimization.

```
from sklearn.model_selection import GridSearchCV

parameters_svm = {'vect__ngram_range': [(1, 1), (1, 2)],
                  'tfidf__use_idf': (True, False),
                  'clf-svm__alpha': (1e-2, 1e-3),
                  }
gs_clf_svm = GridSearchCV(text_clf_svm, parameters_svm, n_jobs=-1)
gs_clf_svm = gs_clf_svm.fit(twenty_train.data, twenty_train.target)

print(gs_clf_svm.best_score_)
print(gs_clf_svm.best_params_)
```

### Stemming

Stemming can improve classifier results, too. Let us see whether it helps in our example with Multinomial Naive Bayes.
``` from nltk.stem.snowball import SnowballStemmer stemmer = SnowballStemmer("english", ignore_stopwords=True) class StemmedCountVectorizer(CountVectorizer): def build_analyzer(self): analyzer = super(StemmedCountVectorizer, self).build_analyzer() return lambda doc: ([stemmer.stem(w) for w in analyzer(doc)]) stemmed_count_vect = StemmedCountVectorizer(stop_words='english') text_mnb_stemmed = Pipeline([('vect', stemmed_count_vect), ('tfidf', TfidfTransformer()), ('mnb', MultinomialNB(fit_prior=False))]) text_mnb_stemmed = text_mnb_stemmed.fit(twenty_train.data, twenty_train.target) predicted_mnb_stemmed = text_mnb_stemmed.predict(twenty_test.data) np.mean(predicted_mnb_stemmed == twenty_test.target) ```
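What `TfidfVectorizer` computes above can be reproduced by hand. With scikit-learn's defaults (`smooth_idf=True`, `norm='l2'`), each raw term count is multiplied by a smoothed inverse document frequency, idf(t) = ln((1 + n) / (1 + df(t))) + 1, and each document vector is then L2-normalized. A minimal pure-Python sketch (tokenization here is plain whitespace splitting, so the numbers will not match sklearn's token pattern exactly):

```python
import math

def tfidf(corpus):
    """Smoothed TF-IDF with L2 normalization, mirroring sklearn's default formula."""
    docs = [doc.lower().split() for doc in corpus]
    vocab = sorted({t for doc in docs for t in doc})
    n = len(docs)
    # smoothed idf: ln((1 + n) / (1 + df)) + 1
    idf = {t: math.log((1 + n) / (1 + sum(t in d for d in docs))) + 1
           for t in vocab}
    rows = []
    for doc in docs:
        weights = [doc.count(t) * idf[t] for t in vocab]   # tf * idf
        norm = math.sqrt(sum(w * w for w in weights)) or 1.0
        rows.append([w / norm for w in weights])           # L2 normalization
    return vocab, rows

vocab, rows = tfidf(["the cat sat", "the dog sat", "the cat ran"])
# "the" occurs in every document, so it gets the smallest weight in each row
```

This makes the intuition behind the sorted `df.head(25)` output above concrete: terms occurring in every document are down-weighted, rare terms are boosted.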
```
library(e1071)
library(ggplot2)

# reading data
svm_data <- read.csv("../../datasets/knn.csv")
head(svm_data)

# building SVM model
svm_model <- svm(level~., data = svm_data, kernel = "linear")
svm_model

# visualizing the model
options(repr.plot.width=6, repr.plot.height=5)
plot(svm_model, svm_data)

# accuracy of the model
addmargins(table(predict(svm_model, svm_data), svm_data$level))
# 100% accurate

# Slack variable
svm_model$epsilon

# The regularization parameter C controls the margin width: it is inversely
# proportional to the margin, so larger C gives a smaller margin.
svm_model$cost  # a smaller value is acceptable

# Comparing the SVM model for different values of C
svm_c_10000 <- svm(level~., data = svm_data, kernel = "linear", cost = 10000)
plot(svm_c_10000, svm_data)

svm_c_0.1 <- svm(level~., data = svm_data, kernel = "linear", cost = 0.1)
plot(svm_c_0.1, svm_data)

# Best value of C - cross-validation
tune_c <- tune(svm, level~., data = svm_data, kernel = "linear",
               ranges = list(cost = c(0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000)))
tune_c$best.model
# Best value of C = 0.1

# Gamma is a kernel hyper-parameter; its best value can also be found with CV.
tune_gamma <- tune(svm, level~., data = svm_data, kernel = "linear",
                   ranges = list(gamma = c(0.001, 0.01, 0.1, 1, 3, 5, 7, 10)))
tune_gamma$best.model
# Best value of gamma = 0.001
plot(tune_gamma)
# No misclassified values for any of the provided gamma values.
```

# Non-linear kernel

```
svm_data <- read.csv("../../datasets/svm.csv")
head(svm_data)
levels(svm_data$y)

# visualizing the data
Category <- svm_data$y
ggplot() +
  geom_point(aes(svm_data$X2, svm_data$X1, col = Category), cex = 2) +
  xlab("X2") + ylab("X1") +
  ggtitle("Category prediction using X1 and X2") +
  theme_bw()

# building the non-linear SVM model - using tune for the degree of the polynomial
tune_svm <- tune(svm, y~., data = svm_data, kernel = "polynomial",
                 ranges = list(degree = c(1, 2, 3, 4, 5)))
tune_svm$best.model
# So a polynomial of degree 2 is best suited to classify the given data

svm_model <- svm(y~., data = svm_data, kernel = "polynomial", degree = 2)
plot(svm_model, svm_data)
```

# Multiple class classification using SVM

```
iris_df <- subset.data.frame(iris, select = c("Petal.Length", "Petal.Width", "Species"))
svm_model <- svm(Species~., data = iris_df, kernel = "radial")
plot(svm_model, iris_df)
```
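The role of the cost parameter C discussed above can also be seen directly in the primal objective a linear SVM optimizes: minimize ||w||²/2 + C·Σ hinge(yᵢ(w·xᵢ + b)). A minimal Python sketch, using plain subgradient descent on a toy separable dataset (purely illustrative; this is not e1071's solver, and the dataset and parametrization are made up for the example):

```python
def train_linear_svm(X, y, C=1.0, lr=0.1, epochs=200):
    """Subgradient descent on (1/(2C))*||w||^2 + mean hinge loss."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw = [wi / (C * n) for wi in w]   # regularization subgradient
        gb = 0.0
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                # point violates the margin
                for j in range(d):
                    gw[j] -= yi * xi[j] / n
                gb -= yi / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

# toy linearly separable data, labels in {-1, +1}
X = [(2, 2), (3, 1), (2, 3), (3, 3), (-2, -1), (-1, -2), (-3, -1), (-2, -3)]
y = [1, 1, 1, 1, -1, -1, -1, -1]
w, b = train_linear_svm(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1 for xi in X]
```

A small C lets the regularizer shrink w, widening the margin at the cost of more hinge violations; a large C does the opposite, which is exactly the behavior the `cost = 10000` vs `cost = 0.1` plots above visualize.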
``` import numpy as np import matplotlib.pyplot as plt import time import seaborn as sns import urllib.request from tqdm import tqdm import pandas as pd %matplotlib inline ``` # Quadratic Assignment Problem ## 1. Read data Popular QAP with loss function minimums: <br> - Nug12 12 578 (OPT) (12,7,9,3,4,8,11,1,5,6,10,2) - Nug14 14 1014 (OPT) (9,8,13,2,1,11,7,14,3,4,12,5,6,10) - Nug15 15 1150 (OPT) (1,2,13,8,9,4,3,14,7,11,10,15,6,5,12) - Nug16a 16 1610 (OPT) (9,14,2,15,16,3,10,12,8,11,6,5,7,1,4,13) - Nug16b 16 1240 (OPT) (16,12,13,8,4,2,9,11,15,10,7,3,14,6,1,5) - Nug17 17 1732 (OPT) (16,15,2,14,9,11,8,12,10,3,4,1,7,6,13,17,5) - Nug18 18 1930 (OPT) (10,3,14,2,18,6,7,12,15,4,5,1,11,8,17,13,9,16) - Nug20 20 2570 (OPT) (18,14,10,3,9,4,2,12,11,16,19,15,20,8,13,17,5,7,1,6) - Nug21 21 2438 (OPT) (4,21,3,9,13,2,5,14,18,11,16,10,6,15,20,19,8,7,1,12,17) - Nug22 22 3596 (OPT) (2,21,9,10,7,3,1,19,8,20,17,5,13,6,12,16,11,22,18,14,15) - Nug24 24 3488 (OPT) (17,8,11,23,4,20,15,19,22,18,3,14,1,10,7,9,16,21,24,12,6,13,5,2) - Nug25 25 3744 (OPT) (5,11,20,15,22,2,25,8,9,1,18,16,3,6,19,24,21,14,7,10,17,12,4,23,13) * - Nug27 27 5234 (OPT) (23,18,3,1,27,17,5,12,7,15,4,26,8,19,20,2,24,21,14,10,9,13,22,25,6,16,11) * - Nug28 28 5166 (OPT) (18,21,9,1,28,20,11,3,13,12,10,19,14,22,15,2,25,16,4,23,7,17,24,26,5,27,8,6) * - Nug30 30 6124 (OPT) (5 12 6 13 2 21 26 24 10 9 29 28 17 1 8 7 19 25 23 22 11 16 30 4 15 18 27 3 14 20) ``` def get_nug(no): QAP_INSTANCE_URL = f'http://anjos.mgi.polymtl.ca/qaplib/data.d/nug{no}.dat' qap_instance_file = urllib.request.urlopen(QAP_INSTANCE_URL) line = qap_instance_file.readline() n = int(line.decode()[:-1].split()[0]) A = np.empty((n, n)) qap_instance_file.readline() for i in range(n): line = qap_instance_file.readline() A[i, :] = list(map(int, line.decode()[:-1].split())) B = np.empty((n, n)) qap_instance_file.readline() for i in range(n): line = qap_instance_file.readline() B[i, :] = list(map(int, line.decode()[:-1].split())) return n, A, B n, A, B = get_nug(12) 
print('Problem size: %d' % n) print('Flow matrix:\n', A) print('Distance matrix:\n', B) ``` ## 2. Objective function ``` def qap_objective_function(p): s = 0.0 for i in range(n): s += (A[i, :] * B[p, p[i]]).sum() return s p = [11, 6, 8, 2, 3, 7, 10, 0, 4, 5, 9, 1] print(qap_objective_function(p), p) ``` # 3. Random Sampling ``` %%time T = 1000000 permutations = np.empty((T, n), dtype=np.int64) costs = np.zeros(T) for i in tqdm(range(T)): permutations[i, :] = np.random.permutation(n) costs[i] = qap_objective_function(permutations[i, :]) p = permutations[costs.argmin(), :] print(qap_objective_function(p), p) plt.figure() plt.hist(costs, bins=100, edgecolor='black') plt.show() print(costs.mean(), costs.std()) ``` ## 4. Simulated Annealing ``` def qap_objective_function(p): s = 0.0 for i in range(n): s += (A[i, :] * B[p, p[i]]).sum() return s q = p.copy() i, j = 4, 8 q[i], q[j] = q[j], q[i] qap_objective_function(q) qap_objective_function(p) z = q.copy() i, j = 2, 3 z[i], z[j] = z[j], z[i] qap_objective_function(z) def delta(p, r, s, A, B): n = len(p) ans = 0 q = p.copy() q[r], q[s] = q[s], q[r] for i in range(n): for j in range(n): ans += A[i][j] * B[p[i], p[j]] - A[i][j] * B[q[i], q[j]] return -ans def delta2(p, r, s, A, B): n = len(p) ans = 0 for k in range(n): if k == r or k == s: continue ans += (A[s, k] - A[r, k]) * (B[p[s], p[k]] - B[p[r], p[k]]) return -2 * ans r, s = 4, 8 u, v = 2, 6 print(delta2(p, r, s, A, B)) print(delta2(q, u, v, A, B)) for u in range(n): for v in range(n): a = delta2(p, r, s, A, B) b = delta2(q, u, v, A, B) c = delta2(p, u, v, A, B) + 2 * (A[r, u] - A[r, v] + A[s, v] - A[s, u]) * (B[q[s], q[u]] - B[q[s], q[v]] + B[q[r], q[v]] - B[q[r], q[u]]) if u in [r, s] or v in [r, s]: 2 + 2 else: if b != c: print(u, v) print(a) print(b) print(c) print() class SA: def __init__(self, flow_m, dist_m, QAP_name='', T=10000, radius=1, alpha=1.0, cost_dist=False): self.T = T self.radius = radius self.alpha = alpha n1, n2 = flow_m.shape n3, n4 = dist_m.shape 
assert(n1 == n2 == n3 == n4) self.FLOW = flow_m self.DIST = dist_m self.n = n1 self.last_perm = np.random.permutation(n1) self.act_perm = self.last_perm.copy() self.r, self.s = 0, 1 self.act_perm[0], self.act_perm[1] = self.act_perm[1], self.act_perm[0] self.p_cost = self.qap_objective_function(self.act_perm) self.costs = np.zeros(T) self.all_perms = [self.act_perm] self.all_jumps = [] self.QAP_name = QAP_name self.cost_and_perms_dist = [] self.cost_dist = cost_dist self.last_swap_costs = {} self.act_swap_costs = {} for u in range(self.n): for v in range(u, self.n): self.last_swap_costs[(u, v)] = self.delta(self.last_perm, u, v) self.act_swap_costs[(u, v)] = self.delta(self.act_perm, u, v) def delta(self, p, r, s): ''' Change of Cost after swap(r, s) ''' n = self.n A, B = self.FLOW, self.DIST ans = 0 for k in range(n): if k == r or k == s: continue ans += (A[s, k] - A[r, k]) * (B[p[s], p[k]] - B[p[r], p[k]]) return -2 * ans def qap_objective_function(self, p): s = 0.0 for i in range(self.n): s += (self.FLOW[i, :] * self.DIST[p, p[i]]).sum() return s def random_neighbor(self): q = self.act_perm.copy() for r in range(self.radius): i, j = np.random.choice(self.n, 2, replace=False) q[i], q[j] = q[j], q[i] return q def run(self): for t in tqdm(range(self.T), desc='Simulated Annealing', position=0): good_jump, random_jump = 0, 0 q = self.random_neighbor() q_cost = self.qap_objective_function(q) if(q_cost < self.p_cost): if self.cost_dist: self.cost_and_perms_dist.append((self.p_cost - q_cost)) self.act_perm, self.p_cost = q, q_cost self.all_perms.append(self.act_perm) good_jump = 1 elif(np.random.rand() < np.exp(-self.alpha * (q_cost - self.p_cost) * t / self.T)): self.act_perm, self.p_cost = q, q_cost self.all_perms.append(self.act_perm) random_jump = 1 self.costs[t] = self.p_cost self.all_jumps.append((good_jump, random_jump)) def update_swap_costs(self): cost = 0 r, s = self.r, self.s A, B = self.FLOW, self.DIST q = self.act_perm self.last_swap_costs = 
self.act_swap_costs
        self.act_swap_costs = {}
        for u in range(self.n):
            for v in range(u, self.n):
                if u in [r, s] or v in [r, s]:
                    # calculate the (u, v) swap cost in O(n)
                    self.act_swap_costs[(u, v)] = self.delta(self.act_perm, u, v)
                else:
                    # update the (u, v) swap cost in O(1)
                    self.act_swap_costs[(u, v)] = self.last_swap_costs[u, v] \
                        + 2 * (A[r, u] - A[r, v] + A[s, v] - A[s, u]) \
                        * (B[q[s], q[u]] - B[q[s], q[v]] + B[q[r], q[v]] - B[q[r], q[u]])

    def run_faster(self):
        '''
        We make only single swaps per iteration.
        Instead of calculating qap_objective_function in every iteration,
        we recalculate the cost of swapping every pair only when we change state.
        qap_objective_function -> O(n^2) every iteration
        new approach           -> O(n^2) only when we change state
        '''
        for t in tqdm(range(self.T), desc='Simulated Annealing', position=0):
            good_jump, random_jump = 0, 0
            u, v = np.random.choice(self.n, 2, replace=False)
            q = self.act_perm.copy()
            q[u], q[v] = q[v], q[u]
            q_cost = self.act_swap_costs[(min(u, v), max(u, v))] + self.p_cost
            # if q_cost != self.qap_objective_function(q):
            #     print('IMPLEMENTATION DOES NOT WORK!!!')
            if(q_cost < self.p_cost):
                self.act_perm, self.p_cost = q, q_cost
                self.all_perms.append(self.act_perm)
                self.r, self.s = u, v
                self.last_perm = self.act_perm
                self.update_swap_costs()
                good_jump = 1
            elif(np.random.rand() < np.exp(-self.alpha * (q_cost - self.p_cost) * t / self.T)):
                self.act_perm, self.p_cost = q, q_cost
                self.all_perms.append(self.act_perm)
                self.r, self.s = u, v
                self.last_perm = self.act_perm
                self.update_swap_costs()
                random_jump = 1
            self.costs[t] = self.p_cost
            self.all_jumps.append((good_jump, random_jump))

    def plot_cost(self):
        plt.figure(figsize=(15, 5))
        plt.plot(self.costs)
        plt.title('Cost function ' + self.QAP_name)
        plt.show()

    def plot_hist(self, bins):
        plt.figure(figsize=(15, 5))
        plt.hist(self.costs, bins=bins, edgecolor='black')
        plt.title('Cost function histogram ' + self.QAP_name)
        plt.show()

    def plot_jumps(self):
        x = np.array(self.all_jumps).reshape(-1, 50)
        x = x.sum(axis=1)
        f, ax = \
plt.subplots(2, 1, figsize=(15, 10))
        ax[0].bar(range(x.shape[0]), x, color='green')
        ax[1].bar(range(x.shape[0]), x, color='red')
        ax[0].set_title('Successes')
        ax[1].set_title('Accepted failures')
        plt.show()

    def plot_all(self, bins):
        self.plot_cost()
        self.plot_hist(bins=bins)
        self.plot_jumps()
```

### 4.1 Basic implementation

```
%%time
n, A, B = get_nug(12)
simulation = SA(flow_m=A, dist_m=B, QAP_name='nug12', T=500000, cost_dist=True)
simulation.run()
simulation.costs.min()
simulation.plot_all(bins=50)
```

### 4.2 Improved implementation based on https://arxiv.org/pdf/1111.1353.pdf

```
%%time
n, A, B = get_nug(12)
simulation = SA(flow_m=A, dist_m=B, QAP_name='nug12', T=500000)
simulation.run_faster()
simulation.costs.min()
```

## 5. Parameter tuning

### 5.1 Radius

```
%%time
scores_r = []
for r in range(1, 20):
    print(f'r: {r}')
    n, A, B = get_nug(12)
    simulation = SA(flow_m=A, dist_m=B, QAP_name='nug12', T=200000, radius=r, cost_dist=True)
    simulation.run()
    scores_r.append((r, simulation.costs, simulation.cost_and_perms_dist))

scores_r = np.array(scores_r)
plt.plot(scores_r[:, 0], list(map(lambda x: min(x), scores_r[:, 1])))
plt.xlabel('Distance between permutations')
plt.ylabel('Min cost')

# save scores
# pd.DataFrame(scores_r).to_csv('scores_for_different_radius.csv')
# scores_r = pd.read_csv('scores_for_different_radius.csv', names=['r', 'cost'],
#                        header=None).reset_index(drop=True).iloc[1:]

# distances between cost_f(perms) in successes
f, ax = plt.subplots(4, 5, figsize=(17, 15))
for r in range(19):
    ax[r // 5, r % 5].plot(scores_r[:, 2][r])
    ax[r // 5, r % 5].set_title(f'r: {r}')
```

### 5.2 Alpha

```
%%time
scores_a = []
alphas = [0.01, 0.02, 0.03, 0.05, 0.1, 0.2, 0.3, 0.5, 0.7, 1.01, 1.03, 1.05,
          1.1, 1.2, 1.4, 1.5, 1.7, 2, 4, 5, 10]
for a in alphas:
    print(f'alpha: {a}')
    n, A, B = get_nug(12)
    simulation = SA(flow_m=A, dist_m=B, QAP_name='nug12', T=200000, alpha=a)
    simulation.run_faster()
    scores_a.append(simulation.costs)

# save scores
# pd.DataFrame(scores_a).to_csv('scores_for_different_alpha.csv')

scores_a = np.array(scores_a)
plt.figure(figsize=(15, 5))
plt.xticks(np.linspace(0, 10, 20))
plt.plot(alphas, scores_a.min(axis=1))
for i, a in enumerate(alphas):
    print(a, ' -> ', scores_a.min(axis=1)[i])
```

## 6. SA on different QAP instances

```
from urllib.error import HTTPError

nugs = []
for i in range(12, 30):
    try:
        x = get_nug(i)
        nugs.append(x)
        print(f'Nug {i} appended!')
    except HTTPError:
        print(f"Nug {i} wasn't found :c")

%%time
SCORES = []
for n, A, B in nugs:
    print(f'Running nug: {n}')
    simulation = SA(flow_m=A, dist_m=B, QAP_name=f'Nug {n}', T=1000000)
    simulation.run_faster()
    SCORES.append(simulation.costs)
    save_costs = pd.DataFrame(simulation.costs)
    save_costs.to_csv(f'scores_for_nug{n}.csv')

SCORES = np.array(SCORES)
n, m = SCORES.shape
n, m

f, ax = plt.subplots(4, 3, figsize=(17, 15))
for i in range(n):
    ax[i // 3, i % 3].plot(SCORES[i])
    ax[i // 3, i % 3].set_title(f'nug{nugs[i][0]}')

print('Min cost:')
for i, s in enumerate(SCORES.min(axis=1)):
    print(f'nug{nugs[i][0]}: {s}')
```
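The O(n) swap-cost update used by the SA class relies on the identity that, for symmetric flow and distance matrices with zero diagonal, swapping positions r and s changes the objective by −2·Σ_{k≠r,s}(A[s,k]−A[r,k])(B[p[s],p[k]]−B[p[r],p[k]]). A small NumPy check of this identity against a full recomputation, on a random symmetric instance (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(A, B, p):
    """Full O(n^2) QAP objective: sum_ij A[i,j] * B[p[i], p[j]]."""
    return sum((A[i, :] * B[p[i], p]).sum() for i in range(len(p)))

def delta_swap(A, B, p, r, s):
    """O(n) change in cost after swapping positions r and s."""
    ks = [k for k in range(len(p)) if k not in (r, s)]
    return -2 * sum((A[s, k] - A[r, k]) * (B[p[s], p[k]] - B[p[r], p[k]])
                    for k in ks)

# random symmetric instance with zero diagonal (as in the nug problems)
n = 8
A = rng.integers(0, 10, (n, n)); A = A + A.T; np.fill_diagonal(A, 0)
B = rng.integers(0, 10, (n, n)); B = B + B.T; np.fill_diagonal(B, 0)
p = rng.permutation(n)

r, s = 2, 5
q = p.copy(); q[r], q[s] = q[s], q[r]
assert delta_swap(A, B, p, r, s) == cost(A, B, q) - cost(A, B, p)
```

This is exactly why `run_faster` only pays O(n²) when it accepts a move: recomputing the full objective per candidate swap is replaced by a table of precomputed deltas.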
``` from sympy import * EI, a, q = var("EI, a, q") pprint("\nFEM-Solution:") # 1: Stiffness Matrices: # Element 1 l = 2*a l2 = l*l l3 = l*l*l K = EI/l3 * Matrix( [ [ 4*l2 , -6*l , 2*l2 , 6*l , 0 , 0 ], [ -6*l , 12 , -6*l , -12 , 0 , 0 ], [ 2*l2 , -6*l , 4*l2 , 6*l , 0 , 0 ], [ 6*l , -12 , 6*l , 12 , 0 , 0 ], [ 0 , 0 , 0 , 0 , 0 , 0 ], [ 0 , 0 , 0 , 0 , 0 , 0 ], ] ) # Element 2 l = a l2 = l*l l3 = l*l*l K += EI/l3 * Matrix( [ [ 0 , 0 , 0 , 0 , 0 , 0 ], [ 0 , 0 , 0 , 0 , 0 , 0 ], [ 0 , 0 , 4*l2 , -6*l , 2*l2 , 6*l ], [ 0 , 0 , -6*l , 12 , -6*l , -12 ], [ 0 , 0 , 2*l2 , -6*l , 4*l2 , 6*l ], [ 0 , 0 , 6*l , -12 , 6*l , 12 ], ] ) # 2: BCs: p0,w0,p1,w1,p2,w2 = var("ψ₀,w₀,ψ₁,w₁,ψ₂,w₂") M0,F0,M1,F1,M2,F2 = var("M₀,F₀,M₁,F₁,M₂,F₂") Mq1, Fq1 = -q/12*a*a, q/2*a Mq2, Fq2 = -Mq1, Fq1 # 0 1 2 # qqqqqqqqqqqqqqqq # |-------------------------A--------------- u = Matrix([ 0,0, p1,0, p2,w2 ] ) f = Matrix([ M0,F0, Mq1,Fq1+F1, Mq2,Fq2 ] ) unks = [ M0,F0, p1,F1, p2,w2 ] # 3: Solution: eq = Eq(K*u, f) sol = solve(eq, unks) pprint(sol) pprint("\nMinimum-Total-Potential-Energy-Principle-Solution:") # 1: Ansatz: a0, a1, a2, a3 = var("a0, a1, a2, a3") b0, b1, b2 = var("b0, b1, b2") order = 2 if (order == 2): # w2 has order 2 b3, b4 = 0, 0 else: # w2 has order 4 b3, b4 = var("b3, b4") x1, x2 = var("x1, x2") w1 = a0 + a1*x1 + a2*x1**2 + a3*x1**3 w1p = diff(w1, x1) w1pp = diff(w1p, x1) w2 = b0 + b1*x2 + b2*x2**2 + b3*x2**3 + b4*x2**4 w2p = diff(w2, x2) w2pp = diff(w2p, x2) pprint("\nw1 and w2:") pprint(w1) pprint(w2) # 2: Using BCs: pprint("\nElimination of a0, a1, a2, a3, b0 using BCs:") # w1(0)=0 e1 = Eq(w1.subs(x1, 0)) # w1'(0)=0 e2 = Eq(w1p.subs(x1, 0)) # w1(2l)=0 e3 = Eq(w1.subs(x1, 2*a)) # w2(0)=0 e4 = Eq(w2.subs(x2, 0)) # w1p(2a)=w2p(0) e5 = Eq(w1p.subs(x1, 2*a), w2p.subs(x2, 0)) eqns, unks = [e1, e2, e3, e4, e5], [a0, a1, a2, a3, b0] sol = solve(eqns, unks) pprint(sol) sub_list=[ (a0, sol[a0]), (a1, sol[a1]), (a2, sol[a2]), (a3, sol[a3]), (b0, sol[b0]), ] pprint("\nw1 and w2:") w1 = 
w1.subs(sub_list) w2 = w2.subs(sub_list) pprint(w1) pprint(w2) pprint("\nw1'' and w2'':") w1pp = w1pp.subs(sub_list) w2pp = w2pp.subs(sub_list) pprint(w1pp) pprint(w2pp) # 3: Using Principle: pprint("\nU1, U2, Uq:") i1 = w1pp*w1pp I1 = integrate(i1, x1) I1 = I1.subs(x1,2*a) - I1.subs(x1,0) U1 = EI*I1/2 pprint(U1) i2 = w2pp*w2pp I2 = integrate(i2, x2) I2 = I2.subs(x2,a) - I2.subs(x2,0) U2 = EI*I2/2 pprint(U2) i2 = q*w2 I2 = integrate(i2, x2) I2 = I2.subs(x2,a) - I2.subs(x2,0) Uq = I2 pprint(Uq) pprint("\nParameters for U1 + U2 - Uq = Min:") U = U1 + U2 - Uq e1 = Eq(diff(U, b1)) e2 = Eq(diff(U, b2)) if (order == 2): eqns = [e1, e2] unks = [b1, b2] sol = solve(eqns, unks) sub_list=[ (b1, sol[b1]), (b2, sol[b2]), ] w2 = w2.subs(sub_list) else: e3 = Eq(diff(U, b3)) e4 = Eq(diff(U, b4)) eqns = [e1, e2, e3, e4] unks = [b1, b2, b3, b4] sol = solve(eqns, unks) sub_list=[ (b1, sol[b1]), (b2, sol[b2]), (b3, sol[b3]), (b4, sol[b4]), ] w2 = w2.subs(sub_list) pprint(sol) pprint("\nw2:") pprint(w2) pprint("\nw2(a):") w2 = w2.subs(x2, a) pprint(w2) # FEM-Solution: # ⎧ 2 4 3 3 ⎫ # ⎪ 3⋅a⋅q -11⋅a⋅q -a ⋅q 3⋅a ⋅q -a ⋅q -5⋅a ⋅q ⎪ # ⎨F₀: ─────, F₁: ────────, M₀: ──────, w₂: ──────, ψ₁: ──────, ψ₂: ────────⎬ # ⎪ 8 8 4 8⋅EI 4⋅EI 12⋅EI ⎪ # ⎩ ⎭ # # 2. Minimum-Total-Potential-Energy-Principle-Solution: # # w1 and w2: # 2 3 # a₀ + a₁⋅x₁ + a₂⋅x₁ + a₃⋅x₁ # 2 # b₀ + b₁⋅x₂ + b₂⋅x₂ # # Elimination of a0, a1, a2, a3, b0 using BCs: # ⎧ -b₁ b₁ ⎫ # ⎪a₀: 0, a₁: 0, a₂: ────, a₃: ────, b₀: 0⎪ # ⎨ 2⋅a 2 ⎬ # ⎪ 4⋅a ⎪ # ⎩ ⎭ # # w1 and w2: # 2 3 # b₁⋅x₁ b₁⋅x₁ # - ────── + ────── # 2⋅a 2 # 4⋅a # 2 # b₁⋅x₂ + b₂⋅x₂ # # w1'' and w2'': # b₁ 3⋅b₁⋅x₁ # - ── + ─────── # a 2 # 2⋅a # 2⋅b₂ # # U1, U2, Uq: # 2 # EI⋅b₁ # ────── # a # 2 # 2⋅EI⋅a⋅b₂ # 3 2 # a ⋅b₂⋅q a ⋅b₁⋅q # ─────── + ─────── # 3 2 # # Parameters for U1 + U2 - Uq = Min: # ⎧ 3 2 ⎫ # ⎪ a ⋅q a ⋅q⎪ # ⎨b₁: ────, b₂: ─────⎬ # ⎪ 4⋅EI 12⋅EI⎪ # ⎩ ⎭ # # w2: # 3 2 2 # a ⋅q⋅x₂ a ⋅q⋅x₂ # ─────── + ──────── # 4⋅EI 12⋅EI # # w2(a): # 4 # a ⋅q # ──── # 3⋅EI ```
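The symbolic FEM solution above (ψ₁ = −a³q/4EI, ψ₂ = −5a³q/12EI, w₂ = 3a³q/8EI) can be cross-checked numerically: set a = q = EI = 1, assemble the same two-element stiffness matrix, and solve the reduced system for the free DOFs only. A NumPy sketch (the element matrix is copied from the cell above; `beam_k` is just a helper name for this check):

```python
import numpy as np

def beam_k(EI, l):
    """4x4 Euler-Bernoulli beam element stiffness (dofs: psi_i, w_i, psi_j, w_j)."""
    return EI / l**3 * np.array([
        [4*l*l, -6*l, 2*l*l,  6*l],
        [ -6*l,   12,  -6*l,  -12],
        [2*l*l, -6*l, 4*l*l,  6*l],
        [  6*l,  -12,   6*l,   12],
    ])

# a = q = EI = 1; element 1 has length 2a, element 2 has length a
K = np.zeros((6, 6))
K[0:4, 0:4] += beam_k(1.0, 2.0)   # element 1
K[2:6, 2:6] += beam_k(1.0, 1.0)   # element 2

# Constrained dofs: psi0 = w0 = w1 = 0 (indices 0, 1, 3).
# Free dofs: psi1, psi2, w2 (indices 2, 4, 5) with the consistent load vector
# of the distributed load q on element 2: Mq1 = -q*a^2/12, Mq2 = +q*a^2/12, Fq2 = q*a/2.
free = [2, 4, 5]
f = np.array([-1/12, 1/12, 1/2])
psi1, psi2, w2 = np.linalg.solve(K[np.ix_(free, free)], f)

print(psi1, psi2, w2)   # → -0.25  -0.41666...  0.375, i.e. -1/4, -5/12, 3/8
```

The numeric values match the symbolic FEM result, which is reassuring given how easy sign errors are in the element load vector.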
```
import enum
from abc import ABCMeta, abstractmethod, abstractproperty

import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
```

# Bayesian bandits

Let us explore the simple task of multi-armed bandits with Bernoulli reward distributions and several strategies for it.

A bandit has $K$ actions. An action leads to a reward $r=1$ with a probability $0 \le \theta_k \le 1$ that is unknown to the agent but fixed over time. The agent's goal is to minimize the total suboptimality (regret) over $T$ actions:

$$\rho = T\theta^* - \sum_{t=1}^T r_t,$$

where $\theta^* = \max_k\{\theta_k\}$.

**Real-life example** - clinical trials: we have $K$ drug types and $T$ patients. After a drug is taken, the patient is cured with probability $\theta_k$. The goal is to find an optimal drug. A survey on bandits in clinical trials: https://arxiv.org/pdf/1507.08025.pdf.

### Homework

This task is complementary to the reinforcement learning task; the maximum is 10 points for both. Implement all the unimplemented agents:

1. [1 point] $\varepsilon$-greedy
2. [1 point] UCB
3. [1 point] Thompson sampling
4. [2 points] Custom strategy
5. [2 points] $\varepsilon$-greedy for riverflow
6.
[3 points] PSRL agent for riverflow

```
class BernoulliBandit:
    def __init__(self, n_actions=5):
        self._probs = np.random.random(n_actions)

    @property
    def action_count(self):
        return len(self._probs)

    def pull(self, action):
        if np.random.random() > self._probs[action]:
            return 0.0
        return 1.0

    def optimal_reward(self):
        """ Used for regret calculation """
        return np.max(self._probs)

    def step(self):
        """ Used in nonstationary version """
        pass

    def reset(self):
        """ Used in nonstationary version """


class AbstractAgent(metaclass=ABCMeta):
    def init_actions(self, n_actions):
        self._successes = np.zeros(n_actions)
        self._failures = np.zeros(n_actions)
        self._total_pulls = 0

    @abstractmethod
    def get_action(self):
        """
        Get current best action
        :rtype: int
        """
        pass

    def update(self, action, reward):
        """
        Observe reward from action and update agent's internal parameters
        :type action: int
        :type reward: int
        """
        self._total_pulls += 1
        if reward == 1:
            self._successes[action] += 1
        else:
            self._failures[action] += 1

    @property
    def name(self):
        return self.__class__.__name__


class RandomAgent(AbstractAgent):
    def get_action(self):
        return np.random.randint(0, len(self._successes))
```

### Epsilon-greedy agent

> **for** $t = 1,2,...$ **do**
>> **for** $k = 1,...,K$ **do**
>>> $\hat\theta_k \leftarrow \alpha_k / (\alpha_k + \beta_k)$
>>
>> **end for**
>> $x_t \leftarrow argmax_{k}\hat\theta$ with probability $1 - \varepsilon$ or random action with probability $\varepsilon$
>> Apply $x_t$ and observe $r_t$
>> $(\alpha_{x_t}, \beta_{x_t}) \leftarrow (\alpha_{x_t}, \beta_{x_t}) + (r_t, 1-r_t)$
>
> **end for**

Implement the algorithm described:

```
class EpsilonGreedyAgent(AbstractAgent):
    def __init__(self, epsilon=0.01):
        self._epsilon = epsilon

    def init_actions(self, n_actions):
        self._a = np.ones(n_actions) * 1e-5
        self._b = np.ones(n_actions) * 1e-5
        self._n_actions = n_actions

    def get_action(self):
        if np.random.random() < self._epsilon:
            return np.random.randint(self._n_actions)
        thetas = self._a / (self._a + self._b)
        return np.argmax(thetas)

    def update(self, action, reward):
        self._a[action] += reward
        self._b[action] += (1 - reward)

    @property
    def name(self):
        return self.__class__.__name__ + "(epsilon={})".format(self._epsilon)
```

### UCB agent

The epsilon-greedy strategy has no preference at the random-choice stage. It is probably better to choose actions we are not yet sure about but that have the potential to become optimal. We can construct a measure that combines optimality and uncertainty at the same time. One possible solution is the UCB1 algorithm:

> **for** $t = 1,2,...$ **do**
>> **for** $k = 1,...,K$ **do**
>>> $w_k \leftarrow \alpha_k / (\alpha_k + \beta_k) + \sqrt{2log\ t \ / \ (\alpha_k + \beta_k)}$
>>
>> **end for**
>> $x_t \leftarrow argmax_{k}w$
>> Apply $x_t$ and observe $r_t$
>> $(\alpha_{x_t}, \beta_{x_t}) \leftarrow (\alpha_{x_t}, \beta_{x_t}) + (r_t, 1-r_t)$
>
> **end for**

Other solutions and an optimality analysis: https://homes.di.unimi.it/~cesabian/Pubblicazioni/ml-02.pdf.

```
class UCBAgent(AbstractAgent):
    def __init__(self):
        pass

    def init_actions(self, n_actions):
        self._a = np.ones(n_actions) * 1e-5
        self._b = np.ones(n_actions) * 1e-5
        self._t = 1

    def get_action(self):
        doubleus = self._a / (self._a + self._b) + np.sqrt(2 * np.log(self._t) / (self._a + self._b))
        return np.argmax(doubleus)

    def update(self, action, reward):
        self._a[action] += reward
        self._b[action] += (1 - reward)
        self._t += 1
```

### Thompson sampling

The UCB1 algorithm does not consider the reward distribution. If it is known, we can improve on UCB1 with Thompson sampling. We assume the $\theta_k$ are independent and identically distributed. As a prior distribution for them we use the Beta distribution with parameters $\alpha=(\alpha_1, \dots, \alpha_k)$ and $\beta=(\beta_1, \dots, \beta_k)$.
Thus for each parameter $\theta_k$ the prior density is

$$
p(\theta_k) = \frac{\Gamma(\alpha_k + \beta_k)}{\Gamma(\alpha_k)\Gamma(\beta_k)} \theta_k^{\alpha_k - 1}(1 - \theta_k)^{\beta_k - 1}
$$

After receiving new evidence, the distribution is updated following Bayes' rule. The Beta distribution is convenient to work with because of conjugacy: each action's posterior distribution is again a Beta distribution, so we can update the parameters easily:

> **for** $t = 1,2,...$ **do**
>> **for** $k = 1,...,K$ **do**
>>> Sample $\hat\theta_k \sim beta(\alpha_k, \beta_k)$
>>
>> **end for**
>> $x_t \leftarrow argmax_{k}\hat\theta$
>> Apply $x_t$ and observe $r_t$
>> $(\alpha_{x_t}, \beta_{x_t}) \leftarrow (\alpha_{x_t}, \beta_{x_t}) + (r_t, 1-r_t)$
>
> **end for**

Homework reading: https://web.stanford.edu/~bvr/pubs/TS_Tutorial.pdf.

```
class ThompsonSamplingAgent(AbstractAgent):
    def __init__(self):
        pass

    def init_actions(self, n_actions):
        self._a = np.ones(n_actions) * 1e-5
        self._b = np.ones(n_actions) * 1e-5

    def get_action(self):
        thetas = np.random.beta(self._a, self._b)
        return np.argmax(thetas)

    def update(self, action, reward):
        self._a[action] += reward
        self._b[action] += (1 - reward)


def plot_regret(env, agents, n_steps=5000, n_trials=50):
    scores = {
        agent.name: [0.0 for step in range(n_steps)]
        for agent in agents
    }

    for trial in range(n_trials):
        env.reset()

        for a in agents:
            a.init_actions(env.action_count)

        for i in range(n_steps):
            optimal_reward = env.optimal_reward()

            for agent in agents:
                action = agent.get_action()
                reward = env.pull(action)
                agent.update(action, reward)
                scores[agent.name][i] += optimal_reward - reward

            env.step()  # change bandit's state if it is nonstationary

    plt.figure(figsize=(17, 8))
    for agent in agents:
        plt.plot(np.cumsum(scores[agent.name]) / n_trials)

    plt.legend([agent.name for agent in agents])
    plt.ylabel("regret")
    plt.xlabel("steps")
    plt.show()


agents = [
    EpsilonGreedyAgent(),
    UCBAgent(),
    ThompsonSamplingAgent()
]
plot_regret(BernoulliBandit(), agents, n_steps=10000, n_trials=10)
```

# Non-stationary bandits

But what if the probabilities change over time? For example:

```
class DriftingBandit(BernoulliBandit):
    def __init__(self, n_actions=5, gamma=0.01):
        """
        Idea from https://github.com/iosband/ts_tutorial
        """
        super().__init__(n_actions)

        self._gamma = gamma

        self._successes = None
        self._failures = None
        self._steps = 0

        self.reset()

    def reset(self):
        self._successes = np.zeros(self.action_count) + 1.0
        self._failures = np.zeros(self.action_count) + 1.0
        self._steps = 0

    def step(self):
        action = np.random.randint(self.action_count)
        reward = self.pull(action)
        self._step(action, reward)

    def _step(self, action, reward):
        self._successes = self._successes * (1 - self._gamma) + self._gamma
        self._failures = self._failures * (1 - self._gamma) + self._gamma
        self._steps += 1

        self._successes[action] += reward
        self._failures[action] += 1.0 - reward

        self._probs = np.random.beta(self._successes, self._failures)
```

Let us look at how the reward probabilities change over time:

```
drifting_env = DriftingBandit(n_actions=5)

drifting_probs = []
for i in range(20000):
    drifting_env.step()
    drifting_probs.append(drifting_env._probs)

plt.figure(figsize=(14, 8))
plt.plot(pd.DataFrame(drifting_probs).rolling(window=20).mean())

plt.xlabel("steps")
plt.ylabel("Success probability")
plt.title("Reward probabilities over time")
plt.legend(["Action {}".format(i) for i in range(drifting_env.action_count)])
plt.show()
```

**Task** - create an agent that works better than any of the stationary algorithms above.
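One standard way to handle drift is to discount old observations so that stale evidence fades. A minimal sketch of a discounted Beta-Bernoulli Thompson update (the class name and discount value are illustrative; it is written standalone here rather than subclassing `AbstractAgent`, for brevity):

```python
import numpy as np

class DiscountedThompsonAgent:
    """Thompson sampling where all pseudo-counts decay by `gamma` per update."""
    def __init__(self, gamma=0.99):
        self._gamma = gamma

    def init_actions(self, n_actions):
        self._a = np.ones(n_actions)   # Beta "success" pseudo-counts
        self._b = np.ones(n_actions)   # Beta "failure" pseudo-counts

    def get_action(self):
        return int(np.argmax(np.random.beta(self._a, self._b)))

    def update(self, action, reward):
        # decay ALL counts, then add the new observation to the pulled arm
        self._a *= self._gamma
        self._b *= self._gamma
        self._a[action] += reward
        self._b[action] += 1 - reward

agent = DiscountedThompsonAgent(gamma=0.5)
agent.init_actions(2)
agent.update(0, 1)   # arm 0: a = 1*0.5 + 1 = 1.5, b = 0.5
```

Decaying the counts keeps the posterior variance of rarely-pulled arms from collapsing, so the agent never stops exploring, which is exactly what a drifting environment demands.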
```
class YourAgent(AbstractAgent):
    def __init__(self, coef=0.995):
        self._coef = coef

    def init_actions(self, n_actions):
        self._a = np.ones(n_actions) * 1e-5
        self._b = np.ones(n_actions) * 1e-5
        self._t = 1

    def get_action(self):
        doubleus = (self._a / (self._a + self._b)
                    + np.sqrt(2 * np.log(self._t) / (self._a + self._b)))
        return np.argmax(doubleus)

    def update(self, action, reward):
        self._a[action] = self._coef * self._a[action] + reward
        self._b[action] = self._coef * self._b[action] + (1 - reward)
        self._t = self._coef * self._t + 1


drifting_agents = [
    ThompsonSamplingAgent(),
    EpsilonGreedyAgent(),
    UCBAgent(),
    YourAgent()
]

plot_regret(DriftingBandit(), drifting_agents, n_steps=20000, n_trials=10)
```

## Exploration in MDPs

The next task, "river swim", illustrates the importance of exploration using a Markov decision process (MDP) example.

<img src="river_swim.png">

Illustration from https://arxiv.org/abs/1306.0940

Both rewards and transitions are unknown to the agent. The optimal strategy is to swim right against the flow, but the easiest one is to keep going left and collect small rewards each time.

```
class RiverSwimEnv:
    LEFT_REWARD = 5.0 / 1000
    RIGHT_REWARD = 1.0

    def __init__(self, intermediate_states_count=4, max_steps=16):
        self._max_steps = max_steps
        self._current_state = None
        self._steps = None
        self._interm_states = intermediate_states_count
        self.reset()

    def reset(self):
        self._steps = 0
        self._current_state = 1
        return self._current_state, 0.0, False

    @property
    def n_actions(self):
        return 2

    @property
    def n_states(self):
        return 2 + self._interm_states

    def _get_transition_probs(self, action):
        if action == 0:
            if self._current_state == 0:
                return [0, 1.0, 0]
            else:
                return [1.0, 0, 0]
        elif action == 1:
            if self._current_state == 0:
                return [0, .4, .6]
            if self._current_state == self.n_states - 1:
                return [.4, .6, 0]
            else:
                return [.05, .6, .35]
        else:
            raise RuntimeError("Unknown action {}. Max action is {}".format(action, self.n_actions))

    def step(self, action):
        """
        :param action:
        :type action: int
        :return: observation, reward, is_done
        :rtype: (int, float, bool)
        """
        reward = 0.0

        if self._steps >= self._max_steps:
            return self._current_state, reward, True

        transition = np.random.choice(range(3), p=self._get_transition_probs(action))
        if transition == 0:
            self._current_state -= 1
        elif transition == 1:
            pass
        else:
            self._current_state += 1

        if self._current_state == 0:
            reward = self.LEFT_REWARD
        elif self._current_state == self.n_states - 1:
            reward = self.RIGHT_REWARD

        self._steps += 1
        return self._current_state, reward, False
```

Let's implement a Q-learning agent with an $\varepsilon$-greedy strategy and look at its performance.

```
class QLearningAgent:
    def __init__(self, n_states, n_actions, lr=0.2, gamma=0.95, epsilon=0.1):
        self._gamma = gamma
        self._epsilon = epsilon
        self._q_matrix = np.zeros((n_states, n_actions))
        self._lr = lr

    def get_action(self, state):
        if np.random.random() < self._epsilon:
            return np.random.randint(0, self._q_matrix.shape[1])
        else:
            return np.argmax(self._q_matrix[state])

    def get_q_matrix(self):
        """ Used for policy visualization """
        return self._q_matrix

    def start_episode(self):
        """ Used in PSRL agent """
        pass

    def update(self, state, action, reward, next_state):
        max_val = self._q_matrix[next_state].max()
        self._q_matrix[state][action] += self._lr * \
            (reward + self._gamma * max_val - self._q_matrix[state][action])


def train_mdp_agent(agent, env, n_episodes):
    episode_rewards = []

    for ep in range(n_episodes):
        state, ep_reward, is_done = env.reset()
        agent.start_episode()
        while not is_done:
            action = agent.get_action(state)
            next_state, reward, is_done = env.step(action)
            agent.update(state, action, reward, next_state)

            state = next_state
            ep_reward += reward
        episode_rewards.append(ep_reward)
    return episode_rewards


env = RiverSwimEnv()
agent = QLearningAgent(env.n_states, env.n_actions)
rews = train_mdp_agent(agent, env, 1000)

plt.figure(figsize=(15, 8))
plt.plot(pd.DataFrame(np.array(rews)).ewm(alpha=.1).mean())
plt.xlabel("Episode count")
plt.ylabel("Reward")
plt.show()
```

Now let's visualize the policy decisions:

```
def plot_policy(agent):
    fig = plt.figure(figsize=(15, 8))
    ax = fig.add_subplot(111)
    ax.matshow(agent.get_q_matrix().T)
    ax.set_yticklabels(['', 'left', 'right'])
    plt.xlabel("State")
    plt.ylabel("Action")
    plt.title("Values of state-action pairs")
    plt.show()

plot_policy(agent)
```

We can see that the agent follows a non-optimal strategy and keeps going left, never discovering the better option.

## Posterior sampling RL

Let us implement Thompson sampling for MDPs! The algorithm:

>**for** episode $k = 1,2,...$ **do**
>> sample $M_k \sim f(\bullet\ |\ H_k)$
>> compute policy $\mu_k$ for $M_k$
>> **for** time $t = 1, 2,...$ **do**
>>> take action $a_t$ from $\mu_k$
>>> observe $r_t$ and $s_{t+1}$
>>> update $H_k$
>> **end for**
>**end for**

$M_k$ is modeled as two matrices: transitions and rewards. The transition matrix is sampled from a Dirichlet distribution, while the reward matrix is sampled from a normal-gamma distribution. The distributions are updated following Bayes' rule (see https://en.wikipedia.org/wiki/Conjugate_prior for conjugate priors of continuous distributions).

Follow-up reading: https://arxiv.org/abs/1306.0940.
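Before diving into the full agent, the Dirichlet conjugate update used for the transition model can be illustrated in isolation. A minimal sketch (the three next-states and the observed transitions below are made-up values):

```python
import numpy as np

# Start from a uniform prior over 3 next-states: Dirichlet(1, 1, 1).
# The Bayes update for a Dirichlet prior with categorical observations is
# simply "add 1 to the count of the observed outcome".
np.random.seed(0)
counts = np.ones(3)
for next_state in [2, 2, 1, 2]:   # hypothetical observed transitions
    counts[next_state] += 1.0

# PSRL then samples a concrete transition distribution from the posterior:
sampled_probs = np.random.dirichlet(counts)
print(counts, sampled_probs)  # counts become [1. 2. 4.]; probs sum to 1
```

The more often a transition is observed, the more the sampled distributions concentrate around the empirical frequencies, which is exactly how the agent's uncertainty shrinks with experience.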
```
def bellman(val, trans, rewards):
    q = np.zeros(rewards.shape)
    for state in range(rewards.shape[0]):
        for action in range(rewards.shape[1]):
            q[state][action] = rewards[state][action] + val.dot(trans[state, :, action])
    return q.max(axis=1)


def sample_normal_gamma(mu, lmbd, alpha, beta):
    """ https://en.wikipedia.org/wiki/Normal-gamma_distribution """
    # np.random.gamma takes a *scale* parameter, so the rate beta becomes 1/beta
    tau = np.random.gamma(alpha, 1.0 / beta)
    mu = np.random.normal(mu, 1.0 / np.sqrt(lmbd * tau))
    return mu, tau


class PsrlAgent:
    def __init__(self, n_states, n_actions, horizon=10):
        self._n_states = n_states
        self._n_actions = n_actions
        self._horizon = horizon

        # params for transition sampling - Dirichlet distribution
        self._transition_counts = np.zeros((n_states, n_states, n_actions)) + 1.0

        # params for reward sampling - normal-gamma distribution
        self._mu_matrix = np.zeros((n_states, n_actions)) + 1.0
        self._state_action_counts = np.zeros((n_states, n_actions)) + 1.0  # lambda

        self._alpha_matrix = np.zeros((n_states, n_actions)) + 1.0
        self._beta_matrix = np.zeros((n_states, n_actions)) + 1.0

        self._val = np.zeros(n_states)

    def _value_iteration(self, transitions, rewards):
        for _ in range(self._horizon):
            self._val = bellman(self._val, transitions, rewards)
        return self._val

    def start_episode(self):
        # sample a new MDP
        self._sampled_transitions = np.apply_along_axis(np.random.dirichlet, 1, self._transition_counts)

        sampled_reward_mus, sampled_reward_precisions = sample_normal_gamma(
            self._mu_matrix,
            self._state_action_counts,
            self._alpha_matrix,
            self._beta_matrix
        )

        self._sampled_rewards = sampled_reward_mus
        self._current_value_function = self._value_iteration(self._sampled_transitions, self._sampled_rewards)

    def get_action(self, state):
        return np.argmax(self._sampled_rewards[state] +
                         self._current_value_function.dot(self._sampled_transitions[state]))

    def update(self, state, action, reward, next_state):
        # normal-gamma posterior update for the reward model
        mu0 = self._mu_matrix[state][action]
        v = self._state_action_counts[state][action]
        a = self._alpha_matrix[state][action]
        b = self._beta_matrix[state][action]

        self._mu_matrix[state][action] = (mu0 * v + reward) / (v + 1)
        self._state_action_counts[state][action] = v + 1
        self._alpha_matrix[state][action] = a + 1 / 2
        self._beta_matrix[state][action] = b + (v / (v + 1)) * (((reward - mu0) ** 2) / 2)

        # Dirichlet posterior update for the transition model
        self._transition_counts[state][next_state][action] += 1

    def get_q_matrix(self):
        return self._sampled_rewards + self._current_value_function.dot(self._sampled_transitions)


horizon = 20
env = RiverSwimEnv(max_steps=horizon)
agent = PsrlAgent(env.n_states, env.n_actions, horizon=horizon)
rews = train_mdp_agent(agent, env, 1000)

plt.figure(figsize=(15, 8))
plt.plot(pd.DataFrame(np.array(rews)).ewm(alpha=.1).mean())
plt.xlabel("Episode count")
plt.ylabel("Reward")
plt.show()

plot_policy(agent)
```
```
import os
from google.colab import drive
drive.mount('/content/gdrive')

!mkdir train_local
!unzip /content/gdrive/MyDrive/NFL_Helmet/nfl-health-and-safety-helmet-assignment.zip -d train_local/

import random
import numpy as np
from pathlib import Path
import datetime
import pandas as pd
from tqdm.notebook import tqdm
from sklearn.model_selection import train_test_split
import cv2
import json
import matplotlib.pyplot as plt
from IPython.core.display import Video, display
import subprocess
import gc
import shutil

os.listdir("./train_local")
```

# Prepare data in COCO format (the standard object detection format)

# Reference: https://www.kaggle.com/eneszvo/mmdet-cascadercnn-helmet-detection-for-beginners

```
import pandas as pd

# Load image level csv file
extra_df = pd.read_csv('./train_local/image_labels.csv')
print('Number of ground truth bounding boxes: ', len(extra_df))

# Number of unique labels
label_to_id = {label: i for i, label in enumerate(extra_df.label.unique())}
print('Unique labels: ', label_to_id)
extra_df.head()

def create_ann_file(df, category_id):
    now = datetime.datetime.now()
    data = dict(
        info=dict(
            description='NFL-Helmet-Assignment',
            url=None,
            version=None,
            year=now.year,
            contributor=None,
            date_created=now.strftime('%Y-%m-%d %H:%M:%S.%f'),
        ),
        licenses=[dict(
            url=None,
            id=0,
            name=None,
        )],
        images=[
            # license, url, file_name, height, width, date_captured, id
        ],
        type='instances',
        annotations=[
            # segmentation, area, iscrowd, image_id, bbox, category_id, id
        ],
        categories=[
            # supercategory, id, name
        ],
    )

    class_name_to_id = {}
    labels = ["__ignore__", 'Helmet', 'Helmet-Blurred', 'Helmet-Difficult', 'Helmet-Sideline', 'Helmet-Partial']
    for i, each_label in enumerate(labels):
        class_id = i - 1  # starts with -1
        class_name = each_label
        if class_id == -1:
            assert class_name == '__ignore__'
            continue
        class_name_to_id[class_name] = class_id
        data['categories'].append(dict(
            supercategory=None,
            id=class_id,
            name=class_name,
        ))

    box_id = 0
    for i, image in tqdm(enumerate(os.listdir(TRAIN_PATH))):
        img = cv2.imread(TRAIN_PATH + '/' + image)
        height, width, _ = img.shape
        data['images'].append({
            'license': 0,
            'url': None,
            'file_name': image,
            'height': height,
            'width': width,
            'date_captured': None,
            'id': i
        })
        df_temp = df[df.image == image]
        for index, row in df_temp.iterrows():
            area = round(row.width * row.height, 1)
            bbox = [row.left, row.top, row.width, row.height]
            data['annotations'].append({
                'id': box_id,
                'image_id': i,
                'category_id': category_id[row.label],
                'area': area,
                'bbox': bbox,
                'iscrowd': 0
            })
            box_id += 1
    return data

from sklearn.model_selection import train_test_split

TRAIN_PATH = './train_local/images'
extra_df = pd.read_csv('./train_local/image_labels.csv')
category_id = {'Helmet': 0, 'Helmet-Blurred': 1, 'Helmet-Difficult': 2, 'Helmet-Sideline': 3, 'Helmet-Partial': 4}

df_train, df_val = train_test_split(extra_df, test_size=0.2, random_state=36)
ann_file_train = create_ann_file(df_train, category_id)
ann_file_val = create_ann_file(df_val, category_id)
print('train:', df_train.shape, 'val:', df_val.shape)

# save as json to gdrive
with open('/content/gdrive/MyDrive/NFL_Helmet/ann_file_train.json', 'w') as f:
    json.dump(ann_file_train, f, indent=4)
with open('/content/gdrive/MyDrive/NFL_Helmet/ann_file_val.json', 'w') as f:
    json.dump(ann_file_val, f, indent=4)
```
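A quick sanity check of the generated dictionaries can catch broken cross-references before training. This is a hedged sketch, not part of the original pipeline; the toy `sample` dict below uses hypothetical values:

```python
# Minimal validator for COCO-style dicts like the ones create_ann_file builds:
# every annotation must point at an existing image and category, and COCO
# bounding boxes are [x, y, width, height] with positive width/height.
def validate_coco(data):
    assert {'images', 'annotations', 'categories'} <= set(data.keys())
    image_ids = {img['id'] for img in data['images']}
    category_ids = {cat['id'] for cat in data['categories']}
    for ann in data['annotations']:
        assert ann['image_id'] in image_ids
        assert ann['category_id'] in category_ids
        x, y, w, h = ann['bbox']
        assert w > 0 and h > 0
    return True

# Toy example (hypothetical values, just to exercise the checks):
sample = {
    'images': [{'id': 0, 'file_name': 'frame0.jpg', 'height': 720, 'width': 1280}],
    'categories': [{'id': 0, 'name': 'Helmet'}],
    'annotations': [{'id': 0, 'image_id': 0, 'category_id': 0,
                     'bbox': [10, 20, 30, 30], 'area': 900.0, 'iscrowd': 0}],
}
print(validate_coco(sample))  # True
```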
# k-Nearest Neighbor (kNN) exercise

*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*

The kNN classifier consists of two stages:

- During training, the classifier takes the training data and simply remembers it
- During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
- The value of k is cross-validated

In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.

```
# Run some setup code for this notebook.

import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt

# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape

# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt_idx = i * num_classes + y + 1
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow(X_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()

# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]

# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape

from cs231n.classifiers import KNearestNeighbor

# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
```

We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:

1. First we must compute the distances between all test examples and all train examples.
2. Given these distances, for each test example we find the k nearest examples and have them vote for the label

Let's begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.
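As a warm-up for the vectorized versions you will implement later, here is a sketch (on small toy arrays, not the CIFAR-10 data) of computing all pairwise Euclidean distances at once via the expansion ||x - y||^2 = ||x||^2 - 2·x·y + ||y||^2:

```python
import numpy as np

X_tr = np.arange(12.0).reshape(4, 3)   # 4 "training" points in R^3
X_te = np.arange(6.0).reshape(2, 3)    # 2 "test" points in R^3

# Squared norms of each row, then broadcast into a (Nte, Ntr) matrix.
sq_tr = (X_tr ** 2).sum(axis=1)        # shape (4,)
sq_te = (X_te ** 2).sum(axis=1)        # shape (2,)
sq_dists = sq_te[:, None] - 2.0 * X_te @ X_tr.T + sq_tr[None, :]
dists = np.sqrt(np.maximum(sq_dists, 0.0))  # clip tiny negatives from rounding
print(dists.shape)  # (2, 4) -- one row per test example
```

The `np.maximum(..., 0.0)` guards against slightly negative squared distances caused by floating-point cancellation before taking the square root.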
First, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.

```
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.

# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape

# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
```

**Inline Question #1:** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)

- What in the data is the cause behind the distinctly bright rows?
- What causes the columns?

**Your Answer**: The larger the distance between two images, the brighter the corresponding cell; in other words, the larger the value, the less similar the training image is to the test image. A bright row means that the test image is dissimilar to all of the training images, while a bright column means that the corresponding training image is dissimilar to all of the test images.

```
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)

# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
```

You should expect to see approximately `27%` accuracy.
Now let's try out a larger `k`, say `k = 5`:

```
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
```

You should expect to see a slightly better performance than with `k = 1`.

```
# Now let's speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)

# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
    print 'Good! The distance matrices are the same'
else:
    print 'Uh-oh! The distance matrices are different'

# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)

# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
    print 'Good! The distance matrices are the same'
else:
    print 'Uh-oh! The distance matrices are different'

# Let's compare how fast the implementations are
def time_function(f, *args):
    """
    Call a function f with args and return the time (in seconds)
    that it took to execute.
    """
    import time
    tic = time.time()
    f(*args)
    toc = time.time()
    return toc - tic

two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time

one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time

no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time

# you should see significantly faster performance with the fully vectorized implementation
```

### Cross-validation

We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.

```
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]

X_train_folds = []
y_train_folds = []
################################################################################
# TODO:                                                                        #
# Split up the training data into folds. After splitting, X_train_folds and   #
# y_train_folds should each be lists of length num_folds, where               #
# y_train_folds[i] is the label vector for the points in X_train_folds[i].    #
# Hint: Look up the numpy array_split function.                               #
################################################################################
index_split = np.array_split(range(X_train.shape[0]), num_folds)
X_train_folds = [X_train[i] for i in index_split]
Y_train_folds = [y_train[i] for i in index_split]
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}

################################################################################
# TODO:                                                                        #
# Perform k-fold cross validation to find the best value of k. For each       #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,  #
# where in each case you use all but one of the folds as training data and    #
# the last fold as a validation set. Store the accuracies for all folds and   #
# all values of k in the k_to_accuracies dictionary.                          #
################################################################################
for k in k_choices:
    for fold in range(num_folds):
        val_X_test = X_train_folds[fold]
        val_Y_test = Y_train_folds[fold]
        X_train_temp = np.concatenate(X_train_folds[:fold] + X_train_folds[fold+1:])
        Y_train_temp = np.concatenate(Y_train_folds[:fold] + Y_train_folds[fold+1:])

        # Our kNN classifier
        knn = KNearestNeighbor()
        knn.train(X_train_temp, Y_train_temp)

        # Computing the distances
        dists_temp = knn.compute_distances_two_loops(val_X_test)
        y_test_pred_temp = knn.predict_labels(dists_temp, k=k)

        # Accuracy
        num_correct = np.sum(y_test_pred_temp == val_Y_test)
        num_test = val_X_test.shape[0]
        acc = float(num_correct) / num_test
        print("k=", k, "Fold=", fold, "Accuracy=", acc)
        k_to_accuracies[k] = k_to_accuracies.get(k, []) + [acc]
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print 'k = %d, accuracy = %f' % (k, accuracy)

# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1

classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)

# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)

# plot the raw observations
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)

# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k, v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k, v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
```
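Rather than reading the best value off the plot, `best_k` can be chosen programmatically from the cross-validation results. A sketch, using a small made-up `k_to_accuracies` dict in place of the real one:

```python
# Hypothetical accuracies standing in for the real cross-validation output.
k_to_accuracies = {1: [0.26, 0.27], 5: [0.28, 0.29], 10: [0.29, 0.30]}

# Average the per-fold accuracies for each k, then pick the k with the
# highest mean accuracy.
mean_acc = {k: sum(v) / len(v) for k, v in k_to_accuracies.items()}
best_k = max(mean_acc, key=mean_acc.get)
print(best_k)  # 10 for these made-up numbers
```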
```
%matplotlib inline
```

Training a Classifier
=====================

This is it. You have seen how to define neural networks, compute loss and make updates to the weights of the network.

Now you might be thinking,

What about data?
----------------

Generally, when you have to deal with image, text, audio or video data, you can use standard python packages that load data into a numpy array. Then you can convert this array into a ``torch.*Tensor``.

- For images, packages such as Pillow, OpenCV are useful
- For audio, packages such as scipy and librosa
- For text, either raw Python or Cython based loading, or NLTK and SpaCy are useful

Specifically for vision, we have created a package called ``torchvision``, that has data loaders for common datasets such as Imagenet, CIFAR10, MNIST, etc. and data transformers for images, viz., ``torchvision.datasets`` and ``torch.utils.data.DataLoader``.

This provides a huge convenience and avoids writing boilerplate code.

For this tutorial, we will use the CIFAR10 dataset. It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.

.. figure:: /_static/img/cifar10.png
   :alt: cifar10

   cifar10

Training an image classifier
----------------------------

We will do the following steps in order:

1. Load and normalize the CIFAR10 training and test datasets using ``torchvision``
2. Define a Convolution Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data

1. Loading and normalizing CIFAR10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using ``torchvision``, it’s extremely easy to load CIFAR10.

```
import torch
import torchvision
import torchvision.transforms as transforms
```

The output of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1].
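The `Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))` transform used below computes `(x - mean) / std` per channel, which is exactly what maps [0, 1] onto [-1, 1]. A quick plain-Python check of that arithmetic:

```python
# transforms.Normalize computes (x - mean) / std channel-wise; with
# mean = std = 0.5 pixel values 0, 0.5, 1 land on -1, 0, 1.
def normalize(x, mean=0.5, std=0.5):
    return (x - mean) / std

print([normalize(v) for v in (0.0, 0.5, 1.0)])  # [-1.0, 0.0, 1.0]
```

The inverse, `x * std + mean`, is what the `imshow` helper later in the tutorial uses (`img / 2 + 0.5`) to "unnormalize" images for display.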
``` transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') ``` Let us show some of the training images, for fun. ``` import matplotlib.pyplot as plt import numpy as np # functions to show an image def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) # get some random training images dataiter = iter(trainloader) images, labels = dataiter.next() # show images imshow(torchvision.utils.make_grid(images)) # print labels print(' '.join('%5s' % classes[labels[j]] for j in range(4))) ``` 2. Define a Convolution Neural Network ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Copy the neural network from the Neural Networks section before and modify it to take 3-channel images (instead of 1-channel images as it was defined). ``` import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() ``` 3. 
Define a Loss function and optimizer ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Let's use a Classification Cross-Entropy loss and SGD with momentum. ``` import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) ``` 4. Train the network ^^^^^^^^^^^^^^^^^^^^ This is when things start to get interesting. We simply have to loop over our data iterator, and feed the inputs to the network and optimize. ``` for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') ``` 5. Test the network on the test data ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ We have trained the network for 2 passes over the training dataset. But we need to check if the network has learnt anything at all. We will check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions. Okay, first step. Let us display an image from the test set to get familiar. ``` dataiter = iter(testloader) images, labels = dataiter.next() # print images imshow(torchvision.utils.make_grid(images)) print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4))) ``` Okay, now let us see what the neural network thinks these examples above are: ``` outputs = net(images) ``` The outputs are energies for the 10 classes. Higher the energy for a class, the more the network thinks that the image is of the particular class. 
So, let's get the index of the highest energy: ``` _, predicted = torch.max(outputs, 1) print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4))) ``` The results seem pretty good. Let us look at how the network performs on the whole dataset. ``` correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) ``` That looks waaay better than chance, which is 10% accuracy (randomly picking a class out of 10 classes). Seems like the network learnt something. Hmmm, what are the classes that performed well, and the classes that did not perform well: ``` class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs, 1) c = (predicted == labels).squeeze() for i in range(4): label = labels[i] class_correct[label] += c[i].item() class_total[label] += 1 for i in range(10): print('Accuracy of %5s : %2d %%' % ( classes[i], 100 * class_correct[i] / class_total[i])) ``` Okay, so what next? How do we run these neural networks on the GPU? Training on GPU ---------------- Just like how you transfer a Tensor on to the GPU, you transfer the neural net onto the GPU. Let's first define our device as the first visible cuda device if we have CUDA available: ``` device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Assume that we are on a CUDA machine, then this should print a CUDA device: print(device) ``` The rest of this section assumes that `device` is a CUDA device. Then these methods will recursively go over all modules and convert their parameters and buffers to CUDA tensors: .. 
code:: python

    net.to(device)

Remember that you will have to send the inputs and targets at every step to the GPU too:

.. code:: python

    inputs, labels = inputs.to(device), labels.to(device)

Why don't I notice a MASSIVE speedup compared to CPU? Because your network is really small.

**Exercise:** Try increasing the width of your network (argument 2 of the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` – they need to be the same number), see what kind of speedup you get.

**Goals achieved**:

- Understanding PyTorch's Tensor library and neural networks at a high level.
- Train a small neural network to classify images

Training on multiple GPUs
-------------------------

If you want to see even more MASSIVE speedup using all of your GPUs, please check out :doc:`data_parallel_tutorial`.

Where do I go next?
-------------------

- :doc:`Train neural nets to play video games </intermediate/reinforcement_q_learning>`
- `Train a state-of-the-art ResNet network on imagenet`_
- `Train a face generator using Generative Adversarial Networks`_
- `Train a word-level language model using Recurrent LSTM networks`_
- `More examples`_
- `More tutorials`_
- `Discuss PyTorch on the Forums`_
- `Chat with other users on Slack`_
# Boltzmann Machine ## Downloading the dataset ### ML-100K ``` # !wget "http://files.grouplens.org/datasets/movielens/ml-100k.zip" # !unzip ml-100k.zip # !ls ``` ### ML-1M ``` # !wget "http://files.grouplens.org/datasets/movielens/ml-1m.zip" # !unzip ml-1m.zip # !ls ``` ## Importing the libraries ``` import numpy as np import pandas as pd import torch import torch.nn as nn import torch.nn.parallel import torch.optim as optim import torch.utils.data from torch.autograd import Variable ``` ## Importing the dataset ``` # We won't be using this dataset. movies = pd.read_csv('ml-1m/movies.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1') users = pd.read_csv('ml-1m/users.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1') ratings = pd.read_csv('ml-1m/ratings.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1') ``` ## Preparing the training set and the test set ``` training_set = pd.read_csv('ml-100k/u1.base', delimiter = '\t') training_set = np.array(training_set, dtype = 'int') test_set = pd.read_csv('ml-100k/u1.test', delimiter = '\t') test_set = np.array(test_set, dtype = 'int') ``` ## Getting the number of users and movies ``` nb_users = int(max(max(training_set[:,0]), max(test_set[:,0]))) nb_movies = int(max(max(training_set[:,1]), max(test_set[:,1]))) print(nb_users,'|',nb_movies) ``` ## Converting the data into an array with users in lines and movies in columns ``` def convert(data): new_data = [] for id_users in range(1, nb_users + 1): id_movies = data[:,1][data[:,0] == id_users] id_ratings = data[:,2][data[:,0] == id_users] ratings = np.zeros(nb_movies) ratings[id_movies - 1] = id_ratings new_data.append(list(ratings)) return new_data training_set = convert(training_set) test_set = convert(test_set) ``` ## Converting the data into Torch Tensors ``` training_set = torch.FloatTensor(training_set) test_set = torch.FloatTensor(test_set) ``` ## Converting the ratings into binary ratings 1 (Liked) or 0 
(Not Liked) ``` training_set[training_set == 0] = -1 training_set[training_set == 1] = 0 training_set[training_set == 2] = 0 training_set[training_set >= 3] = 1 test_set[test_set == 0] = -1 test_set[test_set == 1 ] = 0 test_set[test_set == 2] = 0 test_set[test_set >= 3] = 1 ``` ## Creating the architecture of the Neural Network ``` class RBM(): def __init__(self, nv, nh): self.W = torch.randn(nh, nv) self.a = torch.randn(1, nh) self.b = torch.randn(1, nv) def sample_h(self, x): wx = torch.mm(x, self.W.t()) activation = wx + self.a.expand_as(wx) p_h_given_v = torch.sigmoid(activation) return p_h_given_v, torch.bernoulli(p_h_given_v) def sample_v(self, y): wy = torch.mm(y, self.W) activation = wy + self.b.expand_as(wy) p_v_given_h = torch.sigmoid(activation) return p_v_given_h, torch.bernoulli(p_v_given_h) def train(self, v0, vk, ph0, phk): self.W += (torch.mm(v0.t(), ph0) - torch.mm(vk.t(), phk)).t() self.b += torch.sum((v0 - vk), 0) self.a += torch.sum((ph0 - phk), 0) nv = len(training_set[0]) nh = 100 batch_size = 100 rbm = RBM(nv, nh) ``` ## Training the RBM ``` nb_epoch = 10 for epoch in range(1, nb_epoch + 1): train_loss = 0 train_rmse = 0 s = 0. for id_user in range(0, nb_users - batch_size, batch_size): vk = training_set[id_user:id_user+batch_size] v0 = training_set[id_user:id_user+batch_size] ph0,_ = rbm.sample_h(v0) for k in range(10): _,hk = rbm.sample_h(vk) _,vk = rbm.sample_v(hk) vk[v0<0] = v0[v0<0] phk,_ = rbm.sample_h(vk) rbm.train(v0, vk, ph0, phk) train_loss += torch.mean(torch.abs(v0[v0>=0] - vk[v0>=0]))# train_rmse += np.sqrt(torch.mean((v0[v0 >= 0] - vk[v0 >= 0])**2)) s += 1. print('epoch : {} | Loss : {} | RMSE : {}'.format(epoch, train_loss/s, train_rmse/s)) ``` ## Testing the RBM ``` test_loss = 0 test_rmse = 0 s = 0. 
for id_user in range(nb_users): v = training_set[id_user:id_user+1] vt = test_set[id_user:id_user+1] if len(vt[vt>=0]) > 0: _,h = rbm.sample_h(v) _,v = rbm.sample_v(h) test_loss += torch.mean(torch.abs(vt[vt>=0] - v[vt>=0])) test_rmse += np.sqrt(torch.mean((vt[vt>=0] - v[vt>=0])**2)) s += 1. print('loss : {} | RMSE : {}'.format(test_loss/s, test_rmse/s)) ```
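A subtle point in the binarization cells above is that the *order* of the in-place masks matters. A minimal NumPy sketch of the same mapping, on six made-up ratings (0 = not rated, 1–5 = stars):

```python
import numpy as np

# Toy ratings: 0 = not rated, 1-5 = star rating (same convention as above)
ratings = np.array([0., 1., 2., 3., 4., 5.])

# Same masking order as the notebook: unrated -> -1 first,
# then 1-2 stars -> 0 (not liked), finally 3+ stars -> 1 (liked)
ratings[ratings == 0] = -1
ratings[ratings == 1] = 0
ratings[ratings == 2] = 0
ratings[ratings >= 3] = 1

print(ratings)  # [-1.  0.  0.  1.  1.  1.]
```

If the `>= 3` rule ran first, the freshly written 1s would immediately be zeroed again by the `== 1` mask; mapping the unrated 0s to -1 before anything else keeps them out of the later comparisons.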
<div style='background: #FF7B47; padding: 10px; border: thin solid darkblue; border-radius: 5px; margin-bottom: 2vh'>

# Session 01 - Notebook

Like most session notebooks in this course, this notebook is divided into two parts. Part one is a 'manual' that will allow you to code along with the new code that we introduce at the beginning of each session. The second part is the actual lab/assignment part, where you will work through a few practical tasks and write small but useful programs.

<div style='background: #FF7B47; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

## Part 1 - Manual

<div style='background: lightsalmon; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

A.1 - "hello, world!"

```
print('Hello, world!')
```

<div style='background: lightsalmon; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

A.2 - basic datatypes - strings and numeric variables

```
# your code here
print(2)
print(2+4)
print(2**4)

my_var = 2
print(my_var*2)
```

<div style='background: lightsalmon; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

A.3 - basic operations

```
# your code here
print(my_var *2)
print(my_var **4)
```

<div style='background: lightsalmon; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

A.4 - advanced data types to store collections of data

```
# your code here - lists

# your code here - dictionaries
my_dict = {
  "brand": "Ford",
  "model": "Mustang",
  "year": 1964,
  "example" : {'test':'here', 'test_2': 'here_2'}
}

print(my_dict['brand'])
print(my_dict['example'])
print(my_dict['example']['test'])
```

<div style='background: lightsalmon; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

A.5 - for loops

```
# your code here
fruits = ['apple', 'banana', 'strawberry', 'peach']

# we are iterating over each element in the data structure
for fruit in fruits:
    print(fruit)

fruits_in_fridge = {
  'apple': 2,
  'banana': 3,
  'strawberry': 10,
  'peach': 4
}

for key in fruits_in_fridge:
    print(key)  # printing the keys
    print(fruits_in_fridge[key])
```

<div style='background: lightsalmon; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

A.6 - Python If ... Else

```
# your code here
a = 33
b = 33

if b > a:
    print("b is greater than a")
elif a == b:
    print("a and b are equal")
else:
    print("a is greater than b")
```

<div style='background: lightsalmon; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

A.7 - Functions

```
# Your code here
def say_hello_to(name):
    print("hello, " + name)

say_hello_to('Nathan')

list_of_names = ['Megan', 'Robert', 'Jermain', 'Angela', 'Amr', 'Anthony', 'Rex', 'Nathan']

for classmate in list_of_names:
    say_hello_to(classmate)
```

<div style='background: #6A9EB4; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

## Part 2 - Lab

During today's lab you will write code that will help the College to perform the house lottery more efficiently and assist the house administrations in a variety of tasks.

<div style='background: #ADCAD6; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

## Task #1 - automatize the house lottery

In the precirculated template folder, you will find the file students.csv with all rising sophomores that will enter the house lottery, i.e. they will get assigned to one of the twelve undergraduate houses. So far, the college has done this task manually, but they hope that you can help them to automatize that process. Please load the csv and add another column 'house_id'. Python's csv package will come in handy to load the csv file and treat each row as a list. Having loaded the file, add a random house id to each student and save that information in a new csv file. You might find the python package 'random' quite useful to automatize the lottery process. We've imported the package for you and provided an example.
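Before working on the real file, the read-append-write pattern described above can be rehearsed on an in-memory CSV. This is only a sketch: `io.StringIO` stands in for the precirculated students.csv, and the three student names are made up:

```python
import csv
import io
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Three made-up students; the real students.csv lives in the template folder.
fake_csv = "Ada,Lovelace\nAlan,Turing\nGrace,Hopper\n"

reader = csv.reader(io.StringIO(fake_csv))
out = io.StringIO()
writer = csv.writer(out)

for row in reader:
    row.append(random.randint(0, 11))  # append a random house id (0-11)
    writer.writerow(row)

print(out.getvalue())
```

In the real task you would open the files on disk instead and, as in the precirculated example, write a header row first.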
<div style='background: #ADCAD6; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

Examples and precirculated code:

```
# house id lookup tables
house_to_id = {
  'Adams House': 0,
  'Cabot House': 1,
  'Currier House': 2,
  'Dunster House': 3,
  'Eliot House': 4,
  'Kirkland House': 5,
  'Leverett House': 6,
  'Lowell House': 7,
  'Mather House': 8,
  'Pforzheimer House': 9,
  'Quincy House': 10,
  'Winthrop House': 11
}

id_to_house = {
  0: 'Adams House',
  1: 'Cabot House',
  2: 'Currier House',
  3: 'Dunster House',
  4: 'Eliot House',
  5: 'Kirkland House',
  6: 'Leverett House',
  7: 'Lowell House',
  8: 'Mather House',
  9: 'Pforzheimer House',
  10: 'Quincy House',
  11: 'Winthrop House'
}

# importing useful python packages
import random
import csv

# some example code snippets of how to load a csv file and how to write into one

# read
file_read = open("data/students.csv", "r")
reader = csv.reader(file_read)

for row in reader:
    print(row)
    break  # breaking after the first element; feel free to check out the entire data structure

file_read.close()

# write - notice that the file doesn't have to exist beforehand!
# csv.writer will create the file automatically, which is very useful!
file_write = open('students_with_house.csv', 'w', newline='')
writer = csv.writer(file_write)

# we just write one row here. It might be useful to put this line into a loop when automatizing things
writer.writerow(['first_name', 'last_name', 'HUID', 'email', 'house_id'])
file_write.close()

# example - generate a random integer between 1 and 10.
example_random = random.randint(1, 10)
print(example_random)
```

<div style='background: #ADCAD6; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

Your turn - load the csv file, create a random number for each student between 0-11, and store all students in a new csv file with their respective house assignments. A for loop might come in handy.
```
# your code here
file_read = open("data/students.csv", "r")
reader = csv.reader(file_read)

# write - notice that the file doesn't have to exist beforehand!
# csv.writer will create the file automatically, which is very useful!
file_write = open('students_with_house.csv', 'w', newline='')
writer = csv.writer(file_write)

for row in reader:
    student = row
    house_tmp = random.randint(0, 11)
    student.append(house_tmp)
    writer.writerow(student)

file_write.close()
```

<div style='background: #ADCAD6; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

Write a small program that makes sure that you've successfully created and populated a csv with all students and their assigned houses.

```
# your code here
file_test = open('students_with_house.csv', 'r')
reader = csv.reader(file_test)

for row in reader:
    print(row)
    break
```

<div style='background: #ADCAD6; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

## Task #2 - generate a file for a house on demand

OK, you've helped the college out with the lottery, but now the house administrators are struggling a bit because they have all 2000 students in one file but only care about the students that were assigned to their particular house. Write a small program that solves that task on demand and generates a csv for them with only their students. You can write a program that does this task on demand for a given house, or you can generate a csv for each house in advance.
```
# your code here
house = 'Adams House'
house_id = house_to_id['Adams House']

file_read = open('students_with_house.csv', 'r')
reader = csv.reader(file_read)

file_write = open('adams_students.csv', 'w', newline='')
writer = csv.writer(file_write)

for row in reader:
    if int(row[4]) == house_id:
        writer.writerow(row)

file_write.close()

def make_house_file(house):
    house_id = house_to_id[house]

    file_read = open('students_with_house.csv', 'r')
    reader = csv.reader(file_read)

    file_write = open(house + '_students.csv', 'w', newline='')
    writer = csv.writer(file_write)

    for row in reader:
        if int(row[4]) == house_id:
            writer.writerow(row)

    file_write.close()

for house_name in house_to_id:
    make_house_file(house_name)
```

<div style='background: #CBE0A4; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

## Bonus Tasks

1. calculate vacant rooms per house
2. write a program that computes the number of students assigned per house in a given csv
3. write a function that checks whether there are problems with the numbers of students assigned to each house
4. write code that assigns students randomly but in such a way that there are no capacity issues.

<div style='background: #CBE0A4; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

Some house administrators have complained that the list of students is too long to accommodate all new sophomores assigned to their houses. Since some houses are bigger and others are smaller, we cannot simply generate integers and get away with the randomly generated number of students in each house. Rather, we have to check more carefully whether there is still capacity. Below, find two useful dictionaries that should help you to solve this task.
```
# house capacities for the bonus tasks
house_capacity = {
  'Adams House': 411,
  'Cabot House': 362,
  'Currier House': 356,
  'Dunster House': 428,
  'Eliot House': 450,
  'Kirkland House': 400,
  'Leverett House': 480,
  'Lowell House': 450,
  'Mather House': 426,
  'Pforzheimer House': 360,
  'Quincy House': 420,
  'Winthrop House': 500
}

# number of occupied rooms after seniors have left
house_occupied = {
  'Adams House': 236,
  'Cabot House': 213,
  'Currier House': 217,
  'Dunster House': 296,
  'Eliot House': 288,
  'Kirkland House': 224,
  'Leverett House': 233,
  'Lowell House': 242,
  'Mather House': 217,
  'Pforzheimer House': 195,
  'Quincy House': 253,
  'Winthrop House': 310
}

house_names = []
for house in house_capacity:
    house_names.append(house)
```

<div style='background: #CBE0A4; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

Let's start by writing a small program that helps us to calculate the vacant rooms for each house. Try to use a dictionary structure that contains all information for each house. Feel free to also write a few lines that check how many vacant rooms there are in total.
```
vacant_rooms = {}

# your code here
for house in house_names:
    vacant_rooms[house] = house_capacity[house] - house_occupied[house]
# your code ends here

print(vacant_rooms)

# your code here
# take each house name
# add their values
total = 0
for house in house_names:
    total = total + vacant_rooms[house]
    # total += vacant_rooms[house]

print(total)
```

<div style='background: #CBE0A4; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

Let's now write a small function that calculates the number of students assigned per house with our old method and returns a dictionary with that information.

```
helper_dict = {
  "A": 222,
  "B": 123
}

def calculate_students_per_house(filename):
    # helper dict
    helper_dict = {
      'Adams House': 0,
      'Cabot House': 0,
      'Currier House': 0,
      'Dunster House': 0,
      'Eliot House': 0,
      'Kirkland House': 0,
      'Leverett House': 0,
      'Lowell House': 0,
      'Mather House': 0,
      'Pforzheimer House': 0,
      'Quincy House': 0,
      'Winthrop House': 0
    }

    # your code here
    file_read = open(filename, 'r')
    reader = csv.reader(file_read)

    for row in reader:
        house_id = int(row[4])
        house_name = id_to_house[house_id]
        helper_dict[house_name] = helper_dict[house_name] + 1
    # your code ends here

    return helper_dict

assigned_students_random = calculate_students_per_house('students_with_house.csv')
print(assigned_students_random)
```

<div style='background: #CBE0A4; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

Next, let's check by how much we were off for each house with our random approach.

```
def house_assignment_check(assignements_per_house_dict):
    # your code here
    for house in house_names:
        difference = vacant_rooms[house] - assignements_per_house_dict[house]
        if difference < 0:
            print(f'there is a problem with {house}. We have assigned {abs(difference)} too many students')
        else:
            print(f'there is no problem with {house}')

house_assignment_check(assigned_students_random)
```

<div style='background: #CBE0A4; padding: 10px; border: thin solid darkblue; border-radius: 5px'>

Finally, let's write a function that assigns houses more carefully. We can still generate random integers to assign a house, but we need to check whether that house still has capacity. For that reason, please create a function called assign_house() that generates not only a random number, but also checks whether that number is valid, i.e. if that house still has capacity. If there's no capacity, that function should call itself again until it generates a house (id) that still has capacity.

```
vacant_rooms = {'Adams House': 175, 'Cabot House': 149, 'Currier House': 139, 'Dunster House': 132, 'Eliot House': 162, 'Kirkland House': 176, 'Leverett House': 247, 'Lowell House': 208, 'Mather House': 209, 'Pforzheimer House': 165, 'Quincy House': 167, 'Winthrop House': 190}

# solution
def assign_house():
    house_id = random.randint(0, 11)
    house_name = id_to_house[house_id]
    if vacant_rooms[house_name] > 0:
        vacant_rooms[house_name] -= 1
        return house_id
    else:
        return assign_house()

# next, load the students.csv file, read it row by row and use the "assign_house()" function to generate a house for each student.
# your code here
file_read = open("data/students.csv", "r")
reader = csv.reader(file_read)

# write - notice that the file doesn't have to exist beforehand!
# csv.writer will create the file automatically, which is very useful!
file_write = open('students_with_house_correct_capacity.csv', 'w', newline='')
writer = csv.writer(file_write)

for row in reader:
    student = row
    correct_house = assign_house()
    student.append(correct_house)
    writer.writerow(student)

file_write.close()

# finally, check whether your new solution is working flawlessly
# your code here
file_read = open("students_with_house_correct_capacity.csv", "r")
reader = csv.reader(file_read)

for row in reader:
    print(row)
    break

assigned_students_per_house = calculate_students_per_house('students_with_house_correct_capacity.csv')
assigned_students_per_house

vacant_rooms = {'Adams House': 175, 'Cabot House': 149, 'Currier House': 139, 'Dunster House': 132, 'Eliot House': 162, 'Kirkland House': 176, 'Leverett House': 247, 'Lowell House': 208, 'Mather House': 209, 'Pforzheimer House': 165, 'Quincy House': 167, 'Winthrop House': 190}

house_assignment_check(assigned_students_per_house)
```
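The recursive `assign_house()` above calls itself until it draws a house with free capacity. The same idea works iteratively, which avoids growing the call stack when nearly all houses are full; here is a sketch on a tiny hypothetical vacancy table (three fake houses, not the real twelve):

```python
import random

random.seed(1)

# Hypothetical tiny vacancy table for the sketch: house_id -> free rooms
vacant = {0: 1, 1: 0, 2: 2}

def assign_house_iterative(vacant):
    """Draw random house ids until one with remaining capacity comes up."""
    while True:
        house_id = random.choice(list(vacant))
        if vacant[house_id] > 0:
            vacant[house_id] -= 1
            return house_id

# Three students for three free rooms: every room gets used exactly once.
assigned = [assign_house_iterative(vacant) for _ in range(3)]
print(sorted(assigned))  # [0, 2, 2]
print(vacant)            # {0: 0, 1: 0, 2: 0}
```

Because total capacity equals the number of students here, the multiset of assignments is fully determined even though the draw order is random.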
# Autobatching log-densities example [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google/jax/blob/master/docs/notebooks/vmapped_log_probs.ipynb) This notebook demonstrates a simple Bayesian inference example where autobatching makes user code easier to write, easier to read, and less likely to include bugs. Inspired by a notebook by @davmre. ``` import functools import itertools import re import sys import time from matplotlib.pyplot import * import jax from jax import lax import jax.numpy as jnp import jax.scipy as jsp from jax import random import numpy as np import scipy as sp ``` ## Generate a fake binary classification dataset ``` np.random.seed(10009) num_features = 10 num_points = 100 true_beta = np.random.randn(num_features).astype(jnp.float32) all_x = np.random.randn(num_points, num_features).astype(jnp.float32) y = (np.random.rand(num_points) < sp.special.expit(all_x.dot(true_beta))).astype(jnp.int32) y ``` ## Write the log-joint function for the model We'll write a non-batched version, a manually batched version, and an autobatched version. ### Non-batched ``` def log_joint(beta): result = 0. # Note that no `axis` parameter is provided to `jnp.sum`. result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=1.)) result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta)))) return result log_joint(np.random.randn(num_features)) # This doesn't work, because we didn't write `log_prob()` to handle batching. try: batch_size = 10 batched_test_beta = np.random.randn(batch_size, num_features) log_joint(np.random.randn(batch_size, num_features)) except ValueError as e: print("Caught expected exception " + str(e)) ``` ### Manually batched ``` def batched_log_joint(beta): result = 0. # Here (and below) `sum` needs an `axis` parameter. 
At best, forgetting to set axis # or setting it incorrectly yields an error; at worst, it silently changes the # semantics of the model. result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=1.), axis=-1) # Note the multiple transposes. Getting this right is not rocket science, # but it's also not totally mindless. (I didn't get it right on the first # try.) result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta.T).T)), axis=-1) return result batch_size = 10 batched_test_beta = np.random.randn(batch_size, num_features) batched_log_joint(batched_test_beta) ``` ### Autobatched with vmap It just works. ``` vmap_batched_log_joint = jax.vmap(log_joint) vmap_batched_log_joint(batched_test_beta) ``` ## Self-contained variational inference example A little code is copied from above. ### Set up the (batched) log-joint function ``` @jax.jit def log_joint(beta): result = 0. # Note that no `axis` parameter is provided to `jnp.sum`. result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=10.)) result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta)))) return result batched_log_joint = jax.jit(jax.vmap(log_joint)) ``` ### Define the ELBO and its gradient ``` def elbo(beta_loc, beta_log_scale, epsilon): beta_sample = beta_loc + jnp.exp(beta_log_scale) * epsilon return jnp.mean(batched_log_joint(beta_sample), 0) + jnp.sum(beta_log_scale - 0.5 * np.log(2*np.pi)) elbo = jax.jit(elbo) elbo_val_and_grad = jax.jit(jax.value_and_grad(elbo, argnums=(0, 1))) ``` ### Optimize the ELBO using SGD ``` def normal_sample(key, shape): """Convenience function for quasi-stateful RNG.""" new_key, sub_key = random.split(key) return new_key, random.normal(sub_key, shape) normal_sample = jax.jit(normal_sample, static_argnums=(1,)) key = random.PRNGKey(10003) beta_loc = jnp.zeros(num_features, jnp.float32) beta_log_scale = jnp.zeros(num_features, jnp.float32) step_size = 0.01 batch_size = 128 epsilon_shape = (batch_size, 
num_features) for i in range(1000): key, epsilon = normal_sample(key, epsilon_shape) elbo_val, (beta_loc_grad, beta_log_scale_grad) = elbo_val_and_grad( beta_loc, beta_log_scale, epsilon) beta_loc += step_size * beta_loc_grad beta_log_scale += step_size * beta_log_scale_grad if i % 10 == 0: print('{}\t{}'.format(i, elbo_val)) ``` ### Display the results Coverage isn't quite as good as we might like, but it's not bad, and nobody said variational inference was exact. ``` figure(figsize=(7, 7)) plot(true_beta, beta_loc, '.', label='Approximated Posterior Means') plot(true_beta, beta_loc + 2*jnp.exp(beta_log_scale), 'r.', label='Approximated Posterior $2\sigma$ Error Bars') plot(true_beta, beta_loc - 2*jnp.exp(beta_log_scale), 'r.') plot_scale = 3 plot([-plot_scale, plot_scale], [-plot_scale, plot_scale], 'k') xlabel('True beta') ylabel('Estimated beta') legend(loc='best') ```
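The contract `vmap` provides can be stated without JAX at all: applying the unbatched function to each row must agree with the manually batched version. A plain-NumPy sketch of that check, using a simplified log-prior-style term rather than the notebook's full model:

```python
import numpy as np

def log_prior(beta):
    # Unbatched: beta has shape (num_features,)
    return np.sum(-0.5 * beta**2)

def batched_log_prior(beta):
    # Manually batched: beta has shape (batch, num_features);
    # the axis argument is exactly what's easy to get wrong.
    return np.sum(-0.5 * beta**2, axis=-1)

rng = np.random.default_rng(0)
batch = rng.normal(size=(10, 4))

# Row-by-row application of the unbatched function...
row_by_row = np.array([log_prior(b) for b in batch])

# ...must match the hand-batched version on every example.
assert np.allclose(row_by_row, batched_log_prior(batch))
```

`jax.vmap(log_joint)` automates exactly this transformation, so the model code only ever has to be written in its unbatched form.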
# PCA with Cary5000 data for deep-UV spectra (190-300 nm)

```
# Import packages
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

from mpl_toolkits import mplot3d
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

plt.style.use('ggplot')

# Set seed
seed = 4

# Import data
data = pd.read_csv('Datasets/urea_saline_cary5000.csv')

# Define features and targets
X = data.drop(data.columns[0:2204], axis=1)
y = data['Urea Concentration (mM)']

# Normalize data
sc = StandardScaler()
X = sc.fit_transform(X)

# Do PCA
pca = PCA(n_components=10, random_state=seed)
X_pca = pca.fit_transform(X)
print("Variance explained by all 10 PC's =", sum(pca.explained_variance_ratio_ * 100))

# Elbow Plot
plt.plot(np.cumsum(pca.explained_variance_ratio_), color='blueviolet')
plt.xlabel('Number of components')
plt.ylabel('Explained variance (%)')
plt.savefig('elbow_plot.png', dpi=100)

np.cumsum(pca.explained_variance_ratio_)

# If we apply PCA with n_components=2
pca_2 = PCA(n_components=2, random_state=seed)
X_pca_2 = pca_2.fit_transform(X)

# Plot it
plt.figure(figsize=(10, 7))
sns.scatterplot(x=X_pca_2[:, 0], y=X_pca_2[:, 1], s=70, hue=y, palette='viridis')
plt.title('2D Scatterplot: 95.5% of the variability captured', pad=15)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')

# Plot it in 3D
pca_3 = PCA(n_components=3, random_state=seed)
X_pca_3 = pca_3.fit_transform(X)

fig = plt.figure(figsize=(12, 8))
ax = plt.axes(projection='3d')
sctt = ax.scatter3D(X_pca_3[:, 0], X_pca_3[:, 1], X_pca_3[:, 2], c=y, s=50, alpha=0.6)
plt.title('3D Scatterplot: 98.0% of the variability captured', pad=15)
ax.set_xlabel('First principal component')
ax.set_ylabel('Second principal component')
ax.set_zlabel('Third principal component')
plt.savefig('3d_scatterplot.png')
```

## Drop outliers - data from 02/11/2022

```
data = pd.read_csv('Datasets/urea_saline_cary5000.csv')
data = data.drop(data.index[17:29])
data.reset_index(inplace=True)

# Define features and targets
X = data.drop(data.columns[0:2205], axis=1)
y = data['Urea Concentration (mM)']

# Normalize data
sc = StandardScaler()
X = sc.fit_transform(X)

# Do PCA
pca = PCA(n_components=10, random_state=seed)
X_pca = pca.fit_transform(X)
print("Variance explained by all 10 PC's =", sum(pca.explained_variance_ratio_ * 100))

# Elbow Plot
plt.plot(np.cumsum(pca.explained_variance_ratio_), color='blueviolet')
plt.xlabel('Number of components')
plt.ylabel('Explained variance (%)')
plt.savefig('elbow_plot.png', dpi=100)

np.cumsum(pca.explained_variance_ratio_)

# If we apply PCA with n_components=2
pca_2 = PCA(n_components=2, random_state=seed)
X_pca_2 = pca_2.fit_transform(X)

# Plot it
plt.figure(figsize=(10, 7))
sns.scatterplot(x=X_pca_2[:, 0], y=X_pca_2[:, 1], s=70, hue=y, palette='viridis')
plt.title('2D Scatterplot: 96.4% of the variability captured', pad=15)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')

# Plot it in 3D
pca_3 = PCA(n_components=3, random_state=seed)
X_pca_3 = pca_3.fit_transform(X)

fig = plt.figure(figsize=(12, 8))
ax = plt.axes(projection='3d')
sctt = ax.scatter3D(X_pca_3[:, 0], X_pca_3[:, 1], X_pca_3[:, 2], c=y, s=50, alpha=0.6)
plt.title('3D Scatterplot: 98.4% of the variability captured', pad=15)
ax.set_xlabel('First principal component')
ax.set_ylabel('Second principal component')
ax.set_zlabel('Third principal component')
plt.savefig('3d_scatterplot.png')
```
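Independent of the Cary5000 data, the explained-variance ratios `PCA` reports can be recovered from an SVD of the centered data matrix. This sketch on synthetic data (shapes chosen arbitrarily) checks the two properties the elbow plots above rely on:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 6))      # 50 synthetic "spectra", 6 "wavelengths"
Xc = X - X.mean(axis=0)           # center, as StandardScaler does (minus the scaling)

# Each singular value s_i gives a per-component variance s_i**2 / (n - 1),
# so the explained-variance ratios are s_i**2 normalized by their sum.
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained_ratio = s**2 / np.sum(s**2)

assert np.all(np.diff(explained_ratio) <= 0)     # sorted, largest component first
assert np.isclose(np.sum(explained_ratio), 1.0)  # all components together capture everything
```

Because all six components are kept here, the ratios sum to one; keeping fewer components than features, as in the `n_components=10` runs above, truncates this list, which is why their cumulative sums top out below 100%.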
<a href="https://colab.research.google.com/github/agungsantoso/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/Part%201%20-%20Tensors%20in%20PyTorch%20(Exercises).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. ## Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. 
<img src="https://github.com/agungsantoso/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/assets/simple_neuron.png?raw=1" width=400px> Mathematically this looks like: $$ \begin{align} y &= f(w_1 x_1 + w_2 x_2 + b) \\ y &= f\left(\sum_i w_i x_i +b \right) \end{align} $$ With vectors this is the dot/inner product of two vectors: $$ h = \begin{bmatrix} x_1 \, x_2 \cdots x_n \end{bmatrix} \cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} $$ ## Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. <img src="https://github.com/agungsantoso/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/assets/tensor_examples.svg?raw=1" width=600px> With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
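Before moving to tensors, the single-neuron equation above can be evaluated in plain Python with made-up numbers, just to make the weighted-sum-then-activation step concrete (the inputs, weights, and bias here are arbitrary):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

x = [0.5, -1.0]   # inputs
w = [0.8, 0.2]    # weights
b = 0.1           # bias

# y = f(w1*x1 + w2*x2 + b) = f(0.4 - 0.2 + 0.1) = f(0.3)
y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
print(round(y, 4))  # 0.5744
```

The tensor version in the next cell computes exactly this, just with the sum expressed as tensor operations.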
```
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())

cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'

!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
import torch

# First, import PyTorch
import torch

def activation(x):
    """ Sigmoid activation function

        Arguments
        ---------
        x: torch.Tensor
    """
    return 1/(1+torch.exp(-x))

### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable

# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```

Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:

`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.

`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.

Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.

PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.

```
## Calculate the output of this network using the weights and bias tensors
y = activation(torch.sum(features * weights) + bias)
```

You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.

Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`.
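The shape rule behind that error is the ordinary matrix-multiplication constraint, and NumPy (whose `@` operator behaves like `torch.mm` for 2-D arrays) shows it just as well:

```python
import numpy as np

a = np.ones((1, 5))
b = np.ones((1, 5))

# (1, 5) @ (1, 5) violates the rule: a has 5 columns but b has only 1 row
try:
    a @ b
except ValueError as err:
    print("shape mismatch:", err)

# Transposing the second operand fixes it: (1, 5) @ (5, 1) -> (1, 1)
print((a @ b.T).shape)  # (1, 1)
```

The torch solution below does the same thing with `weights.view(5, 1)` instead of a transpose.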
This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ``` ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ``` ### Stack them up! That's how you can calculate the output for a single neuron. 
The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.

<img src='https://github.com/agungsantoso/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/assets/multilayer_diagram_weights.png?raw=1' width=450px>

The first layer, shown on the bottom here, contains the inputs and is understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated

$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
           w_{11} & w_{12} \\
           w_{21} & w_{22} \\
           \vdots & \vdots \\
           w_{n1} & w_{n2}
\end{bmatrix}
$$

The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply as

$$
y = f_2 \! \left(\, f_1 \!
\left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$

```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable

# Features are 3 random normal variables
features = torch.randn((1, 3))

# Define the size of each layer in our network
n_input = features.shape[1]   # Number of input units, must match number of input features
n_hidden = 2                  # Number of hidden units
n_output = 1                  # Number of output units

# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)

# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```

> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.

```
## Your solution here
h = activation(torch.mm(features, W1) + B1)
output = activation(torch.mm(h, W2) + B2)
output
```

If you did this correctly, you should see the output `tensor([[ 0.3171]])`.

The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.

## Numpy to Torch and back

Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.

```
import numpy as np
a = np.random.rand(4,3)
a

b = torch.from_numpy(a)
b

b.numpy()
```

The memory is shared between the Numpy array and Torch tensor, so if you change the values in place in one object, the other will change as well.

```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

# Numpy array matches new values from Tensor
a
```
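The multi-layer exercise above is just two matrix multiplications with an activation in between. As a shape sanity check, here is the same forward pass sketched in plain NumPy (a sketch with illustrative random weights, not the notebook's torch tensors):

```python
import numpy as np

def sigmoid(x):
    # Same sigmoid as the notebook's `activation` function
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(7)
x  = rng.standard_normal((1, 3))   # one sample, 3 input features
W1 = rng.standard_normal((3, 2))   # input -> hidden
W2 = rng.standard_normal((2, 1))   # hidden -> output
B1 = rng.standard_normal((1, 2))
B2 = rng.standard_normal((1, 1))

h = sigmoid(x @ W1 + B1)           # shape (1, 2)
y = sigmoid(h @ W2 + B2)           # shape (1, 1)
print(h.shape, y.shape)
```

Note how the shapes chain together: `(1, 3) @ (3, 2)` gives `(1, 2)`, then `(1, 2) @ (2, 1)` gives `(1, 1)`, matching $y = f_2(f_1(\vec{x}\mathbf{W_1})\mathbf{W_2})$.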
## **University of Toronto - CSC413 - Neural Networks and Deep Learning**

## **Programming Assignment 4 - StyleGAN2-Ada**

This is a self-contained notebook that allows you to play around with a pre-trained StyleGAN2-Ada generator.

Disclaimer: Some code was borrowed from the official StyleGAN documentation on GitHub: https://github.com/NVlabs/stylegan

Make sure to set your runtime to GPU.

Remember to save your progress periodically!

```
# Run this for Google CoLab (use TensorFlow 1.x)
%tensorflow_version 1.x

# clone StyleGAN2 Ada
!git clone https://github.com/NVlabs/stylegan2-ada.git

# setup some environments (Do not change any of the following)
import sys
import pickle
import os
import numpy as np
from IPython.display import Image
import PIL.Image
from PIL import Image
import matplotlib.pyplot as plt

sys.path.insert(0, "/content/stylegan2-ada") #do not remove this line

import dnnlib
import dnnlib.tflib as tflib
import IPython.display
from google.colab import files
```

Next, we will load a pre-trained StyleGAN2-Ada network. Each of the following pre-trained networks is specialized to generate one type of image.
```
# The pre-trained networks are stored as standard pickle files
# Uncomment one of the following URLs to begin
# If you wish, you can also find other pre-trained networks online

#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/ffhq.pkl" # Human faces
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/cifar10.pkl" # CIFAR10, these images are a bit too tiny for our experiment
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/afhqwild.pkl" # wild animal pictures
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/metfaces.pkl" # European portrait paintings
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/afhqcat.pkl" # cats
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/afhqdog.pkl" # dogs

tflib.init_tf() # this creates a default Tensorflow session

# we are now going to load the StyleGAN2-Ada model
# The following code downloads the file and unpickles it to yield 3 instances of dnnlib.tflib.Network.
with dnnlib.util.open_url(URL) as fp:
    _G, _D, Gs = pickle.load(fp)

# Here is a brief description of _G, _D, Gs; for details see the official StyleGAN documentation
# _G = Instantaneous snapshot of the generator. Mainly useful for resuming a previous training run.
# _D = Instantaneous snapshot of the discriminator. Mainly useful for resuming a previous training run.
# Gs = Long-term average of the generator. Yields higher-quality results than the instantaneous snapshot.

# We will work with Gs
```

## Part 1 Sampling and Identifying Fakes

Open https://github.com/NVlabs/stylegan and follow the instructions starting from *There are three ways to use the pre-trained generator...*. Complete the generate_latent_code and generate_images functions in the Colab notebook to generate a small row of $3 - 5$ images. You do not need to include these images in your PDF submission.
If you wish, you can use https://www.whichfaceisreal.com/learn.html as a guideline to spot any imperfections that you detect in these images, e.g., "blob artifacts", and make a short remark for your attached images.

```
# Sample a batch of latent codes {z_1, ...., z_B}, B is your batch size.
def generate_latent_code(SEED, BATCH, LATENT_DIMENSION = 512):
    """
    This function samples a batch of 512-dimensional random latent codes
      - SEED: int
      - BATCH: int that specifies the number of latent codes. Recommended batch_size is 3 - 6
      - LATENT_DIMENSION is by default 512 (see Karras et al.)

    You should use np.random.RandomState to construct a random number generator, say rnd.
    Then use rnd.randn along with your BATCH and LATENT_DIMENSION to generate your latent codes.
    This samples a batch of latent codes from a normal distribution
    https://numpy.org/doc/stable/reference/random/generated/numpy.random.RandomState.randn.html

    Return latent_codes, which is a 2D array with dimensions BATCH times LATENT_DIMENSION
    """
    ################################################################################
    ########################## COMPLETE THE FOLLOWING ##############################
    ################################################################################
    latent_codes = ...
    ################################################################################
    return latent_codes

# Sample images from your latent codes https://github.com/NVlabs/stylegan
# You can use their default settings
################################################################################
########################## COMPLETE THE FOLLOWING ##############################
################################################################################
def generate_images(SEED, BATCH, TRUNCATION = 0.7):
    """
    This function generates a batch of images from latent codes.
      - SEED: int
      - BATCH: int that specifies the number of latent codes to be generated
      - TRUNCATION: float between [-1, 1] that decides the amount of clipping to apply
        to the latent code distribution; the recommended setting is 0.7

    You will use Gs.run() to sample images. See https://github.com/NVlabs/stylegan for details.
    You may use their default settings.
    """
    # Sample a batch of latent codes z using the generate_latent_code function
    latent_codes = ...

    # Convert latent codes into images by following https://github.com/NVlabs/stylegan
    fmt = dict(...)
    images = Gs.run(...)

    return PIL.Image.fromarray(np.concatenate(images, axis=1) , 'RGB')
################################################################################

# Generate your images
generate_images(...)
```

## **Part 2 Interpolation**

Complete the interpolate_images function using linear interpolation between two latent codes,

\begin{equation}
z = r z_1 + (1-r) z_2, \quad r \in [0, 1]
\end{equation}

and feeding this interpolation through the StyleGAN2-Ada generator Gs as done in generate_images. Include a small row of interpolated images in your PDF submission, as a screenshot if necessary to keep the file size small.

```
################################################################################
########################## COMPLETE THE FOLLOWING ##############################
################################################################################
def interpolate_images(SEED1, SEED2, INTERPOLATION, BATCH = 1, TRUNCATION = 0.7):
    """
      - SEED1, SEED2: int, seeds used to generate the two latent codes
      - INTERPOLATION: int, the number of interpolation steps between the two images;
        recommended setting 6 - 10
      - BATCH: int, the number of latent codes to generate. In this experiment, it is 1.
      - TRUNCATION: float between [-1, 1] that decides the amount of clipping to apply
        to the latent code distribution; the recommended setting is 0.7

    You will interpolate between two latent codes that you generate, using the above formula.
    You can generate an interpolation variable using np.linspace
    https://numpy.org/doc/stable/reference/generated/numpy.linspace.html

    This function should return an interpolated image. Include a screenshot in your submission.
    """
    latent_code_1 = ...
    latent_code_2 = ...

    images = Gs.run(...)

    return PIL.Image.fromarray(np.concatenate(images, axis=1) , 'RGB')
################################################################################

# Create an interpolation of your generated images
interpolate_images(...)
```

After you have generated interpolated images, an interesting task would be to see how you can create a GIF. Feel free to explore a little bit more.

## **Part 3 Style Mixing and Fine Control**

In the final part, you will reproduce the famous style mixing example from the original StyleGAN paper.

### Step 1.

We will first learn how to generate from sub-networks of the StyleGAN generator.

```
# You will generate images from sub-networks of the StyleGAN generator
# Similar to Gs, the sub-networks are represented as independent instances of dnnlib.tflib.Network
# Complete the function by following https://github.com/NVlabs/stylegan
# and look up Gs.components.mapping, Gs.components.synthesis, Gs.get_var
# Remember to use the truncation trick as described in the handout after you obtain src_dlatents from Gs.components.mapping.run
def generate_from_subnetwork(src_seeds, LATENT_DIMENSION = 512):
    """
      - src_seeds: a list of int, where each int is used to generate a latent code, e.g., [1,2,3]
      - LATENT_DIMENSION: by default 512

    You will complete the code snippet in the Write Your Code Here block.
    This generates several images from a sub-network of the generator.
    To prevent mistakes, we have provided the variable names, which correspond to the ones
    in the StyleGAN documentation. You should use their convention.
    """
    # default arguments to Gs.components.synthesis.run, this is given to you.
    synthesis_kwargs = {
        'output_transform': dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True),
        'randomize_noise': False,
        'minibatch_size': 4
    }
    ############################################################################
    ########################## WRITE YOUR CODE HERE ############################
    ############################################################################
    truncation = ...
    src_latents = ...
    src_dlatents = ...
    w_avg = ...
    src_dlatents = ...
    all_images = Gs.components.synthesis.run(...)
    ############################################################################
    return PIL.Image.fromarray(np.concatenate(all_images, axis=1) , 'RGB')

# generate several images from the sub-network
generate_from_subnetwork(...)
```

### Step 2.

Initialize the col_seeds, row_seeds and col_styles and generate a grid of images. A recommended example for your experiment is as follows:

* col_seeds = [1, 2, 3, 4, 5]
* row_seeds = [6]
* col_styles = [1, 2, 3, 4, 5]

and

* col_seeds = [1, 2, 3, 4, 5]
* row_seeds = [6]
* col_styles = [8, 9, 10, 11, 12]

You will then incorporate your code from generate_from_subnetwork into the cell below. Experiment with the col_styles variable. Explain what col_styles does; for instance, roughly describe what these numbers correspond to. Create a simple experiment to back up your argument. Include **at maximum two** sets of images that illustrate the effect of changing col_styles, along with your explanation. Include them as screenshots to minimize the size of the file. Make reference to the original StyleGAN or the StyleGAN2 paper by Karras et al.
as needed https://arxiv.org/pdf/1812.04948.pdf https://arxiv.org/pdf/1912.04958.pdf ``` ################################################################################ ####################COMPLETE THE NEXT THREE LINES############################### ################################################################################ col_seeds = ... row_seeds = ... col_styles = ... ################################################################################ src_seeds = list(set(row_seeds + col_seeds)) # default arguments to Gs.components.synthesis.run, do not change synthesis_kwargs = { 'output_transform': dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True), 'randomize_noise': False, 'minibatch_size': 4 } ################################################################################ ########################## COMPLETE THE FOLLOWING ############################## ################################################################################ # Copy the #### WRITE YOUR CODE HERE #### portion from generate_from_subnetwork() all_images = Gs.components.synthesis.run(...) 
################################################################################ # (Do not change) image_dict = {(seed, seed): image for seed, image in zip(src_seeds, list(all_images))} w_dict = {seed: w for seed, w in zip(src_seeds, list(src_dlatents))} # Generating Images (Do not Change) for row_seed in row_seeds: for col_seed in col_seeds: w = w_dict[row_seed].copy() w[col_styles] = w_dict[col_seed][col_styles] image = Gs.components.synthesis.run(w[np.newaxis], **synthesis_kwargs)[0] image_dict[(row_seed, col_seed)] = image # Create an Image Grid (Do not Change) def create_grid_images(): _N, _C, H, W = Gs.output_shape canvas = PIL.Image.new('RGB', (W * (len(col_seeds) + 1), H * (len(row_seeds) + 1)), 'black') for row_idx, row_seed in enumerate([None] + row_seeds): for col_idx, col_seed in enumerate([None] + col_seeds): if row_seed is None and col_seed is None: continue key = (row_seed, col_seed) if row_seed is None: key = (col_seed, col_seed) if col_seed is None: key = (row_seed, row_seed) canvas.paste(PIL.Image.fromarray(image_dict[key], 'RGB'), (W * col_idx, H * row_idx)) return canvas # The following code will create your image, save it as a png, and display the image # Run the following code after you have set your row_seed, col_seed and col_style image_grid = create_grid_images() image_grid.save('image_grid.png') im = Image.open("image_grid.png") im ```
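As a side note, the linear interpolation at the heart of Part 2 can be checked without the generator at all. A minimal NumPy sketch (the seeds and the number of steps are illustrative):

```python
import numpy as np

LATENT_DIMENSION = 512
z1 = np.random.RandomState(1).randn(1, LATENT_DIMENSION)
z2 = np.random.RandomState(2).randn(1, LATENT_DIMENSION)

# r sweeps from 0 to 1; each row of `codes` is z = r*z1 + (1-r)*z2
ratios = np.linspace(0.0, 1.0, num=8)
codes = np.concatenate([r * z1 + (1 - r) * z2 for r in ratios], axis=0)
print(codes.shape)  # (8, 512)
```

Each row of `codes` would then be fed through `Gs.run` exactly as in `generate_images`; note that `r = 0` recovers `z2` and `r = 1` recovers `z1`.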
```
#all_slow
```

# Tutorial - Migrating from Lightning

> Incrementally adding fastai goodness to your Lightning training

We're going to use the MNIST training code from Lightning's 'Quick Start' (as at August 2020), converted to a module. See `migrating_lightning.py` for the Lightning code we are importing here.

```
from migrating_lightning import *

from fastai2.vision.all import *
```

## Using fastai's training loop

We can use the Lightning module directly:

```
model = LitModel()
```

To use it in fastai, we first pull the DataLoaders from the module into a `DataLoaders` object:

```
data = DataLoaders(model.train_dataloader(), model.val_dataloader()).cuda()
```

We can now create a `Learner` and fit:

```
learn = Learner(data, model, loss_func=F.cross_entropy, opt_func=Adam, metrics=accuracy)
learn.fit_one_cycle(1, 0.001)
```

As you can see, migrating from Lightning allowed us to reduce the amount of code, and doesn't require you to change any of your existing data pipelines, optimizers, loss functions, models, etc.

Once you've made this change, you can then benefit from fastai's rich set of callbacks, transforms, visualizations, and so forth.

For instance, in the Lightning example, Tensorboard support was defined as a special-case "logger". In fastai, Tensorboard is just another `Callback` that you can add, with the parameter `cbs=Tensorboard`, when you create your `Learner`. The callbacks all work together, so you can add and remove any schedulers, loggers, visualizers, and so forth. You don't have to learn about special types of functionality for each - they are all just plain callbacks.

Note that fastai is very different from Lightning, in that it is much more than just a training loop (although we're only using the training loop in this example) - it is a complete framework including GPU-accelerated transformations, end-to-end inference, integrated applications for vision, text, tabular, and collaborative filtering, and so forth.
You can use any part of the framework on its own, or combine them together, as described in the [fastai paper](https://arxiv.org/abs/2002.04688). ### Taking advantage of fastai Data Blocks One problem in the Lightning example is that it doesn't actually use a validation set - it's just using the training set a second time as a validation set. You might prefer to use fastai's Data Block API, which makes it really easy to create, visualize, and test your input data processing. Here's how you can create input data for MNIST, for instance: ``` mnist = DataBlock(blocks=(ImageBlock(cls=PILImageBW), CategoryBlock), get_items=get_image_files, splitter=GrandparentSplitter(), get_y=parent_label) ``` Here, we're telling `DataBlock` that we have a B&W image input, and a category output, our input items are file names of images, the images are labeled based on the name of the parent folder, and they are split by training vs validation based on the grandparent folder name. It's important to actually look at your data, so fastai also makes it easy to visualize your inputs and outputs, for instance: ``` dls = mnist.dataloaders(untar_data(URLs.MNIST_TINY)) dls.show_batch(max_n=9, figsize=(4,4)) ```
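As an aside, `GrandparentSplitter` decides train vs. validation from each file's grandparent folder name (its defaults are the folder names `train` and `valid`). Conceptually it behaves like this plain-Python sketch (`grandparent_split` is an illustrative helper, not fastai's implementation):

```python
from pathlib import Path

def grandparent_split(paths, train_name="train", valid_name="valid"):
    """Return (train, valid) index lists based on each path's grandparent folder name."""
    train, valid = [], []
    for i, p in enumerate(map(Path, paths)):
        name = p.parent.parent.name
        if name == train_name:
            train.append(i)
        elif name == valid_name:
            valid.append(i)
    return train, valid

paths = [
    "mnist_tiny/train/3/9932.png",
    "mnist_tiny/train/7/725.png",
    "mnist_tiny/valid/3/9243.png",
]
print(grandparent_split(paths))  # ([0, 1], [2])
```

This is why the MNIST folder layout (`train/<label>/...` and `valid/<label>/...`) gives you a real validation set with no extra code.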
```
# Import Required Libraries
try:
    import tensorflow as tf
    import os
    import random
    import numpy as np
    from tqdm import tqdm
    from skimage.io import imread, imshow
    from skimage.transform import resize
    import matplotlib.pyplot as plt
    from tensorflow.keras.models import load_model
    from keras.models import model_from_json
    print("----Libraries Imported----")
except:
    print("----Libraries Not Imported----")

# checking the content of the current directory
os.listdir()

# Setting up paths
seed = 42
np.random.seed(seed)  # call the function; `np.random.seed = seed` would overwrite it

IMG_WIDTH = 128
IMG_HEIGHT = 128
IMG_CHANNELS = 3

TRAIN_PATH = 'E:/Projects 6th SEM/Orange-Fruit-Recognition-Using-Image-Segmentation/Image Segmentaion/train_data/'
TEST_PATH = 'E:/Projects 6th SEM/Orange-Fruit-Recognition-Using-Image-Segmentation/Image Segmentaion/test_data/'

train_ids = next(os.walk(TRAIN_PATH))[1]
test_ids = next(os.walk(TEST_PATH))[1]
print(train_ids)
print(test_ids)

# Loading data
# independent variable
X_train = np.zeros((len(train_ids), IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dtype=np.uint8)
# dependent variable (what we are trying to predict)
Y_train = np.zeros((len(train_ids), IMG_HEIGHT, IMG_WIDTH, 1), dtype=bool)

print('Resizing training images and masks')
for n, id_ in tqdm(enumerate(train_ids), total=len(train_ids)):
    path = TRAIN_PATH + id_
    img = imread(path + '/images/' + id_ + '.jpg')[:,:,:IMG_CHANNELS]
    img = resize(img, (IMG_HEIGHT, IMG_WIDTH), mode='constant', preserve_range=True)
    X_train[n] = img  # Fill empty X_train with values from img
    mask = np.zeros((IMG_HEIGHT, IMG_WIDTH, 1), dtype=bool)
    for mask_file in next(os.walk(path + '/masks/'))[2]:
        mask_ = imread(path + '/masks/' + mask_file)
        mask_ = np.expand_dims(resize(mask_, (IMG_HEIGHT, IMG_WIDTH), mode='constant',
                                      preserve_range=True), axis=-1)
        mask = np.maximum(mask, mask_)
    Y_train[n] = mask

# test images
X_test = np.zeros((len(test_ids), IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dtype=np.uint8)
sizes_test = []
print('Resizing test images')
for n, id_ in tqdm(enumerate(test_ids), total=len(test_ids)):
    path = TEST_PATH + id_
    img = imread(path + '/images/' + id_ + '.jpg')[:,:,:IMG_CHANNELS]
    sizes_test.append([img.shape[0], img.shape[1]])
    img = resize(img, (IMG_HEIGHT, IMG_WIDTH), mode='constant', preserve_range=True)
    X_test[n] = img

print('Done!')

# Showing random images from the dataset
image_x = random.randint(0, len(train_ids) - 1)  # randint is inclusive on both ends
imshow(X_train[image_x])
plt.show()
imshow(np.squeeze(Y_train[image_x]))
plt.show()

from UNet_Model import Segmentation_model
model = Segmentation_model()
model.summary()

################################
# Model checkpoint
with tf.device('/GPU:0'):
    results = model.fit(X_train, Y_train, validation_split=0.1, batch_size=4, epochs=100)
print('Training DONE')

# Plotting Training Results
plt.plot(results.history['accuracy'][0:150])
plt.plot(results.history['val_accuracy'][0:150])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['training_accuracy', 'validation_accuracy'])
plt.show()

plt.plot(results.history['loss'][0:150])
plt.plot(results.history['val_loss'][0:150])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training_loss', 'validation_loss'])
plt.show()

# Saving model
orange_model_json = model.to_json()
with open("Segmentation_model.json", "w") as json_file:
    json_file.write(orange_model_json)
model.save_weights("Orange_Fruit_Weights_segmentation.h5")

# Loading UNet
segmentation_model = model_from_json(open("Segmentation_model.json", "r").read())
segmentation_model.load_weights('Orange_Fruit_Weights_segmentation.h5')

####################################
idx = random.randint(0, len(X_train) - 1)
print(idx)

preds_train = segmentation_model.predict(X_train[:int(X_train.shape[0]*0.9)], verbose=1)
preds_val = segmentation_model.predict(X_train[int(X_train.shape[0]*0.9):], verbose=1)
preds_test = segmentation_model.predict(X_test, verbose=1)

preds_train_t = (preds_train > 0.5).astype(np.uint8)
preds_val_t = (preds_val > 0.5).astype(np.uint8)
preds_test_t = (preds_test > 0.5).astype(np.uint8)

# Perform a sanity check on some random training samples
ix = random.randint(0, len(preds_train_t) - 1)
imshow(X_train[ix])
plt.show()
imshow(np.squeeze(Y_train[ix]))
plt.show()
imshow(np.squeeze(preds_train_t[ix]))
plt.show()

# Perform a sanity check on some random validation samples
ix = random.randint(0, len(preds_val_t) - 1)
imshow(X_train[int(X_train.shape[0]*0.9):][ix])
plt.show()
imshow(np.squeeze(Y_train[int(Y_train.shape[0]*0.9):][ix]))
plt.show()
imshow(np.squeeze(preds_val_t[ix]))
plt.show()

# Loading Classification Model
import Prediction_file as pf
classification_model = pf.Loading_Model()

# Prediction
path1 = 'Images/kiwi.jpg'
path2 = 'Images/Orange.jpg'
pred1 = pf.predicting(path1, classification_model)
pred2 = pf.predicting(path2, classification_model)

from tensorflow.keras.preprocessing.image import load_img, img_to_array

def process_image(path):
    img = load_img(path, target_size=(IMG_WIDTH, IMG_HEIGHT))
    img_tensor = img_to_array(img)
    img_tensor = np.expand_dims(img_tensor, axis=0)
    img_tensor /= 255.0
    return img_tensor

if pred2 > 0.5:
    p = segmentation_model.predict(process_image(path2), verbose=1)
    p_t = (p > 0.5).astype(np.uint8)
    imshow(np.squeeze(p_t))
    plt.show()

p = segmentation_model.predict(process_image(path1), verbose=1)
p_t = (p > 0.5).astype(np.uint8)
imshow(np.squeeze(p_t))
plt.show()
```
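Two small steps in the code above are easy to check in isolation: merging per-object mask files into one ground-truth mask with `np.maximum`, and turning predicted probabilities into a binary mask by thresholding at 0.5. A toy-array sketch (values are illustrative):

```python
import numpy as np

# Merging two single-object masks into one ground-truth mask.
# For boolean masks, np.maximum acts as an elementwise OR.
mask_a = np.array([[1, 0], [0, 0]], dtype=bool)
mask_b = np.array([[0, 0], [0, 1]], dtype=bool)
merged = np.maximum(mask_a, mask_b)
print(merged)

# Thresholding predicted probabilities at 0.5 to get a binary mask
preds = np.array([[0.9, 0.2], [0.4, 0.7]])
preds_t = (preds > 0.5).astype(np.uint8)
print(preds_t)  # [[1 0] [0 1]]
```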
# Lesson 6: pets revisited

```
%reload_ext autoreload
%autoreload 2
%matplotlib inline

from fastai.vision import *
```

Set a batch size of 64. Untar the data at `URLs.PETS`, and set the variable `path` to the returned path with `images` appended to the end.

## Data augmentation

Create a variable `tfms` that captures the output of the `get_transforms` function, with the following arguments:

- max_rotate=20
- max_zoom=1.3
- max_lighting=0.4
- max_warp=0.4
- p_affine=1.
- p_lighting=1.

Explain what each of these does.

Create an `ImageList` from the folder `path` split by a random 20% (using a random seed of 2). Assign it to the variable `src`. What kind of object does this return? Can you find (in code) why this is the case?

Write a function `get_data` that takes `src` and labels it using the regex `([^/]+)_\d+.jpg$`, transforms it with `tfms`, takes `size` as an argument, takes `padding_mode` as an argument that defaults to `reflection`, creates a databunch with batch size `bs`, and normalizes using the imagenet stats. What type would you expect this to return?

Create a variable `data` that calls `get_data` with size 224, `bs=bs`, and padding type `zeros`.

Write a function `_plot` that plots the fourth image in the training dataset. Pass it to `plot_multi` to create a 3x3 grid of augmented images.

Create a new variable `data` with size 224 and the same bs.

```
data = get_data(224,bs)
```

Use the same process to plot a 3x3 grid of 8x8 images of augmented data. This time allow the default padding mode.

## Train a model

Call `gc.collect`. Can you explain what this does?

Create a `cnn_learner` named `learn` with data `data`, architecture resnet34, using the `error_rate` metric, and `bn_final` set to true.

Fit a cycle with 3 epochs, a slice up to 1e-2, with `pct_start=0.8`. Can you explain what `pct_start` does?

Unfreeze the neural net. Fit another cycle with two epochs under the slice (1e-6, 1e-3). Same pct_start.

Create a new `data` object with size 352.
Train for another cycle with 2 epochs, this time with a `max_lr` of `slice(1e-6, 1e-4)`.

Save the model under the name `352`.

## Convolution kernel

Create another new `data` with size 352 and batch size 16.

Create a new learner `learn` with the same specs as earlier, and load the weights from `352` to it.

Set the variable `idx=0`. Set the values returned at position `idx` within the `valid_ds` to `x` and `y`.

Call the `show` method on x.

Return the item at position `idx` in the `y` part of the `valid_ds`.

This is created for you, because it doesn't teach much. Maybe dig into the `expand` method.

Return the shape of `k`.

```
k.shape
```

Get the `x` value of the first item in `valid_ds`, get the `data` property and set it to `t`. What does the data property represent?

Add a new dimension to `t` using the `None` index syntax.

Create an image called `edge` by convolving `t` with our filter `k`. Run `show_image` over `edge`. Hint: you'll have to get the zeroth index of `edge` -- why?

Show the number of classes in `data`.

Print the model.

Print a model summary.

## Heatmap

Get the model out of our learner and set it to `eval` mode.

Get one item from the `x` data you created above. Call this `xb`. Hint: `one_item` returns a tuple, but we only need the first thing.

Create an image from a denormed version of xb. Again, you'll have to index into this. Be sure you can explain why. Call the output `xb_im`.

Put the `xb` variable on the GPU by calling `cuda()`.

Import fastai.callbacks.hooks.

Create a function `hooked_backward` that returns two objects representing the activations and the gradients. Make sure to use `with` statements here so that the hooks are removed after we get our results.

Create two objects, `hook_a` and `hook_g`, with the outputs of `hooked_backward`.

Assign the stored activation outputs to a variable called `acts`. Make sure to call `.cpu` to put this back on the CPU.

Take an average over the channel dimension to get a 2d shape.
Print out the shape.

Write a function `show_heatmap` that does the following:

- takes an argument hm
- creates a new matplotlib axis using `plt.subplots`
- shows `xb_im` on the new axis
- calls `ax.imshow` with arguments `alpha=0.6`, `extent=(0,352,352,0)`, `interpolation='bilinear'`, `cmap='magma'`. Look up what these mean.

Call `show_heatmap` on `avg_acts`.

## Grad-CAM

Paper: [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/abs/1610.02391)

```
grad = hook_g.stored[0][0].cpu()
grad_chan = grad.mean(1).mean(1)
grad.shape,grad_chan.shape

mult = (acts*grad_chan[...,None,None]).mean(0)
show_heatmap(mult)

fn = path/'../other/bulldog_maine.jpg' #Replace with your own image
x = open_image(fn); x

xb,_ = data.one_item(x)
xb_im = Image(data.denorm(xb)[0])
xb = xb.cuda()

hook_a,hook_g = hooked_backward()

acts = hook_a.stored[0].cpu()
grad = hook_g.stored[0][0].cpu()
grad_chan = grad.mean(1).mean(1)
mult = (acts*grad_chan[...,None,None]).mean(0)
show_heatmap(mult)

data.classes[0]

hook_a,hook_g = hooked_backward(0)

acts = hook_a.stored[0].cpu()
grad = hook_g.stored[0][0].cpu()
grad_chan = grad.mean(1).mean(1)
mult = (acts*grad_chan[...,None,None]).mean(0)
show_heatmap(mult)
```

## fin
# Deploy a previously created model in SageMaker Sagemaker decouples model creation/fitting and model deployment. **This short notebook shows how you can deploy a model that you have already created**. It is assumed that you have already created the model and it appears in the `Models` section of the SageMaker console. Obviously, before you deploy a model the model must exist, so please go back and make sure you have already fit/created the model before proceeding. For more information about deploying models, see https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-deploy-model.html ``` import boto3 from time import gmtime, strftime # configs for model, endpoint and batch transform model_name = ( "ENTER MODEL NAME" # enter name of a model from the 'Model panel' in the AWS SageMaker console. ) sm = boto3.client("sagemaker") ``` ## Deploy using an inference endpoint ``` # set endpoint name/config. endpoint_config_name = "DEMO-model-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) endpoint_name = "DEMO-model-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) create_endpoint_config_response = sm.create_endpoint_config( EndpointConfigName=endpoint_config_name, ProductionVariants=[ { "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": model_name, "VariantName": "AllTraffic", } ], ) print("Endpoint Config Arn: " + create_endpoint_config_response["EndpointConfigArn"]) create_endpoint_response = sm.create_endpoint( EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name ) print(create_endpoint_response["EndpointArn"]) resp = sm.describe_endpoint(EndpointName=endpoint_name) status = resp["EndpointStatus"] print("Status: " + status) ``` If you go to the AWS SageMaker service console now, you should see that the endpoint creation is in progress. 
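Once the endpoint reaches `InService`, requests go through the separate `sagemaker-runtime` client rather than the `sagemaker` client used above. The helper below is a hypothetical sketch (not part of the SageMaker API) that just builds the keyword arguments for `invoke_endpoint`, assuming your model container accepts CSV input:

```python
def build_invoke_request(endpoint_name, rows):
    """Build keyword arguments for sagemaker-runtime's invoke_endpoint.

    rows is a list of feature rows; they are serialized as text/csv,
    one observation per line (hypothetical helper for illustration).
    """
    body = "\n".join(",".join(str(v) for v in row) for row in rows)
    return {
        "EndpointName": endpoint_name,
        "ContentType": "text/csv",
        "Body": body,
    }

request = build_invoke_request("DEMO-endpoint", [[5.1, 3.5, 1.4], [6.2, 2.9, 4.3]])
print(request["Body"])
```

You would then send it with `boto3.client("sagemaker-runtime").invoke_endpoint(**request)` and read the prediction from the response's `Body` stream. The exact payload format depends on the model container, so check what your algorithm expects.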
## Deploy using a batch transform job

A batch transform job should be used when you want to create inferences on a dataset and then shut down the resources when inference is finished.

```
# config for batch transform
batch_job_name = "ENTER_JOB_NAME"
output_location = "ENTER_OUTPUT_LOCATION"  # S3 bucket/location
input_location = "ENTER_INPUT_LOCATION"  # S3 bucket/location

request = {
    "TransformJobName": batch_job_name,
    "ModelName": model_name,
    "TransformOutput": {
        "S3OutputPath": output_location,
        "Accept": "text/csv",
        "AssembleWith": "Line",
    },
    "TransformInput": {
        "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": input_location}},
        "ContentType": "text/csv",
        "SplitType": "Line",
        "CompressionType": "None",
    },
    "TransformResources": {
        "InstanceType": "ml.m4.xlarge",  # change this based on what resources you want to request
        "InstanceCount": 1,
    },
}

sm.create_transform_job(**request)
```
# Autonomous driving - Car detection Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: [Redmon et al., 2016](https://arxiv.org/abs/1506.02640) and [Redmon and Farhadi, 2016](https://arxiv.org/abs/1612.08242). **You will learn to**: - Use object detection on a car detection dataset - Deal with bounding boxes ## <font color='darkblue'>Updates</font> #### If you were working on the notebook before this update... * The current notebook is version "3a". * You can find your original work saved in the notebook with the previous version name ("v3") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. #### List of updates * Clarified "YOLO" instructions preceding the code. * Added details about anchor boxes. * Added explanation of how score is calculated. * `yolo_filter_boxes`: added additional hints. Clarify syntax for argmax and max. * `iou`: clarify instructions for finding the intersection. * `iou`: give variable names for all 8 box vertices, for clarity. Adds `width` and `height` variables for clarity. * `iou`: add test cases to check handling of non-intersecting boxes, intersection at vertices, or intersection at edges. * `yolo_non_max_suppression`: clarify syntax for tf.image.non_max_suppression and keras.gather. * "convert output of the model to usable bounding box tensors": Provides a link to the definition of `yolo_head`. * `predict`: hint on calling sess.run. * Spelling, grammar, wording and formatting updates to improve clarity. ## Import libraries Run the following cell to load the packages and dependencies that you will find useful as you build the object detector! 
```
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body

%matplotlib inline
```

**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`.

## 1 - Problem Statement

You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.

<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>

<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We thank [drive.ai](https://www.drive.ai/) for providing this dataset.
</center></caption>

You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.

<img src="nb_images/box_label.png" style="width:500px;height:250px;">
<caption><center> <u> **Figure 1** </u>: **Definition of a box**<br> </center></caption>

If you have 80 classes that you want the object detector to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0.
The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step. In this exercise, you will learn how "You Only Look Once" (YOLO) performs object detection, and then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use. ## 2 - YOLO "You Only Look Once" (YOLO) is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes. ### 2.1 - Model details #### Inputs and outputs - The **input** is a batch of images, and each image has the shape (m, 608, 608, 3) - The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. #### Anchor Boxes * Anchor boxes are chosen by exploring the training data to choose reasonable height/width ratios that represent the different classes. For this assignment, 5 anchor boxes were chosen for you (to cover the 80 classes), and stored in the file './model_data/yolo_anchors.txt' * The dimension for anchor boxes is the second to last dimension in the encoding: $(m, n_H,n_W,anchors,classes)$. * The YOLO architecture is: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85). #### Encoding Let's look in greater detail at what this encoding represents. 
<img src="nb_images/architecture.png" style="width:700px;height:400px;">
<caption><center> <u> **Figure 2** </u>: **Encoding architecture for YOLO**<br> </center></caption>

If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object. Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.

For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).

<img src="nb_images/flatten.png" style="width:700px;height:400px;">
<caption><center> <u> **Figure 3** </u>: **Flattening the last two dimensions**<br> </center></caption>

#### Class score

Now, for each box (of each cell) we will compute the following element-wise product and extract a probability that the box contains a certain class.
The class score is $score_{c,i} = p_{c} \times c_{i}$: the probability that there is an object $p_{c}$ times the probability that the object is a certain class $c_{i}$.

<img src="nb_images/probability_extraction.png" style="width:700px;height:400px;">
<caption><center> <u> **Figure 4** </u>: **Find the class detected by each box**<br> </center></caption>

##### Example of figure 4
* In figure 4, let's say for box 1 (cell 1), the probability that an object exists is $p_{1}=0.60$. So there's a 60% chance that an object exists in box 1 (cell 1).
* The probability that the object is the class "category 3 (a car)" is $c_{3}=0.73$.
* The score for box 1 and for category "3" is $score_{1,3}=0.60 \times 0.73 = 0.44$.
* Let's say we calculate the score for all 80 classes in box 1, and find that the score for the car class (class 3) is the maximum. So we'll assign the score 0.44 and class "3" to this box "1".
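The arithmetic in the example above can be reproduced with a few lines of numpy — a toy sketch for a single box, with made-up class probabilities:

```python
import numpy as np

p_c = 0.60                 # probability that some object exists in this box
c = np.zeros(80)           # made-up class probabilities for the 80 classes
c[2] = 0.73                # index 2 corresponds to "category 3" (a car)

scores = p_c * c           # element-wise product, shape (80,)
best_class = int(np.argmax(scores))
best_score = float(np.max(scores))
print(best_class, round(best_score, 2))  # -> 2 0.44
```

This per-box argmax/max over the class axis is exactly what you will do with `K.argmax` and `K.max` in `yolo_filter_boxes` below, just vectorized over all 19x19x5 boxes at once.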
#### Visualizing classes Here's one way to visualize what YOLO is predicting on an image: - For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across the 80 classes, one maximum for each of the 5 anchor boxes). - Color that grid cell according to what object that grid cell considers the most likely. Doing this results in this picture: <img src="nb_images/proba_map.png" style="width:300px;height:300;"> <caption><center> <u> **Figure 5** </u>: Each one of the 19x19 grid cells is colored according to which class has the largest predicted probability in that cell.<br> </center></caption> Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. #### Visualizing bounding boxes Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: <img src="nb_images/anchor_map.png" style="width:200px;height:200;"> <caption><center> <u> **Figure 6** </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption> #### Non-Max suppression In the figure above, we plotted only boxes for which the model had assigned a high probability, but this is still too many boxes. You'd like to reduce the algorithm's output to a much smaller number of detected objects. To do so, you'll use **non-max suppression**. Specifically, you'll carry out these steps: - Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class; either due to the low probability of any object, or low probability of this particular class). - Select only one box when several boxes overlap with each other and detect the same object. 
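The two steps above can be sketched in plain Python/numpy before implementing the TensorFlow versions in the next sections. This is an illustrative greedy NMS, not the graded implementation; `iou_xyxy` and `greedy_nms` are helper names invented for this sketch, and boxes are given as `(x1, y1, x2, y2)` corners.

```python
import numpy as np

def iou_xyxy(a, b):
    # Intersection over union of two (x1, y1, x2, y2) boxes.
    xi1, yi1 = max(a[0], b[0]), max(a[1], b[1])
    xi2, yi2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_nms(boxes, scores, score_threshold=0.6, iou_threshold=0.5):
    # Step 1: drop low-score boxes.  Step 2: repeatedly keep the best remaining
    # box and discard any box that overlaps it too much.
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= score_threshold]
    keep = []
    while order:
        best = order.pop(0)
        keep.append(int(best))
        order = [i for i in order if iou_xyxy(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = np.array([[0, 0, 2, 2], [0, 0, 2, 1.9], [3, 3, 4, 4]])
scores = np.array([0.9, 0.8, 0.7])
print(greedy_nms(boxes, scores))  # -> [0, 2]
```

The first two boxes overlap almost completely, so only the higher-scoring one survives; the third box is disjoint and is kept.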
### 2.2 - Filtering with a threshold on class scores You are going to first apply a filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold. The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It is convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables: - `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells. - `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing the midpoint and dimensions $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes in each cell. - `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the "class probabilities" $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell. #### **Exercise**: Implement `yolo_filter_boxes()`. 1. Compute box scores by doing the elementwise product as described in Figure 4 ($p \times c$). The following code may help you choose the right operator: ```python a = np.random.randn(19*19, 5, 1) b = np.random.randn(19*19, 5, 80) c = a * b # shape of c will be (19*19, 5, 80) ``` This is an example of **broadcasting** (multiplying vectors of different sizes). 2. For each box, find: - the index of the class with the maximum box score - the corresponding box score **Useful references** * [Keras argmax](https://keras.io/backend/#argmax) * [Keras max](https://keras.io/backend/#max) **Additional Hints** * For the `axis` parameter of `argmax` and `max`, if you want to select the **last** axis, one way to do so is to set `axis=-1`. This is similar to Python array indexing, where you can select the last position of an array using `arrayname[-1]`. * Applying `max` normally collapses the axis for which the maximum is applied. `keepdims=False` is the default option, and allows that dimension to be removed. 
We don't need to keep the last dimension after applying the maximum here.
* Even though the documentation shows `keras.backend.argmax`, use `K.argmax`. Similarly, use `K.max`.

3. Create a mask by using a threshold. As a reminder, applied to a numpy array, `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep.

4. Use TensorFlow to apply the mask to `box_class_scores`, `boxes` and `box_classes` to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep.

**Useful reference**:
* [boolean mask](https://www.tensorflow.org/api_docs/python/tf/boolean_mask)

**Additional Hints**:
* For the `tf.boolean_mask`, we can keep the default `axis=None`.

**Reminder**: to call a Keras function, you should use `K.function(...)`.

```
# GRADED FUNCTION: yolo_filter_boxes

def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
    """Filters YOLO boxes by thresholding on object and class confidence.

    Arguments:
    box_confidence -- tensor of shape (19, 19, 5, 1)
    boxes -- tensor of shape (19, 19, 5, 4)
    box_class_probs -- tensor of shape (19, 19, 5, 80)
    threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box

    Returns:
    scores -- tensor of shape (None,), containing the class probability score for selected boxes
    boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
    classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes

    Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
    For example, the actual output size of scores would be (10,) if there are 10 boxes.
    """

    # Step 1: Compute box scores
    ### START CODE HERE ### (≈ 1 line)
    box_scores = box_confidence * box_class_probs  # broadcasting: (19,19,5,1) * (19,19,5,80)
    ### END CODE HERE ###

    # Step 2: Find the box_classes using the max box_scores, keep track of the corresponding score
    ### START CODE HERE ### (≈ 2 lines)
    box_classes = K.argmax(box_scores, axis = -1)
    box_class_scores = K.max(box_scores, axis = -1)
    ### END CODE HERE ###

    # Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
    # same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
    ### START CODE HERE ### (≈ 1 line)
    filtering_mask = box_class_scores >= threshold
    ### END CODE HERE ###

    # Step 4: Apply the mask to box_class_scores, boxes and box_classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = tf.boolean_mask(box_class_scores, filtering_mask)
    boxes = tf.boolean_mask(boxes, filtering_mask)
    classes = tf.boolean_mask(box_classes, filtering_mask)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_a:
    box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
    box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.shape))
    print("boxes.shape = " + str(boxes.shape))
    print("classes.shape = " + str(classes.shape))
```

**Expected Output**:

<table> <tr> <td> **scores[2]** </td> <td> 10.7506 </td> </tr> <tr> <td> **boxes[2]** </td> <td> [ 8.42653275 3.27136683 -0.5313437 -4.94137383] </td> </tr> <tr> <td> **classes[2]** </td> <td> 7 </td> </tr> <tr> <td> **scores.shape** </td> <td> (?,) </td> </tr> <tr> <td> **boxes.shape** </td> <td> (?, 4)
</td> </tr> <tr> <td> **classes.shape** </td> <td> (?,) </td> </tr> </table> **Note** In the test for `yolo_filter_boxes`, we're using random numbers to test the function. In real data, the `box_class_probs` would contain non-zero values between 0 and 1 for the probabilities. The box coordinates in `boxes` would also be chosen so that lengths and heights are non-negative. ### 2.3 - Non-max suppression ### Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). <img src="nb_images/non-max-suppression.png" style="width:500px;height:400;"> <caption><center> <u> **Figure 7** </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) of the 3 boxes. <br> </center></caption> Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU. <img src="nb_images/iou.png" style="width:500px;height:400;"> <caption><center> <u> **Figure 8** </u>: Definition of "Intersection over Union". <br> </center></caption> #### **Exercise**: Implement iou(). Some hints: - In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) is the lower-right corner. In other words, the (0,0) origin starts at the top left corner of the image. As x increases, we move to the right. As y increases, we move down. - For this exercise, we define a box using its two corners: upper left $(x_1, y_1)$ and lower right $(x_2,y_2)$, instead of using the midpoint, height and width. (This makes it a bit easier to calculate the intersection). - To calculate the area of a rectangle, multiply its height $(y_2 - y_1)$ by its width $(x_2 - x_1)$. 
(Since $(x_1,y_1)$ is the top left and $(x_2,y_2)$ is the bottom right, these differences should be non-negative.)
- To find the **intersection** of the two boxes $(xi_{1}, yi_{1}, xi_{2}, yi_{2})$:
    - Feel free to draw some examples on paper to clarify this conceptually.
    - The top left corner of the intersection $(xi_{1}, yi_{1})$ is found by comparing the top left corners $(x_1, y_1)$ of the two boxes and finding a vertex that has an x-coordinate that is closer to the right, and a y-coordinate that is closer to the bottom.
    - The bottom right corner of the intersection $(xi_{2}, yi_{2})$ is found by comparing the bottom right corners $(x_2,y_2)$ of the two boxes and finding a vertex whose x-coordinate is closer to the left, and whose y-coordinate is closer to the top.
    - The two boxes **may have no intersection**. You can detect this if the intersection coordinates you calculate end up being the top right and/or bottom left corners of an intersection box. Another way to think of this is if you calculate the height $(y_2 - y_1)$ or width $(x_2 - x_1)$ and find that at least one of these lengths is negative, then there is no intersection (intersection area is zero).
    - The two boxes may intersect at the **edges or vertices**, in which case the intersection area is still zero. This happens when either the height or width (or both) of the calculated intersection is zero.
**Additional Hints**

- `xi1` = **max**imum of the x1 coordinates of the two boxes
- `yi1` = **max**imum of the y1 coordinates of the two boxes
- `xi2` = **min**imum of the x2 coordinates of the two boxes
- `yi2` = **min**imum of the y2 coordinates of the two boxes
- `inter_area` = You can use `max(height, 0)` and `max(width, 0)`

```
# GRADED FUNCTION: iou

def iou(box1, box2):
    """Implement the intersection over union (IoU) between box1 and box2

    Arguments:
    box1 -- first box, list object with coordinates (box1_x1, box1_y1, box1_x2, box1_y2)
    box2 -- second box, list object with coordinates (box2_x1, box2_y1, box2_x2, box2_y2)
    """

    # Assign variable names to coordinates for clarity
    (box1_x1, box1_y1, box1_x2, box1_y2) = box1
    (box2_x1, box2_y1, box2_x2, box2_y2) = box2

    # Calculate the (xi1, yi1, xi2, yi2) coordinates of the intersection of box1 and box2. Calculate its Area.
    ### START CODE HERE ### (≈ 7 lines)
    xi1 = max(box1_x1, box2_x1)
    yi1 = max(box1_y1, box2_y1)
    xi2 = min(box1_x2, box2_x2)
    yi2 = min(box1_y2, box2_y2)
    inter_width = xi2 - xi1
    inter_height = yi2 - yi1
    inter_area = max(inter_width, 0) * max(inter_height, 0)
    ### END CODE HERE ###

    # Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
    ### START CODE HERE ### (≈ 3 lines)
    box1_area = (box1_x2 - box1_x1) * (box1_y2 - box1_y1)
    box2_area = (box2_x2 - box2_x1) * (box2_y2 - box2_y1)
    union_area = (box1_area + box2_area) - inter_area
    ### END CODE HERE ###

    # compute the IoU
    ### START CODE HERE ### (≈ 1 line)
    iou = inter_area / union_area
    ### END CODE HERE ###

    return iou

## Test case 1: boxes intersect
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou for intersecting boxes = " + str(iou(box1, box2)))

## Test case 2: boxes do not intersect
box1 = (1,2,3,4)
box2 = (5,6,7,8)
print("iou for non-intersecting boxes = " + str(iou(box1,box2)))

## Test case 3: boxes intersect
at vertices only
box1 = (1,1,2,2)
box2 = (2,2,3,3)
print("iou for boxes that only touch at vertices = " + str(iou(box1,box2)))

## Test case 4: boxes intersect at edge only
box1 = (1,1,3,3)
box2 = (2,3,3,4)
print("iou for boxes that only touch at edges = " + str(iou(box1,box2)))
```

**Expected Output**:

```
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
```

#### YOLO non-max suppression

You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute the overlap of this box with all other boxes, and remove boxes that overlap significantly (iou >= `iou_threshold`).
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.

This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.

**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):

**Reference documentation**

- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
```
tf.image.non_max_suppression(
    boxes,
    scores,
    max_output_size,
    iou_threshold=0.5,
    name=None
)
```
Note that in the version of tensorflow used here, there is no parameter `score_threshold` (it's shown in the documentation for the latest version), so trying to set this value will result in an error message: *got an unexpected keyword argument 'score_threshold'*.

- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/keras/backend/gather)
Even though the documentation shows `tf.keras.backend.gather()`, you can use `K.gather()`.
```
K.gather(
    reference,
    indices
)
```

```
# GRADED FUNCTION: yolo_non_max_suppression

def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
    """
    Applies Non-max suppression (NMS) to set of boxes

    Arguments:
    scores -- tensor of shape (None,), output of yolo_filter_boxes()
    boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
    classes -- tensor of shape (None,), output of yolo_filter_boxes()
    max_boxes -- integer, maximum number of predicted boxes you'd like
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (None,), predicted score for each box
    boxes -- tensor of shape (None, 4), predicted box coordinates
    classes -- tensor of shape (None,), predicted class for each box

    Note: The "None" dimension of the output tensors has to be less than or equal to max_boxes.
    """

    max_boxes_tensor = K.variable(max_boxes, dtype='int32')     # tensor to be used in tf.image.non_max_suppression()
    K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor

    # Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
    ### START CODE HERE ### (≈ 1 line)
    nms_indices = tf.image.non_max_suppression(boxes, scores, max_output_size=max_boxes_tensor, iou_threshold=iou_threshold)
    ### END CODE HERE ###

    # Use K.gather() to select only nms_indices from scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = K.gather(scores, nms_indices)
    boxes = K.gather(boxes, nms_indices)
    classes = K.gather(classes, nms_indices)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_b:
    scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
    classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))
```

**Expected Output**:

<table> <tr> <td> **scores[2]** </td> <td> 6.9384 </td> </tr> <tr> <td> **boxes[2]** </td> <td> [-5.299932 3.13798141 4.45036697 0.95942086] </td> </tr> <tr> <td> **classes[2]** </td> <td> -2.24527 </td> </tr> <tr> <td> **scores.shape** </td> <td> (10,) </td> </tr> <tr> <td> **boxes.shape** </td> <td> (10, 4) </td> </tr> <tr> <td> **classes.shape** </td> <td> (10,) </td> </tr> </table>

### 2.4 Wrapping up the filtering

It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
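One wiring detail worth seeing concretely is the box-format conversion performed by the provided `yolo_boxes_to_corners` helper. Here is a toy numpy sketch of the underlying idea — illustrative only; `boxes_to_corners` is a name invented for this sketch, and the real helper operates on tensors and may order the output coordinates differently:

```python
import numpy as np

def boxes_to_corners(box_xy, box_wh):
    # Convert a (x_center, y_center) midpoint and (width, height) size
    # into (x1, y1, x2, y2) corner coordinates.
    mins = box_xy - box_wh / 2.0
    maxes = box_xy + box_wh / 2.0
    return np.concatenate([mins, maxes], axis=-1)

corners = boxes_to_corners(np.array([0.5, 0.5]), np.array([0.2, 0.4]))
print(corners)  # -> [0.4 0.3 0.6 0.7]
```

Roughly speaking, `scale_boxes` then multiplies such normalized corners by the original image's height and width so the boxes can be drawn on top of the 720x1280 image.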
**Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS.

There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):

```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes`
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.

Don't worry about these two functions; we'll show you where they need to be called.

```
# GRADED FUNCTION: yolo_eval

def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
    """
    Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.

    Arguments:
    yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
                    box_confidence: tensor of shape (None, 19, 19, 5, 1)
                    box_xy: tensor of shape (None, 19, 19, 5, 2)
                    box_wh: tensor of shape (None, 19, 19, 5, 2)
                    box_class_probs: tensor of shape (None, 19, 19, 5, 80)
    image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (720., 1280.)
(has to be float32 dtype) max_boxes -- integer, maximum number of predicted boxes you'd like score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box iou_threshold -- real value, "intersection over union" threshold used for NMS filtering Returns: scores -- tensor of shape (None, ), predicted score for each box boxes -- tensor of shape (None, 4), predicted box coordinates classes -- tensor of shape (None,), predicted class for each box """ ### START CODE HERE ### # Retrieve outputs of the YOLO model (≈1 line) box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs # Convert boxes to be ready for filtering functions (convert boxes box_xy and box_wh to corner coordinates) boxes = yolo_boxes_to_corners(box_xy, box_wh) # Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line) scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = score_threshold) # Scale boxes back to original image shape. 
    boxes = scale_boxes(boxes, image_shape)

    # Use one of the functions you've implemented to perform Non-max suppression with
    # maximum number of boxes set to max_boxes and a threshold of iou_threshold (≈1 line)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)

    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_b:
    yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
    scores, boxes, classes = yolo_eval(yolo_outputs)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))
```

**Expected Output**:

<table> <tr> <td> **scores[2]** </td> <td> 138.791 </td> </tr> <tr> <td> **boxes[2]** </td> <td> [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141] </td> </tr> <tr> <td> **classes[2]** </td> <td> 54 </td> </tr> <tr> <td> **scores.shape** </td> <td> (10,) </td> </tr> <tr> <td> **boxes.shape** </td> <td> (10, 4) </td> </tr> <tr> <td> **classes.shape** </td> <td> (10,) </td> </tr> </table>

## Summary for YOLO:

- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
    - Each cell in a 19x19 grid over the input image gives 425 numbers.
    - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only a few boxes based on:
    - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
    - Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.

## 3 - Test YOLO pre-trained model on images

In this part, you are going to use a pre-trained model and test it on the car detection dataset. We'll need a session to execute the computation graph and evaluate the tensors.

```
sess = K.get_session()
```

### 3.1 - Defining classes, anchors and image shape.

* Recall that we are trying to detect 80 classes, and are using 5 anchor boxes.
* We have gathered the information on the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt".
* We'll read class names and anchors from text files.
* The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.

```
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
```

### 3.2 - Loading a pre-trained model

* Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes.
* You are going to load an existing pre-trained Keras YOLO model stored in "yolo.h5".
* These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will simply refer to it as "YOLO" in this notebook.

Run the cell below to load the model from this file.

```
yolo_model = load_model("model_data/yolo.h5")
```

This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
```
yolo_model.summary()
```

**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.

**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure 2.

### 3.3 - Convert output of the model to usable bounding box tensors

The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.

If you are curious about how `yolo_head` is implemented, you can find the function definition in the file ['keras_yolo.py'](https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py). The file is located in your workspace in this path 'yad2k/models/keras_yolo.py'.

```
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
```

You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.

### 3.4 - Filtering boxes

`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.

```
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```

### 3.5 - Run the graph on an image

Let the fun begin. You have created a graph that can be summarized as follows:

1. <font color='purple'> yolo_model.input </font> is given to `yolo_model`. The model is used to compute the output <font color='purple'> yolo_model.output </font>
2. <font color='purple'> yolo_model.output </font> is processed by `yolo_head`. It gives you <font color='purple'> yolo_outputs </font>
3. <font color='purple'> yolo_outputs </font> goes through a filtering function, `yolo_eval`.
It outputs your predictions: <font color='purple'> scores, boxes, classes </font> **Exercise**: Implement `predict()`, which runs the graph to test YOLO on an image. You will need to run a TensorFlow session to have it compute `scores, boxes, classes`. The code below also uses the following function: ```python image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608)) ``` which outputs: - image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it. - image_data: a numpy-array representing the image. This will be the input to the CNN. **Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}. #### Hint: Using the TensorFlow Session object * Recall that above, we called `K.get_session()` and saved the Session object in `sess`. * To evaluate a list of tensors, we call `sess.run()` like this: ``` sess.run(fetches=[tensor1,tensor2,tensor3], feed_dict={yolo_model.input: the_input_variable, K.learning_phase(): 0}) ``` * Notice that the variables `scores, boxes, classes` are not passed into the `predict` function, but these are global variables that you will use within the `predict` function.
""" # Preprocess your image image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608)) # Run the session with the correct tensors and choose the correct placeholders in the feed_dict. # You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0}) ### START CODE HERE ### (≈ 1 line) out_scores, out_boxes, out_classes = sess.run(fetches = [scores, boxes, classes], feed_dict = {yolo_model.input: image_data, K.learning_phase(): 0}) ### END CODE HERE ### # Print predictions info print('Found {} boxes for {}'.format(len(out_boxes), image_file)) # Generate colors for drawing bounding boxes. colors = generate_colors(class_names) # Draw bounding boxes on the image file draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors) # Save the predicted bounding box on the image image.save(os.path.join("out", image_file), quality=90) # Display the results in the notebook output_image = scipy.misc.imread(os.path.join("out", image_file)) imshow(output_image) return out_scores, out_boxes, out_classes ``` Run the following cell on the "test.jpg" image to verify that your function is correct. ``` out_scores, out_boxes, out_classes = predict(sess, "test.jpg") ``` **Expected Output**: <table> <tr> <td> **Found 7 boxes for test.jpg** </td> </tr> <tr> <td> **car** </td> <td> 0.60 (925, 285) (1045, 374) </td> </tr> <tr> <td> **car** </td> <td> 0.66 (706, 279) (786, 350) </td> </tr> <tr> <td> **bus** </td> <td> 0.67 (5, 266) (220, 407) </td> </tr> <tr> <td> **car** </td> <td> 0.70 (947, 324) (1280, 705) </td> </tr> <tr> <td> **car** </td> <td> 0.74 (159, 303) (346, 440) </td> </tr> <tr> <td> **car** </td> <td> 0.80 (761, 282) (942, 412) </td> </tr> <tr> <td> **car** </td> <td> 0.89 (367, 300) (745, 648) </td> </tr> </table> The model you've just run is actually able to detect 80 different classes listed in "coco_classes.txt". To test the model on your own images: 1. 
Click on "File" in the upper bar of this notebook, then click "Open" to go to your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the code cell above 4. Run the code and see the output of the algorithm! If you were to run your session in a for loop over all your images, here's what you would get: <center> <video width="400" height="200" src="nb_images/pred_video_compressed2.mp4" type="video/mp4" controls> </video> </center> <caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks to [drive.ai](https://www.drive.ai/) for providing this dataset! </center></caption> ## <font color='darkblue'>What you should remember: - YOLO is a state-of-the-art object detection model that is fast and accurate - It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume. - The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes. - You filter through all the boxes using non-max suppression. Specifically: - Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes - Intersection over Union (IoU) thresholding to eliminate overlapping boxes - Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise. **References**: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's GitHub repository. The pre-trained weights used in this exercise came from the official YOLO website.
- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640) (2015) - Joseph Redmon, Ali Farhadi - [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) (2016) - Allan Zelener - [YAD2K: Yet Another Darknet 2 Keras](https://github.com/allanzelener/YAD2K) - The official YOLO website (https://pjreddie.com/darknet/yolo/) **Car detection dataset**: <a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. We are grateful to Brody Huval, Chih Hu and Rahul Patel for providing this data.
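As a recap of the filtering step described in this notebook (score-thresholding followed by non-max suppression), here is a minimal, framework-free sketch of IoU and greedy NMS. The helper names `iou` and `non_max_suppression` are hypothetical and are not part of YAD2K, which implements these steps as TensorFlow ops inside `yolo_eval`:

```python
# Hedged sketch: plain-Python IoU and greedy non-max suppression,
# illustrating the logic only -- not the YAD2K/TensorFlow implementation.

def iou(box1, box2):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0, xi2 - xi1) * max(0, yi2 - yi1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / float(area1 + area2 - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedily keep the highest-scoring boxes, discarding heavy overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep

print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 3))  # 0.143 (overlap 1, union 7)
```

With three boxes where the first two overlap almost completely, only the higher-scoring of the pair survives, matching the behaviour described for `yolo_eval` above.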
# Licences / Notes ``` # Copyright 2019 Google Inc. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Adapted by Thierry Lincoln in November 2019 from this Colab notebook: # https://colab.research.google.com/github/google-research/bert/blob/master/predicting_movie_reviews_with_bert_on_tf_hub.ipynb. # Changes include: # - Reading our stressor data and parsing it properly # - Reconfiguring the last layer to include N neurons corresponding to N categories # - Correcting the probability output so that it follows a proper [0, 1] pattern # - Better analysis with a confusion matrix # - Exporting to pb format for the tensorflow serving api ``` Intro: If you’ve been following Natural Language Processing over the past year, you’ve probably heard of BERT: Bidirectional Encoder Representations from Transformers. It’s a neural network architecture designed by Google researchers that has transformed the state of the art for NLP tasks like text classification, translation, summarization, and question answering. Now that BERT's been added to [TF Hub](https://www.tensorflow.org/hub) as a loadable module, it's easy(ish) to add into existing Tensorflow text pipelines. In an existing pipeline, BERT can replace text embedding layers like ELMo and GloVe. Alternatively, [finetuning](http://wiki.fast.ai/index.php/Fine_tuning) BERT can provide both an accuracy boost and faster training time in many cases.
Some code was adapted from [this colab notebook](https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb). Let's get started! # Loading Libraries ``` import os os.environ['LD_LIBRARY_PATH'] = '/usr/local/cuda-10.0/lib64:/usr/local/cuda-10.0/lib' #os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"] = "0" # (or "1" or "2") import sys print(sys.executable) #export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:/usr/local/cuda-10.0/lib #export CUDA_VISIBLE_DEVICES=0 from sklearn.model_selection import train_test_split from sklearn.model_selection import StratifiedShuffleSplit import pandas as pd import tensorflow as tf import tensorflow_hub as hub from datetime import datetime import matplotlib.pyplot as plt from sklearn.utils.multiclass import unique_labels from sklearn.metrics import f1_score,confusion_matrix,classification_report,accuracy_score pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.max_colwidth', 1000) print(tf.__version__) # needs to be version 1.15.0; version 2.0 doesn't work with this notebook config = tf.ConfigProto() #config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1 #config.gpu_options.visible_device_list="0" from tensorflow.python.client import device_lib device_lib.list_local_devices() ``` In addition to the standard libraries we imported above, we'll need to install BERT's python package. ``` #!pip install bert-tensorflow import bert from bert import run_classifier_with_tfhub from bert import optimization from bert import tokenization import numpy as np ``` Below, we'll set an output directory location to store our model output and checkpoints. This can be a local directory, in which case you'd set OUTPUT_DIR to the name of the directory you'd like to create. Set DO_DELETE to rewrite the OUTPUT_DIR if it exists.
Otherwise, Tensorflow will load existing model checkpoints from that directory (if they exist). ## Utils functions ``` def get_test_experiment_df(test): test_predictions = [x[0]['probabilities'] for x in zip(getListPrediction(in_sentences=list(test['Answer.Stressor'])))] test_live_labels = np.array(test_predictions).argmax(axis=1) test['Predicted'] = [label_list_text[x] for x in test_live_labels] # appending the labels to the dataframe probabilities_df_live = pd.DataFrame(test_predictions) # creating a probabilities dataset probabilities_df_live.columns = label_list_text # naming the columns test.reset_index(inplace=True,drop=True) # resetting index experiment_df = pd.concat([test,probabilities_df_live],axis=1, ignore_index=False) return test,experiment_df def getListPrediction(in_sentences): #1 input_examples = [bert.run_classifier.InputExample(guid="", text_a = x, text_b = None, label = 0) for x in in_sentences] # here, 0 is just a dummy label #2 input_features = bert.run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer) #3 predict_input_fn = bert.run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False) print(input_features[0].input_ids) #4 predictions = estimator.predict(input_fn=predict_input_fn,yield_single_examples=True) return predictions is_normalize_active=False def get_confusion_matrix(y_test,predicted,labels): class_names=labels # plotting confusion matrix np.set_printoptions(precision=2) # Plot non-normalized confusion matrix plot_confusion_matrix(y_test, predicted, classes=class_names, title='Confusion matrix, without normalization') # Plot normalized confusion matrix plot_confusion_matrix(y_test, predicted, classes=class_names, normalize=True, title='Normalized confusion matrix') plt.show() def plot_confusion_matrix(y_true, y_pred, classes, normalize=False, title=None, cmap=plt.cm.Blues): """ This function prints and plots the confusion
matrix. Normalization can be applied by setting `normalize=True`. """ if not title: if normalize: title = 'Normalized confusion matrix' else: title = 'Confusion matrix, without normalization' # Compute confusion matrix cm = confusion_matrix(y_true, y_pred) # Only use the labels that appear in the data if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] #print("Normalized confusion matrix") else: pass #print('Confusion matrix, without normalization') fig, ax = plt.subplots() im = ax.imshow(cm, interpolation='nearest', cmap=cmap) #ax.figure.colorbar(im, ax=ax) # We want to show all ticks... ax.set(xticks=np.arange(cm.shape[1]), yticks=np.arange(cm.shape[0]), # ... and label them with the respective list entries xticklabels=classes, yticklabels=classes, title=title, ylabel='True label', xlabel='Predicted label') # Rotate the tick labels and set their alignment. plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor") # Loop over data dimensions and create text annotations. fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2.
for i in range(cm.shape[0]): for j in range(cm.shape[1]): ax.text(j, i, format(cm[i, j], fmt), ha="center", va="center", color="white" if cm[i, j] > thresh else "black") #fig.tight_layout() return ax ``` # Loading the data ``` def data_prep_bert(df,test_size): print("Filling missing values") df[DATA_COLUMN] = df[DATA_COLUMN].fillna('_NA_') print("Splitting dataframe with shape {} into training and test datasets".format(df.shape)) X_train, X_test = train_test_split(df, test_size=test_size, random_state=2018,stratify = df[LABEL_COLUMN_RAW]) return X_train, X_test def open_dataset(NAME,mapping_index,excluded_categories): df = pd.read_csv(PATH+NAME+'.csv',sep =',') df.head(10) df = df[df[LABEL_COLUMN_RAW].notna()] #df.columns = [LABEL_COLUMN_RAW,'Severity',DATA_COLUMN,'Source'] if excluded_categories is not None: for category in excluded_categories: df = df[df[LABEL_COLUMN_RAW] !=category] label_list=[] label_list_final =[] if(mapping_index is None): df[LABEL_COLUMN_RAW] = df[LABEL_COLUMN_RAW].astype('category') df[LABEL_COLUMN], mapping_index = pd.Series(df[LABEL_COLUMN_RAW]).factorize() #uses pandas factorize() to convert to numerical index label_list_final = [None] * len(mapping_index.categories) label_list_number = [None] * len(mapping_index.categories) for index,ele in enumerate(list(mapping_index.categories)): lindex = mapping_index.get_loc(ele) label_list_number[lindex] = lindex label_list_final[lindex] = ele else: df[LABEL_COLUMN] = df[LABEL_COLUMN_RAW].apply(lambda x: mapping_index.get_loc(x)) frequency_dict = df[LABEL_COLUMN_RAW].value_counts().to_dict() df["class_freq"] = df[LABEL_COLUMN_RAW].apply(lambda x: frequency_dict[x]) return df,mapping_index,label_list_number,label_list_final ``` # Require user changes > Start Here ### Experiment Name ``` PATH = './datasets/' TODAY_DATE = "21_04_2020/" EXPERIMENT_NAME = 'test' EXPERIMENTS_PATH = PATH + 'experiments/'+TODAY_DATE+EXPERIMENT_NAME if not os.path.exists(PATH + 'experiments/'+TODAY_DATE): os.mkdir(PATH + 
'experiments/'+TODAY_DATE) if not os.path.exists(EXPERIMENTS_PATH): os.mkdir(EXPERIMENTS_PATH) ``` ### Model Hyperparameters ``` # Compute train and warmup steps from batch size # These hyperparameters are copied from this colab notebook (https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb) BATCH_SIZE = 32 LEARNING_RATE = 2e-5 NUM_TRAIN_EPOCHS = 3.0 # Warmup is a period of time where the learning rate # is small and gradually increases--usually helps training. WARMUP_PROPORTION = 0.1 # Model configs SAVE_CHECKPOINTS_STEPS = 1000 SAVE_SUMMARY_STEPS = 100 # We'll set sequences to be at most 64 tokens long. MAX_SEQ_LENGTH = 64 OUTPUT_DIR = './models/'+EXPERIMENT_NAME+ '/' #_01_04_2020/ DATASET_NAME = '2200_master_csv_mturk_and_live_for_training' DATA_COLUMN = 'Answer.Stressor' LABEL_COLUMN_RAW = 'Consolidated Updated Categories' #'Answer.Label' LABEL_COLUMN = 'label_numeric' test_on_mturk_and_popbots_live = True # include live data in training + include mturk in testing #dataset,mapping_index,label_list, label_list_text = open_dataset('mturk900balanced',None) EXCLUDED_CATEGORIES = ['Other'] # use None if there is nothing to exclude; otherwise THIS MUST ALWAYS BE A LIST dataset,mapping_index,label_list, label_list_text = open_dataset(DATASET_NAME,None,EXCLUDED_CATEGORIES) if test_on_mturk_and_popbots_live: mturk = dataset[dataset['Source']== 'mTurk'] live = dataset[dataset['Source']== 'Popbots'] live = live.sample(frac=1).reset_index(drop=True) # shuffle live TEST_PERCENTAGE = len(live)/(2*len(mturk)) # sets the percentage of mturk used as the test set to get a 50/50 split print(f"Test percentage is {TEST_PERCENTAGE}") train,test = data_prep_bert(mturk,TEST_PERCENTAGE) # test size from mturk train = train.append(live.loc[0:int(len(live)/2)]) # taking the first half of the live dataset for training test = test.append(live.loc[int(len(live)/2):int(len(live))]) # taking the second half of the live dataset for testing else: # or taking live only for
testing train,test = dataset[dataset['Source']== 'mTurk'],dataset[dataset['Source']== 'Popbots'] #print(f"Dataset has {len(dataset)} training examples") print(f"Normal label list is {label_list}") print(f"The labels text is {label_list_text}") # Export train test to csv #train.to_csv(PATH+'900_CSV_SPLITTED/train.csv') #test.to_csv(PATH+'900_CSV_SPLITTED/test.csv') ``` ### Train set and test set analysis ``` def print_dataset_info(train,test): print(f"Train size {len(train)} with {len(train[train['Source']== 'Popbots'])} from Popbots and {len(train[train['Source']== 'mTurk'])} from mturk") print(f"Test size {len(test)} with {len(test[test['Source']== 'Popbots'])} from Popbots and {len(test[test['Source']== 'mTurk'])} from mturk") print('\nTraining distribution:') print(pd.pivot_table(train[[LABEL_COLUMN_RAW, 'Source']],index=[LABEL_COLUMN_RAW, 'Source'],columns=None, aggfunc=len)) #.to_clipboard(excel=True) print('\nTesting distribution:') print(pd.pivot_table(test[[LABEL_COLUMN_RAW, 'Source']],index=[LABEL_COLUMN_RAW, 'Source'],columns=None, aggfunc=len)) #.to_clipboard(excel=True) train = train.sample(frac=1).reset_index(drop=True) # reshuffle everything test = test.sample(frac=1).reset_index(drop=True) print_dataset_info(train,test) ``` ### Step to reduce the most dominant categories and balance the dataset ``` sampling_cutoff = 100 # categories with fewer than 100 examples won't be sampled down REVERSE_FREQ = 'Max_reverse_sampling_chance' train[REVERSE_FREQ] = train['class_freq'].apply(lambda x: (max(train['class_freq'])/x)*(max(train['class_freq'])/x)) sampling_boolean = (train['Source'] != 'Popbots') & (train['class_freq'].astype(float) > sampling_cutoff) train_to_be_balanced = train[sampling_boolean] train_not_resampled = train[~sampling_boolean] train_temp = train_to_be_balanced.sample(n=(1500-len(train_not_resampled)), weights=REVERSE_FREQ, random_state=2020) train = pd.concat([train_temp,train_not_resampled]) print_dataset_info(train,test)
mapping_index.categories train.to_csv(EXPERIMENTS_PATH+'/TRAIN_'+DATASET_NAME+'.csv') test.to_csv(EXPERIMENTS_PATH+'/TEST_'+DATASET_NAME+'.csv') ``` # Require user changes > STOP Here # Data Preprocessing For us, our input data is the `Answer.Stressor` column and our label is the `label_numeric` column. We'll need to transform our data into a format BERT understands. This involves two steps. First, we create `InputExample`'s using the constructor provided in the BERT library. - `text_a` is the text we want to classify, which in this case is the `Answer.Stressor` field in our Dataframe. - `text_b` is used if we're training a model to understand the relationship between sentences (i.e. is `text_b` a translation of `text_a`? Is `text_b` an answer to the question asked by `text_a`?). This doesn't apply to our task, so we can leave `text_b` blank. - `label` is the label for our example, i.e. the numeric stressor category ``` # Use the InputExample class from BERT's run_classifier code to create examples from the data train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=None, # Globally unique ID for bookkeeping, unused in this example text_a = x[DATA_COLUMN], text_b = None, label = x[LABEL_COLUMN]), axis = 1) test_InputExamples = test.apply(lambda x: bert.run_classifier.InputExample(guid=None, text_a = x[DATA_COLUMN], text_b = None, label = x[LABEL_COLUMN]), axis = 1) ``` Next, we need to preprocess our data so that it matches the data BERT was trained on. For this, we'll need to do a couple of things (but don't worry--this is also included in the Python library): 1. Lowercase our text (if we're using a BERT lowercase model) 2. Tokenize it (i.e. "sally says hi" -> ["sally", "says", "hi"]) 3. Break words into WordPieces (i.e. "calling" -> ["call", "##ing"]) 4. Map our words to indexes using a vocab file that BERT provides 5. Add special "CLS" and "SEP" tokens (see the [readme](https://github.com/google-research/bert)) 6.
Append "index" and "segment" tokens to each input (see the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf)) Happily, we don't have to worry about most of these details. To start, we'll need to load a vocabulary file and lowercasing information directly from the BERT tf hub module: ``` # This is a path to an uncased (all lowercase) version of BERT BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1" def create_tokenizer_from_hub_module(): """Get the vocab file and casing info from the Hub module.""" with tf.Graph().as_default(): bert_module = hub.Module(BERT_MODEL_HUB) tokenization_info = bert_module(signature="tokenization_info", as_dict=True) with tf.Session() as sess: vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"], tokenization_info["do_lower_case"]]) return bert.tokenization.FullTokenizer( vocab_file=vocab_file, do_lower_case=do_lower_case) tokenizer = create_tokenizer_from_hub_module() ``` Great--we just learned that the BERT model we're using expects lowercase data (that's what's stored in tokenization_info["do_lower_case"]) and we also loaded BERT's vocab file. We also created a tokenizer, which breaks words into word pieces: ``` tokenizer.tokenize("This here's an example of using the BERT tokenizer") ``` Using our tokenizer, we'll call `run_classifier.convert_examples_to_features` on our InputExamples to convert them into features BERT understands. ``` # Convert our train and test features to InputFeatures that BERT understands. train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer) test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer) ``` # Creating a model Now that we've prepared our data, let's focus on building a model. `create_model` does just this below. First, it loads the BERT tf hub module again (this time to extract the computation graph).
Next, it creates a single new layer that will be trained to adapt BERT to our classification task. This strategy of using a mostly trained model is called [fine-tuning](http://wiki.fast.ai/index.php/Fine_tuning). To understand the `pooled output` vs `sequence output` refer to https://www.kaggle.com/questions-and-answers/86510 ``` def create_model(is_predicting, input_ids, input_mask, segment_ids, labels, num_labels): """Creates a classification model.""" bert_module = hub.Module( BERT_MODEL_HUB, trainable=True) bert_inputs = dict( input_ids=input_ids, input_mask=input_mask, segment_ids=segment_ids) bert_outputs = bert_module( inputs=bert_inputs, signature="tokens", as_dict=True) # Use "pooled_output" for classification tasks on an entire sentence. # Use "sequence_output" for token-level output. output_layer = bert_outputs["pooled_output"] hidden_size = output_layer.shape[-1].value # Create our own layer to tune for our stressor data. output_weights = tf.get_variable( "output_weights", [num_labels, hidden_size], initializer=tf.truncated_normal_initializer(stddev=0.02)) output_bias = tf.get_variable( "output_bias", [num_labels], initializer=tf.zeros_initializer()) with tf.variable_scope("loss"): # Dropout helps prevent overfitting output_layer = tf.nn.dropout(output_layer, keep_prob=0.9) # does the Ax multiplication logits = tf.matmul(output_layer, output_weights, transpose_b=True) # add the bias, e.g. Ax+b logits = tf.nn.bias_add(logits, output_bias) ########################### HERE ADDITIONAL LAYERS CAN BE ADDED ###################### # compute the log softmax for each neuron/logit log_probs = tf.nn.log_softmax(logits, axis=-1) # compute the normal softmax to get the probabilities probs = tf.nn.softmax(logits, axis=-1) # Convert labels into one-hot encoding one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32) #classes_weights = tf.constant([1.0,1.0,1.0,1.0,1.0,1.0,0.7], dtype=tf.float32) #sample_weights = tf.multiply(one_hot_labels, classes_weights)
predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32)) # If we're predicting, we want predicted labels and the probabilities. if is_predicting: return (predicted_labels, log_probs,probs) # If we're train/eval, compute loss between predicted and actual label #per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1) per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1) loss = tf.reduce_mean(per_example_loss) return (loss, predicted_labels, log_probs) ``` Next we'll wrap our model function in a `model_fn_builder` function that adapts our model to work for training, evaluation, and prediction. ``` # model_fn_builder actually creates our model function # using the passed parameters for num_labels, learning_rate, etc. def model_fn_builder(num_labels, learning_rate, num_train_steps, num_warmup_steps): """Returns `model_fn` closure for TPUEstimator.""" def model_fn(features, labels, mode, params): # pylint: disable=unused-argument """The `model_fn` for TPUEstimator.""" input_ids = features["input_ids"] input_mask = features["input_mask"] segment_ids = features["segment_ids"] label_ids = features["label_ids"] is_predicting = (mode == tf.estimator.ModeKeys.PREDICT) # TRAIN and EVAL if not is_predicting: (loss, predicted_labels, log_probs) = create_model( is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels) train_op = bert.optimization.create_optimizer( loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False) # Calculate evaluation metrics.
def metric_fn(label_ids, predicted_labels): accuracy = tf.metrics.accuracy(label_ids, predicted_labels) """ f1_score = tf.contrib.metrics.f1_score( label_ids, predicted_labels) auc = tf.metrics.auc( label_ids, predicted_labels)""" recall = tf.metrics.recall( label_ids, predicted_labels) precision = tf.metrics.precision( label_ids, predicted_labels) true_pos = tf.metrics.true_positives( label_ids, predicted_labels) true_neg = tf.metrics.true_negatives( label_ids, predicted_labels) false_pos = tf.metrics.false_positives( label_ids, predicted_labels) false_neg = tf.metrics.false_negatives( label_ids, predicted_labels) return { "eval_accuracy": accuracy, #"f1_score": f1_score, #"auc": auc, "precision": precision, "recall": recall, "true_positives": true_pos, "true_negatives": true_neg, "false_positives": false_pos, "false_negatives": false_neg } eval_metrics = metric_fn(label_ids, predicted_labels) if mode == tf.estimator.ModeKeys.TRAIN: return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op) else: return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metrics) else: (predicted_labels, log_probs,probs) = create_model( is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels) predictions = { 'probabilities': probs#, #'labels': predicted_labels } return tf.estimator.EstimatorSpec(mode, predictions=predictions) # Return the actual model function in the closure return model_fn # Compute # train and warmup steps from batch size num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS) num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION) # Specify output directory and number of checkpoint steps to save run_config = tf.estimator.RunConfig( model_dir=OUTPUT_DIR, save_summary_steps=SAVE_SUMMARY_STEPS, save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS) model_fn = model_fn_builder( num_labels=len(label_list), learning_rate=LEARNING_RATE, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps)
estimator = tf.estimator.Estimator( model_fn=model_fn, config=run_config, params={"batch_size": BATCH_SIZE}) ``` Next we create an input builder function that takes our training feature set (`train_features`) and produces a generator. This is a pretty standard design pattern for working with Tensorflow [Estimators](https://www.tensorflow.org/guide/estimators). ``` # Create an input function for training. drop_remainder = True for using TPUs. train_input_fn = bert.run_classifier.input_fn_builder( features=train_features, seq_length=MAX_SEQ_LENGTH, is_training=True, drop_remainder=False) ``` # Training the model ``` print(f'Beginning Training!') current_time = datetime.now() estimator.train(input_fn=train_input_fn, max_steps=num_train_steps) print("Training took time ", datetime.now() - current_time) ``` # Evaluating the model on Test Set ``` test_input_fn = bert.run_classifier.input_fn_builder( features=test_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False) #estimator.evaluate(input_fn=test_input_fn, steps=None) # fetching all the probabilities for each line of the test set test_probabilities = [x[0]['probabilities'] for x in zip(estimator.predict(test_input_fn,yield_single_examples=True))] # taking the argmax for the highest category test_final_labels = np.array(test_probabilities).argmax(axis=1) ``` ### Classification Report ``` report = pd.DataFrame(classification_report(list(test[LABEL_COLUMN]),list(test_final_labels),zero_division=0, output_dict=True)).T print(report) ``` ### Confusion Matrix ``` get_confusion_matrix(y_test=test[LABEL_COLUMN],predicted=test_final_labels,labels=label_list_text) ``` ### Exporting test set with probabilities ``` test, experiment_df = get_test_experiment_df(test) experiment_df.to_csv(EXPERIMENTS_PATH+'/test_with_probabilities.csv') ``` ### RUN ALL CELLS ABOVE UP TO HERE ``` experiment_df[experiment_df['Predicted'] != experiment_df['Answer.Label']].head(10) # change head(n) to see more ``` # Exporting the
model as Pb format ``` def export_model(dir_path): MAX_SEQ_LEN = 128 def serving_input_receiver_fn(): """An input receiver that expects a serialized tf.Example.""" receiver_tensors = { "input_ids": tf.placeholder(dtype=tf.int32, shape=[1, MAX_SEQ_LEN]) } features = { "label_ids":tf.placeholder(tf.int32, [None], name='label_ids'), "input_ids": receiver_tensors['input_ids'], "input_mask": 1 - tf.cast(tf.equal(receiver_tensors['input_ids'], 0), dtype=tf.int32), "segment_ids": tf.zeros(dtype=tf.int32, shape=[1, MAX_SEQ_LEN]) } return tf.estimator.export.ServingInputReceiver(features, receiver_tensors) estimator._export_to_tpu = False estimator.export_saved_model(dir_path, serving_input_receiver_fn) export_model('./tfmode/pbformat/') ``` ## Getting analysis for another dataset ``` test_all_live = pd.read_csv(PATH+'PopbotsLive_TestSet_213.csv') test_all_live, experiment_df_live = get_test_experiment_df(test_all_live) ```
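One of the adaptation notes at the top of this notebook was "correcting the probability output so that it follows a proper [0, 1] pattern". The classification head in `create_model` does this with `tf.nn.softmax` over the logits. As a sanity check, here is a plain-Python sketch of that conversion, detached from the TF graph; the logit values are made up for illustration:

```python
import math

# Hedged sketch: reproduces in plain Python what tf.nn.softmax does to
# the logits in create_model. The logits below are hypothetical values,
# one per stressor category.

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
predicted_label = max(range(len(probs)), key=probs.__getitem__)

print(probs)            # each value lies in [0, 1]
print(sum(probs))       # sums to ~1.0
print(predicted_label)  # 0, the index of the largest logit (the argmax step)
```

This mirrors the `argmax(axis=1)` step applied to `test_probabilities` in the evaluation section above.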
# Country Comparison Australia - New Zealand

In the following, the data of Australia and New Zealand will be cleaned, preprocessed, and compared. We chose these two countries for their geographical proximity, their historical, political, and economic ties, and our domain knowledge of the area. Moreover, the data we see in our Master's program often focuses on the US and Europe, so we decided to change our focus to another important part of the world. Hence, we obtained the complete SDG (Sustainable Development Goals) dataset for both countries for the period 2006 to 2020 from the World Bank Database.

## Data Preprocessing

This step is necessary because of the high prevalence of missing data. By the end of this script, a new dataset with cleaned data will be created whose variables will be used for visual analysis in Tableau.

#### Importing Libraries and Dataset

```
import pandas as pd
import numpy as np

raw_csv_data = pd.read_csv("AUS_NZ_Complete_2006-2020.csv")
raw_csv_data
```

#### Restructuring Dataset

```
# Create subsets of data for each country
raw_csv_AUS = raw_csv_data[raw_csv_data["Country Code"]=="AUS"]
raw_csv_NZL = raw_csv_data[raw_csv_data["Country Code"]=="NZL"]
raw_csv_AUS
raw_csv_NZL

# Drop columns: Country Name, Country Code, Series Code
raw_csv_AUS.drop(["Country Name", "Country Code", "Series Code"], axis=1, inplace=True)
raw_csv_NZL.drop(["Country Name", "Country Code", "Series Code"], axis=1, inplace=True)
raw_csv_AUS
raw_csv_NZL

# Transpose data
transposed_csv_AUS = raw_csv_AUS.transpose().reset_index()
transposed_csv_AUS.columns = np.arange(len(transposed_csv_AUS.columns))
transposed_csv_NZL = raw_csv_NZL.transpose().reset_index()
transposed_csv_NZL.columns = np.arange(len(transposed_csv_NZL.columns))

# Rename column headers with first row & change series name to year
transposed_csv_AUS.rename(columns=transposed_csv_AUS.iloc[0], inplace=True)
transposed_csv_AUS.drop(index=transposed_csv_AUS.index[0], axis=0, inplace=True)
transposed_csv_NZL.rename(columns=transposed_csv_NZL.iloc[0], inplace=True)
transposed_csv_NZL.drop(index=transposed_csv_NZL.index[0], axis=0, inplace=True)

transposed_csv_AUS.rename(columns={"Series Name": "Year"}, inplace=True)
transposed_csv_NZL.rename(columns={"Series Name": "Year"}, inplace=True)
transposed_csv_AUS
transposed_csv_NZL

# Clean year column
type(transposed_csv_AUS["Year"][1])
transposed_csv_AUS["Year"][1][:4]
transposed_csv_AUS.Year = transposed_csv_AUS.Year.str[:4]
transposed_csv_NZL.Year = transposed_csv_NZL.Year.str[:4]
transposed_csv_AUS
transposed_csv_NZL

# Replace ".." with NaN
transposed_csv_AUS = transposed_csv_AUS.replace('..', np.nan)
transposed_csv_NZL = transposed_csv_NZL.replace('..', np.nan)
```

#### Data Exploration

```
# Dataset checkpoint
AUS_structured = transposed_csv_AUS.copy()
NZL_structured = transposed_csv_NZL.copy()

AUS_structured.info()
NZL_structured.info()
AUS_structured.describe()
NZL_structured.describe()

# Select non-numeric columns, AUS
al_non_numeric = AUS_structured.select_dtypes(exclude=[np.number])
non_numeric_cols = al_non_numeric.columns.values
print(non_numeric_cols)

# Select non-numeric columns, NZL
al_non_numeric = NZL_structured.select_dtypes(exclude=[np.number])
non_numeric_cols = al_non_numeric.columns.values
print(non_numeric_cols)

# % of missing values, AUS
for col in AUS_structured.columns:
    pct_missing = np.mean(AUS_structured[col].isnull())
    print('{} - {}%'.format(col, round(pct_missing*100)))

# % of missing values, NZL
for col in NZL_structured.columns:
    pct_missing = np.mean(NZL_structured[col].isnull())
    print('{} - {}%'.format(col, round(pct_missing*100)))
```

#### Data Cleaning

```
# Drop every column that has missing values
for col in AUS_structured.columns:
    if round((np.mean(AUS_structured[col].isnull()))*100) > 0:
        AUS_structured.drop(col, axis=1, inplace=True)

AUS_structured

for col in NZL_structured.columns:
    if round((np.mean(NZL_structured[col].isnull()))*100) > 0:
        NZL_structured.drop(col, axis=1, inplace=True)

NZL_structured
```

####
Feature Selection ``` # Out of the remaining columns, 85 have been identified based on domain knowledge; these will be used for further analysis in Tableau AUS_preprocessed = AUS_structured[["Year","Adults (ages 15+) and children (ages 0-14) newly infected with HIV","Adults (ages 15-49) newly infected with HIV","Incidence of HIV, ages 15-24 (per 1,000 uninfected population ages 15-24)","Incidence of HIV, ages 15-49 (per 1,000 uninfected population ages 15-49)","Incidence of HIV, all (per 1,000 uninfected population)","Prevalence of HIV, female (% ages 15-24)","Prevalence of HIV, male (% ages 15-24)","Prevalence of HIV, total (% of population ages 15-49)","Women's share of population ages 15+ living with HIV (%)","Young people (ages 15-24) newly infected with HIV","Population ages 00-04, female (% of female population)","Population ages 00-04, male (% of male population)","Population ages 0-14 (% of total population)","Population ages 0-14, female","Population ages 0-14, female (% of female population)","Population ages 0-14, male","Population ages 0-14, male (% of male population)","Population ages 0-14, total","Population ages 05-09, female (% of female population)","Population ages 05-09, male (% of male population)","Population ages 10-14, female (% of female population)","Population ages 10-14, male (% of male population)","Population ages 15-19, female (% of female population)","Population ages 15-19, male (% of male population)","Population ages 15-64 (% of total population)","Population ages 15-64, female","Population ages 15-64, female (% of female population)","Population ages 15-64, male","Population ages 15-64, male (% of male population)","Population ages 15-64, total","Population ages 20-24, female (% of female population)","Population ages 20-24, male (% of male population)","Population ages 25-29, female (% of female population)","Population ages 25-29, male (% of male population)","Population ages 30-34, female (% of female population)","Population ages
30-34, male (% of male population)","Population ages 35-39, female (% of female population)","Population ages 35-39, male (% of male population)","Population ages 40-44, female (% of female population)","Population ages 40-44, male (% of male population)","Population ages 45-49, female (% of female population)","Population ages 45-49, male (% of male population)","Population ages 50-54, female (% of female population)","Population ages 50-54, male (% of male population)","Population ages 55-59, female (% of female population)","Population ages 55-59, male (% of male population)","Population ages 60-64, female (% of female population)","Population ages 60-64, male (% of male population)","Population ages 65 and above (% of total population)","Population ages 65 and above, female","Population ages 65 and above, female (% of female population)","Population ages 65 and above, male","Population ages 65 and above, male (% of male population)","Population ages 65 and above, total","Population ages 65-69, female (% of female population)","Population ages 65-69, male (% of male population)","Population ages 70-74, female (% of female population)","Population ages 70-74, male (% of male population)","Population ages 75-79, female (% of female population)","Population ages 75-79, male (% of male population)","Population ages 80 and above, female (% of female population)","Population ages 80 and above, male (% of male population)","Population, female","Population, female (% of total population)","Population, male","Population, male (% of total population)","Population, total","Proportion of seats held by women in national parliaments (%)","Women Business and the Law Index Score (scale 1-100)","Employment to population ratio, 15+, female (%) (national estimate)","Employment to population ratio, 15+, male (%) (national estimate)","Employment to population ratio, 15+, total (%) (modeled ILO estimate)","Employment to population ratio, 15+, total (%) (national 
estimate)","Employment to population ratio, ages 15-24, female (%) (national estimate)","Employment to population ratio, ages 15-24, male (%) (national estimate)","Employment to population ratio, ages 15-24, total (%) (national estimate)","GDP per person employed (constant 2017 PPP $)","Unemployment, female (% of female labor force) (national estimate)","Unemployment, male (% of male labor force) (national estimate)","Unemployment, total (% of total labor force) (modeled ILO estimate)","Unemployment, total (% of total labor force) (national estimate)","Unemployment, youth female (% of female labor force ages 15-24) (national estimate)","Unemployment, youth male (% of male labor force ages 15-24) (national estimate)","Unemployment, youth total (% of total labor force ages 15-24) (national estimate)"]] NZL_preprocessed = NZL_structured[["Year","Adults (ages 15+) and children (ages 0-14) newly infected with HIV","Adults (ages 15-49) newly infected with HIV","Incidence of HIV, ages 15-24 (per 1,000 uninfected population ages 15-24)","Incidence of HIV, ages 15-49 (per 1,000 uninfected population ages 15-49)","Incidence of HIV, all (per 1,000 uninfected population)","Prevalence of HIV, female (% ages 15-24)","Prevalence of HIV, male (% ages 15-24)","Prevalence of HIV, total (% of population ages 15-49)","Women's share of population ages 15+ living with HIV (%)","Young people (ages 15-24) newly infected with HIV","Population ages 00-04, female (% of female population)","Population ages 00-04, male (% of male population)","Population ages 0-14 (% of total population)","Population ages 0-14, female","Population ages 0-14, female (% of female population)","Population ages 0-14, male","Population ages 0-14, male (% of male population)","Population ages 0-14, total","Population ages 05-09, female (% of female population)","Population ages 05-09, male (% of male population)","Population ages 10-14, female (% of female population)","Population ages 10-14, male (% of male 
population)","Population ages 15-19, female (% of female population)","Population ages 15-19, male (% of male population)","Population ages 15-64 (% of total population)","Population ages 15-64, female","Population ages 15-64, female (% of female population)","Population ages 15-64, male","Population ages 15-64, male (% of male population)","Population ages 15-64, total","Population ages 20-24, female (% of female population)","Population ages 20-24, male (% of male population)","Population ages 25-29, female (% of female population)","Population ages 25-29, male (% of male population)","Population ages 30-34, female (% of female population)","Population ages 30-34, male (% of male population)","Population ages 35-39, female (% of female population)","Population ages 35-39, male (% of male population)","Population ages 40-44, female (% of female population)","Population ages 40-44, male (% of male population)","Population ages 45-49, female (% of female population)","Population ages 45-49, male (% of male population)","Population ages 50-54, female (% of female population)","Population ages 50-54, male (% of male population)","Population ages 55-59, female (% of female population)","Population ages 55-59, male (% of male population)","Population ages 60-64, female (% of female population)","Population ages 60-64, male (% of male population)","Population ages 65 and above (% of total population)","Population ages 65 and above, female","Population ages 65 and above, female (% of female population)","Population ages 65 and above, male","Population ages 65 and above, male (% of male population)","Population ages 65 and above, total","Population ages 65-69, female (% of female population)","Population ages 65-69, male (% of male population)","Population ages 70-74, female (% of female population)","Population ages 70-74, male (% of male population)","Population ages 75-79, female (% of female population)","Population ages 75-79, male (% of male population)","Population 
ages 80 and above, female (% of female population)","Population ages 80 and above, male (% of male population)","Population, female","Population, female (% of total population)","Population, male","Population, male (% of total population)","Population, total","Proportion of seats held by women in national parliaments (%)","Women Business and the Law Index Score (scale 1-100)","Employment to population ratio, 15+, female (%) (national estimate)","Employment to population ratio, 15+, male (%) (national estimate)","Employment to population ratio, 15+, total (%) (modeled ILO estimate)","Employment to population ratio, 15+, total (%) (national estimate)","Employment to population ratio, ages 15-24, female (%) (national estimate)","Employment to population ratio, ages 15-24, male (%) (national estimate)","Employment to population ratio, ages 15-24, total (%) (national estimate)","GDP per person employed (constant 2017 PPP $)","Unemployment, female (% of female labor force) (national estimate)","Unemployment, male (% of male labor force) (national estimate)","Unemployment, total (% of total labor force) (modeled ILO estimate)","Unemployment, total (% of total labor force) (national estimate)","Unemployment, youth female (% of female labor force ages 15-24) (national estimate)","Unemployment, youth male (% of male labor force ages 15-24) (national estimate)","Unemployment, youth total (% of total labor force ages 15-24) (national estimate)"]] AUS_preprocessed NZL_preprocessed #joined dataset AUS_preprocessed["Country"]="Australia" NZL_preprocessed["Country"]="New Zealand" NZL_preprocessed joint_preprocessed = AUS_preprocessed joint_preprocessed = joint_preprocessed.append(NZL_preprocessed) joint_preprocessed ``` #### Save Preprocessed Datasets as CSV ``` AUS_preprocessed.to_csv("AUS_preprocessed.csv", index=False) NZL_preprocessed.to_csv("NZL_preprocessed.csv", index=False) joint_preprocessed.to_csv("joint_preprocessed.csv", index=False) ```
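The column-dropping rule used in the Data Cleaning step above can be illustrated on a toy DataFrame (the data below is made up for illustration, not taken from the World Bank dataset):

```python
import numpy as np
import pandas as pd

# Toy stand-in for AUS_structured: one column contains missing values
df = pd.DataFrame({
    "Year": ["2006", "2007", "2008"],
    "Population, total": [20.7, 21.0, 21.2],
    "Incidence of HIV": [np.nan, 0.1, 0.1],
})

# Same rule as in the notebook: drop any column whose rounded
# percentage of missing values is greater than zero
for col in df.columns:
    if round(np.mean(df[col].isnull()) * 100) > 0:
        df.drop(col, axis=1, inplace=True)

print(list(df.columns))  # -> ['Year', 'Population, total']
```

One subtlety of this rule: because the missing share is rounded to a whole percent, a column missing fewer than 0.5% of its values would survive the filter.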
# Retail Demo Store Messaging Workshop - Amazon Pinpoint

In this workshop we will use [Amazon Pinpoint](https://aws.amazon.com/pinpoint/) to add the ability to dynamically send personalized messages to the customers of the Retail Demo Store. We'll build out the following use-cases.

- Send new users a welcome email after they sign up for a Retail Demo Store account
- When users add items to their shopping cart but do not complete an order, send an email with a coupon code encouraging them to finish their order
- Send users an email with product recommendations from the Amazon Personalize campaign we created in the Personalization workshop

Recommended Time: 1 hour

## Prerequisites

Since this module uses Amazon Personalize to generate and associate personalized product recommendations for users, it is assumed that you have either completed the [Personalization](../1-Personalization/1.1-Personalize.ipynb) workshop or those resources have been pre-provisioned in your AWS environment. If you are unsure and attending an AWS managed event such as a workshop, check with your event lead.

## Architecture

Before diving into setting up Pinpoint to send personalized messages to our users, let's review the relevant parts of the Retail Demo Store architecture and how it uses Pinpoint to integrate with the machine learning campaigns created in Personalize.

![Retail Demo Store Pinpoint Architecture](images/retaildemostore-pinpoint-architecture.png)

### AWS Amplify & Amazon Pinpoint

The Retail Demo Store's Web UI leverages [AWS Amplify](https://aws.amazon.com/amplify/) to integrate with AWS services for authentication ([Amazon Cognito](https://aws.amazon.com/cognito/)), messaging and analytics ([Amazon Pinpoint](https://aws.amazon.com/pinpoint/)), and to keep our personalization ML models up to date ([Amazon Personalize](https://aws.amazon.com/personalize/)). AWS Amplify provides libraries for JavaScript, iOS, Android, and React Native for building web and mobile applications.
For this workshop, we'll be focusing on how user information and events from the Retail Demo Store's Web UI are sent to Pinpoint. This is depicted as **(1)** and **(2)** in the architecture above. We'll also show how the user information and events synchronized to Pinpoint are used to create and send personalized messages.

When a new user signs up for a Retail Demo Store account, views a product, adds a product to their cart, completes an order, and so on, the relevant function is called in [AnalyticsHandler.js](https://github.com/aws-samples/retail-demo-store/blob/master/src/web-ui/src/analytics/AnalyticsHandler.js) in the Retail Demo Store Web UI. The new user sign-up event triggers a call to the `AnalyticsHandler.identify` function, where user information from Cognito is used to [update an endpoint](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id-endpoints.html) in Pinpoint. In Pinpoint, an endpoint represents a destination that you can send messages to, such as a mobile device, email address, or phone number.
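As a rough server-side illustration of what such an endpoint update carries, the request body for Pinpoint's `update_endpoint` API can be sketched in Python. This is a sketch under assumptions: the user record, application ID, and endpoint ID below are hypothetical, and in the Retail Demo Store this update actually happens client-side through Amplify.

```python
import json

# Hypothetical user record; in the app these fields come from Cognito
user = {"id": "user-42", "email": "jane@example.com", "first_name": "Jane"}

# Build the EndpointRequest payload that pinpoint.update_endpoint accepts
endpoint_request = {
    "Address": user["email"],      # where messages for this endpoint are delivered
    "ChannelType": "EMAIL",
    "OptOut": "NONE",
    "User": {
        "UserId": user["id"],
        "UserAttributes": {"FirstName": [user["first_name"]]},
    },
}

# The actual call (requires AWS credentials and a Pinpoint application):
# boto3.client("pinpoint").update_endpoint(
#     ApplicationId="<app-id>",            # hypothetical placeholder
#     EndpointId="email-" + user["id"],    # hypothetical endpoint ID scheme
#     EndpointRequest=endpoint_request)

print(json.dumps(endpoint_request, indent=2))
```

Note that endpoint attribute values are lists of strings, which is why `FirstName` is wrapped in a list here and in the JavaScript excerpt below.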
```javascript
// Excerpt from src/web-ui/src/analytics/AnalyticsHandler.js
export const AnalyticsHandler = {
    identify(user) {
        Vue.prototype.$Amplify.Auth.currentAuthenticatedUser().then((cognitoUser) => {
            let endpoint = {
                userId: user.id,
                optOut: 'NONE',
                userAttributes: {
                    Username: [user.username],
                    ProfileEmail: [user.email],
                    FirstName: [user.first_name],
                    LastName: [user.last_name],
                    Gender: [user.gender],
                    Age: [user.age.toString()],
                    Persona: user.persona.split("_")
                }
            }

            if (user.addresses && user.addresses.length > 0) {
                let address = user.addresses[0]
                endpoint.location = {
                    City: address.city,
                    Country: address.country,
                    PostalCode: address.zipcode,
                    Region: address.state
                }
            }

            if (cognitoUser.attributes.email) {
                endpoint.address = cognitoUser.attributes.email
                endpoint.channelType = 'EMAIL'
                Amplify.Analytics.updateEndpoint(endpoint)
            }
        })
    }
}
```

Once an `EMAIL` endpoint is created for our user, we can update attributes on that endpoint based on actions the user takes in the web UI. For example, when the user adds an item to their shopping cart, we'll set the attribute `HasShoppingCart` to `true` to indicate that this endpoint has an active shopping cart. We can also set metrics such as the number of items in the endpoint's cart. As we'll see later, we can use these attributes when building Campaigns in Pinpoint to target endpoints based on their activity in the application.

```javascript
// Excerpt from src/web-ui/src/analytics/AnalyticsHandler.js
productAddedToCart(userId, cart, product, quantity, experimentCorrelationId) {
    Amplify.Analytics.updateEndpoint({
        attributes: {
            HasShoppingCart: ['true']
        },
        metrics: {
            ItemsInCart: cart.items.length
        }
    })
}
```

When the user completes an order, we send revenue tracking events to Pinpoint, as shown below, and also update endpoint attributes and metrics. We'll see how these events, attributes, and metrics can be used later in this workshop.
```javascript
// Excerpt from src/web-ui/src/analytics/AnalyticsHandler.js
orderCompleted(user, cart, order) {
    // ...
    for (var itemIdx in order.items) {
        let orderItem = order.items[itemIdx]
        Amplify.Analytics.record({
            name: '_monetization.purchase',
            attributes: {
                userId: user ? user.id : null,
                cartId: cart.id,
                orderId: order.id.toString(),
                _currency: 'USD',
                _product_id: orderItem.product_id
            },
            metrics: {
                _quantity: orderItem.quantity,
                _item_price: +orderItem.price.toFixed(2)
            }
        })
    }

    Amplify.Analytics.updateEndpoint({
        attributes: {
            HasShoppingCart: ['false'],
            HasCompletedOrder: ['true']
        },
        metrics: {
            ItemsInCart: 0
        }
    })
}
```

### Integrating Amazon Pinpoint & Amazon Personalize - Pinpoint Recommenders

When building a Campaign in Amazon Pinpoint, you can associate the Pinpoint Campaign with a machine learning model, or recommender, that will be used to retrieve item recommendations for each endpoint eligible for the campaign. A recommender is linked to an Amazon Personalize Campaign. As you may recall from the [Personalization workshop](../1-Personalization/1.1-Personalize.ipynb), a Personalize Campaign only returns a list of item IDs (which represent product IDs for Retail Demo Store products). To turn the list of item IDs into more useful information for building a personalized email, Pinpoint supports the option to associate an AWS Lambda function with a recommender. This function is called with information about the endpoint and the item IDs from Personalize, and it returns metadata about each item ID. In your Pinpoint message template, you can then reference the item metadata to incorporate it into your messages.

The Retail Demo Store architecture already has a [Lambda function](https://github.com/aws-samples/retail-demo-store/blob/master/src/aws-lambda/pinpoint-recommender/pinpoint-recommender.py) deployed to use for our Pinpoint recommender.
This function calls the Retail Demo Store's [Products](https://github.com/aws-samples/retail-demo-store/tree/master/src/products) microservice to retrieve useful information for each product (name, description, price, image URL, product URL, and so on). We will create a Pinpoint recommender in this workshop to tie it all together. This is depicted as **(3)**, **(4)**, and **(5)** in the architecture above.

## Setup

Before we can make API calls to set up Pinpoint from this notebook, we need to install and import the necessary dependencies.

### Import Dependencies

Next, let's import the dependencies we'll need for this notebook. We also have to retrieve the `Uid` from a SageMaker notebook instance tag.

```
# Import Dependencies
import boto3
import time
import json
import requests

from botocore.exceptions import ClientError

# Setup Clients
personalize = boto3.client('personalize')
ssm = boto3.client('ssm')
pinpoint = boto3.client('pinpoint')
lambda_client = boto3.client('lambda')
iam = boto3.client('iam')

# Service discovery will allow us to dynamically discover Retail Demo Store resources
servicediscovery = boto3.client('servicediscovery')

with open('/opt/ml/metadata/resource-metadata.json') as f:
    data = json.load(f)

sagemaker = boto3.client('sagemaker')
sagemaker_response = sagemaker.list_tags(ResourceArn=data["ResourceArn"])

for tag in sagemaker_response["Tags"]:
    if tag['Key'] == 'Uid':
        Uid = tag['Value']
        break
```

### Determine Pinpoint Application/Project

When the Retail Demo Store resources were deployed by the CloudFormation templates, a Pinpoint Application (aka Project) was automatically created with the name "retaildemostore". In order for us to interact with the application via API calls in this notebook, we need to determine the application ID. Let's look up our Pinpoint application using the Pinpoint API.
```
pinpoint_app_name = 'retaildemostore'
pinpoint_app_id = None

get_apps_response = pinpoint.get_apps()

if get_apps_response['ApplicationsResponse'].get('Item'):
    for app in get_apps_response['ApplicationsResponse']['Item']:
        if app['Name'] == pinpoint_app_name:
            pinpoint_app_id = app['Id']
            break

assert pinpoint_app_id is not None, 'Retail Demo Store Pinpoint project/application does not exist'
print('Pinpoint Application ID: ' + pinpoint_app_id)
```

### Get Personalize Campaign ARN

Before we can create a recommender in Pinpoint, we need the Amazon Personalize Campaign ARN for the product recommendation campaign. Let's look it up in the SSM parameter store where it was set by the Personalize workshop.

```
response = ssm.get_parameter(Name='retaildemostore-product-recommendation-campaign-arn')
personalize_campaign_arn = response['Parameter']['Value']
assert personalize_campaign_arn != 'NONE', 'Personalize Campaign ARN not initialized - run Personalization workshop'
print('Personalize Campaign ARN: ' + personalize_campaign_arn)
```

### Get Recommendation Customizer Lambda ARN

We also need the ARN for our Lambda function that will return product metadata for the item IDs. This function has already been deployed for you. Let's look up our function by its name.

```
response = lambda_client.get_function(FunctionName = 'RetailDemoStorePinpointRecommender')
lambda_function_arn = response['Configuration']['FunctionArn']
print('Recommendation customizer Lambda ARN: ' + lambda_function_arn)
```

### Get IAM Role for Pinpoint to access Personalize

In order for Pinpoint to access our Personalize campaign to get recommendations, we need to provide it with an IAM Role. The Retail Demo Store deployment has already created a role with the necessary policies. Let's look it up by its role name.
```
response = iam.get_role(RoleName = Uid+'-PinptP9e')
pinpoint_personalize_role_arn = response['Role']['Arn']
print('Pinpoint IAM role for Personalize: ' + pinpoint_personalize_role_arn)
```

## Create Pinpoint Recommender Configuration

With our environment setup and configuration info loaded, we can now create a recommender in Amazon Pinpoint.

> We're using the Pinpoint API to create the Recommender Configuration in this workshop. You can also create a recommender in the AWS Console for Pinpoint under the "Machine learning models" section.

A few things to note in the recommender configuration below.

- In the `Attributes` section, we're creating user-friendly names for the product information fields returned by our Lambda function. These names will be used in the Pinpoint console UI when designing message templates and can make it easier for template designers to select fields.
- We're using `PINPOINT_USER_ID` for the `RecommendationProviderIdType` since the endpoint's `UserId` is where we set the ID for the user in the Retail Demo Store. Since this ID is what we use to represent each user when training the recommendation models in Personalize, we need Pinpoint to use this ID as well when retrieving recommendations.
- We're limiting the number of recommendations per message to 4.
```
response = pinpoint.create_recommender_configuration(
    CreateRecommenderConfiguration={
        'Attributes': {
            'Recommendations.Name': 'Product Name',
            'Recommendations.URL': 'Product Detail URL',
            'Recommendations.Category': 'Product Category',
            'Recommendations.Description': 'Product Description',
            'Recommendations.Price': 'Product Price',
            'Recommendations.ImageURL': 'Product Image URL'
        },
        'Description': 'Retail Demo Store Personalize recommender for Pinpoint',
        'Name': 'retaildemostore-recommender',
        'RecommendationProviderIdType': 'PINPOINT_USER_ID',
        'RecommendationProviderRoleArn': pinpoint_personalize_role_arn,
        'RecommendationProviderUri': personalize_campaign_arn,
        'RecommendationTransformerUri': lambda_function_arn,
        'RecommendationsPerMessage': 4
    }
)

recommender_id = response['RecommenderConfigurationResponse']['Id']
print('Pinpoint recommender configuration ID: ' + recommender_id)
```

### Verify Machine Learning Model / Recommender

If you open a web browser window/tab and browse to the Pinpoint service in the AWS console for the AWS account we're working with, you should see the ML Model / Recommender that we just created in Pinpoint.

![Pinpoint ML Model / Recommender](images/pinpoint-ml-model.png)

## Create Personalized Email Templates

With Amazon Pinpoint we can create email templates that can be sent to groups of our users based on criteria. We'll start by creating email templates for the following use-cases, then step through how we target and send emails to the right users at the appropriate time.

- Welcome Email - sent to users shortly after creating a Retail Demo Store account
- Abandoned Cart Email - sent to users who leave items in their cart without completing an order
- Personalized Recommendations Email - includes recommendations from the recommender we just created

### Load Welcome Email Templates

The first email template will be a welcome email template that is sent to new users of the Retail Demo Store after they create an account.
Our templates will support both HTML and plain text formats. We'll load both formats and create the template. You can find all templates used in this workshop in the `pinpoint-templates` directory where this notebook is located. They can also be found in the Retail Demo Store source code repository.

Let's load the HTML version of our welcome template and then look at a snippet of it. The complete template is available for review at [pinpoint-templates/welcome-email-template.html](pinpoint-templates/welcome-email-template.html).

```
with open('pinpoint-templates/welcome-email-template.html', 'r') as html_file:
    html_template = html_file.read()
```

```html
<!-- Excerpt from pinpoint-templates/welcome-email-template.html -->
<table border="0" cellpadding="0" cellspacing="0">
  <tr>
    <td>
      <h1>Thank you for joining the Retail Demo Store!</h1>
      <p><strong>Hi, {{User.UserAttributes.FirstName}}.</strong> We just wanted to send you a quick note thanking you for creating an account on the Retail Demo Store. We're excited to serve you.</p>
      <p>We pride ourselves in providing a wide variety of high quality products in our store and delivering exceptional customer service.</p>
      <p>Please drop-in and check out our store often to see what's new and for personalized recommendations we think you'll love.</p>
      <p>Cheers,<br/>Retail Demo Store team</p>
    </td>
  </tr>
  <tr>
    <td style="text-align: center; padding-top: 20px">
      <small>Retail Demo Store &copy; 2019-2020</small>
    </td>
  </tr>
</table>
```

Notice how we're using the mustache template tagging syntax, `{{User.UserAttributes.FirstName}}`, to display the user's first name. This will provide a nice touch of personalization to our welcome email.

Next we'll load and display the text version of our welcome email.
```
with open('pinpoint-templates/welcome-email-template.txt', 'r') as text_file:
    text_template = text_file.read()

print('Text Template:')
print(text_template)
```

### Create Welcome Email Pinpoint Template

Now let's take our HTML and text email template source and create a template in Amazon Pinpoint. We'll use a default substitution of "there" for the user's first name attribute in case it is not set for some reason. This will result in the email greeting being "Hi there,..." rather than "Hi ,..." if we don't have a value for first name.

```
response = pinpoint.create_email_template(
    EmailTemplateRequest={
        'Subject': 'Welcome to the Retail Demo Store',
        'TemplateDescription': 'Welcome email sent to new customers',
        'HtmlPart': html_template,
        'TextPart': text_template,
        'DefaultSubstitutions': json.dumps({
            'User.UserAttributes.FirstName': 'there'
        })
    },
    TemplateName='RetailDemoStore-Welcome'
)

welcome_template_arn = response['CreateTemplateMessageBody']['Arn']
print('Welcome email template ARN: ' + welcome_template_arn)
```

### Load Abandoned Cart Email Templates

Next we'll create an email template with messaging for users who add items to their cart but fail to complete an order. The following is a snippet of the Abandoned Cart Email template. Notice how multiple style properties are being set for email formatting. You can also see how the template refers to custom attributes, such as the cart item properties `ShoppingCartItemTitle` and `ShoppingCartItemImageURL`, that can be passed in.
Complete template available for review at [pinpoint-templates/abandoned-cart-email-template.html](pinpoint-templates/abandoned-cart-email-template.html) ``` with open('pinpoint-templates/abandoned-cart-email-template.html', 'r') as html_file: html_template = html_file.read() ``` ```html // Excerpt from pinpoint-templates/abandoned-cart-email-template.html <tr> <td style="width:139px;"> <img alt="{{Attributes.ShoppingCartItemTitle}}" height="auto" src="{{Attributes.ShoppingCartItemImageURL}}" style="border:none;display:block;outline:none;text-decoration:none;height:auto;width:100%;font-size:13px;" width="139" /> </td> </tr> </tbody> </table> </td> </tr> </table> </div> <!--[if mso | IE]> </td> <td style="vertical-align:top;width:285px;" > <![endif]--> <div class="mj-column-per-50 mj-outlook-group-fix" style="font-size:0px;text-align:left;direction:ltr;display:inline-block;vertical-align:top;width:50%;"> <table border="0" cellpadding="0" cellspacing="0" role="presentation" style="vertical-align:top;" width="100%"> <tr> <td align="center" style="font-size:0px;padding:10px 25px;word-break:break-word;"> <div style="font-family:Ubuntu, Helvetica, Arial, sans-serif;font-size:18px;font-weight:bold;line-height:1;text-align:center;color:#000000;"> <p>{{Attributes.ShoppingCartItemTitle}}</p> </div> </td> </tr> <tr> <td align="center" vertical-align="middle" style="font-size:0px;padding:10px 25px;word-break:break-word;"> <table border="0" cellpadding="0" cellspacing="0" role="presentation" style="border-collapse:separate;line-height:100%;"> <tr> <td align="center" bgcolor="#FF9900" role="presentation" style="border:none;border-radius:3px;cursor:auto;mso-padding-alt:10px 25px;background:#FF9900;" valign="middle"> <a href="{{Attributes.WebsiteCartURL}}" style="display:inline-block;background:#FF9900;color:#ffffff;font-family:Ubuntu, Helvetica, Arial, sans-serif;font-size:9px;font-weight:normal;line-height:120%;margin:0;text-decoration:none;text-transform:none;padding:10px 
25px;mso-padding-alt:0px;border-radius:3px;" target="_blank">
              BUY NOW
            </a>
          </td>
        </tr>
```

```
with open('pinpoint-templates/abandoned-cart-email-template.txt', 'r') as text_file:
    text_template = text_file.read()

print('Text Template:')
print(text_template)
```

### Create Abandoned Cart Email Template

Now we can create an email template in Pinpoint for our abandoned cart use-case.

```
response = pinpoint.create_email_template(
    EmailTemplateRequest={
        'Subject': 'Retail Demo Store - Motivation to Complete Your Order',
        'TemplateDescription': 'Abandoned cart email template',
        'HtmlPart': html_template,
        'TextPart': text_template,
        'DefaultSubstitutions': json.dumps({
            'User.UserAttributes.FirstName': 'there'
        })
    },
    TemplateName='RetailDemoStore-AbandonedCart'
)

abandoned_cart_template_arn = response['CreateTemplateMessageBody']['Arn']
print('Abandoned cart email template ARN: ' + abandoned_cart_template_arn)
```

### Load Recommendations Email Templates

Next we'll create an email template that includes recommendations from the Amazon Personalize product recommendation campaign that we created in the [Personalization workshop](../1-Personalization/1.1-Personalize.ipynb). If you haven't completed the personalization workshop, please do so now and come back to this workshop when complete.

As with the welcome email template, let's load and then view snippets of the HTML and text formats for our template. Complete template is available at [pinpoint-templates/recommendations-email-template.html](pinpoint-templates/recommendations-email-template.html)

```
with open('pinpoint-templates/recommendations-email-template.html', 'r') as html_file:
    html_template = html_file.read()
```

```html
<!-- Excerpt from pinpoint-templates/recommendations-email-template.html -->
<table border="0" cellpadding="0" cellspacing="0">
  <tr>
    <td>
      <h1>Hi, {{User.UserAttributes.FirstName}}.
      Greetings from the Retail Demo Store!</h1>
      <p>Here are a few products inspired by your shopping trends</p>
      <p>&nbsp;</p>
    </td>
  </tr>
  <tr>
    <td>
      <table border="0" cellpadding="4" cellspacing="0">
        <tr valign="top">
          <td style="text-align: left; width: 40%;" width="40%">
            <a href="{{Recommendations.URL.[0]}}">
              <img src="{{Recommendations.ImageURL.[0]}}" alt="{{Recommendations.Name.[0]}}" style="min-width: 50px; max-width: 300px; border: 0; text-decoration:none; vertical-align: baseline;"/>
            </a>
          </td>
          <td style="text-align: left;">
            <h3>{{Recommendations.Name.[0]}}</h3>
            <p>{{Recommendations.Description.[0]}}</p>
            <p><strong>{{Recommendations.Price.[0]}}</strong></p>
            <p><a href="{{Recommendations.URL.[0]}}"><strong>Buy Now!</strong></a></p>
          </td>
        </tr>
```

Notice the use of several new mustache template tags in this template. For example, `{{Recommendations.Name.[0]}}` resolves to the product name of the first product recommended by Personalize. The product name came from our Lambda function, which Pinpoint invoked after calling `get_recommendations` on our Personalize campaign.

Next, load the text version of our template.

```
with open('pinpoint-templates/recommendations-email-template.txt', 'r') as text_file:
    text_template = text_file.read()

print('Text Template:')
print(text_template)
```

### Create Recommendations Email Template

This time when we create the template in Pinpoint, we'll specify the `RecommenderId` of the machine learning model (Amazon Personalize) that we created earlier.
```
response = pinpoint.create_email_template(
    EmailTemplateRequest={
        'Subject': 'Retail Demo Store - Products Just for You',
        'TemplateDescription': 'Personalized recommendations email template',
        'RecommenderId': recommender_id,
        'HtmlPart': html_template,
        'TextPart': text_template,
        'DefaultSubstitutions': json.dumps({
            'User.UserAttributes.FirstName': 'there'
        })
    },
    TemplateName='RetailDemoStore-Recommendations'
)

recommendations_template_arn = response['CreateTemplateMessageBody']['Arn']
print('Recommendation email template ARN: ' + recommendations_template_arn)
```

### Verify Email Templates

If you open a web browser window/tab and browse to the Pinpoint service in the AWS console for the AWS account we're working with, you should see the message templates we just created.

![Pinpoint Message Templates](images/pinpoint-msg-templates.png)

## Enable Pinpoint Email Channel

Before we can set up Segments and Campaigns to send emails, we have to enable the email channel in Pinpoint and verify sending and receiving email addresses.

> We'll be using the Pinpoint email channel in sandbox mode. This means that Pinpoint will only send emails from and to addresses that have been verified in the Pinpoint console.

In the Pinpoint console, click on "All Projects" and then the "retaildemostore" project.

![Pinpoint Projects](images/pinpoint-projects.png)

### Email Settings

From the "retaildemostore" project page, expand "Settings" in the left navigation and then click "Email". You will see that email has not yet been enabled as a channel. Click the "Edit" button to enable Pinpoint to send emails and to verify some email addresses.

![Pinpoint Email Settings](images/pinpoint-email-setup.png)

### Verify Some Email Addresses

On the "Edit email" page, check the box to enable the email channel and enter a valid email address that you have the ability to check throughout the rest of this workshop.
![Pinpoint Verify Email Addresses](images/pinpoint-email-verify.png)

### Verify Additional Email Addresses

So that we can send an email to more than one endpoint in this workshop, verify a couple more variations of your email address. Assuming your **valid** email address is `joe@example.com`, add a few more variations using `+` notation such as...

- `joe+1@example.com`
- `joe+2@example.com`
- `joe+3@example.com`

Just enter a variation, click the "Verify email address" button, and repeat until you've added a few more. Write down or commit to memory the variations you created--we'll need them later.

By adding these variations, we're able to create separate Retail Demo Store accounts for each email address and therefore separate endpoints in Pinpoint that we can target. Note that emails sent to these variations should still be delivered to your same inbox.

### Check Your Inbox & Click Verification Links

Pinpoint should have sent verification emails to all of the email addresses you added above. Sign in to your email client and check your inbox for the verification emails. Once you receive the emails (it can take a few minutes), click on the verification link in **each email**. If after several minutes you don't receive the verification email or you want to use a different address, repeat the verification process above.

> Your email address(es) must be verified before we can set up Campaigns in Pinpoint.

After you click the verify link in the email sent to each variation of your email address, you should see a success page like the following.

![Email Verified](images/pinpoint-ses-success.png)

## Let's Go Shopping - Create Retail Demo Store User Accounts & Pinpoint Endpoints

Next let's create a few new user accounts in the Retail Demo Store Web UI using the email address(es) that we just verified. Based on the source code snippets we saw earlier, we know that the Retail Demo Store will create endpoints in Pinpoint for new accounts.
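For context, that endpoint registration amounts to a Pinpoint `update_endpoint` call along these lines. This is a hedged sketch, not the storefront's actual code: every identifier below (the address, user ID, endpoint ID, and `application_id`) is an illustrative placeholder, and the real call requires a boto3 Pinpoint client with credentials.

```python
# Shape of the endpoint a storefront might register for a new account.
# All values below are illustrative placeholders.
endpoint_request = {
    'ChannelType': 'EMAIL',
    'Address': 'joe+1@example.com',
    'User': {
        'UserId': 'user-0001',
        'UserAttributes': {'FirstName': ['Joe'], 'LastName': ['Doe']}
    }
}

# Hypothetical call -- needs a boto3 Pinpoint client and the project's application ID:
# pinpoint.update_endpoint(ApplicationId=application_id,
#                          EndpointId='email-endpoint-user-0001',
#                          EndpointRequest=endpoint_request)
print(endpoint_request['Address'])
```

Note that `UserAttributes` values are lists of strings, which is why the template substitutions earlier address them as `User.UserAttributes.FirstName`.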
<div class="alert alert-info">
IMPORTANT: each Retail Demo Store account must be created in an entirely separate web browser session in order for them to be created as separate endpoints in Pinpoint. Signing out and attempting to create a new account in the same browser will NOT work. The easiest way to do this successfully is to use Google Chrome and open a new Incognito window for each new account. Alternatively, you could use multiple browser types (e.g. Chrome, Firefox, Safari, IE) and/or separate devices, such as a mobile phone or tablet, to create accounts.
</div>

1. Open the Retail Demo Store Web UI in a new Incognito window. If you don't already have the Web UI open or need the URL, you can find it in the "Outputs" tab for the Retail Demo Store CloudFormation stack in your AWS account. Look for the "WebURL" output field, right click on the link, and select "Open Link in Incognito Window" (Chrome only).

![CloudFormation Outputs](images/retaildemostore-cfn-outputs.png)

2. Click the "Sign In" button in the top navigation (right side) and then click on the "Create account" link in the "Sign in" form.

![Create Retail Demo Store account](images/retaildemostore-create-account.png)

3. A few seconds after creating your account you should receive an email with a six digit confirmation code. Enter this code on the confirmation page.

![Confirm Retail Demo Store account](images/retaildemostore-confirm-account.png)

4. Once confirmed you can sign in to your account with your user name and password. At this point you should have an endpoint in Pinpoint for this user.

5. Close your Incognito window(s).

6. Open a new Incognito window and **repeat the process for SOME (but not all) of your remaining email address variations** you verified in Pinpoint above.

**As a reminder, it's important that you create each Retail Demo Store account in a separate/new Incognito window, browser application, or device.
Otherwise, your accounts will overwrite the same endpoint in Pinpoint.**

<div class="alert alert-info">
Be sure to hold back one or two of your verified email addresses until after we create a welcome email campaign below so the sign up events fall within the time window of the campaign.
</div>

### Shopping Behavior

With your Retail Demo Store accounts created, perform some activities with some of your accounts.

- For one of your users, add some items to the shopping cart but do not check out, to simulate an abandoned cart scenario.
- For another user, add some items to the cart and complete an order or two so that revenue events are sent all the way through to Pinpoint.
- Also be sure to view a few products by clicking through to the product detail view. Select products that would indicate an affinity for a product type (e.g. shoes or electronics) so you can see how product recommendations are tailored in the product recommendations email.

## Create Pinpoint Segments

With our Recommender and message templates in place and a few test users created in the Retail Demo Store, let's turn to creating Segments in Pinpoint. After our Segments are created, we'll create some Campaigns.

1. Start by browsing to the Amazon Pinpoint service page in the AWS account where the Retail Demo Store was deployed. Click on "All Projects" and you should see the "retaildemostore" project. Click on the "retaildemostore" project and then "Segments" in the left navigation. Click on the "Create a segment" button.

![Pinpoint Segments](images/pinpoint-segments.png)

2. Then click on the "Create segment" button. We will be building a dynamic segment based on the endpoints that were automatically created when we created our Retail Demo Store user accounts. We'll include all endpoints that have an email address by adding a filter by channel type with a value of `EMAIL`. Name your segment "AllEmailUsers" and scroll down and click the "Create segment" button at the bottom of the page.
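As an aside, the same dynamic segment can be expressed with the boto3 `create_segment` API. The sketch below only builds the request body for the EMAIL channel filter; the commented-out call assumes a `pinpoint` client and an `application_id` for the retaildemostore project, neither of which this notebook has defined:

```python
# Segment dimensions equivalent to the console filter "channel type = EMAIL"
write_segment_request = {
    'Name': 'AllEmailUsers',
    'Dimensions': {
        'Demographic': {
            'Channel': {'DimensionType': 'INCLUSIVE', 'Values': ['EMAIL']}
        }
    }
}

# Hypothetical call -- requires a boto3 Pinpoint client and the project's application ID:
# pinpoint.create_segment(ApplicationId=application_id,
#                         WriteSegmentRequest=write_segment_request)
print(write_segment_request['Name'])
```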
![Pinpoint Create Segment](images/pinpoint-create-segment.png)

3. Create another segment that is based on the "AllEmailUsers" segment you just created but has an additional filter on the `HasShoppingCart` endpoint attribute with a value of `true`. This represents all users that have a shopping cart and will be used for our abandoned cart campaign. If you don't see this endpoint attribute or don't see `true` as an option, switch to another browser tab/window and add items to the shopping cart for one of your test users.

![Pinpoint Carts Segment](images/pinpoint-carts-segment.png)

## Create Campaigns

With our segments created for all users and for users with shopping carts, let's create campaigns for our welcome email, product recommendations, and abandoned cart use-cases.

### Welcome Email Campaign

Let's start with the welcome email campaign. For the "retaildemostore" project in Pinpoint, click "Campaigns" in the left navigation and then the "Create a campaign" button.

1. For Step 1, give your campaign a name such as "WelcomeEmail", select "Standard campaign" as the campaign type, and "Email" as the channel. Click "Next" to continue.

![Pinpoint Create Campaign](images/pinpoint-create-welcome-campaign-1.png)

2. For Step 2, we will be using our "AllEmailUsers" dynamic segment. Click "Next" to continue.

![Pinpoint Create Campaign](images/pinpoint-create-welcome-campaign-2.png)

3. For Step 3, choose the "RetailDemoStore-Welcome" email template, scroll to the bottom of the page, and click "Next".

![Pinpoint Create Campaign](images/pinpoint-create-welcome-campaign-3.png)

4. For Step 4, we want the campaign to be sent when the `UserSignedUp` event occurs. Set the campaign start date to be today's date so that it begins immediately and the end date to be a few days into the future. **Be sure to adjust to your current time zone.**

![Pinpoint Create Campaign](images/pinpoint-create-welcome-campaign-4.png)

5. Scroll to the bottom of the page, click "Next".

6.
Click "Launch campaign" to launch your campaign.

<div class="alert alert-info">
<strong>Given that the welcome campaign is activated based on sign up events that occur between the campaign start and end times, to test this campaign you must wait until after the campaign starts and then use one of your remaining verified email addresses to create a new Retail Demo Store account.</strong>
</div>

### Abandoned Cart Campaign

To create an abandoned cart campaign, repeat the steps you followed for the Welcome campaign above but this time select the `UsersWithCarts` segment, the `RetailDemoStore-AbandonedCart` email template, and the `Session Stop` event. This will trigger the abandoned cart email to be sent when users end their session while still having a shopping cart.

Launch the campaign, wait for the campaign to start, and then close out some browser sessions for user(s) with items still in their cart. This can take some trial and error and waiting, given how browsers and devices trigger end of session events.

### Recommendations Campaign

Finally, create a recommendations campaign that targets the `AllEmailUsers` segment and uses the `RetailDemoStore-Recommendations` message template. This time, however, rather than triggering the campaign based on an event, we'll send the campaign immediately. Click "Next", launch the campaign, and check the email inbox for your test accounts after a few moments.

![Pinpoint Create Campaign](images/pinpoint-create-rec-campaign-4.png)

## Bonus - Pinpoint Journeys

With Amazon Pinpoint journeys, you can create custom experiences for your customers using an easy to use, drag-and-drop interface. When you build a journey, you choose the activities that you want to add to the journey. These activities can perform a variety of different actions, like sending an email to journey participants, waiting a defined period of time, or splitting users based on a certain action, such as when they open or click a link in an email.
Using the segments and message templates you've already created, experiment with creating a journey that guides users through a messaging experience. For example, start a journey by sending all users the Recommendations message template. Then add a pause/wait step followed by a Multivariate Split that directs users down separate paths depending on whether they've completed an order (hint: create an `OrderCompleted` segment), opened the Recommendations email, or done nothing. Perhaps users who completed an order might receive a message asking them to refer a friend to the Retail Demo Store, and users who just opened the email might be sent a message with a coupon to motivate them to get shopping (you'll need to create new message templates for these).

## Workshop Complete

Congratulations! You have completed the Retail Demo Store Pinpoint Workshop.

### Cleanup

If you launched the Retail Demo Store in your personal AWS account **AND** you're done with all workshops & your evaluation of the Retail Demo Store, you can remove all provisioned AWS resources and data by deleting the CloudFormation stack you used to deploy the Retail Demo Store.

Although deleting the CloudFormation stack will delete the entire "retaildemostore" project in Pinpoint, including all endpoint data, it will not delete resources we created directly in this workshop (i.e. outside of the "retaildemostore" Pinpoint project). The following cleanup steps will remove the resources we created outside the "retaildemostore" Pinpoint project.

> If you are participating in an AWS managed event such as a workshop and using an AWS provided temporary account, you can skip the following cleanup steps unless otherwise instructed.
#### Delete Recommender Configuration

```
response = pinpoint.delete_recommender_configuration(RecommenderId=recommender_id)
print(json.dumps(response, indent=2))
```

#### Delete Email Message Templates

```
response = pinpoint.delete_email_template(TemplateName='RetailDemoStore-Welcome')
print(json.dumps(response, indent=2))

response = pinpoint.delete_email_template(TemplateName='RetailDemoStore-AbandonedCart')
print(json.dumps(response, indent=2))

response = pinpoint.delete_email_template(TemplateName='RetailDemoStore-Recommendations')
print(json.dumps(response, indent=2))
```

Other resources allocated for the Retail Demo Store will be deleted when the CloudFormation stack is deleted.

End of workshop
# Kqlmagic - __parametrization__ features

***
Explains how to embed python values in kql queries
***

## Make sure that you have the latest version of Kqlmagic

Download Kqlmagic from PyPI and install/update (if the latest version is already installed you can skip this step)

```
#!pip install Kqlmagic --no-cache-dir --upgrade
```

## Add Kqlmagic to notebook magics

```
%reload_ext Kqlmagic
```

## Authenticate to get access to data

```
%kql azure-data-explorer://code;cluster='help';database='Samples'
```

## Use python user namespace as source of parameters

- prefix query with **kql let statements** to parametrize the query
- beware of the mapping:
  - int -> long
  - float -> real
  - str -> string
  - bool -> bool
  - datetime -> datetime
  - timedelta -> timespan
  - dict, list, set, tuple -> dynamic (only if it can be serialized to json)
  - **pandas dataframe -> view table**
  - None -> null
  - unknown, str(value) == 'nan' -> real(null)
  - unknown, str(value) == 'NaT' -> datetime(null)
  - unknown, str(value) == 'nat' -> time(null)
  - other -> string

```
from datetime import datetime, timedelta
my_limit = 10
my_not_state = 'TEXAS'
my_start_datetime = datetime(2007, 8, 29)
my_timespan = timedelta(days=100)
my_dict = {"a":1}
my_list = ["x", "y", "z"]
my_tuple = ("t", 44, my_limit)
my_set = {6,7,8}

%%kql
let _dict_ = my_dict;
let _list_ = my_list;
let _tuple_ = my_tuple;
let _set_ = my_set;
let _start_time_ = my_start_datetime;
let _timespan_ = my_timespan;
let _limit_ = my_limit;
let _not_val_ = my_not_state;
StormEvents
| where StartTime >= _start_time_
| where EndTime <= _start_time_ + _timespan_
| where State != _not_val_
| summarize count() by State
| extend d = _dict_
| extend l = _list_
| extend t = _tuple_
| extend s = _set_
| sort by count_
| limit _limit_
```

## Dataframe parameter as a kql table

- prefix query with a **kql let statement** that assigns a kql var to the dataframe
- beware of the mapping of the dataframe to kql table column types:
  - int8,int16,int32,int64,uint8,uint16,uint32,uint64 -> long
  - float16,float32,float64 -> real
  - character -> string
  - bytes -> string
  - void -> string
  - category -> string
  - datetime,datetime64,datetime64[ns],datetime64[ns,tz] -> datetime
  - timedelta,timedelta64,timedelta64[ns] -> timespan
  - bool -> bool
  - record -> dynamic
  - complex64,complex128 -> dynamic([real, imag])
  - object -> if all objects of type:
    - dict,list,tuple,set -> dynamic (only if it can be serialized to json)
    - bool or nan -> bool
    - float or nan -> float
    - int or nan -> long
    - datetime or 'NaT' -> datetime
    - timedelta or 'NaT' -> timespan
    - other -> string

```
my_df = _kql_raw_result_.to_dataframe()
my_df

%%kql
let _my_table_ = my_df;
_my_table_
| project State, s, t
| limit 3

_kql_raw_result_.parametrized_query
```

## Parametrize the whole query string

```
sort_col = 'count_'
my_query = """StormEvents
| where State != 'OHIO'
| summarize count() by State
| sort by {0}
| limit 5""".format(sort_col)

%kql -query my_query
```

## Use python dictionary as source of parameters

- set option -params_dict with the name of a python variable that refers to the dictionary
- prefix query with kql let statements to parametrize the query

```
p_dict = {'p_limit':20, 'p_not_state':'IOWA'}

%%kql -params_dict p_dict
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
```

## Use python dictionary expression as source of parameters

- set option -params_dict with a dictionary string (python format)
- prefix query with kql let statements to parametrize the query
- **make sure that the dictionary expression is without spaces**

```
%%kql -params_dict {'p_limit':5,'p_not_state':'OHIO'}
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
```

## get query string

- shows the original query, as in the input cell

```
_kql_raw_result_.query
```

## get parametrized query string

- shows the parametrized query that was submitted to kusto

```
_kql_raw_result_.parametrized_query
```

- ### <span style="color:#82CAFA">*Note - additional let statements were added to the original query, one let statement for each parameter*</span>

```
p_dict = {'p_limit':5,'p_not_state':'OHIO'}

%%kql -params_dict p_dict
let _limit_ = p_limit;
let _not_val_ = p_not_state;
StormEvents
| where State != _not_val_
| summarize count() by State
| sort by count_
| limit _limit_
```

## parameters dictionary is modified

```
p_dict = {'p_limit': 5, 'p_not_state': 'IOWA'}
```

## refresh uses the original parameters

- the same parameter values are used

```
_kql_raw_result_.refresh()
```

- ### <span style="color:#82CAFA">*Note - the refresh method uses the original parameter values, as they were set*</span>

## submit uses the current python values as parameters

- a new query is created and parametrized with the current python values

```
_kql_raw_result_.submit()
```

- ### <span style="color:#82CAFA">*Note - the submit method creates a new query and parametrizes it with the current parameter values*</span>

## Parametrize options

All options can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression.

- beware: the python expression must not have spaces!
- valid expression examples: ```my_var```, ```str(type(x))```, ```[a,1,2]```
- invalid expressions: ```str( type ( x ) )```, ```[a, 1, 2]```

```
table_package = 'pandas'
my_popup_state = True

%%kql -tp=table_package -pw=my_popup_state -f=table_package!='pandas'
StormEvents
| where State != 'OHIO'
| summarize count() by State
| sort by count_
| limit 5
```

## Parametrize commands

All commands can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression.
- **note**: if instead of the python expression you specify a variable that starts with $, it will be retrieved from the environment variables.
- **beware**: the python expression must not have spaces!

```
my_topic = "kql"

%kql --help my_topic
```

## Parametrize connection string

All values in the connection string can be parametrized. Instead of providing a quoted parameter value, specify the python variable or python expression.

- **note**: if you don't specify the credential's secret you will be prompted.
- **note**: if instead of the python expression you specify a variable that starts with $, it will be retrieved from the environment variables.
- **beware**: the python expression must not have spaces!

```
my_appid = "DEMO_APP"
my_appkey = "DEMO_KEY"

%kql appinsights://appid=my_appid;appkey=my_appkey
```

## Parametrize the whole connection string

```
my_connection_str = """
loganalytics://workspace='DEMO_WORKSPACE';appkey='DEMO_KEY';alias='myworkspace'
"""

%kql -conn=my_connection_str
```
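To recap the scalar mapping from the first section, the conversion Kqlmagic performs when it emits `let` statements can be sketched roughly as follows. This is a simplified stand-in, not Kqlmagic's actual implementation: it covers only a few of the mapped types, and the timespan case is collapsed to whole days.

```python
import json
from datetime import datetime, timedelta

def to_kql_literal(value):
    """Map a Python value to a KQL literal (simplified sketch of the table above)."""
    # bool must be checked before int (bool subclasses int in Python)
    if isinstance(value, bool):
        return 'true' if value else 'false'
    if isinstance(value, int):
        return f'long({value})'
    if isinstance(value, float):
        return f'real({value})'
    if isinstance(value, datetime):
        return f'datetime({value.isoformat()})'
    if isinstance(value, timedelta):
        return f'time({value.days}d)'  # coarse: whole days only
    if isinstance(value, (dict, list)):
        return f'dynamic({json.dumps(value)})'
    return f'"{value}"'

for name, val in [('_limit_', 10), ('_not_val_', 'TEXAS'), ('_dict_', {'a': 1})]:
    print(f'let {name} = {to_kql_literal(val)};')
```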
```
# !pip install graphviz
```

To produce the decision tree visualization you should install the graphviz package on your system: https://stackoverflow.com/questions/35064304/runtimeerror-make-sure-the-graphviz-executables-are-on-your-systems-path-aft

```
# Run one of these in case you have problems with graphviz

# All users: try this first
# ! conda install graphviz

# If that doesn't work:
# Ubuntu/Debian users only
# ! sudo apt-get update && sudo apt-get install graphviz

# Mac users only (assuming you have homebrew installed)
# ! brew install graphviz

# Windows users, check the stack overflow link. Sorry!

from collections import Counter
from os.path import join

import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN, KMeans, AgglomerativeClustering
from sklearn.base import clone
from sklearn.metrics import pairwise_distances
from scipy.cluster.hierarchy import dendrogram
from sklearn.manifold import TSNE
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.model_selection import train_test_split
import graphviz

sns.set()
```

## Import preprocessed data

```
df = pd.read_csv(join('..', 'data', 'tugas_preprocessed.csv'))

# Splitting feature names into groups
non_metric_features = df.columns[df.columns.str.startswith('x')]
pc_features = df.columns[df.columns.str.startswith('PC')]
metric_features = df.columns[~df.columns.str.startswith('x') & ~df.columns.str.startswith('PC')]
```

# Before we proceed

- Consider applying the outlier filtering method discussed last class.
  - We manually filtered the dataset's outliers based on a univariate analysis
- Consider dropping/transforming the variable "rcn". Why?
  - Very little correlation with any other variables
  - Remember the Component planes: the SOM's units were indistinguishable on this variable

```
# Based on the hyperparameters found in the previous class
dbscan = DBSCAN(eps=1.9, min_samples=20, n_jobs=4)
dbscan_labels = dbscan.fit_predict(df[metric_features])
Counter(dbscan_labels)

# Save the newly detected outliers (they will be classified later based on the final clusters)
df_out = # CODE HERE

# New df without outliers and 'rcn'
df = # CODE HERE

# Update metric features list
metric_features = # CODE HERE
```

# Clustering by Perspectives

- Demographic/Behavioral Perspective:
- Product Perspective:

```
# Split variables into perspectives (example, requires critical thinking and domain knowledge)
demographic_features = [
    'income',
    'frq',
    'per_net_purchase',
    'spent_online'
]
preference_features = [
    'clothes',
    'kitchen',
    'small_appliances',
    'toys',
    'house_keeping',
]

df_dem = df[demographic_features].copy()
df_prf = df[preference_features].copy()
```

## Testing on K-means and Hierarchical clustering

Based on (1) our previous tests and (2) the context of this problem, the optimal number of clusters is expected to be between 3 and 7.

```
def get_ss(df):
    """Computes the sum of squares for all variables given a dataset"""
    ss = np.sum(df.var() * (df.count() - 1))
    return ss  # return sum of sum of squares of each df variable


def r2(df, labels):
    sst = get_ss(df)
    ssw = np.sum(df.groupby(labels).apply(get_ss))
    return 1 - ssw/sst


def get_r2_scores(df, clusterer, min_k=2, max_k=10):
    """Loop over different values of k. To be used with sklearn clusterers.
    """
    r2_clust = {}
    for n in range(min_k, max_k):
        clust = clone(clusterer).set_params(n_clusters=n)
        labels = clust.fit_predict(df)
        r2_clust[n] = r2(df, labels)
    return r2_clust


# Set up the clusterers (try out a KMeans and a AgglomerativeClustering)
kmeans = # CODE HERE
hierarchical = # CODE HERE
```

### Finding the optimal clusterer on demographic variables

```
# Obtaining the R² scores for each cluster solution on demographic variables
r2_scores = {}
r2_scores['kmeans'] = get_r2_scores(df_dem, kmeans)

for linkage in ['complete', 'average', 'single', 'ward']:
    r2_scores[linkage] = get_r2_scores(
        df_dem, hierarchical.set_params(linkage=linkage)
    )

pd.DataFrame(r2_scores)

# Visualizing the R² scores for each cluster solution on demographic variables
pd.DataFrame(r2_scores).plot.line(figsize=(10,7))

plt.title("Demographic Variables:\nR² plot for various clustering methods\n", fontsize=21)
plt.legend(title="Cluster methods", title_fontsize=11)
plt.xlabel("Number of clusters", fontsize=13)
plt.ylabel("R² metric", fontsize=13)
plt.show()
```

### Repeat the process for product variables

```
# Obtaining the R² scores for each cluster solution on product variables
r2_scores = {}
r2_scores['kmeans'] = get_r2_scores(df_prf, kmeans)

for linkage in ['complete', 'average', 'single', 'ward']:
    r2_scores[linkage] = get_r2_scores(
        df_prf, hierarchical.set_params(linkage=linkage)
    )

# Visualizing the R² scores for each cluster solution on product variables
pd.DataFrame(r2_scores).plot.line(figsize=(10,7))

plt.title("Product Variables:\nR² plot for various clustering methods\n", fontsize=21)
plt.legend(title="Cluster methods", title_fontsize=11)
plt.xlabel("Number of clusters", fontsize=13)
plt.ylabel("R² metric", fontsize=13)
plt.show()
```

## Merging the Perspectives

- How can we merge different cluster solutions?
```
# Applying the right clustering (algorithm and number of clusters) for each perspective
kmeans_prod = # CODE HERE
prod_labels = kmeans_prod.fit_predict(df_prf)

kmeans_behav = # CODE HERE
behavior_labels = kmeans_behav.fit_predict(df_dem)

# Setting new columns
df['product_labels'] = prod_labels
df['behavior_labels'] = behavior_labels

# Count label frequencies (contingency table)
# CODE HERE
```

### Manual merging: Merge lowest frequency clusters into closest clusters

```
# Clusters with low frequency to be merged:
to_merge = # CODE HERE

df_centroids = df.groupby(['behavior_labels', 'product_labels'])\
    [metric_features].mean()

# Computing the euclidean distance matrix between the centroids
euclidean = # CODE HERE
df_dists = pd.DataFrame(
    euclidean, columns=df_centroids.index, index=df_centroids.index
)

# Merging each low frequency cluster (source) into the closest cluster (target)
source_target = {}
for clus in to_merge:
    if clus not in source_target.values():
        source_target[clus] = df_dists.loc[clus].sort_values().index[1]

source_target

df_ = df.copy()

# Changing the behavior_labels and product_labels based on source_target
for source, target in source_target.items():
    mask = # CODE HERE (changing the behavior and product labels of each source based on target)
    df_.loc[mask, 'behavior_labels'] = target[0]
    df_.loc[mask, 'product_labels'] = target[1]

# New contingency table
df_.groupby(['product_labels', 'behavior_labels'])\
    .size()\
    .to_frame()\
    .reset_index()\
    .pivot('behavior_labels', 'product_labels', 0)
```

### Merging using Hierarchical clustering

```
# Centroids of the concatenated cluster labels
df_centroids = # CODE HERE (group by both behavior and product labels)
df_centroids

# Using Hierarchical clustering to merge the concatenated cluster centroids
hclust = AgglomerativeClustering(
    linkage='ward',
    affinity='euclidean',
    distance_threshold=0,
    n_clusters=None
)
hclust_labels = hclust.fit_predict(df_centroids)

# Adapted from:
# https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html#sphx-glr-auto-examples-cluster-plot-agglomerative-dendrogram-py

# create the counts of samples under each node (number of points being merged)
counts = np.zeros(hclust.children_.shape[0])
n_samples = len(hclust.labels_)

# hclust.children_ contains the observation ids that are being merged together
# At the i-th iteration, children[i][0] and children[i][1] are merged to form node n_samples + i
for i, merge in enumerate(hclust.children_):
    # track the number of observations in the current cluster being formed
    current_count = 0
    for child_idx in merge:
        if child_idx < n_samples:
            # If this is True, then we are merging an observation
            current_count += 1  # leaf node
        else:
            # Otherwise, we are merging a previously formed cluster
            current_count += counts[child_idx - n_samples]
    counts[i] = current_count

# hclust.children_ indicates the two points/clusters being merged (dendrogram's u-joins)
# hclust.distances_ indicates the distance between the two points/clusters (height of the u-joins)
# counts indicates the number of points being merged (dendrogram's x-axis)
linkage_matrix = np.column_stack(
    [hclust.children_, hclust.distances_, counts]
).astype(float)

# Plot the corresponding dendrogram
sns.set()
fig = plt.figure(figsize=(11,5))

# The Dendrogram parameters need to be tuned
y_threshold = 2.3
dendrogram(linkage_matrix, truncate_mode='level', labels=df_centroids.index, p=5,
           color_threshold=y_threshold, above_threshold_color='k')
plt.hlines(y_threshold, 0, 1000, colors="r", linestyles="dashed")
plt.title(f'Hierarchical Clustering - {linkage.title()}\'s Dendrogram', fontsize=21)
plt.xlabel('Number of points in node (or index of point if no parenthesis)')
plt.ylabel('Euclidean Distance', fontsize=13)
plt.show()

# Re-running the Hierarchical clustering based on the correct number of clusters
hclust = # CODE HERE
hclust_labels = hclust.fit_predict(df_centroids)
df_centroids['hclust_labels'] = hclust_labels

df_centroids  # centroid's cluster labels

# Mapper between concatenated clusters and hierarchical clusters
cluster_mapper = df_centroids['hclust_labels'].to_dict()

df_ = df.copy()

# Mapping the hierarchical clusters on the centroids to the observations
df_['merged_labels'] = df_.apply(# CODE HERE)

# Merged cluster centroids
df_.groupby('merged_labels').mean()[metric_features]

# Merged cluster contingency table
# Getting size of each final cluster
df_counts = df_.groupby('merged_labels')\
    .size()\
    .to_frame()

# Getting the product and behavior labels
df_counts = df_counts\
    .rename({v:k for k, v in cluster_mapper.items()})\
    .reset_index()
df_counts['behavior_labels'] = df_counts['merged_labels'].apply(lambda x: x[0])
df_counts['product_labels'] = df_counts['merged_labels'].apply(lambda x: x[1])
df_counts.pivot('behavior_labels', 'product_labels', 0)

# Setting df to have the final product, behavior and merged clusters
df = df_.copy()
```

## Cluster Analysis

```
def cluster_profiles(df, label_columns, figsize, compar_titles=None):
    """
    Pass a df with the label columns of one or multiple clustering solutions,
    then specify these label columns to build the cluster profiles from them.
    """
    if compar_titles is None:
        compar_titles = [""]*len(label_columns)

    sns.set()
    fig, axes = plt.subplots(nrows=len(label_columns), ncols=2, figsize=figsize, squeeze=False)
    for ax, label, titl in zip(axes, label_columns, compar_titles):
        # Filtering df
        drop_cols = [i for i in label_columns if i!=label]
        dfax = df.drop(drop_cols, axis=1)

        # Getting the cluster centroids and counts
        centroids = dfax.groupby(by=label, as_index=False).mean()
        counts = dfax.groupby(by=label, as_index=False).count().iloc[:,[0,1]]
        counts.columns = [label, "counts"]

        # Setting Data
        pd.plotting.parallel_coordinates(centroids, label, color=sns.color_palette(), ax=ax[0])
        sns.barplot(x=label, y="counts", data=counts, ax=ax[1])

        # Setting Layout
        handles, _ = ax[0].get_legend_handles_labels()
        cluster_labels = ["Cluster {}".format(i) for i in range(len(handles))]
        ax[0].annotate(text=titl, xy=(0.95,1.1), xycoords='axes fraction', fontsize=13, fontweight='heavy')
        ax[0].legend(handles, cluster_labels)  # Adaptable to number of clusters
        ax[0].axhline(color="black", linestyle="--")
        ax[0].set_title("Cluster Means - {} Clusters".format(len(handles)), fontsize=13)
        ax[0].set_xticklabels(ax[0].get_xticklabels(), rotation=-20)
        ax[1].set_xticklabels(cluster_labels)
        ax[1].set_xlabel("")
        ax[1].set_ylabel("Absolute Frequency")
        ax[1].set_title("Cluster Sizes - {} Clusters".format(len(handles)), fontsize=13)

    plt.subplots_adjust(hspace=0.4, top=0.90)
    plt.suptitle("Cluster Simple Profiling", fontsize=23)
    plt.show()

# Profiling each cluster (product, behavior, merged)
cluster_profiles(
    df = df[metric_features.to_list() + ['product_labels', 'behavior_labels', 'merged_labels']],
    label_columns = ['product_labels', 'behavior_labels', 'merged_labels'],
    figsize = (28, 13),
    compar_titles = ["Product clustering", "Behavior clustering", "Merged clusters"]
)
```

## Cluster visualization using t-SNE

```
# This step can be quite time-consuming
two_dim = # CODE HERE (explore the TSNE class and obtain the 2D coordinates)

# t-SNE visualization
pd.DataFrame(two_dim).plot.scatter(x=0, y=1, c=df['merged_labels'], colormap='tab10', figsize=(15,10))
plt.show()
```

## Assess feature importance and reclassify outliers

### Using the R²

What proportion of each variable's total SS is explained between clusters?

```
def get_ss_variables(df):
    """Get the SS for each variable
    """
    ss_vars = df.var() * (df.count() - 1)
    return ss_vars

def r2_variables(df, labels):
    """Get the R² for each variable
    """
    sst_vars = get_ss_variables(df)
    ssw_vars = np.sum(df.groupby(labels).apply(get_ss_variables))
    return 1 - ssw_vars/sst_vars

# We are essentially decomposing the R² into the R² for each variable
# CODE HERE (obtain the R² for each variable using the functions above)
```

### Using a Decision Tree

We get the normalized total reduction of the criterion (gini or entropy) brought by that feature (also known as Gini importance).

```
# Preparing the data
X = df.drop(columns=['product_labels','behavior_labels','merged_labels'])
y = df.merged_labels

# Splitting the data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fitting the decision tree
dt = # CODE HERE (set a simple decision tree with max depth of 3)
dt.fit(X_train, y_train)

print("It is estimated that, on average, we are able to predict {0:.2f}% of the customers correctly".format(dt.score(X_test, y_test)*100))

# Assessing feature importance
pd.Series(dt.feature_importances_, index=X_train.columns)

# Predicting the cluster labels of the outliers
df_out['merged_labels'] = # CODE HERE
df_out.head()

# Visualizing the decision tree
dot_data = export_graphviz(dt, out_file=None, feature_names=X.columns.to_list(),
                           filled=True, rounded=True, special_characters=True)
graphviz.Source(dot_data)
```
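The per-variable R² decomposition described above can be exercised end to end on toy data. A minimal sketch — the two-cluster toy frame and column names below are assumptions for illustration, not the project's data:

```python
import numpy as np
import pandas as pd

def get_ss_variables(df):
    # total sum of squares per column: var * (n - 1)
    return df.var() * (df.count() - 1)

def r2_variables(df, labels):
    # per-variable R^2: 1 - SS_within / SS_total
    sst_vars = get_ss_variables(df)
    ssw_vars = np.sum(df.groupby(labels).apply(get_ss_variables))
    return 1 - ssw_vars / sst_vars

# toy data: two well-separated clusters along 'x', pure noise in 'y'
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'x': np.concatenate([rng.normal(0, 1, 50), rng.normal(10, 1, 50)]),
    'y': rng.normal(0, 1, 100),
})
labels = np.repeat([0, 1], 50)

r2 = r2_variables(df, labels)
# 'x' should be far better explained by the clustering than 'y'
print(r2)
```

Variables with high R² are the ones the clustering actually separates; ones near zero contribute little to the solution.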
## Interpreting Ensemble Compressed Features **Gregory Way, 2019** The following notebook will assign biological knowledge to the compressed features using the network projection approach. I use the model previously identified that was used to predict TP53 inactivation. I observe the BioBombe gene set enrichment scores for the features with high coefficients in this model. ``` import os import sys import pandas as pd ``` ## Load the `All Feature` Ensemble Model ``` model_file = os.path.join("results", "top_model_ensemble_all_features_tp53_feature_for_followup.tsv") top_model_df = pd.read_table(model_file) top_model_df coef_file = os.path.join("results", "mutation_ensemble_all", "TP53", "TP53_ensemble_all_features_coefficients.tsv.gz") coef_df = pd.read_table(coef_file).drop(['signal', 'z_dim', 'seed', 'algorithm'], axis='columns') coef_df.head() full_coef_id_df = ( pd.DataFrame(coef_df.feature.str.split("_").values.tolist(), columns=['algorithm', 'individual_feature', 'seed', 'k', 'signal']) ) full_coef_id_df = pd.concat([full_coef_id_df, coef_df], axis='columns') full_coef_id_df = full_coef_id_df.query("abs > 0").query("signal == 'signal'") print(full_coef_id_df.shape) full_coef_id_df.head() ``` ## Load Network Projection Results ``` gph_dir = os.path.join("..", "6.biobombe-projection", "results", "tcga", "gph", "signal") gph_files = os.listdir(gph_dir) all_scores_list = [] for file in gph_files: file = os.path.join(gph_dir, file) scores_df = pd.read_table(file) all_scores_list.append(scores_df) all_scores_df = pd.concat(all_scores_list, axis='rows') print(all_scores_df.shape) all_scores_df.head() all_scores_df = all_scores_df.assign(big_feature_id=all_scores_df.algorithm + "_" + all_scores_df.feature.astype(str) + "_" + all_scores_df.seed.astype(str) + "_" + all_scores_df.z.astype(str) + "_signal") all_scores_df = all_scores_df.assign(abs_z_score=all_scores_df.z_score.abs()) all_coef_scores_df = ( full_coef_id_df .merge(all_scores_df, how='left', 
left_on="feature", right_on="big_feature_id") .sort_values(by=['abs', 'abs_z_score'], ascending=False) .reset_index(drop=True) ) all_coef_scores_df.head() # Explore the biobombe scores for specific DAE features top_n_features = 5 biobombe_df = ( all_coef_scores_df .groupby('big_feature_id') .apply(func=lambda x: x.abs_z_score.nlargest(top_n_features)) .reset_index() .merge(all_coef_scores_df .reset_index(), right_on=['index', 'abs_z_score', 'big_feature_id'], left_on=['level_1', 'abs_z_score', 'big_feature_id']) .drop(['level_1', 'index', 'feature_x', 'algorithm_x', 'seed_x', 'model_type', 'algorithm_y', 'feature_y', 'seed_y', 'z'], axis='columns') .sort_values(by=['abs', 'abs_z_score'], ascending=False) .reset_index(drop=True) ) print(biobombe_df.shape) biobombe_df.head(20) # Output biobombe scores applied to the all feature ensemble model file = os.path.join('results', 'tcga_tp53_classify_top_biobombe_scores_all_feature_ensemble_model_table.tsv') biobombe_df.to_csv(file, sep='\t', index=False) ``` ## Detect the highest contributing variables ``` neg_biobombe_df = biobombe_df.query("weight < 0") pos_biobombe_df = biobombe_df.query("weight > 0") top_neg_variables_df = neg_biobombe_df.groupby("variable")['weight'].sum().sort_values(ascending=True) top_pos_variables_df = pos_biobombe_df.groupby("variable")['weight'].sum().sort_values(ascending=False) full_result_df = pd.DataFrame(pd.concat([top_pos_variables_df, top_neg_variables_df])) full_result_df = ( full_result_df .assign(abs_weight=full_result_df.weight.abs()) .sort_values(by='abs_weight', ascending=False) ) full_result_df.head() # Output biobombe scores applied to the all feature ensemble model file = os.path.join('results', 'tcga_tp53_classify_aggregate_biobombe_scores_all_feature_ensemble.tsv') full_result_df.to_csv(file, sep='\t', index=False) ```
Plot Antarctic Tidal Currents ============================= Demonstrates plotting hourly tidal currents around Antarctica OTIS format tidal solutions provided by Ohio State University and ESR - http://volkov.oce.orst.edu/tides/region.html - https://www.esr.org/research/polar-tide-models/list-of-polar-tide-models/ - ftp://ftp.esr.org/pub/datasets/tmd/ Finite Element Solution (FES) provided by AVISO - https://www.aviso.altimetry.fr/en/data/products/auxiliary-products/global-tide-fes.html #### Python Dependencies - [numpy: Scientific Computing Tools For Python](https://www.numpy.org) - [scipy: Scientific Tools for Python](https://www.scipy.org/) - [pyproj: Python interface to PROJ library](https://pypi.org/project/pyproj/) - [netCDF4: Python interface to the netCDF C library](https://unidata.github.io/netcdf4-python/) - [matplotlib: Python 2D plotting library](http://matplotlib.org/) - [cartopy: Python package designed for geospatial data processing](https://scitools.org.uk/cartopy/docs/latest/) #### Program Dependencies - `calc_astrol_longitudes.py`: computes the basic astronomical mean longitudes - `calc_delta_time.py`: calculates difference between universal and dynamic time - `convert_ll_xy.py`: convert lat/lon points to and from projected coordinates - `load_constituent.py`: loads parameters for a given tidal constituent - `load_nodal_corrections.py`: load the nodal corrections for tidal constituents - `infer_minor_corrections.py`: return corrections for minor constituents - `read_tide_model.py`: extract tidal harmonic constants from OTIS tide models - `read_netcdf_model.py`: extract tidal harmonic constants from netcdf models - `read_FES_model.py`: extract tidal harmonic constants from FES tide models - `predict_tide.py`: predict tidal elevation at a single time using harmonic constants This notebook uses Jupyter widgets to set parameters for calculating the tidal maps. The widgets can be installed as described below. 
``` pip3 install --user ipywidgets jupyter nbextension install --user --py widgetsnbextension jupyter nbextension enable --user --py widgetsnbextension jupyter-notebook ``` #### Load modules ``` import os import pyproj import datetime import numpy as np import matplotlib matplotlib.rcParams['axes.linewidth'] = 2.0 matplotlib.rcParams["animation.html"] = "jshtml" import matplotlib.pyplot as plt import matplotlib.animation as animation import cartopy.crs as ccrs import ipywidgets as widgets from IPython.display import HTML import pyTMD.time from pyTMD.calc_delta_time import calc_delta_time from pyTMD.read_tide_model import extract_tidal_constants from pyTMD.read_netcdf_model import extract_netcdf_constants from pyTMD.read_GOT_model import extract_GOT_constants from pyTMD.read_FES_model import extract_FES_constants from pyTMD.infer_minor_corrections import infer_minor_corrections from pyTMD.predict_tide import predict_tide #-- autoreload %load_ext autoreload %autoreload 2 ``` #### Set parameters for program - Model directory - Tide model - Date to run ``` #-- set the directory with tide models dirText = widgets.Text( value=os.getcwd(), description='Directory:', disabled=False ) #-- dropdown menu for setting tide model model_list = ['CATS0201','CATS2008','TPXO9-atlas','TPXO9-atlas-v2', 'TPXO9-atlas-v3','TPXO9-atlas-v4','TPXO9.1','TPXO8-atlas', 'TPXO7.2','FES2014'] modelDropdown = widgets.Dropdown( options=model_list, value='CATS2008', description='Model:', disabled=False, ) #-- date picker widget for setting time datepick = widgets.DatePicker( description='Date:', value = datetime.date.today(), disabled=False ) #-- display widgets for setting directory, model and date widgets.VBox([dirText,modelDropdown,datepick]) ``` #### Setup tide model parameters ``` #-- directory with tide models tide_dir = os.path.expanduser(dirText.value) MODEL = modelDropdown.value if (MODEL == 'CATS0201'): grid_file = os.path.join(tide_dir,'cats0201_tmd','grid_CATS') model_file = 
os.path.join(tide_dir,'cats0201_tmd','UV0_CATS02_01')
    model_format = 'OTIS'
    EPSG = '4326'
    TYPES = ['u','v']
elif (MODEL == 'CATS2008'):
    grid_file = os.path.join(tide_dir,'CATS2008','grid_CATS2008')
    model_file = os.path.join(tide_dir,'CATS2008','uv.CATS2008.out')
    model_format = 'OTIS'
    EPSG = 'CATS2008'
    TYPES = ['u','v']
elif (MODEL == 'TPXO9-atlas'):
    model_directory = os.path.join(tide_dir,'TPXO9_atlas')
    grid_file = os.path.join(model_directory,'grid_tpxo9_atlas.nc.gz')
    model_files = {}
    model_files['u'] = ['u_q1_tpxo9_atlas_30.nc.gz','u_o1_tpxo9_atlas_30.nc.gz',
        'u_p1_tpxo9_atlas_30.nc.gz','u_k1_tpxo9_atlas_30.nc.gz',
        'u_n2_tpxo9_atlas_30.nc.gz','u_m2_tpxo9_atlas_30.nc.gz',
        'u_s2_tpxo9_atlas_30.nc.gz','u_k2_tpxo9_atlas_30.nc.gz',
        'u_m4_tpxo9_atlas_30.nc.gz','u_ms4_tpxo9_atlas_30.nc.gz',
        'u_mn4_tpxo9_atlas_30.nc.gz','u_2n2_tpxo9_atlas_30.nc.gz']
    model_files['v'] = ['v_q1_tpxo9_atlas_30.nc.gz','v_o1_tpxo9_atlas_30.nc.gz',
        'v_p1_tpxo9_atlas_30.nc.gz','v_k1_tpxo9_atlas_30.nc.gz',
        'v_n2_tpxo9_atlas_30.nc.gz','v_m2_tpxo9_atlas_30.nc.gz',
        'v_s2_tpxo9_atlas_30.nc.gz','v_k2_tpxo9_atlas_30.nc.gz',
        'v_m4_tpxo9_atlas_30.nc.gz','v_ms4_tpxo9_atlas_30.nc.gz',
        'v_mn4_tpxo9_atlas_30.nc.gz','v_2n2_tpxo9_atlas_30.nc.gz']
    #-- build the full paths for each constituent file
    model_file = {}
    for key,val in model_files.items():
        model_file[key] = [os.path.join(model_directory,m) for m in val]
    model_format = 'netcdf'
    TYPES = ['u','v']
    SCALE = 1.0/100.0
    GZIP = True
elif (MODEL == 'TPXO9-atlas-v2'):
    model_directory = os.path.join(tide_dir,'TPXO9_atlas_v2')
    grid_file = os.path.join(model_directory,'grid_tpxo9_atlas_30_v2.nc.gz')
    model_files = {}
    model_files['u'] = ['u_q1_tpxo9_atlas_30_v2.nc.gz','u_o1_tpxo9_atlas_30_v2.nc.gz',
        'u_p1_tpxo9_atlas_30_v2.nc.gz','u_k1_tpxo9_atlas_30_v2.nc.gz',
        'u_n2_tpxo9_atlas_30_v2.nc.gz','u_m2_tpxo9_atlas_30_v2.nc.gz',
        'u_s2_tpxo9_atlas_30_v2.nc.gz','u_k2_tpxo9_atlas_30_v2.nc.gz',
        'u_m4_tpxo9_atlas_30_v2.nc.gz','u_ms4_tpxo9_atlas_30_v2.nc.gz',
        'u_mn4_tpxo9_atlas_30_v2.nc.gz','u_2n2_tpxo9_atlas_30_v2.nc.gz']
    model_files['v'] = ['v_q1_tpxo9_atlas_30_v2.nc.gz','v_o1_tpxo9_atlas_30_v2.nc.gz',
'v_p1_tpxo9_atlas_30_v2.nc.gz','v_k1_tpxo9_atlas_30_v2.nc.gz', 'v_n2_tpxo9_atlas_30_v2.nc.gz','v_m2_tpxo9_atlas_30_v2.nc.gz', 'v_s2_tpxo9_atlas_30_v2.nc.gz','v_k2_tpxo9_atlas_30_v2.nc.gz', 'v_m4_tpxo9_atlas_30_v2.nc.gz','v_ms4_tpxo9_atlas_30_v2.nc.gz', 'v_mn4_tpxo9_atlas_30_v2.nc.gz','v_2n2_tpxo9_atlas_30_v2.nc.gz'] model_file = {} for key,val in model_files.items(): model_file[key] = [os.path.join(model_directory,m) for m in val] model_format = 'netcdf' TYPES = ['u','v'] SCALE = 1.0/100.0 GZIP = True elif (MODEL == 'TPXO9-atlas-v3'): model_directory = os.path.join(tide_dir,'TPXO9_atlas_v3') grid_file = os.path.join(model_directory,'grid_tpxo9_atlas_30_v3.nc.gz') model_files = {} model_files['u'] = ['u_q1_tpxo9_atlas_30_v3.nc.gz','u_o1_tpxo9_atlas_30_v3.nc.gz', 'u_p1_tpxo9_atlas_30_v3.nc.gz','u_k1_tpxo9_atlas_30_v3.nc.gz', 'u_n2_tpxo9_atlas_30_v3.nc.gz','u_m2_tpxo9_atlas_30_v3.nc.gz', 'u_s2_tpxo9_atlas_30_v3.nc.gz','u_k2_tpxo9_atlas_30_v3.nc.gz', 'u_m4_tpxo9_atlas_30_v3.nc.gz','u_ms4_tpxo9_atlas_30_v3.nc.gz', 'u_mn4_tpxo9_atlas_30_v3.nc.gz','u_2n2_tpxo9_atlas_30_v3.nc.gz'] model_files['v'] = ['v_q1_tpxo9_atlas_30_v3.nc.gz','v_o1_tpxo9_atlas_30_v3.nc.gz', 'v_p1_tpxo9_atlas_30_v3.nc.gz','v_k1_tpxo9_atlas_30_v3.nc.gz', 'v_n2_tpxo9_atlas_30_v3.nc.gz','v_m2_tpxo9_atlas_30_v3.nc.gz', 'v_s2_tpxo9_atlas_30_v3.nc.gz','v_k2_tpxo9_atlas_30_v3.nc.gz', 'v_m4_tpxo9_atlas_30_v3.nc.gz','v_ms4_tpxo9_atlas_30_v3.nc.gz', 'v_mn4_tpxo9_atlas_30_v3.nc.gz','v_2n2_tpxo9_atlas_30_v3.nc.gz'] model_file = {} for key,val in model_files.items(): model_file[key] = [os.path.join(model_directory,m) for m in val] model_format = 'netcdf' TYPES = ['u','v'] SCALE = 1.0/100.0 GZIP = True elif (MODEL == 'TPXO9-atlas-v4'): model_directory = os.path.join(tide_dir,'TPXO9_atlas_v4') grid_file = os.path.join(tide_dir,'grid_tpxo9_atlas_30_v4') model_files = ['u_q1_tpxo9_atlas_30_v4','u_o1_tpxo9_atlas_30_v4', 'u_p1_tpxo9_atlas_30_v4','u_k1_tpxo9_atlas_30_v4', 'u_n2_tpxo9_atlas_30_v4','u_m2_tpxo9_atlas_30_v4', 
'u_s2_tpxo9_atlas_30_v4','u_k2_tpxo9_atlas_30_v4', 'u_m4_tpxo9_atlas_30_v4','u_ms4_tpxo9_atlas_30_v4', 'u_mn4_tpxo9_atlas_30_v4','u_2n2_tpxo9_atlas_30_v4'] model_file = [os.path.join(model_directory,m) for m in model_files] model_format = 'OTIS' EPSG = '4326' TYPES = ['u','v'] elif (MODEL == 'TPXO9.1'): grid_file = os.path.join(tide_dir,'TPXO9.1','DATA','grid_tpxo9') model_file = os.path.join(tide_dir,'TPXO9.1','DATA','u_tpxo9.v1') model_format = 'OTIS' EPSG = '4326' TYPES = ['u','v'] elif (MODEL == 'TPXO8-atlas'): grid_file = os.path.join(tide_dir,'tpxo8_atlas','grid_tpxo8atlas_30_v1') model_file = os.path.join(tide_dir,'tpxo8_atlas','uv.tpxo8_atlas_30_v1') model_format = 'ATLAS' EPSG = '4326' TYPES = ['u','v'] elif (MODEL == 'TPXO7.2'): grid_file = os.path.join(tide_dir,'TPXO7.2_tmd','grid_tpxo7.2') model_file = os.path.join(tide_dir,'TPXO7.2_tmd','u_tpxo7.2') model_format = 'OTIS' EPSG = '4326' TYPES = ['u','v'] elif (MODEL == 'FES2014'): model_directory = {} model_directory['u'] = os.path.join(tide_dir,'fes2014','eastward_velocity') model_directory['v'] = os.path.join(tide_dir,'fes2014','northward_velocity') model_files = ['2n2.nc.gz','eps2.nc.gz','j1.nc.gz','k1.nc.gz', 'k2.nc.gz','l2.nc.gz','la2.nc.gz','m2.nc.gz','m3.nc.gz','m4.nc.gz', 'm6.nc.gz','m8.nc.gz','mf.nc.gz','mks2.nc.gz','mm.nc.gz', 'mn4.nc.gz','ms4.nc.gz','msf.nc.gz','msqm.nc.gz','mtm.nc.gz', 'mu2.nc.gz','n2.nc.gz','n4.nc.gz','nu2.nc.gz','o1.nc.gz','p1.nc.gz', 'q1.nc.gz','r2.nc.gz','s1.nc.gz','s2.nc.gz','s4.nc.gz','sa.nc.gz', 'ssa.nc.gz','t2.nc.gz'] model_file = {} for key,val in model_directory.items(): model_file[key] = [os.path.join(val,m) for m in model_files] c = ['2n2','eps2','j1','k1','k2','l2','lambda2','m2','m3','m4','m6', 'm8','mf','mks2','mm','mn4','ms4','msf','msqm','mtm','mu2','n2', 'n4','nu2','o1','p1','q1','r2','s1','s2','s4','sa','ssa','t2'] model_format = 'FES' TYPES = ['u','v'] SCALE = 1.0 GZIP = True ``` #### Setup coordinates for calculating tidal currents ``` #-- create an image 
around Antarctica xlimits = [-560.*5e3,560.*5e3] ylimits = [-560.*5e3,560.*5e3] spacing = [20e3,-20e3] #-- x and y coordinates x = np.arange(xlimits[0],xlimits[1]+spacing[0],spacing[0]) y = np.arange(ylimits[1],ylimits[0]+spacing[1],spacing[1]) xgrid,ygrid = np.meshgrid(x,y) #-- x and y dimensions nx = int((xlimits[1]-xlimits[0])/spacing[0])+1 ny = int((ylimits[0]-ylimits[1])/spacing[1])+1 #-- convert image coordinates from polar stereographic to latitude/longitude crs1 = pyproj.CRS.from_string("epsg:{0:d}".format(3031)) crs2 = pyproj.CRS.from_string("epsg:{0:d}".format(4326)) transformer = pyproj.Transformer.from_crs(crs1, crs2, always_xy=True) lon,lat = transformer.transform(xgrid.flatten(), ygrid.flatten()) ``` #### Calculate tide map ``` #-- convert from calendar date to days relative to Jan 1, 1992 (48622 MJD) YMD = datepick.value tide_time = pyTMD.time.convert_calendar_dates(YMD.year, YMD.month, YMD.day, hour=np.arange(24)) #-- delta time (TT - UT1) file delta_file = pyTMD.utilities.get_data_path(['data','merged_deltat.data']) #-- save tide currents tide = {} #-- iterate over u and v currents for TYPE in TYPES: #-- read tidal constants and interpolate to grid points if model_format in ('OTIS','ATLAS'): amp,ph,D,c = extract_tidal_constants(lon, lat, grid_file, model_file, EPSG, TYPE=TYPE, METHOD='spline', GRID=model_format) DELTAT = np.zeros_like(tide_time) elif (model_format == 'netcdf'): amp,ph,D,c = extract_netcdf_constants(lon, lat, grid_file, model_file[TYPE], TYPE=TYPE, METHOD='spline', SCALE=SCALE, GZIP=GZIP) DELTAT = np.zeros_like(tide_time) elif (model_format == 'GOT'): amp,ph,c = extract_GOT_constants(lon, lat, model_file, METHOD='spline', SCALE=SCALE) #-- interpolate delta times from calendar dates to tide time DELTAT = calc_delta_time(delta_file, tide_time) elif (model_format == 'FES'): amp,ph = extract_FES_constants(lon, lat, model_file[TYPE], TYPE=TYPE, VERSION=MODEL, METHOD='spline', SCALE=SCALE, GZIP=GZIP) #-- interpolate delta times from 
calendar dates to tide time
        DELTAT = calc_delta_time(delta_file, tide_time)

    #-- calculate complex phase in radians for Euler's formula
    cph = -1j*ph*np.pi/180.0
    #-- calculate constituent oscillation
    hc = amp*np.exp(cph)

    #-- allocate for tide current map calculated every hour
    tide[TYPE] = np.ma.zeros((ny,nx,24))
    for hour in range(24):
        #-- predict tidal currents at each hour and infer minor corrections
        TIDE = predict_tide(tide_time[hour], hc, c,
            DELTAT=DELTAT[hour], CORRECTIONS=model_format)
        MINOR = infer_minor_corrections(tide_time[hour], hc, c,
            DELTAT=DELTAT[hour], CORRECTIONS=model_format)
        #-- add major and minor components and reform grid
        tide[TYPE][:,:,hour] = np.reshape((TIDE+MINOR),(ny,nx))
```

#### Create animation of hourly tidal oscillation

```
#-- output Antarctic Tidal Current Animation
projection = ccrs.Stereographic(central_longitude=0.0,
    central_latitude=-90.0,true_scale_latitude=-71.0)
#-- figure axis and image objects
ax1,im = ({},{})
fig, (ax1['u'],ax1['v']) = plt.subplots(num=1, ncols=2,
    figsize=(11.5,7), subplot_kw=dict(projection=projection))
vmin = np.min([tide['u'].min(),tide['v'].min()])
vmax = np.max([tide['u'].max(),tide['v'].max()])
extent = (xlimits[0],xlimits[1],ylimits[0],ylimits[1])
for TYPE,ax in ax1.items():
    #-- plot tidal currents
    im[TYPE] = ax.imshow(np.zeros((ny,nx)),
        interpolation='nearest', vmin=vmin, vmax=vmax,
        transform=projection, extent=extent, origin='upper',
        animated=True)
    #-- add high resolution cartopy coastlines
    ax.coastlines('10m')
    #-- set x and y limits
    ax.set_xlim(xlimits)
    ax.set_ylim(ylimits)
    # stronger linewidth on frame
    ax.spines['geo'].set_linewidth(2.0)
    ax.spines['geo'].set_capstyle('projecting')

#-- Add colorbar with a colorbar axis
#-- Add an axes at position rect [left, bottom, width, height]
cbar_ax = fig.add_axes([0.085, 0.075, 0.83, 0.035])
#-- extend = add extension triangles to upper and lower bounds
#-- options: neither, both, min, max
cbar = fig.colorbar(im['u'], cax=cbar_ax, extend='both', extendfrac=0.0375,
drawedges=False, orientation='horizontal') #-- rasterized colorbar to remove lines cbar.solids.set_rasterized(True) #-- Add label to the colorbar cbar.ax.set_xlabel('{0} Tidal Velocity'.format(MODEL), fontsize=13) cbar.ax.set_ylabel('cm/s', fontsize=13, rotation=0) cbar.ax.yaxis.set_label_coords(1.04, 0.15) #-- ticks lines all the way across cbar.ax.tick_params(which='both', width=1, length=18, labelsize=13, direction='in') #-- add title (date and time) ttl = fig.suptitle(None, fontsize=13) # adjust subplot within figure fig.subplots_adjust(left=0.02,right=0.98,bottom=0.1,top=0.98,wspace=0.04) #-- animate each map def animate_maps(hour): #-- set map data iterating over u and v currents for TYPE in TYPES: im[TYPE].set_data(tide[TYPE][:,:,hour]) #-- set title args = (YMD.year,YMD.month,YMD.day,hour) ttl.set_text('{0:4d}-{1:02d}-{2:02d}T{3:02d}:00:00'.format(*args)) #-- set animation anim = animation.FuncAnimation(fig, animate_maps, frames=24) %matplotlib inline HTML(anim.to_jshtml()) ```
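Besides embedding the animation with `to_jshtml`, a `FuncAnimation` can also be written straight to disk. A minimal, self-contained sketch — the 4×4 data, frame count, and output filename are placeholders, and `PillowWriter` is chosen because it writes a GIF without requiring an ffmpeg install:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for scripted use
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np

fig, ax = plt.subplots()
im = ax.imshow(np.zeros((4, 4)), vmin=-1, vmax=1, animated=True)

def animate(frame):
    # stand-in for animate_maps: update the image data on each frame
    im.set_data(np.sin(frame + np.arange(16).reshape(4, 4)))

anim = animation.FuncAnimation(fig, animate, frames=6)
# PillowWriter encodes an animated GIF frame by frame
anim.save('tidal_currents.gif', writer=animation.PillowWriter(fps=2))
```

The same `anim.save` call accepts an `FFMpegWriter` for MP4 output when ffmpeg is available.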
# Optimization Algorithms

In deep learning, our goal is for our neural networks to learn to approximate a function of interest, such as house prices in a regression, or the function that classifies the objects in a photo in the case of classification.

In the last notebook we implemented our first neural network, and we also saw the weight-update formula. In case you don't remember, the weights and biases were updated as follows:

$$w_i = w_i - \lambda * \partial w $$
$$b_i = b_i - \lambda * \partial b$$

But have you ever stopped to wonder where these formulas come from? And are there better ways to update the weights? That is what we will see in this notebook.

## Stochastic Gradient Descent (SGD)

In stochastic gradient descent we split our training data into several subsets called mini-batches. At first these will be small, around 32-128 examples; in more advanced applications they tend to be much larger, on the order of 1024 or even 8192 examples per mini-batch.

As in standard gradient descent, we compute the gradient of the cost function with respect to the examples and subtract the gradient times a learning rate from the network's parameters. We can view SGD as taking a small step in the direction that most reduces the value of the loss.
### Equation

$w_{t+1} = w_t - \eta \cdot \nabla L$

### Code

```
import jax

def sgd(weights, gradients, eta):
    return jax.tree_util.tree_multimap(lambda w, g: w - eta*g, weights, gradients)
```

Let's use SGD to optimize a simple function

```
#hide
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

def f(x):
    return x**2 - 25

x = np.linspace(-10, 10, num=100)
y = f(x)
plt.plot(x, y)
plt.ylim(-50)

x0 = 9.0
f_g = jax.value_and_grad(f)
x_ = []
y_ = []
for i in range(10):
    y0, grads = f_g(x0)
    x_.append(x0)
    y_.append(y0)
    x0 = sgd(x0, grads, 0.9)

plt.plot(x, y)
plt.plot(x_, y_, color='red', marker='o');
```

## Momentum

One problem with using mini-batches is that we are now **estimating** the direction that decreases the loss on the training set, and the smaller the mini-batch, the noisier our estimate.

To fix this noise problem we introduce the notion of momentum. Momentum makes the optimization act like a heavy ball rolling down a mountain, so even if the path is full of bumps and holes, the ball's direction is not affected much.

From a more mathematical point of view, our weight updates become a combination of the gradients from this step and the previous gradients, stabilizing training.
### Equation

$v_{t} = \gamma v_{t-1} + (1 - \gamma) \nabla L \quad \text{gamma acts as a coefficient weighing the previous updates against the new gradient} \\ w_{t+1} = w_t - \eta v_t $

### Code

```
def momentum(weights, gradients, eta, mom, gamma):
    # exponential moving average of the gradients
    # (uses the previous momentum `mom`, not the weights)
    mom = jax.tree_util.tree_multimap(
        lambda v, g: gamma*v + (1 - gamma)*g, mom, gradients)
    weights = jax.tree_util.tree_multimap(
        lambda w, v: w - eta*v, weights, mom)
    return weights, mom

x0 = 9.0
mom = 0.0
x_ = []
y_ = []
for i in range(10):
    y0, grads = f_g(x0)
    x_.append(x0)
    y_.append(y0)
    x0, mom = momentum(x0, grads, 0.9, mom, 0.99)

plt.plot(x, y)
plt.plot(x_, y_, color='red', marker='o');
```

## RMSProp

Created by Geoffrey Hinton during a lecture, this is the first **adaptive method** we are seeing. That means the method tries to automatically compute a different learning rate for each weight of our neural network, using small rates for parameters that are updated frequently and larger rates for parameters that are updated more rarely, allowing faster optimization.

More specifically, RMSProp divides the usual SGD update by the root of the sum of squares of the previous gradients (hence the name Root-Mean-Square Propagation), thereby scaling the magnitude of each update according to the previous magnitudes.
### Equation

$ \nu_{t} = \gamma \nu_{t-1} + (1 - \gamma) (\nabla L)^2 \\ w_{t+1} = w_t - \frac{\eta \nabla L}{\sqrt{\nu_t + \epsilon}} $

### Code

```
def computa_momento(updates, moments, decay, order):
    return jax.tree_multimap(
        lambda g, t: (1 - decay) * (g ** order) + decay * t, updates, moments)

def rmsprop(weights, gradients, eta, nu, gamma):
    nu = computa_momento(gradients, nu, gamma, 2)
    updates = jax.tree_multimap(
        lambda g, n: g * jax.lax.rsqrt(n + 1e-8), gradients, nu)
    weights = jax.tree_util.tree_multimap(lambda w, g: w - eta*g, weights, updates)
    return weights, nu

x0 = 9.0
nu = 0.0
x_ = []
y_ = []
for i in range(10):
    y0, grads = f_g(x0)
    x_.append(x0)
    y_.append(y0)
    x0, nu = rmsprop(x0, grads, 0.9, nu, 0.99)

plt.plot(x, y)
plt.plot(x_, y_, color='red', marker='o');
```

## Adam

Finally, Adam uses ideas similar to both Momentum and RMSProp, keeping exponential moving averages of the past gradients as well as of their squares.

### Equation

$ m_t = \beta_1 m_{t-1} + (1 - \beta_1) \nabla L \\ v_t = \beta_2 v_{t-1} + (1 - \beta_2) (\nabla L)^2 \\ w_{t+1} = w_t - \frac{\eta m_t}{\sqrt{v_t} + \epsilon} $

### Code

```
import jax.numpy as jnp

def adam(weights, gradients, eta, mu, nu, b1, b2):
    mu = computa_momento(gradients, mu, b1, 1)
    nu = computa_momento(gradients, nu, b2, 2)
    updates = jax.tree_multimap(
        lambda m, v: m / (jnp.sqrt(v + 1e-6) + 1e-8), mu, nu)
    weights = jax.tree_util.tree_multimap(lambda w, g: w - eta*g, weights, updates)
    return weights, mu, nu

x0 = 9.0
mu = 0.0
nu = 0.0
x_ = []
y_ = []
for i in range(10):
    y0, grads = f_g(x0)
    x_.append(x0)
    y_.append(y0)
    x0, mu, nu = adam(x0, grads, 0.8, mu, nu, 0.9, 0.999)

plt.plot(x, y)
plt.plot(x_, y_, color='red', marker='o');
```
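The update rules above can also be sanity-checked without JAX by hand-coding the derivative of f(x) = x² − 25. A small sketch — the learning rates, the EMA form of momentum, and the iteration counts are arbitrary choices for the demonstration:

```python
def grad(x):
    # analytic derivative of f(x) = x**2 - 25
    return 2 * x

# plain SGD: x <- x - eta * grad(x)
x = 9.0
for _ in range(100):
    x -= 0.1 * grad(x)

# momentum (EMA form): v <- gamma*v + (1 - gamma)*g, w <- w - eta*v
w, v = 9.0, 0.0
gamma, eta = 0.9, 0.5
for _ in range(200):
    v = gamma * v + (1 - gamma) * grad(w)
    w -= eta * v

# both should approach the minimizer x = 0
print(x, w)
```

Momentum takes a damped, oscillating path toward the minimum, while plain SGD with this learning rate decays geometrically; both end up near zero.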
```
# -*- coding: utf-8 -*-
```

# 📝 Exercise M6.03

The aim of this exercise is to:

* verify whether a random forest or a gradient-boosting decision tree overfits when the number of estimators is not properly chosen;
* use the early-stopping strategy to avoid adding unnecessary trees and get the best generalization performance.

We will use the California housing dataset to conduct our experiments.

```
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100  # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
    data, target, random_state=0, test_size=0.5)
```

<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.</p>
</div>

Create a gradient boosting decision tree with `max_depth=5` and `learning_rate=0.5`.

```
# Write your code here.
```

Also create a random forest with fully grown trees by setting `max_depth=None`.

```
# Write your code here.
```

For both the gradient-boosting and random forest models, create a validation curve using the training set to assess the impact of the number of trees on the performance of each model. Evaluate the list of parameters `param_range = [1, 2, 5, 10, 20, 50, 100]` and use the mean absolute error.

```
# Write your code here.
```

Both gradient boosting and random forest models improve when increasing the number of trees in the ensemble. However, they reach a plateau where adding new trees just makes fitting and scoring slower.

To avoid adding unnecessary trees, gradient boosting (unlike random forests) offers an early-stopping option.
Internally, the algorithm will use an out-of-sample set to compute the generalization performance of the model at each addition of a tree. Thus, if the generalization performance is not improving for several iterations, it will stop adding trees. Now, create a gradient-boosting model with `n_estimators=1_000`. This number of trees will be too large. Change the parameter `n_iter_no_change` such that the gradient boosting fitting will stop after adding 5 trees that do not improve the overall generalization performance. ``` # Write your code here. ``` Estimate the generalization performance of this model again using the `sklearn.metrics.mean_absolute_error` metric but this time using the test set that we held out at the beginning of the notebook. Compare the resulting value with the values observed in the validation curve. ``` # Write your code here. ```
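A sketch of what the early-stopping configuration described above might look like, on synthetic data rather than the exercise's housing split (the dataset shape, noise level, and random seeds are arbitrary):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=5, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ask for up to 1000 trees, but stop once 5 consecutive additions
# fail to improve the score on the internal validation fraction
gbdt = GradientBoostingRegressor(
    n_estimators=1_000,
    n_iter_no_change=5,
    random_state=0,
)
gbdt.fit(X_train, y_train)

# n_estimators_ reports how many trees were actually fitted
print(gbdt.n_estimators_)
```

Setting `n_iter_no_change` makes the estimator hold out `validation_fraction` of the training data internally, so the fitted `n_estimators_` is typically far below the requested maximum.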
# Movie recommendation Objective: based on a list of movies and user ratings, estimate how a user would rate other movies References: https://medium.com/@jdwittenauer/deep-learning-with-keras-recommender-systems-e7b99cb29929 ``` import pandas as pd import numpy as np from pandas_profiling import ProfileReport import seaborn as sns import matplotlib.pyplot as plt import tensorflow as tf from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split from tensorflow.keras import Model from tensorflow.keras.layers import Input, Reshape, Dot, Add, Activation, Lambda, Concatenate, Dropout, Dense from tensorflow.keras.layers import Embedding from tensorflow.keras.optimizers import Adam from tensorflow.keras.regularizers import l2 movies = pd.read_csv("data/movies.csv") ratings = pd.read_csv("data/ratings.csv") movies.head() ratings.head() # Figure out how the top 15 users interact with the top 15 movies top_users = ( ratings .groupby('userId')['rating'] .count() .sort_values(ascending=False)[:15] ) top_movies = ( ratings .groupby('movieId')['rating'] .count() .sort_values(ascending=False)[:15] ) top_r = ratings.join(top_users, rsuffix='_r', how='inner', on='userId') top_r = top_r.join(top_movies, rsuffix='_r', how='inner', on='movieId') pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.max) rates_per_movie = ratings.groupby('movieId')['rating'].count() sns.distplot(rates_per_movie) sns.distplot(ratings['rating']) # encoding user_enc = LabelEncoder() movie_enc = LabelEncoder() ratings['user'] = user_enc.fit_transform(ratings['userId'].values) ratings['movie'] = movie_enc.fit_transform(ratings['movieId'].values) ratings['rating'] = ratings['rating'].values.astype(np.float32) n_users = ratings['user'].nunique() n_movies = ratings['movie'].nunique() min_rating = min(ratings['rating']) max_rating = max(ratings['rating']) (n_users, n_movies) # Uncomment to print profile report #profile = ProfileReport(ratings, title='Movies report',
html={'style':{'full_width':True}}) #profile X = ratings[['user', 'movie']].values y = ratings['rating'].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42) X_train.shape, X_test.shape, y_train.shape, y_test.shape n_factors = 50 X_train_array = [X_train[:, 0], X_train[:, 1]] X_test_array = [X_test[:, 0], X_test[:, 1]] def RecommenderV1(n_users, n_movies, n_factors): user = Input(shape=(1,)) u = Embedding(n_users, n_factors, embeddings_initializer='he_normal', embeddings_regularizer=l2(1e-6))(user) u = Reshape((n_factors,))(u) movie = Input(shape=(1,)) m = Embedding(n_movies, n_factors, embeddings_initializer='he_normal', embeddings_regularizer=l2(1e-6))(movie) m = Reshape((n_factors,))(m) x = Dot(axes=1)([u, m]) model = Model(inputs=[user, movie], outputs=x) opt = Adam(learning_rate=0.001) # 'lr' is deprecated in recent Keras versions model.compile(loss='mean_squared_error', optimizer=opt) return model model = RecommenderV1(n_users, n_movies, n_factors) model.summary() history = model.fit(x=X_train_array, y=y_train, batch_size=64, epochs=5, verbose=1, validation_data=(X_test_array, y_test)) def draw_history(history): plt.plot(history.history['loss'], label='train') plt.plot(history.history['val_loss'], label='test') plt.legend(); draw_history(history) class EmbeddingLayer: def __init__(self, n_items, n_factors): self.n_items = n_items self.n_factors = n_factors def __call__(self, x): x = Embedding(self.n_items, self.n_factors, embeddings_initializer='he_normal', embeddings_regularizer=l2(1e-6))(x) x = Reshape((self.n_factors,))(x) return x def RecommenderNet(n_users, n_movies, n_factors, min_rating, max_rating): user = Input(shape=(1,)) u = EmbeddingLayer(n_users, n_factors)(user) movie = Input(shape=(1,)) m = EmbeddingLayer(n_movies, n_factors)(movie) x = Concatenate()([u, m]) x = Dropout(0.05)(x) x = Dense(10, kernel_initializer='he_normal')(x) x = Activation('relu')(x) x = Dropout(0.5)(x) x = Dense(1, kernel_initializer='he_normal')(x) x = Activation('sigmoid')(x) x =
Lambda(lambda x: x * (max_rating - min_rating) + min_rating)(x) model = Model(inputs=[user, movie], outputs=x) opt = Adam(learning_rate=0.001) # 'lr' is deprecated in recent Keras versions model.compile(loss='mean_squared_error', optimizer=opt) return model model = RecommenderNet(n_users, n_movies, n_factors, min_rating, max_rating) model.summary() history = model.fit(x=X_train_array, y=y_train, batch_size=64, epochs=10, verbose=1, validation_data=(X_test_array, y_test)) draw_history(history) top_r estimated = model.predict(X_test_array) estimated estimated_rounded = [round(x * 2) / 2 for x in estimated[:, 0]] # round to the nearest half star expected_actual = pd.DataFrame({'actual': y_test, 'estimated': estimated_rounded}) expected_actual_grouped = expected_actual.groupby(['actual', 'estimated']).size().reset_index(name="size") ax = sns.scatterplot(x="actual", y="estimated", hue="size", size="size", sizes=(0, 1000), data=expected_actual_grouped) ```
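The core of `RecommenderV1` above (look up one embedding vector per user and per movie, then take their dot product) can be illustrated in plain NumPy. The random factor matrices below are placeholders for the learned `Embedding` weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_movies, n_factors = 4, 6, 3

# Random factors stand in for the trained embedding tables.
user_factors = rng.normal(size=(n_users, n_factors))
movie_factors = rng.normal(size=(n_movies, n_factors))

def predict_rating(user_id, movie_id):
    # Dot product of the two embedding vectors, as Dot(axes=1) does above.
    return float(user_factors[user_id] @ movie_factors[movie_id])

# Scores for every (user, movie) pair at once: one matrix multiplication.
score_matrix = user_factors @ movie_factors.T
print(score_matrix.shape)
```

Ranking a user's row of `score_matrix` gives candidate recommendations; the Keras model learns the factors instead of drawing them at random.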
Adapted from GDELT Data Wrangle by James Houghton https://nbviewer.jupyter.org/github/JamesPHoughton/Published_Blog_Scripts/blob/master/GDELT%20Wrangler%20-%20Clean.ipynb Additional GDELT resources: GDELT library overview: https://colab.research.google.com/drive/1rnKEHKV1StOwGtFPsCctKDPTBB_kHOc_?usp=sharing GDELT with big data: https://github.com/linwoodc3/gdeltPyR/wiki/Pulling-Large-GDELT-Data # PART I: Get GDELT DATA FOR NIGER ### Get the GDELT index files ``` #Pike trying to create a graph of conflict-related news reports on the current Armenia-Azerbaijan war #to highlight an increase in bellicose statements from the principal participant, Azerbaijan import requests import lxml.html as lh gdelt_base_url = 'http://data.gdeltproject.org/events/' # get the list of all the links on the gdelt file page page = requests.get(gdelt_base_url+'index.html') #Grab GDELT reference list, which is by day doc = lh.fromstring(page.content) link_list = doc.xpath("//*/ul/li/a/@href") #Returns all the possible CSV files of GDELT data as a reference list # separate out those links that begin with four digits ''' Will extract just the days, resulting in a list like: ['20200617.export.CSV.zip', '20200616.export.CSV.zip', '20200615.export.CSV.zip',...]
Until 2015 ''' file_list = [x for x in link_list if str.isdigit(x[0:4])] file_list #Counters to help assess how many files are coming in and going out infilecounter = 0 outfilecounter = 0 ``` ### Uses GDELT Index file list to download GDELT data for that day for that country ``` import os.path #To help navigate the file directories import urllib #To request from GDELT import zipfile #To unzip the files we download import glob #To go through multiple files in a directory import operator local_path = './results/' # Will save to an empty results folder to help keep files clean #Pike pulled the country codes and fought a tuple error fips_country_code = 'AJ' #Adjust list number to get days wanted for compressed_file in file_list[:10]: print(compressed_file) # if we don't have the compressed file stored locally, go get it. Keep trying if necessary. while not os.path.isfile(local_path+compressed_file): print('downloading,') urllib.request.urlretrieve(url=gdelt_base_url+compressed_file, filename=local_path+compressed_file) # extract the contents of the compressed file to a temporary directory print('extracting,') z = zipfile.ZipFile(file=local_path+compressed_file, mode='r') z.extractall(path=local_path+'tmp/') # parse each of the csv files in the working directory, print('parsing,') for infile_name in glob.glob(local_path+'tmp/*'): outfile_name = local_path+fips_country_code+'%04i.tsv'%outfilecounter # open the infile and outfile with open(infile_name, mode='r', encoding="ISO-8859-1") as infile, open(outfile_name, mode='w') as outfile: for line in infile: # extract lines with our interest country code if fips_country_code in operator.itemgetter(51, 37, 44)(line.split('\t')): outfile.write(line) outfilecounter +=1 # delete the temporary file os.remove(infile_name) infilecounter +=1 print('done', infilecounter) ``` # PART II: PARSE DATA AGAIN ### Read in the data ``` import pandas as pd # Get the GDELT field names from a helper file colnames =
pd.read_csv('CSV.header.fieldids.csv')['Field Name'] # Build DataFrames from each of the intermediary files files = glob.glob(local_path+fips_country_code+'*') DFlist = [] for active_file in files: print(active_file) DFlist.append(pd.read_csv(active_file, sep='\t', header=None, dtype=str, names=colnames, index_col=['GLOBALEVENTID'], encoding='iso-8859-1')) # Merge the file-based dataframes and save a pickle DF = pd.concat(DFlist) DF.to_pickle(local_path+'backup'+fips_country_code+'.pickle') # once everything is safely stored away, remove the temporary files for active_file in files: os.remove(active_file) import pickle #Pike changed the name from Niger_Data to Conflict_Data Conflict_Data = pd.read_pickle(r"./results/backupAJ.pickle") ``` ### See top 5 lines of data ``` Conflict_Data.head() ``` ### Helper Function to turn codebooks into look up tables ``` def ref_dict(df): cols = list(df) ref_dict = {} for row in df.iterrows(): ref_dict[row[1][cols[0]]] = row[1][cols[1]] return ref_dict ``` ### Convert each codebook and store in object ``` #Read in event codes eventCodes = ref_dict(pd.read_csv("./Ref Codes/CAMEO.eventcodes.txt", sep='\t')) #Read in Goldstein scale goldScale = ref_dict(pd.read_csv("./Ref Codes/CAMEO.goldsteinscale.txt", sep='\t')) #Read in ethnic groups ethnicCodes = ref_dict(pd.read_csv("./Ref Codes/CAMEO.ethnic.txt", sep='\t')) #Read in known groups knownGroups = ref_dict(pd.read_csv("./Ref Codes/CAMEO.knowngroup.txt", sep='\t')) #Read in religion religionCodes = ref_dict(pd.read_csv("./Ref Codes/CAMEO.religion.txt", sep='\t')) #Read in type typeCodes = ref_dict(pd.read_csv("./Ref Codes/CAMEO.type.txt", sep='\t')) eventCodes # Turn colnames into list for ref cross_ref = list(colnames) # Create look up table to get values instead of numbers look_up_code = {"eventCodes": [26,27,28], "goldScale":[30], "ethnicCodes":[9,19], "knownGroups":[8,18], "religionCodes":[10,11,20,21], "typeCodes":[12,13,14,22,23,24]} ''' Helper function so the user can reorient data
based on the codes of interest. data: Conflict_Data, a pandas dataframe; ref: key from look_up_code, a string; codebook: the reference dict ''' import math def search_dict(data, ref, codebook): res = {} look_up = look_up_code[ref] col_names = [] for i in look_up: col_names.append(cross_ref[i]) for col in col_names: for row in data.iterrows(): if isinstance(row[1][col], float): #print (type(row[1][col]), col) pass else: #print (col) var = codebook[row[1][col]].upper() #print (var, row[1][col]) if var in res.keys(): #print(row[1][col]) res[var].append(dict(row[1])) else: res[var] = [dict(row[1])] return res #Pike attempting to graph news statements over time leading up to the current conflict #Code adapted from http://linwoodc3.github.io/2017/10/15/Taking-the-Pulse-of-worldwide-news-Media-using-gdelt.html #and https://nbviewer.jupyter.org/github/dmasad/GDELT_Intro/blob/master/Getting_Started_with_GDELT.ipynb import re import numpy as np import matplotlib.pyplot as plt import seaborn as sns s = re.compile(r'(?<=://)[\w.-]+') # assumption: regex pulling the news-provider domain out of SOURCEURL f, ax = plt.subplots(figsize=(15,5)) plt.title('Distribution of Azerbaijan-Armenia News Production') sns.kdeplot(Conflict_Data.assign(provider=Conflict_Data['SOURCEURL'].\ apply(lambda x: s.search(x).group() if s.search(x) else np.nan))['provider']\ .value_counts(), bw=0.4, shade=True, label='No. of articles written', ax=ax) sns.kdeplot(Conflict_Data.assign(provider=Conflict_Data['SOURCEURL'].\ apply(lambda x: s.search(x).group() if s.search(x) else np.nan))['provider']\ .value_counts(), bw=0.4, shade=True, label='Cumulative', cumulative=True, ax=ax) plt.show() #verification to ensure code is working properly res = search_dict(Conflict_Data, 'eventCodes', eventCodes) # e.g. group articles by event code for k,v in res.items(): print (k, ": ", len(v)) #Put each collection of articles in a Dataframe list_res = [] for cat in res.values(): #print(cat) list_res.append(pd.DataFrame(cat)) list_res[0] #access the group you are interested in by changing the variables ``` ### Homework 4: Do some type of analysis with GDELT data. It can be country focused (e.g. Guatemala) or topic focused (e.g.
attacks or bilateral agreements) ### Must write in the first cell what you are interested in. Code must work but results can be garbage. Update the GDELT parameters to get the information you want, and then include some type of plot; it can be a graph or a map. ### Total Points Possible 19
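For the homework, a minimal starting point is counting events per day and plotting the series. This toy sketch assumes only that the frame carries GDELT's `SQLDATE` field; the values below are fabricated:

```python
import pandas as pd

# Fabricated stand-in for Conflict_Data with GDELT's SQLDATE column.
toy = pd.DataFrame({'SQLDATE': ['20200615', '20200615', '20200616', '20200617']})
toy['date'] = pd.to_datetime(toy['SQLDATE'], format='%Y%m%d')

# One row per event, so size() per date gives the daily event count.
events_per_day = toy.groupby('date').size()
print(events_per_day)

# With matplotlib available, events_per_day.plot() would draw the time series.
```

The same `groupby` applied to `Conflict_Data` (whose `SQLDATE` is read as strings above) would give the real daily counts.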
# 07 - Serving predictions The purpose of the notebook is to show how to use the deployed model for online and batch prediction. The notebook covers the following tasks: 1. Test the `Endpoint` resource for online prediction. 2. Use the custom model uploaded as a `Model` resource for batch prediction. 3. Run the batch prediction pipeline using `Vertex Pipelines`. ## Setup ### Import libraries ``` import os import time from datetime import datetime import tensorflow as tf from google.cloud import aiplatform as vertex_ai ``` ### Setup Google Cloud project ``` PROJECT_ID = '[your-project-id]' # Change to your project id. REGION = 'us-central1' # Change to your region. BUCKET = '[your-bucket-name]' # Change to your bucket name. if PROJECT_ID == '' or PROJECT_ID is None or PROJECT_ID == '[your-project-id]': # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] if BUCKET == '' or BUCKET is None or BUCKET == '[your-bucket-name]': # Set your bucket name using your GCP project id BUCKET = PROJECT_ID # Try to create the bucket if it doesn't exist ! gsutil mb -l $REGION gs://$BUCKET print('') print('Project ID:', PROJECT_ID) print('Region:', REGION) print('Bucket name:', BUCKET) ``` ### Set configurations ``` VERSION = 'v1' DATASET_DISPLAY_NAME = 'chicago-taxi-tips' MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier-{VERSION}' ENDPOINT_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier' SERVE_BQ_DATASET_NAME = 'playground_us' # Change to your serving BigQuery dataset name. SERVE_BQ_TABLE_NAME = 'chicago_taxitrips_prep' # Change to your serving BigQuery table name. ``` ## 1.
Making an online prediction ``` vertex_ai.init( project=PROJECT_ID, location=REGION, staging_bucket=BUCKET ) endpoint_name = vertex_ai.Endpoint.list( filter=f'display_name={ENDPOINT_DISPLAY_NAME}', order_by='update_time')[-1].gca_resource.name endpoint = vertex_ai.Endpoint(endpoint_name) test_instances = [ { 'dropoff_grid': ['POINT(-87.6 41.9)'], 'euclidean': [2064.2696], 'loc_cross': [''], 'payment_type': ['Credit Card'], 'pickup_grid': ['POINT(-87.6 41.9)'], 'trip_miles': [1.37], 'trip_day': [12], 'trip_hour': [16], 'trip_month': [2], 'trip_day_of_week': [4], 'trip_seconds': [555] } ] predictions = endpoint.predict(test_instances).predictions for prediction in predictions: print(prediction) # TODO {for Khalid, get error saying model does not support explanations} explanations = endpoint.explain(test_instances).explanations for explanation in explanations: print(explanation) ``` ## 2. Make a batch prediction ``` WORKSPACE = f'gs://{BUCKET}/{DATASET_DISPLAY_NAME}/' SERVING_DATA_DIR = os.path.join(WORKSPACE, 'serving_data') SERVING_INPUT_DATA_DIR = os.path.join(SERVING_DATA_DIR, 'input_data') SERVING_OUTPUT_DATA_DIR = os.path.join(SERVING_DATA_DIR, 'output_predictions') if tf.io.gfile.exists(SERVING_DATA_DIR): print('Removing previous serving data...') tf.io.gfile.rmtree(SERVING_DATA_DIR) print('Creating serving data directory...') tf.io.gfile.mkdir(SERVING_DATA_DIR) print('Serving data directory is ready.') ``` ### Extract serving data to Cloud Storage as JSONL ``` from src.model_training import features as feature_info from src.preprocessing import etl from src.common import datasource_utils LIMIT = 10000 sql_query = datasource_utils.create_bq_source_query( dataset_display_name=DATASET_DISPLAY_NAME, missing=feature_info.MISSING_VALUES, limit=LIMIT ) print(sql_query) args = { #'runner': 'DataflowRunner', 'sql_query': sql_query, 'exported_data_prefix': os.path.join(SERVING_INPUT_DATA_DIR, 'data-'), 'temporary_dir': os.path.join(WORKSPACE, 'tmp'), 'gcs_location':
os.path.join(WORKSPACE, 'bq_tmp'), 'project': PROJECT_ID, 'region': REGION, 'setup_file': './setup.py' } tf.get_logger().setLevel('ERROR') print('Data extraction started...') etl.run_extract_pipeline(args) print('Data extraction completed.') ! gsutil ls {SERVING_INPUT_DATA_DIR} ``` ### Submit the batch prediction job ``` model_name = vertex_ai.Model.list( filter=f'display_name={MODEL_DISPLAY_NAME}', order_by='update_time')[-1].gca_resource.name job_resources = { 'machine_type': 'n1-standard-2', #'accelerator_count': 1, #'accelerator_type': 'NVIDIA_TESLA_T4' 'starting_replica_count': 1, 'max_replica_count': 10, } job_display_name = f'{MODEL_DISPLAY_NAME}-prediction-job-{datetime.now().strftime("%Y%m%d%H%M%S")}' vertex_ai.BatchPredictionJob.create( job_display_name=job_display_name, model_name=model_name, gcs_source=SERVING_INPUT_DATA_DIR + '/*.jsonl', gcs_destination_prefix=SERVING_OUTPUT_DATA_DIR, instances_format='jsonl', predictions_format='jsonl', sync=True, **job_resources, ) ``` ## 3.
Run the batch prediction pipeline using `Vertex Pipelines` ``` WORKSPACE = f'{BUCKET}/{DATASET_DISPLAY_NAME}/' MLMD_SQLLITE = 'mlmd.sqllite' ARTIFACT_STORE = os.path.join(WORKSPACE, 'tfx_artifacts') PIPELINE_NAME = f'{MODEL_DISPLAY_NAME}-predict-pipeline' os.environ['PROJECT'] = PROJECT_ID os.environ['REGION'] = REGION os.environ['MODEL_DISPLAY_NAME'] = MODEL_DISPLAY_NAME os.environ['PIPELINE_NAME'] = PIPELINE_NAME os.environ['ARTIFACT_STORE_URI'] = ARTIFACT_STORE os.environ['BATCH_PREDICTION_BQ_DATASET_NAME'] = SERVE_BQ_DATASET_NAME os.environ['BATCH_PREDICTION_BQ_TABLE_NAME'] = SERVE_BQ_TABLE_NAME os.environ['SERVE_LIMIT'] = '1000' os.environ['BEAM_RUNNER'] = 'DirectRunner' os.environ['TFX_IMAGE_URI'] = f'gcr.io/{PROJECT_ID}/{DATASET_DISPLAY_NAME}:{VERSION}' import importlib from src.tfx_pipelines import config importlib.reload(config) for key, value in config.__dict__.items(): if key.isupper(): print(f'{key}: {value}') from src.tfx_pipelines import runner pipeline_definition_file = f'{config.PIPELINE_NAME}.json' pipeline_definition = runner.compile_prediction_pipeline(pipeline_definition_file) from kfp.v2.google.client import AIPlatformClient pipeline_client = AIPlatformClient( project_id=PROJECT_ID, region=REGION) pipeline_client.create_run_from_job_spec( job_spec_path=pipeline_definition_file ) ```
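The batch prediction job above writes its results as JSONL files under `SERVING_OUTPUT_DATA_DIR`. A hedged sketch of parsing such output locally; the exact record layout is an assumption, not the precise Vertex AI schema, and the two records below are fabricated:

```python
import json

# Fabricated JSONL lines mimicking batch-prediction output records.
jsonl_output = '\n'.join([
    json.dumps({"instance": {"trip_miles": [1.37]}, "prediction": [0.91]}),
    json.dumps({"instance": {"trip_miles": [5.20]}, "prediction": [0.12]}),
])

# One JSON document per line; pull out the prediction value of each record.
predictions = [json.loads(line)["prediction"][0]
               for line in jsonl_output.splitlines() if line.strip()]
print(predictions)
```

Against real output you would iterate over the files in the destination prefix and feed each line through the same `json.loads` step.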
# Wildfire Damage Assessment (binary classification) ``` #First rescale data to minmax and convert to uint8 #!gdal_translate -scale 0 13016 0 255 -ot Byte data/SantaRosa/Sent2/post/B08_clip_post.tif data/SantaRosa/Sent2/post/B08_clip_scale_post.tif #!gdal_translate -scale 0 13016 0 255 -ot Byte data/SantaRosa/Sent2/pre/B08_clip_pre.tif data/SantaRosa/Sent2/pre/B08_clip_scale_pre.tif %load_ext autoreload from wildfireassessment.ops import * #my package from wildfireassessment.plots import * #my package #from wildfireassessment.test import * import numpy as np import numpy.ma as ma import matplotlib.pyplot as plt from pathlib import Path from skimage import morphology from skimage.transform import resize import pandas as pd import geopandas as gpd import pickle from sklearn.impute import SimpleImputer from sklearn.ensemble import RandomForestClassifier from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB, BernoulliNB from sklearn import linear_model from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier from sklearn import svm from sklearn.neural_network import MLPClassifier from sklearn.metrics import confusion_matrix, recall_score, precision_score, accuracy_score, f1_score import joblib # sklearn.externals.joblib was removed in recent scikit-learn releases from rasterstats import zonal_stats import fiona %matplotlib inline #read in filepaths for data #filepath_post = #filepath_pre = Path("./data/SantaRosa/WorldView/pre") #WorldView Post/Pre fps_wv_post = Path("./data/SantaRosa/WorldView/post/0221213.tif") fps_wv_pre = Path("./data/SantaRosa/WorldView/pre/0221213_2.tif") #Sent2 Post/Pre fps_sent2_post = Path("./data/SantaRosa/Sent2/post/B08_clip_scale_post.tif") fps_sent2_pre = Path("./data/SantaRosa/Sent2/pre/B08_clip_scale_pre.tif") raster_src_post, rgb_post = readRGBImg(fps_wv_post) raster_src_pre, rgb_pre = readRGBImg(fps_wv_pre) raster_src_post_b08, b08_post = readOneImg(fps_sent2_post) raster_src_pre_b08, b08_pre =
readOneImg(fps_sent2_pre) b08_upscaled_post = resize(b08_post, raster_src_post.shape, anti_aliasing=True) b08_upscaled_post = b08_upscaled_post * 255 b08_upscaled_post = b08_upscaled_post.astype(rasterio.uint8) b08_upscaled_pre = resize(b08_pre, raster_src_pre.shape, anti_aliasing=True) b08_upscaled_pre = b08_upscaled_pre * 255 b08_upscaled_pre = b08_upscaled_pre.astype(rasterio.uint8) plt.imshow(b08_upscaled_post, cmap='gray') ``` ## Extract pixels #### unravel ``` rgb_rav_post = {0 : rgb_post[:,:,0].ravel().astype(float), 1 : rgb_post[:,:,1].ravel().astype(float), 2 : rgb_post[:,:,2].ravel().astype(float)} rgb_rav_pre = {0 : rgb_pre[:,:,0].ravel().astype(float), 1 : rgb_pre[:,:,1].ravel().astype(float), 2 : rgb_pre[:,:,2].ravel().astype(float)} b08_rav_post = b08_upscaled_post.ravel().astype(float) b08_rav_pre = b08_upscaled_pre.ravel().astype(float) b08_rav_pre.shape[0] b08_upscaled_post = None b08_upscaled_pre = None b08_post = None b08_pre = None rgb_pre = None rgb_post = None def computeSI(b1, b2): return (b1-b2)/(b1+b2) def changedSI(SI_pre, SI_post): return SI_pre - SI_post def makeChunkX(b, g, r, n, b_p, g_p, r_p, n_p): SI_gb = (computeSI(g, b), computeSI(g_p, b_p)) #(post, pre) SI_rb = (computeSI(r, b), computeSI(r_p, b_p)) SI_rg = (computeSI(r, g), computeSI(r_p, g_p)) SI_nb = (computeSI(n, b), computeSI(n_p, b_p)) SI_ng = (computeSI(n, g), computeSI(n_p, g_p)) SI_nr = (computeSI(n, r), computeSI(n_p, r_p)) dSI_gb = changedSI(SI_gb[1], SI_gb[0]) dSI_rb = changedSI(SI_rb[1], SI_rb[0]) dSI_rg = changedSI(SI_rg[1], SI_rg[0]) dSI_nb = changedSI(SI_nb[1], SI_nb[0]) dSI_ng = changedSI(SI_ng[1], SI_ng[0]) dSI_nr = changedSI(SI_nr[1], SI_nr[0]) return np.dstack((b, b_p, g, g_p, r, r_p, n, n_p, SI_gb[0], SI_rb[0], SI_rg[0], SI_nb[0], SI_ng[0], SI_nr[0], SI_gb[1], SI_rb[1], SI_rg[1], SI_nb[1], SI_ng[1], SI_nr[1], dSI_nb, dSI_rg, dSI_rb, dSI_gb, dSI_nr, dSI_ng))[0] X_chunk = makeChunkX(rgb_rav_post[2][0:100], rgb_rav_post[1][0:100], rgb_rav_post[0][0:100], 
b08_rav_post[0:100], rgb_rav_pre[2][0:100], rgb_rav_pre[1][0:100], rgb_rav_pre[0][0:100], b08_rav_pre[0:100]) m = ma.masked_invalid(X_chunk) ma.set_fill_value(m, -999) m imp = SimpleImputer(missing_values=np.nan, strategy='mean') imp.fit(X_chunk) X_chunk_imp = imp.transform(X_chunk) rf_model = joblib.load(open("models/rf_grid_bin_precision.pkl", 'rb')) pred_y = rf_model.predict(X_chunk_imp) ``` ### Test Random Forest ``` from joblib import Parallel, delayed import multiprocessing def processInParallel(i): X_chunk = makeChunkX(rgb_rav_post[2][i:i+100], rgb_rav_post[1][i:i+100], rgb_rav_post[0][i:i+100], b08_rav_post[i:i+100], rgb_rav_pre[2][i:i+100], rgb_rav_pre[1][i:i+100], rgb_rav_pre[0][i:i+100], b08_rav_pre[i:i+100]) return rf_model.predict(ma.masked_invalid(X_chunk)) %%time num_cores = multiprocessing.cpu_count() pred_y = Parallel(n_jobs=num_cores, backend="multiprocessing")(delayed(processInParallel)(i) for i in range(0, len(b08_rav_post), 100)) pred_y pred_y_rf = np.hstack(pred_y).reshape(raster_src_post.shape) plt.imshow(pred_y_rf) pred_y_rf_clean = morphology.remove_small_holes(pred_y_rf==1, 500) pred_y_rf_clean = morphology.remove_small_objects(pred_y_rf_clean, 500) plt.imshow(pred_y_rf_clean) pred_y_rf_clean.astype(np.uint8) raster_src_post.meta metadata = { 'driver': 'GTiff', 'dtype': 'uint8', 'width': raster_src_post.meta['width'], 'height': raster_src_post.meta['height'], 'count': 1, 'crs': raster_src_post.meta['crs'], 'transform': raster_src_post.meta['transform'] } metadata with rasterio.open("results/predict_mask_rf_SantaRosa_0221213.tif", 'w', **metadata) as dst: dst.write(pred_y_rf_clean.astype(np.uint8), 1) ```
## NMF [2.5. Decomposing signals in components (matrix factorization problems) — scikit-learn 1.0.2 documentation](https://scikit-learn.org/stable/modules/decomposition.html?highlight=nmf#non-negative-matrix-factorization-nmf-or-nnmf) ``` from sklearn.decomposition import NMF from sklearn.datasets import make_blobs import numpy as np centers = [[5, 10, 5], [10, 4, 10], [6, 8, 8]] X, _ = make_blobs(centers=centers) # generate data centered on `centers` n_components = 2 # number of latent components model = NMF(n_components=n_components) model.fit(X) W = model.transform(X) # factor matrix W from the decomposition H = model.components_ print(X.shape) print(W.shape) print(H.shape) print(H) #print(W) V = np.dot(W,H) for i in range(10): print('V - ', V[i:(i+1),:]) print('X - ', X[i:(i+1),:]) print('reconstruction_err_', model.reconstruction_err_) # value of the loss function print('n_iter_', model.n_iter_) # actual number of iterations ``` ## olivetti_faces NMF ``` from time import time from numpy.random import RandomState import matplotlib.pyplot as plt #from sklearn.datasets import fetch_olivetti_faces from sklearn import decomposition import scipy.io as spio n_row, n_col = 2, 3 n_components = n_row * n_col image_shape = (64, 64) rng = RandomState(0) # ############################################################################# # Load faces data # dataset = fetch_olivetti_faces('./', True,random_state=rng) datafile = '../resource/data/olivettifaces/olivettifaces.mat' dataset = spio.loadmat(datafile) # print(dataset.keys()) # dict_keys(['__header__', '__version__', '__globals__', 'faces', 'p', 'u', 'v']) faces = np.transpose(dataset['faces']) print(dataset['faces'].shape) n_samples,n_features= faces.shape print("Dataset consists of %d faces with %s features" % (n_samples, n_features)) def plot_gallery(title, images, n_col=n_col, n_row=n_row, cmap=plt.cm.gray): plt.figure(figsize=(2.
* n_col, 2.26 * n_row)) plt.suptitle(title, size=16) for i, comp in enumerate(images): plt.subplot(n_row, n_col, i + 1) vmax = max(comp.max(), -comp.min()) plt.imshow(comp.reshape(image_shape), cmap=cmap, interpolation='nearest', vmin=-vmax, vmax=vmax) plt.xticks(()) plt.yticks(()) plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.) # ############################################################################# estimators = [ ('Non-negative components - NMF', decomposition.NMF(n_components=n_components, init='nndsvda', tol=5e-3)) ] # ############################################################################# # Plot a sample of the input data plot_gallery("First centered Olivetti faces", faces[:n_components]) # ############################################################################# # Do the estimation and plot it for name, estimator in estimators: print("Extracting the top %d %s..." % (n_components, name)) t0 = time() data = faces estimator.fit(data) train_time = (time() - t0) print("done in %0.3fs" % train_time) components_ = estimator.components_ print('components_:', components_.shape, '\n**\n', components_) plot_gallery('%s - Train time %.1fs' % (name, train_time), components_) plt.show() ```
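For the default Frobenius loss, the `reconstruction_err_` printed above should equal the Frobenius norm of `X - W @ H`. A small self-contained check on random non-negative data (sizes and seed are arbitrary):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = np.abs(rng.randn(20, 8))  # NMF requires non-negative input

model = NMF(n_components=3, init='nndsvda', random_state=0, max_iter=500)
W = model.fit_transform(X)  # use fit_transform so W matches reconstruction_err_
H = model.components_

# Frobenius norm of the residual between X and its rank-3 reconstruction.
err = np.linalg.norm(X - W @ H, 'fro')
print(err, model.reconstruction_err_)
```

Note that calling `transform(X)` after `fit(X)` re-solves for W with H fixed, so it can differ slightly from the W that `reconstruction_err_` was computed from; `fit_transform` avoids that mismatch.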
``` # reload packages %load_ext autoreload %autoreload 2 ``` ### Choose GPU ``` %env CUDA_DEVICE_ORDER=PCI_BUS_ID %env CUDA_VISIBLE_DEVICES=0 import tensorflow as tf gpu_devices = tf.config.experimental.list_physical_devices('GPU') if len(gpu_devices)>0: tf.config.experimental.set_memory_growth(gpu_devices[0], True) print(gpu_devices) tf.keras.backend.clear_session() ``` ### Load packages ``` import tensorflow as tf import numpy as np import matplotlib.pyplot as plt from tqdm.autonotebook import tqdm from IPython import display import pandas as pd import umap import copy import os, tempfile import tensorflow_addons as tfa ``` ### parameters ``` dataset = "cifar10" labels_per_class = 1024 # 'full' n_latent_dims = 1024 confidence_threshold = 0.8 # minimum confidence to include in UMAP graph for learned metric learned_metric = True # whether to use a learned metric, or Euclidean distance between datapoints augmented = True # whether to use augmented data min_dist = 0.001 # min_dist parameter for UMAP negative_sample_rate = 5 # how many negative samples per positive sample batch_size = 128 # batch size optimizer = tf.keras.optimizers.Adam(1e-3) # the optimizer to train optimizer = tfa.optimizers.MovingAverage(optimizer) label_smoothing = 0.2 # how much label smoothing to apply to categorical crossentropy max_umap_iterations = 50 # how many times, maximum, to recompute UMAP max_epochs_per_graph = 50 # how many epochs maximum each graph trains for (without early stopping) umap_patience = 5 # how long before recomputing UMAP graph ``` #### Load dataset ``` from tfumap.semisupervised_keras import load_dataset ( X_train, X_test, X_labeled, Y_labeled, Y_masked, X_valid, Y_train, Y_test, Y_valid, Y_valid_one_hot, Y_labeled_one_hot, num_classes, dims ) = load_dataset(dataset, labels_per_class) ``` ### load architecture ``` from tfumap.semisupervised_keras import load_architecture encoder, classifier, embedder = load_architecture(dataset, n_latent_dims) ``` ### load pretrained weights ``` from
tfumap.semisupervised_keras import load_pretrained_weights encoder, classifier = load_pretrained_weights(dataset, augmented, labels_per_class, encoder, classifier) ``` #### compute pretrained accuracy ``` # test current acc pretrained_predictions = classifier.predict(encoder.predict(X_test, verbose=True), verbose=True) pretrained_predictions = np.argmax(pretrained_predictions, axis=1) pretrained_acc = np.mean(pretrained_predictions == Y_test) print('pretrained acc: {}'.format(pretrained_acc)) ``` ### get a, b parameters for embeddings ``` from tfumap.semisupervised_keras import find_a_b a_param, b_param = find_a_b(min_dist=min_dist) ``` ### build network ``` from tfumap.semisupervised_keras import build_model model = build_model( batch_size=batch_size, a_param=a_param, b_param=b_param, dims=dims, encoder=encoder, classifier=classifier, negative_sample_rate=negative_sample_rate, optimizer=optimizer, label_smoothing=label_smoothing, embedder = None, ) ``` ### build labeled iterator ``` from tfumap.semisupervised_keras import build_labeled_iterator labeled_dataset = build_labeled_iterator(X_labeled, Y_labeled_one_hot, augmented, dims) ``` ### training ``` from livelossplot import PlotLossesKerasTF from tfumap.semisupervised_keras import get_edge_dataset from tfumap.semisupervised_keras import zip_datasets ``` #### callbacks ``` # early stopping callback early_stopping = tf.keras.callbacks.EarlyStopping( monitor='val_classifier_acc', min_delta=0, patience=15, verbose=0, mode='auto', baseline=None, restore_best_weights=False ) # plot losses callback groups = {'accuracy': ['classifier_accuracy', 'val_classifier_accuracy'], 'loss': ['classifier_loss', 'val_classifier_loss']} plotlosses = PlotLossesKerasTF(groups=groups) history_list = [] current_validation_acc = 0 batches_per_epoch = np.floor(len(X_train)/batch_size).astype(int) epochs_since_last_improvement = 0 for current_umap_iterations in tqdm(np.arange(max_umap_iterations)): # make dataset edge_dataset =
get_edge_dataset( model, classifier, encoder, X_train, Y_masked, batch_size, confidence_threshold, labeled_dataset, dims, learned_metric = learned_metric ) # zip dataset zipped_ds = zip_datasets(labeled_dataset, edge_dataset, batch_size) # train dataset history = model.fit( zipped_ds, epochs=max_epochs_per_graph, validation_data=( (X_valid, tf.zeros_like(X_valid), tf.zeros_like(X_valid)), {"classifier": Y_valid_one_hot}, ), callbacks = [early_stopping, plotlosses], max_queue_size = 100, steps_per_epoch = batches_per_epoch, #verbose=0 ) history_list.append(history) # get validation acc pred_valid = classifier.predict(encoder.predict(X_valid)) new_validation_acc = np.mean(np.argmax(pred_valid, axis = 1) == Y_valid) # if validation accuracy has gone up, mark the improvement if new_validation_acc > current_validation_acc: epochs_since_last_improvement = 0 current_validation_acc = copy.deepcopy(new_validation_acc) else: epochs_since_last_improvement += 1 if epochs_since_last_improvement > umap_patience: print('No improvement in {} UMAP iterations'.format(umap_patience)) break # umap loss 0.273 class_pred = classifier.predict(encoder.predict(X_test)) class_acc = np.mean(np.argmax(class_pred, axis=1) == Y_test) print(class_acc) ```
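The patience logic in the training loop above reduces to a simple counter: track the best score seen so far and stop once it fails to improve for more than `patience` iterations. A self-contained sketch with a fabricated accuracy trace:

```python
def run_with_patience(accuracies, patience):
    """Return the number of iterations consumed before patience runs out."""
    best = float('-inf')
    since_improvement = 0
    for iteration, acc in enumerate(accuracies):
        if acc > best:
            best = acc
            since_improvement = 0   # improvement resets the counter
        else:
            since_improvement += 1
        if since_improvement > patience:
            return iteration + 1    # stopped early
    return len(accuracies)          # budget exhausted without triggering

# Accuracy improves, then stalls for longer than patience=2 allows.
trace = [0.50, 0.60, 0.65, 0.64, 0.64, 0.63, 0.63, 0.70]
print(run_with_patience(trace, patience=2))
```

Note this greedy rule can stop just before a late improvement (the `0.70` at the end of the trace is never reached), which is the usual trade-off of early stopping.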
# MNIST With SET This is an example of training an SET network on the MNIST dataset using synapses, pytorch, and torchvision. ``` #Import torch libraries and get SETLayer from synapses import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms from synapses import SETLayer #Some extras for visualizations import numpy as np import matplotlib.pyplot as plt import seaborn as sns from IPython.display import clear_output print("done") ``` ## SET Layer The SET layer is a pytorch module that works with a similar API to a standard fully connected layer; to initialize, specify input and output dimensions.<br><br> NOTE: one condition mentioned in the paper is that epsilon (a hyperparameter controlling layer sparsity) be much less than the input dimension and much less than the output dimension. The default value of epsilon is 11. Keep dimensions much bigger than epsilon! (epsilon can be passed in as an init argument to the layer). ``` #initialize the layer sprs = SETLayer(128, 256) #We can see the layer transforms inputs as we expect inp = torch.randn((2, 128)) print('Input batch shape: ', tuple(inp.shape)) out = sprs(inp) print('Output batch shape: ', tuple(out.shape)) ``` In terms of behavior, the SETLayer transforms an input vector into the output space as a fully connected layer would. ## Initial Connection Distribution The initialized layer has randomly assigned connections between input nodes and output nodes; each connection is associated with a weight, drawn from a normal distribution.
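The epsilon condition above comes from the SET paper's Erdős–Rényi initialization, which gives each sparse layer roughly `epsilon * (n_in + n_out)` connections. Whether `synapses` uses exactly this formula is an assumption here, but the estimate shows why epsilon must stay far below the layer dimensions:

```python
# Expected connection count under the Erdos-Renyi scheme from the SET paper
# (assumed here; not verified against the synapses implementation).
def set_connections(n_in, n_out, epsilon=11):
    return epsilon * (n_in + n_out)

dense = 128 * 256                   # 32768 weights in a fully connected layer
sparse = set_connections(128, 256)  # 4224 connections for the layer above
print(sparse, sparse / dense)       # ~13% density
```

With epsilon close to the layer width the "sparse" layer would approach dense connectivity, defeating the point of SET.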
``` #Inspect init weight distribution plt.hist(np.array(sprs.weight.data), bins=40) plt.title('Weights distribution on initialization') plt.xlabel('Weight Value') plt.ylabel('Number of weights') plt.show() vec = sprs.connections[:, 0] vec = np.array(vec) values, counts = np.unique(vec, return_counts=True) plt.title('Connections to inputs') plt.bar(values, counts) plt.xlabel('Input vector index') plt.ylabel('Number of connections') plt.show() print("done") ``` The weights are sampled from a normal distribution, as is done with a standard fcl. The connections to the inputs are uniformly distributed.<br><br> ## Killing Connections When connections are reassigned in SET, some proportion (defined by hyperparameter zeta) of the weights closest to zero are removed. We can set these to zero using the zero_connections method on the layer. (This method leaves the connections unchanged.) ``` sprs.zero_connections() #Inspect weight distribution after zeroing plt.hist(np.array(sprs.weight.data), bins=40) plt.title('Weights distribution after zeroing connections') plt.xlabel('Weight Value') plt.ylabel('Number of weights') plt.show() print("done") ``` ## Evolving Connections The evolve_connections() method will reassign these weights to new connections between input and output nodes. By default, these weights are initialized by sampling from the same distribution as the init function. Optionally, these weights can be set at zero (with init=False argument). ``` sprs.evolve_connections() plt.hist(np.array(sprs.weight.data), bins=40) plt.title('Weights distribution after evolving connections') plt.show() #Recompute connection counts for the evolved layer vec = np.array(sprs.connections[:, 0]) values, counts = np.unique(vec, return_counts=True) plt.title('Connections to inputs') plt.bar(values, counts) plt.xlabel('Input vector index') plt.ylabel('Number of connections') plt.show() print("done") ``` We can see these weight values have been re-distributed; the new connections conform to the same uniform distribution as before.
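The zeta pruning described above — drop the fraction of weights with magnitude closest to zero — can be sketched without the library. This is a toy version of the idea, not the `synapses` internals:

```python
# Toy version of zeta pruning: zero out roughly the fraction `zeta` of weights
# with the smallest magnitude (what zero_connections approximates above).
def prune_smallest(weights, zeta=0.3):
    k = int(zeta * len(weights))
    cutoff = sorted(abs(w) for w in weights)[k]  # k-th smallest magnitude
    return [0.0 if abs(w) < cutoff else w for w in weights]

print(prune_smallest([0.8, -0.05, 0.3, 0.01, -0.6, 0.02, 0.9, -0.1, 0.4, 0.07]))
# the three weights nearest zero (-0.05, 0.01, 0.02) become 0.0
```

In SET proper, these zeroed slots are then recycled: evolve_connections() assigns them to fresh input/output pairs.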
(We see in the SET paper, and here later on, that the adaptive algorithm learns to allocate these connections to more important input values.) ## A Simple SET Model The following is a simple sparsely-connected model using SETLayers with default hyperparameters. ``` class SparseNet(nn.Module): def __init__(self): super(SparseNet, self).__init__() self.set_layers = [] self.set1 = SETLayer(784, 512) self.set_layers.append(self.set1) #self.set2 = SETLayer(512, 512) #self.set_layers.append(self.set2) self.set2 = SETLayer(512, 128) self.set_layers.append(self.set2) #Use a dense layer for output because of low output dimensionality self.fc1 = nn.Linear(128, 10) def zero_connections(self): """Sets connections to zero for inferences.""" for layer in self.set_layers: layer.zero_connections() def evolve_connections(self): """Evolves connections.""" for layer in self.set_layers: layer.evolve_connections() def forward(self, x): x = x.reshape(-1, 784) x = F.relu(self.set1(x)) x = F.relu(self.set2(x)) #x = F.relu(self.set3(x)) x = self.fc1(x) return F.log_softmax(x, dim=1) def count_params(model): prms = 0 for parameter in model.parameters(): n_params = 1 for prm in parameter.shape: n_params *= prm prms += n_params return prms device = "cpu" sparse_net = SparseNet().to(device) print('number of params: ', count_params(sparse_net)) ``` Consider a fully-connected model with the same architecture: It would contain more than 20 times the number of parameters!<br> ## Training on MNIST This code was adapted directly from the [pytorch mnist tutorial](https://github.com/pytorch/examples/blob/master/mnist/main.py). 
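Before the training code, the "more than 20 times" claim can be checked with rough arithmetic, using the assumed Erdős–Rényi estimate `eps * (n_in + n_out)` per SETLayer (eps = 11) and ignoring any SET-layer biases, which are negligible here:

```python
# Rough parameter-count comparison for the 784 -> 512 -> 128 -> 10 architecture.
eps = 11
sparse = eps * (784 + 512) + eps * (512 + 128) + (128 * 10 + 10)  # two SET layers + dense head
dense = (784 * 512 + 512) + (512 * 128 + 128) + (128 * 10 + 10)   # same shape, fully connected
print(sparse, dense, dense / sparse)  # the dense model is ~20x larger
```

The exact count printed by `count_params(sparse_net)` may differ slightly from this estimate, but the order of magnitude matches the claim.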
``` class History(object): """Tracks and plots training history""" def __init__(self): self.train_loss = [] self.val_loss = [] self.train_acc = [] self.val_acc = [] def plot(self): clear_output() plt.plot(self.train_loss, label='train loss') plt.plot(self.train_acc, label='train acc') plt.plot(self.val_loss, label='val loss') plt.plot(self.val_acc, label='val acc') plt.legend() plt.show() def train(log_interval, model, device, train_loader, optimizer, epoch, history): model.train() correct = 0 loss_ = [] for batch_idx, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() loss = F.nll_loss(output, target) loss.backward() loss_.append(loss.item()) optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) history.train_loss.append(np.array(loss_).mean()) history.train_acc.append(correct/len(train_loader.dataset)) return history def test(model, device, test_loader, history): model.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: data, target = data.to(device), target.to(device) output = model(data) test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() acc = correct / len(test_loader.dataset) test_loss /= len(test_loader.dataset) print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)'.format( test_loss, correct, len(test_loader.dataset), 100. 
* acc)) history.val_loss.append(test_loss) history.val_acc.append(acc) return history print("done") torch.manual_seed(0) #Optimizer settings lr = .01 momentum = .5 epochs = 50 batch_size=128 log_interval = 64 test_batch_size=128 train_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=batch_size, shuffle=True) test_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=test_batch_size, shuffle=True) print("done") ``` ## Dealing with Optimizer Buffers Synapses recycles parameters. When a connection is broken and reassigned, its parameter is set to zero.<br><br> This system is designed to be computationally efficient, but it comes with a nasty side-effect. Often, we use optimizers with some sort of buffer; the simplest example is momentum in SGD. When we reset a parameter, the information about the overwritten parameter in the optimizer buffer is no longer useful. We need to overwrite specific values in the buffer also. To do this in pytorch, we need to pass the optimizer to each SETLayer to let synapses do this for us. <br><br> <b>Notice: I'm still working out the best way to initialize adaptive optimizers (current version makes a naive attempt to pick good values); SGD with momentum works fine</b> ``` optimizer = optim.SGD(sparse_net.parameters(), lr=lr, momentum=momentum, weight_decay=1e-2) for layer in sparse_net.set_layers: #here we tell our set layers about the optimizer layer.optimizer = optimizer #This guy will keep track of optimization metrics.
set_history = History() print("done") def show_MNIST_connections(model): vec = model.set1.connections[:, 0] vec = np.array(vec) _, counts = np.unique(vec, return_counts=True) t = counts.reshape(28, 28) sns.heatmap(t, cmap='viridis', xticklabels=[], yticklabels=[], square=True); plt.title('Connections per input pixel'); plt.show(); v = [t[13-i:15+i,13-i:15+i].mean() for i in range(14)] plt.plot(v) plt.show() print("done") import time epochs = 1000 for epoch in range(1, epochs + 1): #In the paper, evolutions occur on each epoch if epoch != 1: set_history.plot() show_MNIST_connections(sparse_net) if epoch != 1: print('Train set: Average loss: {:.4f}, Accuracy: {:.2f}%'.format( set_history.train_loss[epoch-2], 100. * set_history.train_acc[epoch-2])) print('Test set: Average loss: {:.4f}, Accuracy: {:.2f}%'.format( set_history.val_loss[epoch-2], 100. * set_history.val_acc[epoch-2])) sparse_net.evolve_connections() show_MNIST_connections(sparse_net) set_history = train(log_interval, sparse_net, device, train_loader, optimizer, epoch, set_history) #And smallest connections are removed during inference. sparse_net.zero_connections() set_history = test(sparse_net, device, test_loader, set_history) time.sleep(10) ```
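Returning to the optimizer-buffer point from earlier: the bookkeeping boils down to clearing the momentum entries of reassigned slots. This is a framework-free sketch of what `synapses` presumably does once it is handed the optimizer; the names here are illustrative:

```python
# When a connection is reassigned, its weight restarts from scratch, so any
# accumulated momentum for that slot is stale and must be cleared too.
def reset_stale_momentum(momentum, reassigned):
    return [0.0 if stale else m for m, stale in zip(momentum, reassigned)]

momentum = [0.9, -0.4, 0.2, 0.7]
reassigned = [False, True, False, True]  # slots whose connections were evolved
print(reset_stale_momentum(momentum, reassigned))  # [0.9, 0.0, 0.2, 0.0]
```

Without this step, the first update after an evolution would push a freshly initialized weight in whatever direction the dead connection had been moving.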
# ** IMPORT PACKAGES: ** ``` # Python peripherals import os import random # Scipy import scipy.io import scipy.stats as ss # Numpy import numpy # Matplotlib import matplotlib.pyplot as plt import matplotlib.collections as mcoll import matplotlib.ticker as ticker # PyTorch import torch from torch.utils.data.sampler import SubsetRandomSampler from torch.utils.data.sampler import SequentialSampler from torch.utils.data import DataLoader # IPython from IPython.display import display, HTML # Deep signature import deep_signature.utils from deep_signature.data_generation import SimpleCurveDatasetGenerator from deep_signature.data_generation import SimpleCurveManager from deep_signature.training import DeepSignatureNet from deep_signature.training import ContrastiveLoss from deep_signature.training import ModelTrainer from deep_signature.training import DeepSignaturePairsDataset from deep_signature import curve_processing ``` # ** HELPER FUNCTIONS: ** ``` def chunker(seq, size): return (seq[pos:pos + size] for pos in range(0, len(seq), size)) # https://stackoverflow.com/questions/36074455/python-matplotlib-with-a-line-color-gradient-and-colorbar def colorline(ax, x, y, z=None, cmap='copper', norm=plt.Normalize(0.0, 1.0), linewidth=3, alpha=1.0): """ http://nbviewer.ipython.org/github/dpsanders/matplotlib-examples/blob/master/colorline.ipynb http://matplotlib.org/examples/pylab_examples/multicolored_line.html Plot a colored line with coordinates x and y Optionally specify colors in the array z Optionally specify a colormap, a norm function and a line width """ # Default colors equally spaced on [0,1]: if z is None: z = numpy.linspace(0.0, 1.0, len(x)) # Special case if a single number: # to check for numerical input -- this is a hack if not hasattr(z, "__iter__"): z = numpy.array([z]) z = numpy.asarray(z) segments = make_segments(x, y) lc = mcoll.LineCollection(segments, array=z, cmap=cmap, norm=norm, linewidth=linewidth, alpha=alpha) # ax = plt.gca() ax.add_collection(lc) 
return lc def make_segments(x, y): """ Create list of line segments from x and y coordinates, in the correct format for LineCollection: an array of the form numlines x (points per line) x 2 (x and y) array """ points = numpy.array([x, y]).T.reshape(-1, 1, 2) segments = numpy.concatenate([points[:-1], points[1:]], axis=1) return segments def plot_dist(ax, dist): x = numpy.array(range(dist.shape[0])) y = dist ax.set_xlim(x.min(), x.max()) ax.set_ylim(y.min(), y.max()) colorline(ax=ax, x=x, y=y, cmap='hsv') def plot_curve_sample(ax, curve, curve_sample, indices, zorder, point_size=10, alpha=1, cmap='hsv'): x = curve_sample[:, 0] y = curve_sample[:, 1] c = numpy.linspace(0.0, 1.0, curve.shape[0]) ax.scatter( x=x, y=y, c=c[indices], s=point_size, cmap=cmap, alpha=alpha, norm=plt.Normalize(0.0, 1.0), zorder=zorder) def plot_curve_section_center_point(ax, x, y, zorder, radius=1, color='white'): circle = plt.Circle((x, y), radius=radius, color=color, zorder=zorder) ax.add_artist(circle) def plot_curve(ax, curve, linewidth=2, color='red', alpha=1): x = curve[:, 0] y = curve[:, 1] ax.plot(x, y, linewidth=linewidth, color=color, alpha=alpha) def plot_curvature(ax, curvature, color='red', linewidth=2): x = range(curvature.shape[0]) y = curvature ax.plot(x, y, color=color, linewidth=linewidth) def plot_sample(ax, sample, color, zorder, point_size=10, alpha=1): x = sample[:, 0] y = sample[:, 1] ax.scatter( x=x, y=y, s=point_size, color=color, alpha=alpha, zorder=zorder) def all_subdirs_of(b='.'): result = [] for d in os.listdir(b): bd = os.path.join(b, d) if os.path.isdir(bd): result.append(bd) return result ``` # ** GLOBAL SETTINGS: ** ``` curves_dir_path_train = 'C:/deep-signature-data/circles/curves/pairs/train' curves_dir_path_test = 'C:/deep-signature-data/circles/curves/pairs/test' negative_pairs_dir_path = 'C:/deep-signature-data/circles/datasets/pairs/negative-pairs' positive_pairs_dir_path = 'C:/deep-signature-data/circles/datasets/pairs/positive-pairs' 
results_base_dir_path = 'C:/deep-signature-data/circles/results/pairs' epochs = 100 batch_size = 256 validation_split = .05 learning_rate = 1e-4 mu = 1 rotation_factor=1 sampling_factor=1 multimodality_factor=15 supporting_points_count=3 sampling_points_count=None sampling_points_ratio=0.15 sectioning_points_count=None sectioning_points_ratio=0.1 sample_points=3 plt.style.use("dark_background") ``` # ** SANITY CHECK - CURVES: ** ``` curves = SimpleCurveDatasetGenerator.load_curves(dir_path=curves_dir_path_train) fig, ax = plt.subplots(1, 1, figsize=(80,40)) for label in (ax.get_xticklabels() + ax.get_yticklabels()): label.set_fontsize(30) ax.axis('equal') limit = 200 color_map = plt.get_cmap('rainbow', limit) for i, curve in enumerate(curves[:limit]): plot_curve(ax=ax, curve=curve, linewidth=5, color=color_map(i)) plt.show() ``` # ** SANITY CHECK - NEGATIVE PAIRS ** ``` negative_pairs = SimpleCurveDatasetGenerator.load_negative_pairs(dir_path=negative_pairs_dir_path) rows = 6 cols = 6 cells = rows * cols fig, ax = plt.subplots(rows, cols, figsize=(40,100)) axes = [] for i in range(rows): for j in range(cols): for label in (ax[i,j].get_xticklabels() + ax[i,j].get_yticklabels()): label.set_fontsize(10) # ax[i,j].axis('equal') axes.append(ax[i,j]) numpy.random.shuffle(negative_pairs) for negative_pair_index, negative_pair in enumerate(negative_pairs[:cells]): ax = axes[negative_pair_index] plot_sample(ax, negative_pair[0], point_size=50, alpha=1, color='red', zorder=50) plot_sample(ax, negative_pair[1], point_size=50, alpha=1, color='green', zorder=50) plot_sample(ax, numpy.array([[0,0]]), point_size=50, alpha=1, color='white', zorder=100) plt.show() ``` # ** SANITY CHECK - POSITIVE PAIRS ** ``` positive_pairs = SimpleCurveDatasetGenerator.load_positive_pairs(dir_path=positive_pairs_dir_path) rows = 6 cols = 8 cells = rows * cols fig, ax = plt.subplots(rows, cols, figsize=(40,100)) axes = [] for i in range(rows): for j in range(cols): for label in 
(ax[i,j].get_xticklabels() + ax[i,j].get_yticklabels()): label.set_fontsize(10) ax[i,j].axis('equal') axes.append(ax[i,j]) numpy.random.shuffle(positive_pairs) for positive_pair_index, positive_pair in enumerate(positive_pairs[:cells]): ax = axes[positive_pair_index] plot_sample(ax, positive_pair[0], point_size=50, alpha=1, color='red', zorder=50) plot_sample(ax, positive_pair[1], point_size=50, alpha=1, color='green', zorder=50) plot_sample(ax, numpy.array([[0,0]]), point_size=50, alpha=1, color='white', zorder=100) plt.show() ``` # ** SANITY CHECK - DATASET PAIRS ** ``` dataset = DeepSignaturePairsDataset() dataset.load_dataset( negative_pairs_dir_path=negative_pairs_dir_path, positive_pairs_dir_path=positive_pairs_dir_path) dataset_size = len(dataset) indices = list(range(dataset_size)) # numpy.random.shuffle(indices) sampler = SubsetRandomSampler(indices) data_loader = DataLoader(dataset, batch_size=1, sampler=sampler) display(HTML('<h3>Random samples of positive and negative examples:</h3>')) for pair_index, data in enumerate(data_loader, 0): if pair_index == 10: break curve1 = torch.squeeze(torch.squeeze(data['input'])[0]) curve2 = torch.squeeze(torch.squeeze(data['input'])[1]) label = int(torch.squeeze(data['labels'])) if label == 1: pair_type = 'Positive' else: pair_type = 'Negative' display(HTML(f'<h3>{pair_type} sample #{pair_index}:</h3>')) curve1 = curve1.cpu().numpy() curve2 = curve2.cpu().numpy() fig, ax = plt.subplots(1, 1, figsize=(5,5)) ax.axis('equal') plot_sample( ax=ax, sample=curve1, point_size=50, color='lightcoral', zorder=50) plot_sample( ax=ax, sample=curve2, point_size=50, color='skyblue', zorder=50) plot_sample(ax, numpy.array([[0,0]]), point_size=50, alpha=1, color='white', zorder=100) for label in (ax.get_xticklabels() + ax.get_yticklabels()): label.set_fontsize(10) plt.show() ``` # ** TRAINING ** ``` torch.set_default_dtype(torch.float64) dataset = DeepSignaturePairsDataset() dataset.load_dataset(
negative_pairs_dir_path=negative_pairs_dir_path, positive_pairs_dir_path=positive_pairs_dir_path) model = DeepSignatureNet(layers=20, sample_points=sample_points).cuda() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) loss_fn = ContrastiveLoss(mu) model_trainer = ModelTrainer(model=model, loss_fn=loss_fn, optimizer=optimizer) print(model) def epoch_handler(epoch_index): return results = model_trainer.fit(dataset=dataset, epochs=epochs, batch_size=batch_size, results_base_dir_path=results_base_dir_path, epoch_handler=epoch_handler) ``` # ** TRAIN/VALIDATION LOSS ** ``` # results_file_path = os.path.normpath(os.path.join(results_base_dir_path, 'results.npy')) all_subdirs = all_subdirs_of(results_base_dir_path) latest_subdir = os.path.normpath(max(all_subdirs, key=os.path.getmtime)) results = numpy.load(f"{latest_subdir}/results.npy", allow_pickle=True).item() epochs = results['epochs'] batch_size = results['batch_size'] train_loss_array = results['train_loss_array'] validation_loss_array = results['validation_loss_array'] epochs_list = numpy.array(range(len(train_loss_array))) fig, ax = plt.subplots(1, 1, figsize=(10,10)) ax.xaxis.set_major_locator(ticker.MaxNLocator(integer=True)) for label in (ax.get_xticklabels() + ax.get_yticklabels()): label.set_fontsize(20) ax.plot(epochs_list, train_loss_array, label='Train Loss', linewidth=7.0) ax.plot(epochs_list, validation_loss_array, label='Validation Loss', linewidth=7.0) plt.legend(fontsize=20, title_fontsize=20) # print(train_loss_array) # print(validation_loss_array) plt.show() ``` # ** TEST MODEL ** ``` torch.set_default_dtype(torch.float64) device = torch.device('cuda') model = DeepSignatureNet(layers=2, sample_points=sample_points).cuda() model.load_state_dict(torch.load(results['model_file_path'], map_location=device)) model.eval() limit = 50 curves = SimpleCurveDatasetGenerator.load_curves(dir_path=curves_dir_path_test) numpy.random.seed(50) numpy.random.shuffle(curves) curves =
curves[:limit] color_map = plt.get_cmap('rainbow', limit) fig, ax = plt.subplots(2, 1, figsize=(80,100)) ax[0].axis('equal') for label in (ax[0].get_xticklabels() + ax[0].get_yticklabels()): label.set_fontsize(30) for label in (ax[1].get_xticklabels() + ax[1].get_yticklabels()): label.set_fontsize(30) low = 0.1 high = 0.4 delta = numpy.random.uniform(low=low, high=high, size=[4000, 2]) for curve_index, curve in enumerate(curves): plot_curve(ax=ax[0], curve=curve, color=color_map(curve_index), linewidth=5) predicted_curvature = numpy.zeros(curve.shape[0]) center_index = 1 for i in range(curve.shape[0]): current_delta = delta[i, :] * curve.shape[0] indices = numpy.array([i - int(current_delta[0]), i, i + int(current_delta[1])]) indices = numpy.mod(indices, curve.shape[0]) sample = curve[indices] center_point = sample[center_index] sample = sample - center_point if curve_processing.is_ccw(curve_sample=sample) is False: sample = numpy.flip(sample, axis=0) radians = curve_processing.calculate_tangent_angle(curve_sample=sample) sample = curve_processing.rotate_curve(curve=sample, radians=radians) batch_data = torch.unsqueeze(torch.unsqueeze(torch.from_numpy(sample).double(), dim=0), dim=0).cuda() with torch.no_grad(): predicted_curvature[i] = torch.squeeze(model(batch_data), dim=0).cpu().detach().numpy() plot_curvature(ax=ax[1], curvature=predicted_curvature, color=color_map(curve_index), linewidth=5) plt.show() ```
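The test loop above wraps its sample indices with `numpy.mod` so that neighbourhoods taken near the first or last point of a curve come from the other end — the curve is treated as closed. The indexing trick in isolation, with illustrative names:

```python
# How the test loop's numpy.mod indexing treats a closed curve: indices that
# fall off either end wrap around instead of going out of bounds.
def wrap_indices(i, offsets, n):
    return [(i + off) % n for off in offsets]

print(wrap_indices(1, [-3, 0, 3], 100))   # [98, 1, 4]
print(wrap_indices(99, [-3, 0, 3], 100))  # [96, 99, 2]
```

This keeps every 3-point sample well-defined even when the center point sits at an "endpoint" of the stored array.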
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). # Challenge Notebook ## Problem: Implement merge sort. * [Constraints](#Constraints) * [Test Cases](#Test-Cases) * [Algorithm](#Algorithm) * [Code](#Code) * [Unit Test](#Unit-Test) * [Solution Notebook](#Solution-Notebook) ## Constraints * Is a naive solution sufficient? * Yes * Are duplicates allowed? * Yes * Can we assume the input is valid? * No * Can we assume this fits memory? * Yes ## Test Cases * None -> Exception * Empty input -> [] * One element -> [element] * Two or more elements * Left and right subarrays of different lengths ## Algorithm Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/sorting_searching/merge_sort/merge_sort_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. ## Code ``` class MergeSort(object): # merge sorted subarrays left, right def _merge(self, left, right): result = [] i = 0 j = 0 while i < len(left) or j < len(right): if i >= len(left): result.append(right[j]) j += 1 elif j >= len(right): result.append(left[i]) i += 1 elif left[i] < right[j]: result.append(left[i]) i += 1 else: # left[i] >= right[j] result.append(right[j]) j += 1 return result def _sort(self, data): if (len(data) < 2): return data # divide data into 2 halves (integer division for a valid slice index) mid = len(data) // 2 # copy [0 ~ mid) left = data[:mid] # copy [mid, len) right = data[mid:] return self._merge(self._sort(left), self._sort(right)) def _sort2(self, data): if (len(data) < 2): return data i = 1 # i is size of each piece of subarray while 1: # j is index of the 'left' subarray of each merging pair.
j = 0 result = [] while j < len(data): left = data[j:(j + i)] right = data[(j + i):(j + i + i)] result += self._merge(left, right) j += i + i data = result i += i if i >= len(data): break return data def sort(self, data): if data is None: raise TypeError('data cannot be None') return self._sort2(data) ``` ## Unit Test **The following unit test is expected to fail until you solve the challenge.** ``` # %load test_merge_sort.py from nose.tools import assert_equal, assert_raises class TestMergeSort(object): def test_merge_sort(self): merge_sort = MergeSort() print('None input') assert_raises(TypeError, merge_sort.sort, None) print('Empty input') assert_equal(merge_sort.sort([]), []) print('One element') assert_equal(merge_sort.sort([5]), [5]) print('Two or more elements, even len') data = [5, 1, 7, 2, 6, -3, 5, 7] assert_equal(merge_sort.sort(data), sorted(data)) print('Two or more elements, odd len') data = [5, 1, 7, 2, 6, -3, 5, 7, -1] assert_equal(merge_sort.sort(data), sorted(data)) print('Success: test_merge_sort') def main(): test = TestMergeSort() test.test_merge_sort() if __name__ == '__main__': main() ``` ## Solution Notebook Review the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/sorting_searching/merge_sort/merge_sort_solution.ipynb) for a discussion on algorithms and code solutions.
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Import-pandas-and-load-the-NLS-data" data-toc-modified-id="Import-pandas-and-load-the-NLS-data-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Import pandas and load the NLS data</a></span></li><li><span><a href="#View-some-of-the-weeks-worked-and-college-enrollment-data" data-toc-modified-id="View-some-of-the-weeks-worked-and-college-enrollment-data-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>View some of the weeks worked and college enrollment data</a></span></li><li><span><a href="#Run-the-wide_to_long-function" data-toc-modified-id="Run-the-wide_to_long-function-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Run the wide_to_long function</a></span></li></ul></div> # Import pandas and load the NLS data ``` import pandas as pd # pd.set_option('display.width', 200) # pd.set_option('display.max_columns', 30) # pd.set_option('display.max_rows', 200) # pd.options.display.float_format = '{:,.0f}'.format import watermark %load_ext watermark %watermark -n -i -iv nls97 = pd.read_csv('data/nls97f.csv') nls97.set_index('personid', inplace=True) ``` # View some of the weeks worked and college enrollment data ``` weeksworkedcols = [ 'weeksworked00', 'weeksworked01', 'weeksworked02', 'weeksworked03', 'weeksworked04' ] colenrcols = [ 'colenroct00', 'colenroct01', 'colenroct02', 'colenroct03', 'colenroct04' ] nls97.loc[nls97['originalid'].isin([1, 2]), ['originalid'] + weeksworkedcols + colenrcols].T ``` # Run the wide_to_long function ``` workschool = pd.wide_to_long(nls97[['originalid'] + weeksworkedcols + colenrcols], stubnames=['weeksworked', 'colenroct'], i=['originalid'], j='year').reset_index() workschool.head(2) workschool['year'] = workschool['year'] + 2000 workschool = workschool.sort_values(['originalid', 'year']) workschool.set_index(['originalid'], inplace=True) # wide_to_long accomplishes in one step what it took us several steps to accomplish in the previous recipe using melt workschool.head(10) weeksworkedcols = [ 'weeksworked00', 'weeksworked01', 'weeksworked02', 'weeksworked04', 'weeksworked05' ] workschool = pd.wide_to_long(nls97[['originalid'] + weeksworkedcols + colenrcols], stubnames=['weeksworked', 'colenroct'], i=['originalid'], j='year').reset_index() workschool['year'] = workschool['year'] + 2000 workschool = workschool.sort_values(['originalid', 'year']) workschool.set_index(['originalid'], inplace=True) workschool.head(12) ```
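A self-contained miniature of the reshape above (toy values, not the NLS data) shows what `wide_to_long` does with the stub names and the numeric column suffixes:

```python
import pandas as pd

# Toy wide frame with the same stub pattern as the NLS columns above.
wide = pd.DataFrame({
    'originalid': [1, 2],
    'weeksworked00': [46, 52], 'weeksworked01': [52, 50],
    'colenroct00': ['1. Not enrolled', '3. 4-year college'],
    'colenroct01': ['1. Not enrolled', '3. 4-year college'],
})
long = pd.wide_to_long(wide, stubnames=['weeksworked', 'colenroct'],
                       i='originalid', j='year').reset_index()
long['year'] = long['year'] + 2000  # the suffixes 00/01 parse to integers 0/1
print(long.sort_values(['originalid', 'year']))
```

Each id contributes one row per suffix, with one column per stub — the same two-stub structure the recipe builds from the NLS file.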
## RNN Time Series Classification ### Importing Required Libraries and Data ``` import pandas as pd import numpy as np #to plot the data import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline import matplotlib as mpl mpl.rcParams.update(mpl.rcParamsDefault) import time import os os.chdir("C:/Data/aircraft/") from sklearn.preprocessing import MinMaxScaler #to normalize data from sklearn.metrics import classification_report, confusion_matrix, roc_curve from sklearn.metrics import f1_score, roc_auc_score, accuracy_score, precision_score, recall_score from sklearn.utils import class_weight #for deep learning import keras import keras.backend as k from keras.models import Sequential from keras.layers import Dense, Activation, Masking, Dropout, SimpleRNN from keras.callbacks import History from keras import callbacks from keras.utils import to_categorical def prepare_data(drop_cols = True): dependent_var = ['RUL'] index_columns_names = ["UnitNumber","Cycle"] operational_settings_columns_names = ["OpSet"+str(i) for i in range(1,4)] sensor_measure_columns_names =["SensorMeasure"+str(i) for i in range(1,22)] input_file_column_names = index_columns_names + operational_settings_columns_names + sensor_measure_columns_names cols_to_drop = ['OpSet3', 'SensorMeasure1', 'SensorMeasure5', 'SensorMeasure6', 'SensorMeasure10', 'SensorMeasure14', 'SensorMeasure16', 'SensorMeasure18', 'SensorMeasure19'] df_train = pd.read_csv('train_FD001.txt',delim_whitespace=True,names=input_file_column_names) rul = pd.DataFrame(df_train.groupby('UnitNumber')['Cycle'].max()).reset_index() rul.columns = ['UnitNumber', 'max'] df_train = df_train.merge(rul, on=['UnitNumber'], how='left') df_train['RUL'] = df_train['max'] - df_train['Cycle'] df_train.drop('max', axis=1, inplace=True) df_train['failure_lbl_1'] = [1 if i < 50 else 0 for i in df_train.RUL] df_train['failure_lbl_2'] = df_train['failure_lbl_1'] df_train.failure_lbl_2[df_train.RUL < 25] = 2 df_test = 
pd.read_csv('test_FD001.txt', delim_whitespace=True, names=input_file_column_names) if(drop_cols == True): df_train = df_train.drop(cols_to_drop, axis = 1) df_test = df_test.drop(cols_to_drop, axis = 1) y_true = pd.read_csv('RUL_FD001.txt', delim_whitespace=True,names=["RUL"]) y_true["UnitNumber"] = y_true.index y_true['failure_lbl_1'] = [1 if i < 50 else 0 for i in y_true.RUL] y_true['failure_lbl_2'] = y_true['failure_lbl_1'] y_true.failure_lbl_2[y_true.RUL < 25] = 2 return df_train, df_test, y_true df_train, df_test, y_true = prepare_data(drop_cols=True) df_train.shape, df_test.shape, y_true.shape feats = df_train.columns.drop(['UnitNumber', 'Cycle', 'RUL', 'failure_lbl_1', 'failure_lbl_2']) min_max_scaler = MinMaxScaler(feature_range=(-1,1)) df_train[feats] = min_max_scaler.fit_transform(df_train[feats]) df_test[feats] = min_max_scaler.transform(df_test[feats]) df_train.head() df_test.head() y_true.head() ``` LSTM expects an input in the shape of a numpy array of 3 dimensions and I will need to convert train and test data accordingly. 
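The window-slicing that produces those 3 dimensions is easy to see in miniature; a pure-Python sketch of the logic `gen_train` applies per unit (illustrative names, no numpy):

```python
# Sliding windows over a sequence: len(seq) - seq_len + 1 samples,
# each of shape (seq_len, n_features) -- what gen_train builds per unit.
def windows(seq, seq_len):
    return [seq[s:s + seq_len] for s in range(len(seq) - seq_len + 1)]

series = [[t, t * 0.1] for t in range(6)]  # 6 cycles, 2 features
batch = windows(series, 3)
print(len(batch), len(batch[0]), len(batch[0][0]))  # 4 3 2
```

Stacking such windows across all 100 units gives the (samples, time steps, features) array the recurrent layer expects.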
``` def gen_train(id_df, seq_length, seq_cols): """ function to prepare train data into (samples, time steps, features) id_df = train dataframe seq_length = look back period seq_cols = feature columns """ data_array = id_df[seq_cols].values num_elements = data_array.shape[0] lstm_array=[] for start, stop in zip(range(0, num_elements-seq_length+1), range(seq_length, num_elements+1)): lstm_array.append(data_array[start:stop, :]) return np.array(lstm_array) def gen_target(id_df, seq_length, label): data_array = id_df[label].values num_elements = data_array.shape[0] return data_array[seq_length-1:num_elements+1] def gen_test(id_df, seq_length, seq_cols, mask_value): """ function to prepare test data into (samples, time steps, features) function only returns last sequence of data for every unit id_df = test dataframe seq_length = look back period seq_cols = feature columns """ df_mask = pd.DataFrame(np.zeros((seq_length-1,id_df.shape[1])),columns=id_df.columns) df_mask[:] = mask_value id_df = df_mask.append(id_df,ignore_index=True) data_array = id_df[seq_cols].values num_elements = data_array.shape[0] lstm_array=[] start = num_elements-seq_length stop = num_elements lstm_array.append(data_array[start:stop, :]) return np.array(lstm_array) ``` ### Function to Print results ``` def print_results(y_test, y_pred, multi_class = False): #f1-score if multi_class == True: f1 = f1_score(y_test, y_pred, average="macro") else: f1 = f1_score(y_test, y_pred) print("F1 Score: ", f1) print(classification_report(y_test, y_pred)) conf_matrix = confusion_matrix(y_test, y_pred) plt.figure(figsize=(12,12)) plt.subplot(221) sns.heatmap(conf_matrix, fmt = "d",annot=True, cmap='Blues') b, t = plt.ylim() plt.ylim(b + 0.5, t - 0.5) plt.title('Confusion Matrix') plt.ylabel('True Values') plt.xlabel('Predicted Values') #roc_auc_score if multi_class == False: model_roc_auc = roc_auc_score(y_test, y_pred) print ("Area under curve : ",model_roc_auc,"\n") fpr,tpr,thresholds = roc_curve(y_test, y_pred)
gmeans = np.sqrt(tpr * (1-fpr)) ix = np.argmax(gmeans) threshold = np.round(thresholds[ix],3) plt.subplot(222) plt.plot(fpr, tpr, color='darkorange', lw=1, label = "Auc : %.3f" %model_roc_auc) plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--') plt.scatter(fpr[ix], tpr[ix], marker='o', color='black', label='Best Threshold:' + str(threshold)) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic') plt.legend(loc="lower right") ``` ## Binary Classification ``` sequence_length = 50 mask_value = 0 label = "failure_lbl_1" ``` Let's prepare data using above functions. ``` #generate train x_train=np.concatenate(list(list(gen_train(df_train[df_train['UnitNumber']==unit], sequence_length, feats)) for unit in df_train['UnitNumber'].unique())) print(x_train.shape) #generate target of train y_train = np.concatenate(list(list(gen_target(df_train[df_train['UnitNumber']==unit], sequence_length, label)) for unit in df_train['UnitNumber'].unique())) y_train.shape #generate test x_test=np.concatenate(list(list(gen_test(df_test[df_test['UnitNumber']==unit], sequence_length, feats, mask_value)) for unit in df_test['UnitNumber'].unique())) print(x_test.shape) #true target of test y_test = y_true.RUL.values y_test.shape nb_features = x_train.shape[2] nb_out = 1 nb_features cls_wt= class_weight.compute_class_weight('balanced',np.unique(y_train), y_train) cls_wt ``` ### Model ``` history = History() model = Sequential() model.add(SimpleRNN(16, input_shape=(sequence_length, nb_features), activation = 'relu')) model.add(Dense(8, activation = 'relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy']) model.summary() %%time # fit the model model.fit(x_train, y_train, epochs=100, batch_size=64, validation_split=0.2, verbose=1, class_weight = cls_wt, callbacks = [history, 
keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='auto')]) fig, ax = plt.subplots(nrows = 1, ncols = 2, figsize = (10, 4)) # Accuracy ax[0].plot(history.history['acc']) ax[0].plot(history.history['val_acc']) ax[0].set_ylabel('Accuracy') ax[0].set_xlabel('# Epoch') ax[0].legend(['train', 'validation'], loc='upper left') ax[0].set_title('Accuracy') # Loss ax[1].plot(history.history['loss']) ax[1].plot(history.history['val_loss']) ax[1].set_ylabel('Loss') ax[1].set_xlabel('# Epoch') ax[1].legend(['train', 'validation'], loc='upper left') ax[1].set_title('Loss') y_pred = model.predict_classes(x_test) print_results(y_true.failure_lbl_1, y_pred) confusion_matrix(y_train, model.predict_classes(x_train)) ``` ## Multiclass Classification ``` sequence_length = 50 mask_value = 0 label = "failure_lbl_2" ``` Let's prepare data using above functions. ``` #generate train x_train=np.concatenate(list(list(gen_train(df_train[df_train['UnitNumber']==unit], sequence_length, feats)) for unit in df_train['UnitNumber'].unique())) print(x_train.shape) #generate target of train y_train = np.concatenate(list(list(gen_target(df_train[df_train['UnitNumber']==unit], sequence_length, label)) for unit in df_train['UnitNumber'].unique())) y_train.shape y_train2 = to_categorical(y_train) y_train2 #generate test x_test=np.concatenate(list(list(gen_test(df_test[df_test['UnitNumber']==unit], sequence_length, feats, mask_value)) for unit in df_test['UnitNumber'].unique())) print(x_test.shape) nb_features = x_train.shape[2] nb_out = 1 nb_features cls_wt= class_weight.compute_class_weight('balanced',np.unique(y_train), y_train) cls_wt ``` ### Model ``` history2 = History() model2 = Sequential() model2.add(SimpleRNN(16, input_shape=(sequence_length, nb_features), activation = 'relu')) model2.add(Dense(8, activation = 'relu')) model2.add(Dense(units=3, activation='softmax')) model2.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy']) 
model2.summary() %%time # fit the model model2.fit(x_train, y_train2, epochs=10, batch_size=64, validation_split=0.2, verbose=1, class_weight = cls_wt, callbacks = [history2, keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=0, mode='auto')]) fig, ax = plt.subplots(nrows = 1, ncols = 2, figsize = (10, 4)) # Accuracy ax[0].plot(history2.history['acc']) ax[0].plot(history2.history['val_acc']) ax[0].set_ylabel('Accuracy') ax[0].set_xlabel('# Epoch') ax[0].legend(['train', 'validation'], loc='upper left') ax[0].set_title('Accuracy') # Loss ax[1].plot(history2.history['loss']) ax[1].plot(history2.history['val_loss']) ax[1].set_ylabel('Loss') ax[1].set_xlabel('# Epoch') ax[1].legend(['train', 'validation'], loc='upper left') ax[1].set_title('Loss') y_pred = model2.predict_classes(x_test) print_results(y_true.failure_lbl_2, y_pred, multi_class=True) confusion_matrix(y_train, model2.predict_classes(x_train)) ```
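As a quick sanity check, the sliding-window logic used by `gen_train` can be exercised in isolation on synthetic data. This is a minimal NumPy-only sketch; the array shape and `seq_length` below are made up for illustration:

```python
import numpy as np

def sliding_windows(data_array, seq_length):
    # same windowing as gen_train: every contiguous block of
    # seq_length rows becomes one (time steps, features) sample
    num_elements = data_array.shape[0]
    return np.array([data_array[start:start + seq_length, :]
                     for start in range(num_elements - seq_length + 1)])

# 10 cycles of 3 features for one hypothetical unit
data = np.arange(30).reshape(10, 3)
windows = sliding_windows(data, seq_length=4)
print(windows.shape)  # (7, 4, 3): 10 - 4 + 1 samples of 4 time steps each
```

The first sample covers rows 0–3, the last one rows 6–9, which is exactly the look-back behaviour the LSTM-style input expects.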
# Regression Errors

Let's talk about errors in regression problems. Typically, in regression, we have a variable $y$ that we want to learn a model to predict. The prediction from the model is usually denoted as $\hat{y}$. The error $e$ is thus defined as follows.

- $e = y - \hat{y}$

Since we have many pairs of the truth $y$ and the prediction $\hat{y}$, we want to average over the differences. I will denote this error as the Mean Error `ME`.

- $\mathrm{ME} = \frac{1}{n} \sum{(y - \hat{y})}$

The problem with ME is that averaging over the differences may result in something close to zero, because the positive and negative differences have a cancelling effect. No one really computes the error of a regression model in this way. A better way is to consider the Mean Absolute Error `MAE`, where we take the average of the absolute differences.

- $\mathrm{MAE} = \frac{1}{n} \sum |y - \hat{y}|$

In MAE, since $|y - \hat{y}|$ only yields positive differences, we avoid the cancelling effect of positive and negative values when averaging.

Many times, data scientists want to punish models that predict values further from the truth. In that case, the Root Mean Squared Error `RMSE` is used.

- $\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum (y - \hat{y})^2}$

In RMSE, we do not take the difference as in ME or the absolute difference as in MAE; rather, we square the difference. The idea is that when a model's prediction is off from the truth, we should exaggerate the consequence, since being further away from the truth is disproportionately worse. However, squaring the difference results in something that is no longer in the units of $y$, so we take the square root to bring the scalar value back into the units of $y$.

For all these measures of performance, the closer the value is to zero, the better.
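The three definitions above translate directly into NumPy. This is a minimal sketch; the arrays reuse the made-up example data from this notebook:

```python
import numpy as np

def me(y_true, y_pred):
    # mean error: positive and negative differences cancel out
    return np.mean(y_true - y_pred)

def mae(y_true, y_pred):
    # mean absolute error: all differences count as positive
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # root mean squared error: large misses are punished, and the
    # square root brings the value back into the units of y
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y_true = np.array([10, 8, 7, 9, 4])
y_pred = np.array([11, 7, 6, 15, 1])
print(me(y_true, y_pred), mae(y_true, y_pred), rmse(y_true, y_pred))
```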
Let's look at the following made-up example, where a hypothetical model has made some predictions $\hat{y}$ or `y_pred`, and for each of these predictions we have the ground truth $y$ or `y_true`.

```
import pandas as pd

df = pd.DataFrame({
    'y_true': [10, 8, 7, 9, 4],
    'y_pred': [11, 7, 6, 15, 1]
})
df
```

We will now compute the error `E`, absolute error `AE` and squared error `SE` for each pair.

```
import numpy as np

df['E'] = df.y_true - df.y_pred
df['AE'] = np.abs(df.y_true - df.y_pred)
df['SE'] = np.power(df.y_true - df.y_pred, 2.0)
df
```

From E, AE and SE, we can compute the mean errors ME, MAE and RMSE, respectively, as follows. Note the square root applied to the mean of SE.

```
errors = df[['E', 'AE', 'SE']].mean()
errors['SE'] = np.sqrt(errors['SE'])
errors.index = ['ME', 'MAE', 'RMSE']
errors
```

As you can see, these error measures say different things and might lead you to draw conflicting conclusions. We know ME is defective, and so we will ignore interpreting it. MAE says we can expect to be `2.4` off from the truth, while RMSE says we can expect to be about `3.1` off from the truth. RMSE is larger because squaring weights the single large miss (a prediction of 15 against a truth of 9) more heavily. Still, is `3.1` `good` or `bad`? On its own, that is hard to say.

One thing we can try to do is to `normalize` these values. Let's just look at RMSE. Here are some ways we can normalize RMSE.

- using the `mean` of y, denoted as $\bar{y}$
- using the `standard deviation` of y, denoted as $\sigma_y$
- using the range of y, denoted as $y_{\mathrm{max}} - y_{\mathrm{min}}$
- using the interquartile range of y, denoted as $Q_y^3 - Q_y^1$

The code to compute these is as follows.

- $\bar{y}$ is `me_y`
- $\sigma_y$ is `sd_y`
- $y_{\mathrm{max}} - y_{\mathrm{min}}$ is `ra_y`
- $Q_y^3 - Q_y^1$ is `iq_y`

Since these are used to divide RMSE, let's group them under a series as `denominators`.

```
from scipy.stats import iqr

me_y = df.y_true.mean()
sd_y = df.y_true.std()
ra_y = df.y_true.max() - df.y_true.min()
iq_y = iqr(df.y_true)

denominators = pd.Series([me_y, sd_y, ra_y, iq_y], index=['me_y', 'sd_y', 'ra_y', 'iq_y'])
denominators
```

Here are the results of normalizing RMSE with the mean `me`, standard deviation `sd`, range `ra` and interquartile range `iq`.

```
pd.DataFrame([{
    r'$\mathrm{RMSE}_{\mathrm{me}}$': errors.RMSE / denominators.me_y,
    r'$\mathrm{RMSE}_{\mathrm{sd}}$': errors.RMSE / denominators.sd_y,
    r'$\mathrm{RMSE}_{\mathrm{ra}}$': errors.RMSE / denominators.ra_y,
    r'$\mathrm{RMSE}_{\mathrm{iq}}$': errors.RMSE / denominators.iq_y,
}]).T.rename(columns={0: 'values'})
```

Now that we have normalized RMSE, we can make slightly better interpretations.

- $\mathrm{RMSE}_{\mathrm{me}}$ is saying we can expect to be about 41% of the mean away from the truth.
- $\mathrm{RMSE}_{\mathrm{sd}}$ is saying we can expect to be about 1.35 standard deviations away from the truth.
- $\mathrm{RMSE}_{\mathrm{ra}}$ is saying we can expect to be about 0.52 of the range away from the truth.
- $\mathrm{RMSE}_{\mathrm{iq}}$ is saying we can expect to be about 1.55 interquartile ranges away from the truth.
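The four normalizations can also be wrapped into one small helper. This is a sketch: the dictionary keys are just illustrative names, and the sample standard deviation (`ddof=1`) is used to match pandas' `.std()`:

```python
import numpy as np

def normalized_rmse(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    q1, q3 = np.percentile(y_true, [25, 75])
    denominators = {
        'mean': np.mean(y_true),
        'sd': np.std(y_true, ddof=1),              # sample standard deviation
        'range': np.max(y_true) - np.min(y_true),
        'iqr': q3 - q1,
    }
    return {name: rmse / d for name, d in denominators.items()}

print(normalized_rmse([10, 8, 7, 9, 4], [11, 7, 6, 15, 1]))
```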
## Practice: Sentiment Analysis Classification

In this notebook we will try to solve a classification problem where the goal is to classify movie reviews based on sentiment, negative or positive. This notebook presents the problem in its simplest terms: unlike more sophisticated sentiment analysis, classification here is based only on the presence or absence of specific words. We will use scikit-learn's data loading functionality to build the training and testing data.

The notebook is partially complete. Look for "Your code here" to complete the partial code.

-----

**Activity 1:** Load the data from '../../datasets/movie_reviews' into the mvr variable

```
from sklearn.datasets import load_files

data_dir = '../../datasets/movie_reviews'

# <Your code here to load the movie reviews data in above path>
mvr = load_files(data_dir)
#help(mvr)

# <Your code here to print the number of reviews>
print('Number of Reviews: {0}'.format(len(mvr['data'])))
```

**Activity 2:** Split the data in mvr.data into train(mvr_train) and test(mvr_test) datasets

```
from sklearn.model_selection import train_test_split

mvr_train, mvr_test, y_train, y_test = train_test_split(
    mvr.data, mvr.target, test_size=0.25, random_state=23)
```

-----

Now that the training and testing data have been loaded into the notebook, we can build a simple pipeline by using a `CountVectorizer` and `MultinomialNB` to build a document-term matrix and to perform a Naive Bayes classification.

-----

**Activity 3:** Build a pipeline by using a `CountVectorizer` and `MultinomialNB` to build a document-term matrix and to perform a Naive Bayes classification. Print the metrics of the classification result.
```
# Build simple pipeline
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

pipeline = Pipeline([('cv', CountVectorizer()), ('nb', MultinomialNB())])

# Build DTM and classify data
pipeline.fit(mvr_train, y_train)

# Predict the reviews on mvr_test data
y_pred = pipeline.predict(mvr_test)

# Print the prediction results
print(metrics.classification_report(y_test, y_pred, target_names = mvr.target_names))
```

**Activity 4:** Use stop words in the above `CountVectorizer`. Build the document-term matrix and perform a Naive Bayes classification again. Print the metrics of the new classification results.

```
pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('clf', MultinomialNB()),
])
# pipeline parameters are prefixed with the step name, here 'vect'
pipeline.set_params(vect__stop_words = 'english')

# Build DTM and classify data
pipeline.fit(mvr_train, y_train)
y_pred = pipeline.predict(mvr_test)

print(metrics.classification_report(y_test, y_pred, target_names = mvr.target_names))
```

**Activity 5:** Change the vectorizer to TF-IDF. Perform a Naive Bayes classification again. Print the metrics of the new classification results.

```
from sklearn.feature_extraction.text import TfidfVectorizer

pipeline = Pipeline([('tfidf', TfidfVectorizer()), ('nb', MultinomialNB())])
pipeline.set_params(tfidf__stop_words = 'english')

# Build DTM and classify data
pipeline.fit(mvr_train, y_train)
y_pred = pipeline.predict(mvr_test)

print(metrics.classification_report(y_test, y_pred, target_names = mvr.target_names))
```

**Activity 6:** Change the TF-IDF parameters, such as `max_features` and `lowercase`. Perform a Naive Bayes classification again. Print the metrics of the new classification results.

Note: Find the documentation for [TfidfVectorizer here](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) to find the right values for max_features and lowercase.
Play around with the parameters to see how the results change.

```
from sklearn.feature_extraction.text import TfidfVectorizer

tools = [('tf', TfidfVectorizer()), ('nb', MultinomialNB())]
clf = Pipeline(tools)
clf.<Your code to use max_features and lowercase with TfidfVectorizer>

# Build DTM and classify data
clf.fit(mvr_train, y_train)
y_pred = clf.predict(mvr_test)

print(metrics.classification_report(y_test, y_pred, target_names = mvr.target_names))
```

**Activity 7:** Change the classifier to the logistic regression algorithm. Print the results metrics.

```
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression

# CountVectorizer must be followed by a TfidfTransformer (not a second
# vectorizer): the transformer reweights the count matrix as tf-idf
tools = [('vect', CountVectorizer(stop_words = 'english')),
         ('tfidf', TfidfTransformer()),
         ('lr', LogisticRegression())]
clf = Pipeline(tools)

# Build DTM and classify data
clf.fit(mvr_train, y_train)
y_pred = clf.predict(mvr_test)

print(metrics.classification_report(y_test, y_pred, target_names = mvr.target_names))
```
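A closing note on the vectorizers: a `TfidfVectorizer` is equivalent to a `CountVectorizer` followed by a `TfidfTransformer`, which is why chaining two vectorizers in one pipeline does not work. A small sketch on a made-up toy corpus:

```python
import numpy as np
from sklearn.feature_extraction.text import (CountVectorizer,
                                             TfidfTransformer,
                                             TfidfVectorizer)
from sklearn.pipeline import Pipeline

corpus = ['a great movie', 'a terrible movie', 'great acting, great story']

# counts first, then tf-idf weighting ...
two_step = Pipeline([('vect', CountVectorizer()),
                     ('tfidf', TfidfTransformer())])
# ... is the same as tf-idf directly on the raw text
one_step = TfidfVectorizer()

a = two_step.fit_transform(corpus).toarray()
b = one_step.fit_transform(corpus).toarray()
print(np.allclose(a, b))  # True
```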
# Tutorial 04: Visualizing Experiment Results

This tutorial describes the process of visualizing and replaying the results of Flow experiments run using RL. The process of visualizing results breaks down into two main components:

- reward plotting
- policy replay

Note that this tutorial only covers visualization using SUMO, and not other simulators like Aimsun.

<hr>

## Visualization with RLlib

### Plotting Reward

Similarly to how rllab handles reward plotting, RLlib supports reward visualization over the period of training using `tensorboard`. `tensorboard` takes one command-line input, `--logdir`, which is an RLlib result directory (usually located within an experiment directory inside your `ray_results` directory). An example function call is below.

```
! tensorboard --logdir /ray_results/dirthatcontainsallcheckpoints/
```

If you do not wish to use `tensorboard`, you can also use the `flow/visualize/plot_ray_results.py` file. It takes as arguments the path to the `progress.csv` file located inside your experiment results directory, and the name(s) of the column(s) to plot. If you do not know the names of the columns, simply do not put any, and a list of all available columns will be displayed to you.

Example usage:

```
! plot_ray_results.py /ray_results/experiment_dir/progress.csv training/return-average training/return-min
```

### Replaying a Trained Policy

The tool to replay a policy trained using RLlib is located in `flow/visualize/visualizer_rllib.py`. It takes as arguments the path to the experiment results and the number of the checkpoint you wish to visualize. There are other optional parameters which you can learn about by running `visualizer_rllib.py --help`.

```
! python ../../flow/visualize/visualizer_rllib.py /ray_results/dirthatcontainsallcheckpoints/ 1
```

<hr>

## Data Collection and Analysis

Any Flow experiment can output its results to a CSV file containing the contents of SUMO's built-in `emission.xml` files, specifying speed, position, time, fuel consumption, and many other metrics for all vehicles in a network over time. This section describes how to generate those `emission.csv` files when replaying and analyzing a trained policy.

### RLlib

```
# --gen_emission makes the visualizer write out an emission file
! python ../../flow/visualize/visualizer_rllib.py results/sample_checkpoint 1 --gen_emission
```

As in the rllab case, the `emission.csv` file can be found in `test_time_rollout/` and used from there.

### SUMO

SUMO-only experiments can generate emission CSV files as well, based on an argument to the `experiment.run` method. `run` takes in arguments `(num_runs, num_steps, rl_actions=None, convert_to_csv=False)`. To generate an `emission.csv` file, pass in `convert_to_csv=True` in the Python file running your SUMO experiment.
## Extend jupyterlab-lsp

> Note: the API is likely to change in the future; your suggestions are welcome!

### How to add a new LSP feature?

Features (as well as other parts of the frontend) reuse the [JupyterLab plugins system](https://jupyterlab.readthedocs.io/en/stable/developer/extension_dev.html#plugins). Each plugin is a [TypeScript](https://www.typescriptlang.org/) package exporting one or more `JupyterFrontEndPlugin`s (see [the JupyterLab extension developer tutorial](https://jupyterlab.readthedocs.io/en/stable/developer/extension_tutorial.html) for an overview).

Each feature has to register itself with the `FeatureManager` (which is provided after requesting the `ILSPFeatureManager` token) using the `register(options: IFeatureOptions)` method. Your feature specification should follow the `IFeature` interface, which can be divided into three major parts:

- `editorIntegrationFactory`: constructors for the feature-CodeEditor integrators (implementing the `IFeatureEditorIntegration` interface), one for each supported CodeEditor (e.g. CodeMirror or Monaco); for CodeMirror integration you can base your feature integration on the abstract `CodeMirrorIntegration` class.
- `labIntegration`: an optional object integrating the feature with the JupyterLab interface
- optional fields for easy integration with some of the common JupyterLab systems, such as:
  - settings system
  - commands system (including context menu)

For further integration with JupyterLab, you can request additional JupyterLab tokens (consult the JupyterLab documentation on [core tokens](https://jupyterlab.readthedocs.io/en/stable/developer/extension_dev.html#core-tokens)).

#### How to override the default implementation of a feature?

You can specify a list of extensions to be disabled by the feature manager by passing their plugin identifiers in the `supersedes` field of `IFeatureOptions`.

### How to integrate a new code editor implementation?
The `CodeMirrorEditor` code editor is supported by default, but any JupyterLab editor implementing the `CodeEditor.IEditor` interface can be adapted for use with the LSP extension. To add your custom code editor (e.g. Monaco), after implementing a `CodeEditor.IEditor` interface wrapper (which you would have anyway for the JupyterLab integration), you need to also implement a virtual editor (the `IVirtualEditor` interface) for it.

#### Why a virtual editor?

The virtual editor takes multiple instances of your editor (e.g. in a notebook) and makes them act like a single editor. For example, when an "onKeyPress" event is bound on the VirtualEditor instance, it should be bound onto each actual code editor; this allows the features to be implemented without knowledge of the number of editor instances on the page.

#### How to register the implementation?

A `virtualEditorManager` will be provided if you request the `ILSPVirtualEditorManager` token; use the `registerEditorType(options: IVirtualEditorType<IEditor>)` method, passing a name that you will also use to identify the code editor, the editor class, and your VirtualEditor constructor.

### How to integrate a new `DocumentWidget`?

JupyterLab editor widgets (such as _Notebook_ or _File Editor_) implement the `IDocumentWidget` interface. Each such widget has to be adapted by a `WidgetAdapter` to enable its use with the LSP extension. The role of the `WidgetAdapter` is to extract the document metadata (language, mimetype) and the underlying code editor (e.g. CodeMirror or Monaco) instances, so that other parts of the LSP extension can interface with them without knowing about the implementation details of the DocumentWidget (or even about the existence of a Notebook construct!).

Your custom `WidgetAdapter` implementation has to register itself with the `WidgetAdapterManager` (which can be requested with the `ILSPAdapterManager` token), calling the `registerAdapterType(options: IAdapterTypeOptions)` method.
Among the options, in addition to the custom `WidgetAdapter`, you need to provide a tracker (`IWidgetTracker`) which will notify the extension via a signal when a new instance of your document widget is getting created.

### How to add a custom magic or foreign extractor?

It is now possible to register custom code replacements using the `ILSPCodeOverridesManager` token and to register custom foreign code extractors using the `ILSPCodeExtractorsManager` token; however, this API is considered provisional and subject to change.

#### Future plans for transclusions handling

We will strive to make it possible for kernels to register their custom syntax/code transformations easily, but the frontend API will remain available for end-users who write their custom syntax modifications with actionable side-effects (e.g. a custom IPython magic which copies a variable from the host document to the embedded document).

### How to add custom icons for the completer?

1. Prepare the icons in the SVG format (we use 16 x 16 pixels, but you should be fine with up to 24 x 24). You can load them for webpack in typescript using imports if you include a `typings.d.ts` file with the following content:

```typescript
declare module '*.svg' {
  const script: string;
  export default script;
}
```

in your `src/`. You should probably keep the icons in your `style/` directory.

2. Prepare a `CompletionKind` → `IconSvgString` mapping for the light (and optionally dark) theme, implementing the `ICompletionIconSet` interface. We have an additional `Kernel` completion kind that is used for completions provided by the kernel that have no recognizable type.

3. Provide all other metadata required by the `ICompletionTheme` interface and register it on the `ILSPCompletionThemeManager` instance using the `register_theme()` method.

4. Provide any additional CSS styling targeting the JupyterLab completer elements inside of `.lsp-completer-theme-{id}`, e.g.
`.lsp-completer-theme-material .jp-Completer-icon svg` for the material theme. Remember to include the styles by importing them in one of the source files.

For an example of a complete theme see [theme-vscode](https://github.com/krassowski/jupyterlab-lsp/tree/master/packages/theme-vscode).

## Extend jupyter-lsp

### Language Server Specs

Language Server Specs can be [configured](./Configuring.ipynb) by Jupyter users, or distributed by third parties as python or JSON files. Since we'd like to see as many Language Servers work out of the box as possible, consider [contributing a spec](./Contributing.ipynb#specs) if it works well for you!

### Message Listeners

Message listeners may choose to receive LSP messages immediately after being received from the client (e.g. `jupyterlab-lsp`) or a language server. All listeners of a message are scheduled concurrently, and the message is passed along **once all listeners return** (or fail). This allows listeners to, for example, modify files on disk before the language server reads them.

If a listener is going to perform an expensive activity that _shouldn't_ block delivery of a message, a non-blocking technique like [IOLoop.add_callback][add_callback] and/or a [queue](https://www.tornadoweb.org/en/stable/queues.html) should be used.

[add_callback]: https://www.tornadoweb.org/en/stable/ioloop.html#tornado.ioloop.IOLoop.add_callback

#### Add a Listener with `entry_points`

Listeners can be added via [entry_points][] by a package installed in the same environment as `notebook`:

```toml
## setup.cfg
[options.entry_points]
jupyter_lsp_listener_all_v1 =
  some-unique-name = some.module:some_function
jupyter_lsp_listener_client_v1 =
  some-other-unique-name = some.module:some_other_function
jupyter_lsp_listener_server_v1 =
  yet-another-unique-name = some.module:yet_another_function
```

At present, the entry point names generally have no impact on functionality, aside from logging in the event of an error on import.
[entry_points]: https://packaging.python.org/specifications/entry-points/

##### Add a Listener with Jupyter Configuration

Listeners can be added via `traitlets` configuration, e.g.

```yaml
## jupyter_notebook_config.json
{
  'LanguageServerManager': {
    'all_listeners': ['some.module.some_function'],
    'client_listeners': ['some.module.some_other_function'],
    'server_listeners': ['some.module.yet_another_function'],
  },
}
```

##### Add a listener with the Python API

`lsp_message_listener` can be used as a decorator, accessed as part of a `serverextension`.

This listener receives _all_ messages from the client and server, and prints them out.

```python
from jupyter_lsp import lsp_message_listener

def load_jupyter_server_extension(nbapp):

    @lsp_message_listener("all")
    async def my_listener(scope, message, language_server, manager):
        print("received a {} {} message from {}".format(
            scope, message["method"], language_server
        ))
```

`scope` is one of `client`, `server` or `all`, and is required.

##### Listener options

Fine-grained controls are available as part of the Python API. Pass these as named arguments to `lsp_message_listener`.

- `language_server`: a regular expression of language servers
- `method`: a regular expression of LSP JSON-RPC method names
# merge_sim© Tutorial

* This tutorial shows how to use merge_sim to reproduce the results in my graduate thesis.
* Author: [chaonan99](chaonan99.github.io)
* Date: 2017/06/10
* Code for this tutorial and the project is under the MIT license. See the license file for details.

```
from game import Case1VehicleGenerator, MainHigherSpeedVG, GameLoop
%matplotlib inline
```

## Case 1

This simulates the simple case in the original paper, where two cars in each lane start at the same speed.

```
vehicle_generator = Case1VehicleGenerator()
game = GameLoop(vehicle_generator)
game.play()
game.draw_result_pyplot()
```

## Case 2

In this case, the main lane starts with a higher speed.

```
vehicle_generator = MainHigherSpeedVG()
game = GameLoop(vehicle_generator)
game.play()
game.draw_result_pyplot()
```

## OTM order

Let's now implement the OTM order. This can easily be done using the `SpeedIDAssigner` provided in `VehicleGeneratorBase`.

```
from common import config, VehicleState
from helper import Helper
from game import VehicleGeneratorBase, VehicleBuilder, OnBoardVehicle
import numpy as np

class MainHigherSpeedVGOTM(VehicleGeneratorBase):
    def __init__(self):
        super(MainHigherSpeedVGOTM, self).__init__()

    def buildSchedule(self):
        # lane0 (main road)
        t0_lane0 = np.arange(10, 30.1, 2.0)  # generate a vehicle every 2.0 seconds from 10 s
        t0_lane1 = np.arange(9, 30.1, 3.1)   # generate a vehicle every 3.1 seconds from 9 s
        v0_lane0 = 25.0  # v_0 on the main lane
        v0_lane1 = 15.0  # v_0 on the auxiliary lane
        for ti0 in t0_lane0:
            v = VehicleBuilder(-1)\
                .setSpeed(v0_lane0)\
                .setPosition(-config.control_len)\
                .setAcceleration(0)\
                .setLane(0).build()
            # The schedule is the actual vehicle queue used to generate vehicles in the game loop.
            # Append the built vehicle to the schedule.
            self.schedule.append(OnBoardVehicle(v, ti0, Helper.getTc(v)))
        for ti0 in t0_lane1:
            v = VehicleBuilder(-1)\
                .setSpeed(v0_lane1)\
                .setPosition(-config.control_len)\
                .setAcceleration(0)\
                .setLane(1).build()
            self.schedule.append(OnBoardVehicle(v, ti0, Helper.getTc(v)))
        # Use the speed ID assigner to get the OTM order.
        self.SpeedIDAssigner()
```

### Hard $t_\mathrm{m}$ recalculation

The `SpeedGameLoop` implements the hard OTM order stated in the original paper.

```
from game import SpeedGameLoop

vehicle_generator = MainHigherSpeedVGOTM()
game = SpeedGameLoop(vehicle_generator)
game.play()
game.draw_result_pyplot()
```

### Soft $t_\mathrm{m}$ recalculation

We relax $t_\mathrm{m}$ on the auxiliary lane. The alternative is provided in `Helper.getTm`.

```
class SpeedSoftGameLoop(GameLoop):
    def __init__(self, vscd):
        super(SpeedSoftGameLoop, self).__init__(vscd)
        self.on_board_vehicles = []

    def nextStep(self):
        self.ctime += config.time_meta
        t = self.ctime
        ove_t = self.vscd.getAtTime(t)
        for v in ove_t:
            tmp_v_stack = []
            while len(self.on_board_vehicles) > 0 and self.on_board_vehicles[-1].vehicle.ID > v.vehicle.ID:
                tmpv = self.on_board_vehicles.pop()
                # tmpv.t0 = t
                # tmpv.min_pass_time = max(tmpv.min_pass_time, Helper.getTmOptimal2(tmpv.vehicle.speed,
                #     config.case_speed['speed_merge'], tmpv.vehicle.position, 0))
                tmp_v_stack.append(tmpv)
            # Get t_m
            if len(self.on_board_vehicles) == 0 and len(self.finished_vehicels) == 0:
                v.tm = v.t0 + max(config.min_pass_time, v.min_pass_time)
            elif len(self.on_board_vehicles) > 0:
                v.tm = Helper.getTm(v, self.on_board_vehicles[-1], 'soft')
            else:
                v.tm = Helper.getTm(v, self.finished_vehicels[-1], 'soft')
            tmp_v_stack.append(v)
            prevve = None
            for i in reversed(range(len(tmp_v_stack))):
                ve = tmp_v_stack[i]
                if prevve is not None:
                    ve.tm = Helper.getTm(ve, prevve, 'soft')
                self.on_board_vehicles.append(ve)
                TimeM = Helper.getTimeMatrix(t, ve.tm)
                ConfV = Helper.getConfigVec(ve)
                # from IPython import embed; embed()
                ve.ParaV = np.dot(np.linalg.inv(TimeM), ConfV)
                ve.state = VehicleState.ON_RAMP
                prevve = ve
                # print("ID {}".format(prevve.vehicle.ID))
        for v in self.on_board_vehicles:
            Helper.updateAVP(v, t)
            if v.vehicle.position >= 0:
                v.state = VehicleState.ON_MERGING
        while not self.isEmpty() and self.on_board_vehicles[0].vehicle.position >= config.merging_len:
            self.on_board_vehicles[0].state = VehicleState.FINISHED
            self.finished_vehicels.append(self.on_board_vehicles.pop(0))

vehicle_generator = MainHigherSpeedVGOTM()
game = SpeedSoftGameLoop(vehicle_generator)
game.play()
game.draw_result_pyplot()
```

## Exercise

Try to implement a random vehicle generator in which the time interval between two consecutive vehicles on the same lane follows an exponential distribution and the speed in each lane follows a normal distribution. The answer can be found in the `game.py` file.
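The sampling part of this exercise can be sketched with plain NumPy. The distribution parameters below are made up, and a full solution would still subclass `VehicleGeneratorBase` and build the vehicles as in `MainHigherSpeedVGOTM`:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_schedule(n_vehicles, mean_interval, mean_speed, speed_sd):
    # exponential inter-arrival times accumulate into arrival times
    arrival_times = np.cumsum(rng.exponential(mean_interval, n_vehicles))
    # normally distributed initial speeds, clipped to stay positive
    speeds = np.clip(rng.normal(mean_speed, speed_sd, n_vehicles), 1.0, None)
    return arrival_times, speeds

t_main, v_main = random_schedule(10, mean_interval=2.0, mean_speed=25.0, speed_sd=2.0)
t_aux, v_aux = random_schedule(10, mean_interval=3.1, mean_speed=15.0, speed_sd=2.0)
print(t_main[:3], v_main[:3])
```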
# UCI Dodgers dataset

```
import pandas as pd
import numpy as np
import os
from pathlib import Path

from config import data_raw_folder, data_processed_folder
from timeeval import Datasets

import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (20, 10)

dataset_collection_name = "Dodgers"
source_folder = Path(data_raw_folder) / "UCI ML Repository/Dodgers"
target_folder = Path(data_processed_folder)

print(f"Looking for source datasets in {source_folder.absolute()} and\nsaving processed datasets in {target_folder.absolute()}")

dataset_name = "101-freeway-traffic"
train_type = "unsupervised"
train_is_normal = False
input_type = "univariate"
datetime_index = True
dataset_type = "real"

# create target directory
dataset_subfolder = Path(input_type) / dataset_collection_name
target_subfolder = target_folder / dataset_subfolder
try:
    os.makedirs(target_subfolder)
    print(f"Created directories {target_subfolder}")
except FileExistsError:
    print(f"Directories {target_subfolder} already exist")
    pass

dm = Datasets(target_folder)

data_file = source_folder / "Dodgers.data"
events_file = source_folder / "Dodgers.events"

# transform data
df = pd.read_csv(data_file, header=None, encoding="latin1", parse_dates=[0], infer_datetime_format=True)
df.columns = ["timestamp", "count"]
#df["count"] = df["count"].replace(-1, np.nan)

# read and add labels
df_events = pd.read_csv(events_file, header=None, encoding="latin1")
df_events.columns = ["date", "begin", "end", "game attendance", "away team", "game score"]
df_events.insert(0, "begin_timestamp", pd.to_datetime(df_events["date"] + " " + df_events["begin"]))
df_events.insert(1, "end_timestamp", pd.to_datetime(df_events["date"] + " " + df_events["end"]))
df_events = df_events.drop(columns=["date", "begin", "end", "game attendance", "away team", "game score"])

# labelling
df["is_anomaly"] = 0
for _, (t1, t2) in df_events.iterrows():
    tmp = df[df["timestamp"] >= t1]
    tmp = tmp[tmp["timestamp"] <= t2]
    df.loc[tmp.index, "is_anomaly"] = 1

# mark missing values as anomaly as well
df.loc[df["count"] == -1, "is_anomaly"] = 1

filename = f"{dataset_name}.test.csv"
path = os.path.join(dataset_subfolder, filename)
target_filepath = os.path.join(target_subfolder, filename)
dataset_length = len(df)
df.to_csv(target_filepath, index=False)
print(f"Processed dataset {dataset_name} -> {target_filepath}")

# save metadata
dm.add_dataset((dataset_collection_name, dataset_name),
    train_path = None,
    test_path = path,
    dataset_type = dataset_type,
    datetime_index = datetime_index,
    split_at = None,
    train_type = train_type,
    train_is_normal = train_is_normal,
    input_type = input_type,
    dataset_length = dataset_length
)
dm.save()

dm.refresh()
dm.df().loc[(slice(dataset_collection_name, dataset_collection_name), slice(None))]
```

## Experimentation

```
data_file = source_folder / "Dodgers.data"
df = pd.read_csv(data_file, header=None, encoding="latin1", parse_dates=[0], infer_datetime_format=True)
df.columns = ["timestamp", "count"]
#df["count"] = df["count"].replace(-1, np.nan)
df

events_file = source_folder / "Dodgers.events"
df_events = pd.read_csv(events_file, header=None, encoding="latin1")
df_events.columns = ["date", "begin", "end", "game attendance", "away team", "game score"]
df_events.insert(0, "begin_timestamp", pd.to_datetime(df_events["date"] + " " + df_events["begin"]))
df_events.insert(1, "end_timestamp", pd.to_datetime(df_events["date"] + " " + df_events["end"]))
df_events = df_events.drop(columns=["date", "begin", "end", "game attendance", "away team", "game score"])
df_events

# labelling
df["is_anomaly"] = 0
for _, (t1, t2) in df_events.iterrows():
    tmp = df[df["timestamp"] >= t1]
    tmp = tmp[tmp["timestamp"] <= t2]
    df.loc[tmp.index, "is_anomaly"] = 1
df.loc[df["count"] == -1, "is_anomaly"] = 1

df.iloc[15000:20000].plot(x="timestamp", y=["count", "is_anomaly"])
```
# Train a ready-to-use TensorFlow model with a simple pipeline ``` import os import sys import warnings warnings.filterwarnings("ignore") import numpy as np import matplotlib.pyplot as plt # the following line is not required if BatchFlow is installed as a python package. sys.path.append("../..") from batchflow import Pipeline, B, C, D, F, V from batchflow.opensets import MNIST, CIFAR10, CIFAR100 from batchflow.models.tf import ResNet18 ``` BATCH_SIZE might be increased for modern GPUs with lots of memory (4GB and higher). ``` BATCH_SIZE = 64 ``` # Create a dataset [MNIST](http://yann.lecun.com/exdb/mnist/) is a dataset of handwritten digits frequently used as a baseline for machine learning tasks. Downloading the MNIST database might take a few minutes to complete. ``` dataset = MNIST(bar=True) ``` There are also predefined CIFAR10 and CIFAR100 datasets. # Define a pipeline config A config allows you to create flexible pipelines that take parameters. For instance, if you put a model type into the config, you can run a pipeline against different models. See [a list of available models](https://analysiscenter.github.io/batchflow/intro/tf_models.html#ready-to-use-models) to choose the one that fits you best. ``` config = dict(model=ResNet18) ``` # Create a template pipeline A template pipeline is not linked to any dataset. It's just an abstract sequence of actions, so it cannot be executed, but it serves as a convenient building block.
``` train_template = (Pipeline() .init_variable('loss_history', []) .init_model('conv_nn', C('model'), 'dynamic', config={'inputs/images/shape': B.image_shape, 'inputs/labels/classes': D.num_classes, 'initial_block/inputs': 'images'}) .to_array() .train_model('conv_nn', fetches='loss', images=B.images, labels=B.labels, save_to=V('loss_history', mode='a')) ) ``` # Train the model Apply a dataset and a config to a template pipeline to create a runnable pipeline: ``` train_pipeline = (train_template << dataset.train) << config ``` Run the pipeline (it might take from a few minutes to a few hours depending on your hardware) ``` train_pipeline.run(BATCH_SIZE, shuffle=True, n_epochs=1, drop_last=True, bar=True, prefetch=1) ``` Note that the progress bar often increments by 2 at a time - that's prefetch in action. It does not give much here, though, since almost all time is spent in model training which is performed under a thread-lock one batch after another without any parallelism (otherwise the model would not learn anything as different batches would rewrite one another's model weights updates). ``` plt.figure(figsize=(15, 5)) plt.plot(train_pipeline.v('loss_history')) plt.xlabel("Iterations"), plt.ylabel("Loss") plt.show() ``` # Test the model It is much faster than training, but if you don't have GPU it would take some patience. 
``` test_pipeline = (dataset.test.p .import_model('conv_nn', train_pipeline) .init_variable('predictions') .init_variable('metrics') .to_array() .predict_model('conv_nn', fetches='predictions', images=B.images, save_to=V('predictions')) .gather_metrics('class', targets=B.labels, predictions=V('predictions'), fmt='logits', axis=-1, save_to=V('metrics', mode='a')) .run(BATCH_SIZE, shuffle=True, n_epochs=1, drop_last=False, bar=True) ) ``` Let's get the accumulated [metrics information](https://analysiscenter.github.io/batchflow/intro/models.html#model-metrics) ``` metrics = test_pipeline.get_variable('metrics') ``` Or a shorter version: `metrics = test_pipeline.v('metrics')` Now we can easily calculate any metrics we need ``` metrics.evaluate('accuracy') metrics.evaluate(['false_positive_rate', 'false_negative_rate'], multiclass=None) ``` # Save the model After training the model, you may want to save it. This is easy to do: ``` train_pipeline.save_model_now('conv_nn', path='path/to/save') ``` ## What's next? See [the image augmentation tutorial](./06_image_augmentation.ipynb) or return to the [table of contents](./00_description.ipynb).
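The "template first, bind data later" idea used throughout this tutorial can be illustrated without BatchFlow at all. The following is a toy sketch of the concept, not the BatchFlow API: a template records actions and executes them only once concrete data is attached with `<<`.

```python
class TemplatePipeline:
    """Toy stand-in for the template-pipeline idea (not the BatchFlow API)."""
    def __init__(self, actions=()):
        self.actions = list(actions)

    def then(self, fn):
        # adding an action returns a new template; nothing is executed yet
        return TemplatePipeline(self.actions + [fn])

    def __lshift__(self, data):
        # "template << data" binds concrete data and runs the recorded actions
        for fn in self.actions:
            data = fn(data)
        return data

template = TemplatePipeline().then(lambda xs: [x * 2 for x in xs]).then(sum)
print(template << [1, 2, 3])   # 12
print(template << [10])        # 20 -- the same template reused on other data
```

Because templates are immutable here, the same abstract sequence of actions can be applied to the training set, the test set, or any other data source, which mirrors why BatchFlow separates templates from runnable pipelines.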
``` from __future__ import print_function import keras from keras.datasets import cifar10 from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten, Lambda from keras.layers import Conv2D, MaxPooling2D from keras.layers.normalization import BatchNormalization as BN from keras.layers import GaussianNoise as GN from keras.optimizers import SGD from keras.models import Model import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import os from keras.callbacks import LearningRateScheduler as LRS from keras.preprocessing.image import ImageDataGenerator #### LOAD AND TRANSFORM ## Download: ONLY ONCE! #os.system('wget https://www.dropbox.com/s/sakfqp6o8pbgasm/data.tgz') #os.system('tar xvzf data.tgz') ##### # Load x_train = np.load('x_train.npy') x_test = np.load('x_test.npy') y_train = np.load('y_train.npy') y_test = np.load('y_test.npy') # Stats print(x_train.shape) print(y_train.shape) print(x_test.shape) print(y_test.shape) ## View some images plt.imshow(x_train[3,:,:,: ] ) plt.show() ## Transforms x_train = x_train.astype('float32') x_test = x_test.astype('float32') y_train = y_train.astype('float32') y_test = y_test.astype('float32') x_train = keras.applications.vgg16.preprocess_input(x_train) x_test = keras.applications.vgg16.preprocess_input(x_test) batch_size = 32 num_classes = 20 epochs = 150 ## Labels y_train=y_train-1 y_test=y_test-1 y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) datagen = ImageDataGenerator( width_shift_range=0.3, height_shift_range=0.3, rotation_range=45, zoom_range=[1.0,1.2], horizontal_flip=True ) vgg16 = keras.applications.VGG16( include_top=False, weights="imagenet", input_shape=(250, 250, 3), classes=1000, classifier_activation="softmax" ) vgg16.summary() ############################# ### BILINEAR #### ############################# # DEFINE A LEARNING RATE SCHEDULER def scheduler(epoch): if epoch < 25: return 
.1 elif epoch < 50: return 0.01 else: return 0.001 def scheduler_fine(epoch): if epoch < 25: return .0001 elif epoch < 50: return 0.00001 else: return 0.000001 set_lr = LRS(scheduler) set_lr_fine = LRS(scheduler_fine) def outer_product(x): phi_I = tf.einsum('ijkm,ijkn->imn',x[0],x[1]) # Einstein Notation [batch,31,31,depth] x [batch,31,31,depth] -> [batch,depth,depth] phi_I = tf.reshape(phi_I,[-1,x[0].shape[3]**2]) # Reshape from [batch_size,depth,depth] to [batch_size, depth*depth] phi_I = tf.divide(phi_I,x[0].shape[1]**2) # Divide by feature map size [sizexsize] y_ssqrt = tf.multiply(tf.sign(phi_I),tf.sqrt(tf.abs(phi_I)+1e-12)) # Take signed square root of phi_I z_l2 = tf.nn.l2_normalize(y_ssqrt) # Apply l2 normalization return z_l2 conv = vgg16.get_layer('block4_pool') d1 = Dropout(0.5)(conv.output) ## Why?? d2 = Dropout(0.5)(conv.output) ## Why?? x = Lambda(outer_product, name='outer_product')([d1,d2]) predictions = Dense(num_classes, activation='softmax', name='predictions')(x) model = Model(inputs=vgg16.input, outputs=predictions) model.summary() vgg16.trainable = False model.compile( loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'] ) ## TRAINING with DA and LRA history=model.fit_generator( datagen.flow(x_train, y_train,batch_size=batch_size), steps_per_epoch=len(x_train) / batch_size, epochs=10, validation_data=(x_test, y_test), callbacks=[set_lr], verbose=1 ) vgg16.trainable = True model.compile( loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'] ) ## TRAINING with DA and LRA history=model.fit_generator( datagen.flow(x_train, y_train,batch_size=batch_size), steps_per_epoch=len(x_train) / batch_size, epochs=epochs-10, validation_data=(x_test, y_test), callbacks=[set_lr_fine], verbose=1 ) ```
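The `outer_product` Lambda above implements bilinear (outer-product) pooling. A NumPy sketch of the same computation on a small hypothetical feature map may make the shapes easier to follow; for square feature maps, dividing by `h * w` matches the `shape[1]**2` division in the Keras code.

```python
import numpy as np

def bilinear_pool(x1, x2):
    """Outer-product pooling as in the Lambda layer above:
    (batch, h, w, d) x (batch, h, w, d) -> (batch, d*d)."""
    b, h, w, d = x1.shape
    phi = np.einsum('ijkm,ijkn->imn', x1, x2)        # sum outer products over locations
    phi = phi.reshape(b, d * d) / (h * w)            # flatten, average over h*w positions
    y = np.sign(phi) * np.sqrt(np.abs(phi) + 1e-12)  # signed square root
    return y / np.linalg.norm(y, axis=1, keepdims=True)  # per-sample l2 normalization

feats = np.random.rand(2, 4, 4, 8)   # hypothetical block4_pool-like activations
pooled = bilinear_pool(feats, feats)
print(pooled.shape)                  # (2, 64)
```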
``` %pip install bs4 %pip install lxml %pip install nltk %pip install textblob import urllib.request as ur from bs4 import BeautifulSoup ``` ## STEP 1: Read data from HTML and parse it to clean string ``` #We would extract the abstract from this HTML page article articleURL = "https://www.washingtonpost.com/news/the-switch/wp/2016/10/18/the-pentagons-massive-new-telescope-is-designed-to-track-space-junk-and-watch-out-for-killer-asteroids/" #HTML contains extra tags in a tree like structure page = ur.urlopen(articleURL).read().decode('utf8','ignore') soup = BeautifulSoup(page,"lxml") soup #We want the article or base text only soup.find('article') #Remove the article tags and get the plain text soup.find('article').text #Take all the articles from the page using find_all and combine together into a single string with a " " text = ' '.join(map(lambda p: p.text, soup.find_all('article'))) text #The encode() method encodes the string using the specified encoding. Convert back the encoded version to string by using decode() #Replace special encoded characters with a '?', further replace question mark with a blank char to get plain text from encoded article text. 
text.encode('ascii', errors='replace').decode('utf8').replace("?"," ") #All the above steps encapsulated - read and parse data from HTML text import urllib.request as ur from bs4 import BeautifulSoup def getTextWaPo(url): page = ur.urlopen(url).read().decode('utf8') soup = BeautifulSoup(page,"lxml") text = ' '.join(map(lambda p: p.text, soup.find_all('article'))) return text.encode('ascii', errors='replace').decode('utf8').replace("?"," ") #calling function articleURL= "https://www.washingtonpost.com/news/the-switch/wp/2016/10/18/the-pentagons-massive-new-telescope-is-designed-to-track-space-junk-and-watch-out-for-killer-asteroids/" text = getTextWaPo(articleURL) text ``` ## STEP 2: Extract summary ``` import nltk from nltk.tokenize import sent_tokenize,word_tokenize from nltk.corpus import stopwords from string import punctuation #Strip all sentences in the text # A sentence is identified by a period or full stop. A space has to follow the full stop, else both sentences would be treated as a single sentence nltk.download('punkt') sents = sent_tokenize(text) sents #Strip all words/tokens in the text word_sent = word_tokenize(text.lower()) word_sent #Get all English stop words and punctuation marks nltk.download('stopwords') _stopwords = set(stopwords.words('english') + list(punctuation)) _stopwords #Filter stop words from our list of words in text word_sent=[word for word in word_sent if word not in _stopwords] word_sent #Use a built-in function to determine the frequency, i.e. the number of times each word occurs in the text #The higher the frequency, the more important the word from nltk.probability import FreqDist freq = FreqDist(word_sent) freq #The nlargest() function of the Python module heapq returns the specified number of largest elements from a Python iterable like a list, tuple and others.
#heapq.nlargest(n, iterable, key=sorting_key; here we use the dict.get function to fetch the value (frequency) for each word key from the key:value pair) from heapq import nlargest nlargest(10, freq, key=freq.get) #To check if these most important words match with the central theme of the article 'Space asteroid attack' #Now that we have the word importance, we can calculate the significance score for each sentence #Word_Imp=Frequency of word in corpus #Sentence_Significance_score=SUM(Word_Imp for Words in the sentence) from collections import defaultdict ranking = defaultdict(int) for i,sent in enumerate(sents): for w in word_tokenize(sent.lower()): if w in freq: ranking[i] += freq[w] ranking #{Index of sentence : Sentence significance score} #Top 4 most important sentences - those with the maximum sentence significance score sents_idx = nlargest(4, ranking, key=ranking.get) sents_idx #Get the sentences from the top indices summary_1=[sents[j] for j in sorted(sents_idx)] summary_1 #Concatenate the most important sentences to form the summary summary="" for i in range(len(summary_1)): summary=summary + summary_1[i] summary def summarize(text, n): sents = sent_tokenize(text) assert n <= len(sents) #Check that the sentences list has at least n sentences word_sent = word_tokenize(text.lower()) _stopwords = set(stopwords.words('english') + list(punctuation)) word_sent=[word for word in word_sent if word not in _stopwords] freq = FreqDist(word_sent) ranking = defaultdict(int) for i,sent in enumerate(sents): for w in word_tokenize(sent.lower()): if w in freq: ranking[i] += freq[w] sents_idx = nlargest(n, ranking, key=ranking.get) summary_1= [sents[j] for j in sorted(sents_idx)] summary="" for i in range(len(summary_1)): summary=summary + summary_1[i] return summary #calling summarize(text,4) ```
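The frequency-scoring idea behind `summarize` can be sketched with the standard library alone, without NLTK. The stop-word list and the sample text below are toy illustrations, not from the article:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "and", "of", "to", "in", "it", "was"}  # toy list

def summarize_simple(text, n):
    """Score each sentence by the summed corpus frequency of its non-stop words,
    then return the top-n sentences in their original order."""
    sents = re.split(r'(?<=[.!?])\s+', text.strip())
    words = [w for w in re.findall(r'[a-z]+', text.lower()) if w not in STOPWORDS]
    freq = Counter(words)  # Counter returns 0 for words it has not seen
    scores = {i: sum(freq[w] for w in re.findall(r'[a-z]+', s.lower()))
              for i, s in enumerate(sents)}
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:n])
    return ' '.join(sents[i] for i in top)

text = ("Asteroids threaten satellites. The telescope tracks space junk and asteroids. "
        "Lunch was nice. Space junk orbits fast.")
print(summarize_simple(text, 2))
```

The repeated content words ("asteroids", "space", "junk") pull the two on-topic sentences to the top, while the off-topic "Lunch was nice." scores lowest and is dropped.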
## Import and export of data from different sources in GRASS GIS GRASS GIS Location can contain data only in one coordinate reference system (CRS) in order to have full control over reprojection and avoid issues coming from on-the-fly reprojection. When starting a project, decide which CRS you will use. Create a new Location using Location Wizard (accessible from GRASS GIS start-up page). Specify desired CRS either by providing EPSG code (can be found e.g. at [epsg.io](http://epsg.io/)) or by providing a georeferenced file (such as Shapefile) which has the CRS you want. ### Importing data in common vector and raster formats For basic import of raster and vector files, use _r.import_ and _v.import_, respectively. These modules will reproject the input data if necessary. If the input data's CRS matches the Location's CRS, we can use _r.in.gdal_ or _v.in.ogr_ for importing raster and vector. Alternatively, you can use a two-step approach for the cases when the data's CRS doesn't match the Location's CRS. First create a new temporary Location based on the CRS of the data you want to import, switch to this Location and then use _r.in.gdal_ or _v.in.ogr_ to import raster and vector data, respectively. Then switch to the Location of your project and use _r.proj_ and _v.proj_ to reproject data from the temporary Location to your project Location. This approach is necessary for formats which are not supported by _r.import_ and _v.import_ modules. Modules _r.proj_ and _v.proj_ can be also used for bringing raster and vector maps from one Location to another. Modules _r.in.gdal_ and _v.in.ogr_ check whether the CRS of the imported data matches the Location's CRS. Sometimes the CRS of imported data is not specified correctly or is missing and therefore import fails. If you know that the actual CRS matches the Location's CRS, it is appropriate to use _r.in.gdal_'s or _v.in.ogr_'s -o flag to overwrite the projection check and import the data as they are. 
If you zoom to raster or vector in GRASS GUI and it does not fit with the rest of the data, it means that it was imported with wrong projection information (or with the -o flag when the coordinates in fact don't match). You can use r.info and v.info to get the information about the extents of (already imported) rasters and vectors. ### Importing CSV and other ASCII data There are many formats of plain text files. In the context of GIS we usually talk about ASCII formats and CSV files. CSV files usually hold only coordinates and sometimes attributes of points. These files usually don't have CRS information attached to them, so we must be very careful and import the data only if the coordinates are in the CRS of the Location we are using. Let's create a CSV file called `points.txt` using a text editor (Notepad++, TextEdit, MS Notepad), for example: ``` 637803.6,223804.7 641835.5,223761.2 643056.0,217419.0 ``` The coordinates we entered are in EPSG:3358 and we assume that the GRASS Location is using this CRS as well. This file can be imported to GRASS GIS using: ``` !v.in.ascii input=points.txt output=test_ascii separator=comma x=1 y=2 ``` Notice, we have to specify the column number where the X and Y (optionally Z) coordinates are stored. In this example, X coordinates are in the first column Y in the second one. Don't forget to specify correct column delimiter. If the data are not in the CRS we are using, create a new Location with matching CRS, import the data and use _v.proj_ as described above. ### Importing lidar point clouds Lidar point clouds can be imported in two ways: as raster maps using binning or as vector points. However, one must explore the dataset first. 
On the command line, we can check the projection information and other metadata about a LAS file using the _lasinfo_ tool: ``` !lasinfo tile_0793_016_spm.las ``` The _r.in.lidar_ module can be used to scan the spatial extent of the dataset: ``` !r.in.lidar input=tile_0793_016_spm.las -s ``` #### Binning Before creating the actual elevation raster, we need to decide the extent and the resolution we will use for the binning. We can use the _r.in.lidar_ module for that by setting the resolution directly and using the -e flag to use the dataset extent instead of taking it from the computational region. We are interested in the density of points, so we use `method=n`: ``` !r.in.lidar input=tile_0793_016_spm.las output=tile_0793_016_n method=n -e resolution=2 ``` After determining the optimal resolution for binning and the desired area, we can use _g.region_ to set the computational region. _r.in.lidar_ without the additional parameters above will create a raster map from points using binning with resolution and extent taken from the computational region: ``` !r.in.lidar input=tile_0793_016_spm.las output=tile_0793_016 ``` #### Interpolation When the result of binning contains a lot of NULL cells or when it is not smooth enough for further analysis, we can import the point cloud as vector points and interpolate a raster. Supposing that we have already determined the desired extent and resolution (using _r.in.lidar_ as described above), we can use _v.in.lidar_ for the import (with a class filter to get only ground points): ``` !v.in.lidar input=tile_0793_016_spm.las output=tile_0793_016 class=2 -r -t -b ``` This imports only the points of class 2 (ground) in the current computational region, without the attribute table and without building the topology. Then we follow with interpolation using, e.g.
_v.surf.rst_ module: ``` !v.surf.rst input=tile_0793_016 elevation=tile_0793_016_elevation slope=tile_0793_016_slope aspect=tile_0793_016_aspect npmin=100 tension=20 smooth=1 ``` #### Importing data in different CRS In case the CRS of the file doesn't match the CRS used in the GRASS Location, reprojection can be done before importing using _las2las_ tool. The following example command is for reprojecting tiles in NAD83/North Carolina in feet (EPSG:2264) into NAD83/North Carolina in meters (EPSG:3358): ``` !las2las --a_srs=EPSG:2264 --t_srs=EPSG:3358 -i input_spf.las -o output_spm.las ``` #### Importing data with broken projection information Modules _r.in.lidar_ and _v.in.lidar_ check whether the CRS of the imported data matches the Location's CRS. Sometimes the CRS of imported data is not specified correctly or is missing and therefore import fails. If you know that the actual CRS matches the Location's CRS, it is appropriate to use _r.in.lidar_'s or _v.in.lidar_'s -o flag to overwrite the projection check and import the data as they are. ``` !r.in.lidar input=tile_0793_016_spm.las -s -o ``` ### Transferring GRASS GIS data between two computers If two GRASS GIS users want to exchange data, they can use GRASS GIS native exchange format -- _packed map_. A vector or raster map can be exported from a GRASS Location in this format using _v.pack_ or _r.pack_ respectively. This format preserves everything for a map in a way as it is stored in a GRASS Database. _Projection of the source and target GRASS Locations must be the same._ If GRASS GIS users wish to exchange GRASS Mapsets, they can do so as long as the source and target GRASS Locations have the same projection. The PERMANENT Mapset should not be usually exchanged as it is a crucial part of the given Location. 
Locations can be easily transferred in between GRASS Database directories on different computers as they carry all data and projection information within them and the storage format used in the background is platform independent. Locations as well as whole GRASS Databases can be copied and moved in the same way as any other directories on the computer. ### Further resources * [GRASS GIS manual](http://grass.osgeo.org/grass72/manuals) * [About GRASS GIS Database structure](https://grass.osgeo.org/grass72/manuals/grass_database.html) * [GRASS GIS for ArcGIS users](https://grasswiki.osgeo.org/wiki/GRASS_GIS_for_ArcGIS_users) * [epsg.io](http://epsg.io/) (Repository of EPSG codes) ``` # end the GRASS session os.remove(rcfile) ```
``` from Bio import pairwise2 as pw from Bio import Seq from Bio import SeqIO import Bio from Bio.Alphabet import IUPAC import numpy as np import pandas as pd import re import time from functools import wraps def fn_timer(function): @wraps(function) def function_timer(*args, **kwargs): t0 = time.time() result = function(*args, **kwargs) t1 = time.time() print ("Total time running %s: %s seconds" % (function.__name__, str(t1-t0)) ) return result return function_timer uniprot = list(SeqIO.parse('uniprot-proteome_human.fasta','fasta')) for seq in uniprot: # x = seq.id seq.id = x.split("|")[1] ids = [seq.id for seq in uniprot] names = [seq.name for seq in uniprot] uni_dict = {} for i in uniprot: uni_dict[i.id] =i grist = pd.read_table('gristone_positive_data.txt') grist.columns grist.loc[[1,3,5]] #@fn_timer def calc_identity_score(seq1,seq2,gap=-0.5,extend=-0.1): """ return seq1 seq2 pairwise identity score seq1 is positive seq1 seq2 is string Bio.pairwise2.format_alignment output: MPKGKKAKG------ ||||||| --KGKKAKGKKVAPA Score=7 alignment output: [('MPKGKKAKG------', '--KGKKAKGKKVAPA', 7.0, 0, 15)] score = ali[0][2] = 7 """ ali = pw.align.globalxs(seq1,seq2,gap,extend,score_only=True) # gap penalty = -0.5 in case of cak caak score =3 return ali/min(len(seq1),len(seq2)) # divide by the shorter sequence's length, to guard against substrings scoring high #@fn_timer def window_generator(seq, window_lenth=7,step=1): """ return list of seq window slide seq is string """ if len(seq) >= window_lenth: return [seq[i:i+window_lenth] for i in range(0,len(seq)-window_lenth+1,step)] else: return [] window_generator('123',3) from Bio.pairwise2 import format_alignment def flat(nums): res = [] for i in nums: if isinstance(i, list): res.extend(flat(i)) else: res.append(i) return res #@fn_timer def slide_with_flank(seq,full,step=1,up_flank=6,down_flank=6,flags=0): """ return window slide result as list for a full str given potential sub seq and its flank removed seq=abc, full = 01234abc45678abc1234 up=1 down=1 step =1 result is ['012', '123',
'567', '234'] Generate sliding-window sub-fragments of the string full, with seq itself and its up/downstream flanking regions removed """ res = [] window_len = len(seq) coords = [i.span() for i in re.finditer(seq,full,flags)] # = search_all # handle the head and tail cases if len(coords) == 0: res.append(window_generator(full,window_lenth=window_len,step=step)) elif len(coords) == 1: if (coords[0][0]-up_flank) >= 0: res.append(window_generator(full[0:coords[0][0]-up_flank], window_lenth=window_len,step=step)) if (coords[0][1]+down_flank) <= len(full): res.append(window_generator(full[coords[0][1]+up_flank:], window_lenth=window_len,step=step)) else: # len(coords) >1 if (coords[0][0]-up_flank) >= 0: res.append(window_generator(full[0:coords[0][0]-up_flank], window_lenth=window_len,step=step)) for i in range(1,len(coords)): ## handle the region between consecutive matches if coords[i][0] - coords[i-1][1] > up_flank+down_flank+window_len: res.append(window_generator(full[coords[i-1][1]+up_flank:coords[i][0]-down_flank], window_lenth=window_len,step=step)) if (coords[-1][1]+down_flank) <= len(full): res.append(window_generator(full[coords[-1][1]+down_flank:], window_lenth=window_len,step=step)) return flat(res) slide_with_flank('abc','01234abc456raa78abc7779',up_flank=2,down_flank=2) ###### @fn_timer def filter_with_identity_affinity(seq,full,identity_cutoff=0.5,step=1,up=6,down=6,flags=0): filtered = {} slides = slide_with_flank(seq,full,step=step,up_flank=up,down_flank=down,flags=flags) for s in slides: identity_score = calc_identity_score(seq,s) if identity_score <= identity_cutoff: #if s in filtered.keys(): # already scored; a larger stored value indicates a larger fragment filtered[s] = identity_score return filtered grist['neg1'] =np.full_like(grist['peptide'],'o') t0 = time.process_time() for i in range(0,len(grist)): #for i in range(0,10000): g = grist.iloc[i] if g['uniport_id'] in ids: pep = str(g['left_flanking']) + str(g['peptide']) + str(g['right_flanking']) full = str(uni_dict[g['uniport_id']].seq) res_local = filter_with_identity_affinity(pep,full,step=10) # res_full = {} # for win in res_local.keys(): # filter out windows whose identity to the positive set is > 0.5 #
for pep in np.random.choice(grist['peptide'],100):#.remove(g['peptide']): # score = calc_identity_score(win,pep) # if score <0.5: # res_full[win] = score # else: # break # res_full_key = sorted(res_full.keys(),key=lambda x:(res_local[x],res_full[x])) k=sorted(res_local.keys(),key=lambda x:res_local[x]) if len(k): grist.loc[i,'neg1'] = k[0] print(time.process_time()-t0) #grist[grist['neg1'] != 'o'].to_csv('testdata0401.txt',index = False) grist[['left_flanking','right_flanking']] a=grist['neg1'][0] #gri grist.loc? a grist.loc? grist.head() persons={'ZhangSan':'male', 'LiSi':'male', 'WangHong':'female'} # find all males males = filter(lambda x:'male'== x[1], persons.items()) for (key,value) in males: print('%s : %s' % (key,value)) grist.to_csv? %%writefile train.py ```
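Two of the helpers above are easy to pin down in plain Python. A simplified sketch: `windows` mirrors `window_generator`, and `identity` is an ungapped stand-in for `calc_identity_score` (the real code uses a gapped `pairwise2` alignment, so scores can differ when gaps help):

```python
def windows(seq, k=7, step=1):
    """All length-k substrings of seq, advancing by `step`
    (empty list if seq is shorter than k), mirroring window_generator."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, step)]

def identity(a, b):
    """Ungapped identity: matching positions divided by the shorter length.
    Dividing by the shorter length guards against substrings scoring high."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / min(len(a), len(b))

print(windows('MPKGKKAKG', k=7))   # ['MPKGKKA', 'PKGKKAK', 'KGKKAKG']
print(identity('CAK', 'CAA'))      # 2 of 3 positions match
```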
# Just-in-time Compilation with [Numba](http://numba.pydata.org/) ## Numba is a JIT compiler which translates Python code into native machine code * Using special decorators on Python functions Numba compiles them on the fly to machine code using LLVM * Numba is compatible with Numpy arrays, which are the basis of many scientific packages in Python * It enables parallelization of machine code so that all the CPU cores are used ``` import math import numpy as np import matplotlib.pyplot as plt import numba ``` ## Using `numba.jit` Numba offers `jit`, which can be used to decorate Python functions. ``` def is_prime(n): if n <= 1: raise ArithmeticError('"%s" <= 1' % n) if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True n = np.random.randint(2, 10000000, dtype=np.int64) # Get a random integer between 2 and 10000000 print(n, is_prime(n)) #is_prime(1) @numba.jit def is_prime_jitted(n): if n <= 1: raise ArithmeticError('"%s" <= 1' % n) if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 100000, dtype=np.int64, size=10000) %time p1 = [is_prime(n) for n in numbers] %time p2 = [is_prime_jitted(n) for n in numbers] ``` ## Using `numba.jit` with `nopython=True` ``` @numba.jit(nopython=True) def is_prime_njitted(n): if n <= 1: raise ArithmeticError('"%s" <= 1' % n) if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 100000, dtype=np.int64, size=1000) %time p1 = [is_prime_jitted(n) for n in numbers] %time p2 = [is_prime_njitted(n) for n in numbers] ``` ## Using ` @numba.jit(nopython=True)` is equivalent to using ` @numba.njit` ``` @numba.njit def is_prime_njitted(n): if n <=
1: raise ArithmeticError('n <= 1') if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 100000, dtype=np.int64, size=1000) %time p = [is_prime_jitted(n) for n in numbers] %time p = [is_prime_njitted(n) for n in numbers] ``` ## Use `cache=True` to cache the compiled function ``` import math from numba import njit @njit(cache=True) def is_prime_njitted_cached(n): if n <= 1: raise ArithmeticError('n <= 1') if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 100000, dtype=np.int64, size=1000) %time p = [is_prime_njitted(n) for n in numbers] %time p = [is_prime_njitted_cached(n) for n in numbers] ``` ## Vector Triad Benchmark Python vs Numpy vs Numba ``` from timeit import default_timer as timer def vecTriad(a, b, c, d): for j in range(a.shape[0]): a[j] = b[j] + c[j] * d[j] def vecTriadNumpy(a, b, c, d): a[:] = b + c * d @numba.njit() def vecTriadNumba(a, b, c, d): for j in range(a.shape[0]): a[j] = b[j] + c[j] * d[j] # Initialize Vectors n = 10000 # Vector size r = 100 # Iterations a = np.zeros(n, dtype=np.float64) b = np.empty_like(a) b[:] = 1.0 c = np.empty_like(a) c[:] = 1.0 d = np.empty_like(a) d[:] = 1.0 # Python version start = timer() for i in range(r): vecTriad(a, b, c, d) end = timer() mflops = 2.0 * r * n / ((end - start) * 1.0e6) print(f'Python: Mflops/sec: {mflops}') # Numpy version start = timer() for i in range(r): vecTriadNumpy(a, b, c, d) end = timer() mflops = 2.0 * r * n / ((end - start) * 1.0e6) print(f'Numpy: Mflops/sec: {mflops}') # Numba version vecTriadNumba(a, b, c, d) # Run once to avoid measuring the compilation overhead start = timer() for i in range(r): vecTriadNumba(a, b, c, d) end = timer() mflops = 2.0 * r * n / ((end - start) * 1.0e6) 
print(f'Numba: Mflops/sec: {mflops}') ``` ## Eager compilation using function signatures ``` import math from numba import njit @njit(['boolean(int64)', 'boolean(int32)']) def is_prime_njitted_eager(n): if n <= 1: raise ArithmeticError('n <= 1') if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 1000000, dtype=np.int64, size=1000) # Run twice aft %time p1 = [is_prime_njitted_eager(n) for n in numbers] %time p2 = [is_prime_njitted_eager(n) for n in numbers] p1 = [is_prime_njitted_eager(n) for n in numbers.astype(np.int32)] #p2 = [is_prime_njitted_eager(n) for n in numbers.astype(np.float64)] ``` ## Calculating and plotting the [Mandelbrot set](https://en.wikipedia.org/wiki/Mandelbrot_set) ``` X, Y = np.meshgrid(np.linspace(-2.0, 1, 1000), np.linspace(-1.0, 1.0, 1000)) def mandelbrot(X, Y, itermax): mandel = np.empty(shape=X.shape, dtype=np.int32) for i in range(X.shape[0]): for j in range(X.shape[1]): it = 0 cx = X[i, j] cy = Y[i, j] x = 0.0 y = 0.0 while x * x + y * y < 4.0 and it < itermax: x, y = x * x - y * y + cx, 2.0 * x * y + cy it += 1 mandel[i, j] = it return mandel fig = plt.figure(figsize=(15, 10)) ax = fig.add_subplot(111) %time m = mandelbrot(X, Y, 100) ax.imshow(np.log(1 + m), extent=[-2.0, 1, -1.0, 1.0]); ax.set_aspect('equal') ax.set_ylabel('Im[c]') ax.set_xlabel('Re[c]'); @numba.njit(parallel=True) def mandelbrot_jitted(X, Y, radius2, itermax): mandel = np.empty(shape=X.shape, dtype=np.int32) for i in numba.prange(X.shape[0]): for j in range(X.shape[1]): it = 0 cx = X[i, j] cy = Y[i, j] x = cx y = cy while x * x + y * y < 4.0 and it < itermax: x, y = x * x - y * y + cx, 2.0 * x * y + cy it += 1 mandel[i, j] = it return mandel fig = plt.figure(figsize=(15, 10)) ax = fig.add_subplot(111) %time m = mandelbrot_jitted(X, Y, 4.0, 100) ax.imshow(np.log(1 + m), extent=[-2.0, 1, -1.0, 1.0]); 
ax.set_aspect('equal')
ax.set_ylabel('Im[c]')
ax.set_xlabel('Re[c]');
```

### Getting parallelization information

```
mandelbrot_jitted.parallel_diagnostics(level=3)
```

## Creating `ufuncs` using `numba.vectorize`

```
from math import sin
from numba import float64, int64

def my_numpy_sin(a, b):
    return np.sin(a) + np.sin(b)

@np.vectorize
def my_sin(a, b):
    return sin(a) + sin(b)

@numba.vectorize([float64(float64, float64), int64(int64, int64)], target='parallel')
def my_sin_numba(a, b):
    return np.sin(a) + np.sin(b)

x = np.random.randint(0, 100, size=9000000)
y = np.random.randint(0, 100, size=9000000)

%time _ = my_numpy_sin(x, y)
%time _ = my_sin(x, y)
%time _ = my_sin_numba(x, y)
```

### Vectorize the testing of prime numbers

```
@numba.vectorize('boolean(int64)')
def is_prime_v(n):
    if n <= 1:
        raise ArithmeticError('n <= 1')
    if n == 2 or n == 3:
        return True
    elif n % 2 == 0:
        return False
    else:
        n_sqrt = math.ceil(math.sqrt(n))
        for i in range(3, n_sqrt + 1):
            if n % i == 0:
                return False
    return True

numbers = np.random.randint(2, 10000000000, dtype=np.int64, size=100000)
%time p = is_prime_v(numbers)
```

### Parallelize the vectorized function

```
@numba.vectorize(['boolean(int64)', 'boolean(int32)'], target='parallel')
def is_prime_vp(n):
    if n <= 1:
        raise ArithmeticError('n <= 1')
    if n == 2 or n == 3:
        return True
    elif n % 2 == 0:
        return False
    else:
        n_sqrt = math.ceil(math.sqrt(n))
        for i in range(3, n_sqrt + 1):
            if n % i == 0:
                return False
    return True

numbers = np.random.randint(2, 10000000000, dtype=np.int64, size=100000)
%time p1 = is_prime_v(numbers)
%time p2 = is_prime_vp(numbers)

# Print the ten largest primes between 1 and 10 million
numbers = np.arange(1000000, 10000001, dtype=np.int32)
%time p1 = is_prime_vp(numbers)

primes = numbers[p1]
for n in primes[-10:]:
    print(n)
```
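The timing cells above compare several Numba variants of the same trial-division primality test, but none of them check that the algorithm itself is correct. As a quick sanity check independent of Numba, here is a pure-Python reference version — `is_prime_ref` is a new helper introduced only for this sketch, not part of the notebook; with Numba installed you could compare its results elementwise against `is_prime_v(numbers)`.

```python
import math

def is_prime_ref(n):
    """Plain-Python reference: trial division by odd numbers up to sqrt(n)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    for i in range(3, math.isqrt(n) + 1, 2):
        if n % i == 0:
            return False
    return True

# There are exactly 168 primes below 1000 -- a classic cross-check value.
reference = [is_prime_ref(n) for n in range(2, 1000)]
print(sum(reference), "primes below 1000")  # → 168 primes below 1000
```

`math.isqrt` (Python ≥ 3.8) avoids the floating-point `ceil(sqrt(n))` dance that caused the off-by-one in the original loop bound.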
``` import sys sys.path.append('../input/shopee-competition-utils') sys.path.insert(0,'../input/pytorch-image-models') import numpy as np import pandas as pd import torch from torch import nn from torch.nn import Parameter from torch.nn import functional as F from torch.utils.data import Dataset, DataLoader import albumentations from albumentations.pytorch.transforms import ToTensorV2 from custom_scheduler import ShopeeScheduler from custom_activation import replace_activations, Mish from custom_optimizer import Ranger import math import cv2 import timm import os import random import gc from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import GroupKFold from sklearn.neighbors import NearestNeighbors from tqdm.notebook import tqdm class CFG: DATA_DIR = '../input/shopee-product-matching/train_images' TRAIN_CSV = '../input/shopee-product-matching/train.csv' # data augmentation IMG_SIZE = 512 MEAN = [0.485, 0.456, 0.406] STD = [0.229, 0.224, 0.225] SEED = 2021 # data split N_SPLITS = 5 TEST_FOLD = 0 VALID_FOLD = 1 EPOCHS = 8 BATCH_SIZE = 8 NUM_WORKERS = 4 DEVICE = 'cuda:0' CLASSES = 6609 SCALE = 30 MARGINS = [0.5,0.6,0.7,0.8,0.9] MARGIN = 0.5 BEST_THRESHOLD = 0.19 BEST_THRESHOLD_MIN2 = 0.225 MODEL_NAME = 'resnet50' MODEL_NAMES = ['resnet50','resnext50_32x4d','densenet121','efficientnet_b3','eca_nfnet_l0'] LOSS_MODULE = 'arc' LOSS_MODULES = ['arc','curricular'] USE_ARCFACE = True MODEL_PATH = f'{MODEL_NAME}_{LOSS_MODULE}_face_epoch_8_bs_8_margin_{MARGIN}.pt' FC_DIM = 512 SCHEDULER_PARAMS = { "lr_start": 1e-5, "lr_max": 1e-5 * 32, "lr_min": 1e-6, "lr_ramp_ep": 5, "lr_sus_ep": 0, "lr_decay": 0.8, } def seed_everything(seed): random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = True # set True to be faster seed_everything(CFG.SEED) def get_test_transforms(): return albumentations.Compose( [ 
albumentations.Resize(CFG.IMG_SIZE,CFG.IMG_SIZE,always_apply=True), albumentations.Normalize(mean=CFG.MEAN, std=CFG.STD), ToTensorV2(p=1.0) ] ) class ShopeeImageDataset(torch.utils.data.Dataset): """for validating and test """ def __init__(self,df, transform = None): self.df = df self.root_dir = CFG.DATA_DIR self.transform = transform def __len__(self): return len(self.df) def __getitem__(self,idx): row = self.df.iloc[idx] img_path = os.path.join(self.root_dir,row.image) image = cv2.imread(img_path) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) label = row.label_group if self.transform: augmented = self.transform(image=image) image = augmented['image'] return image,torch.tensor(1) ``` ## ArcMarginProduct ``` class ArcMarginProduct(nn.Module): r"""Implement of large margin arc distance: : Args: in_features: size of each input sample out_features: size of each output sample s: norm of input feature m: margin cos(theta + m) """ def __init__(self, in_features, out_features, s=30.0, m=0.50, easy_margin=False, ls_eps=0.0): print('Using Arc Face') super(ArcMarginProduct, self).__init__() self.in_features = in_features self.out_features = out_features self.s = s self.m = m self.ls_eps = ls_eps # label smoothing self.weight = Parameter(torch.FloatTensor(out_features, in_features)) nn.init.xavier_uniform_(self.weight) self.easy_margin = easy_margin self.cos_m = math.cos(m) self.sin_m = math.sin(m) self.th = math.cos(math.pi - m) self.mm = math.sin(math.pi - m) * m def forward(self, input, label): # --------------------------- cos(theta) & phi(theta) --------------------------- cosine = F.linear(F.normalize(input), F.normalize(self.weight)) sine = torch.sqrt(1.0 - torch.pow(cosine, 2)) phi = cosine * self.cos_m - sine * self.sin_m if self.easy_margin: phi = torch.where(cosine > 0, phi, cosine) else: phi = torch.where(cosine > self.th, phi, cosine - self.mm) # --------------------------- convert label to one-hot --------------------------- # one_hot = 
torch.zeros(cosine.size(), requires_grad=True, device='cuda') one_hot = torch.zeros(cosine.size(), device=CFG.DEVICE) one_hot.scatter_(1, label.view(-1, 1).long(), 1) if self.ls_eps > 0: one_hot = (1 - self.ls_eps) * one_hot + self.ls_eps / self.out_features # -------------torch.where(out_i = {x_i if condition_i else y_i) ------------- output = (one_hot * phi) + ((1.0 - one_hot) * cosine) output *= self.s return output, nn.CrossEntropyLoss()(output,label) ``` ## CurricularFace ``` ''' credit : https://github.com/HuangYG123/CurricularFace/blob/8b2f47318117995aa05490c05b455b113489917e/head/metrics.py#L70 ''' def l2_norm(input, axis = 1): norm = torch.norm(input, 2, axis, True) output = torch.div(input, norm) return output class CurricularFace(nn.Module): def __init__(self, in_features, out_features, s = 30, m = 0.50): super(CurricularFace, self).__init__() print('Using Curricular Face') self.in_features = in_features self.out_features = out_features self.m = m self.s = s self.cos_m = math.cos(m) self.sin_m = math.sin(m) self.threshold = math.cos(math.pi - m) self.mm = math.sin(math.pi - m) * m self.kernel = nn.Parameter(torch.Tensor(in_features, out_features)) self.register_buffer('t', torch.zeros(1)) nn.init.normal_(self.kernel, std=0.01) def forward(self, embbedings, label): embbedings = l2_norm(embbedings, axis = 1) kernel_norm = l2_norm(self.kernel, axis = 0) cos_theta = torch.mm(embbedings, kernel_norm) cos_theta = cos_theta.clamp(-1, 1) # for numerical stability with torch.no_grad(): origin_cos = cos_theta.clone() target_logit = cos_theta[torch.arange(0, embbedings.size(0)), label].view(-1, 1) sin_theta = torch.sqrt(1.0 - torch.pow(target_logit, 2)) cos_theta_m = target_logit * self.cos_m - sin_theta * self.sin_m #cos(target+margin) mask = cos_theta > cos_theta_m final_target_logit = torch.where(target_logit > self.threshold, cos_theta_m, target_logit - self.mm) hard_example = cos_theta[mask] with torch.no_grad(): self.t = target_logit.mean() * 0.01 + (1 - 
0.01) * self.t cos_theta[mask] = hard_example * (self.t + hard_example) cos_theta.scatter_(1, label.view(-1, 1).long(), final_target_logit) output = cos_theta * self.s return output, nn.CrossEntropyLoss()(output,label) class ShopeeModel(nn.Module): def __init__( self, n_classes = CFG.CLASSES, model_name = CFG.MODEL_NAME, fc_dim = CFG.FC_DIM, margin = CFG.MARGIN, scale = CFG.SCALE, use_fc = True, pretrained = True, use_arcface = CFG.USE_ARCFACE): super(ShopeeModel,self).__init__() print(f'Building Model Backbone for {model_name} model, margin = {margin}') self.backbone = timm.create_model(model_name, pretrained=pretrained) if 'efficientnet' in model_name: final_in_features = self.backbone.classifier.in_features self.backbone.classifier = nn.Identity() self.backbone.global_pool = nn.Identity() elif 'resnet' in model_name: final_in_features = self.backbone.fc.in_features self.backbone.fc = nn.Identity() self.backbone.global_pool = nn.Identity() elif 'resnext' in model_name: final_in_features = self.backbone.fc.in_features self.backbone.fc = nn.Identity() self.backbone.global_pool = nn.Identity() elif 'densenet' in model_name: final_in_features = self.backbone.classifier.in_features self.backbone.classifier = nn.Identity() self.backbone.global_pool = nn.Identity() elif 'nfnet' in model_name: final_in_features = self.backbone.head.fc.in_features self.backbone.head.fc = nn.Identity() self.backbone.head.global_pool = nn.Identity() self.pooling = nn.AdaptiveAvgPool2d(1) self.use_fc = use_fc if use_fc: self.dropout = nn.Dropout(p=0.0) self.fc = nn.Linear(final_in_features, fc_dim) self.bn = nn.BatchNorm1d(fc_dim) self._init_params() final_in_features = fc_dim if use_arcface: self.final = ArcMarginProduct(final_in_features, n_classes, s=scale, m=margin) else: self.final = CurricularFace(final_in_features, n_classes, s=scale, m=margin) def _init_params(self): nn.init.xavier_normal_(self.fc.weight) nn.init.constant_(self.fc.bias, 0) nn.init.constant_(self.bn.weight, 1) 
nn.init.constant_(self.bn.bias, 0) def forward(self, image, label): feature = self.extract_feat(image) logits = self.final(feature,label) return logits def extract_feat(self, x): batch_size = x.shape[0] x = self.backbone(x) x = self.pooling(x).view(batch_size, -1) if self.use_fc: x = self.dropout(x) x = self.fc(x) x = self.bn(x) return x def read_dataset(): df = pd.read_csv(CFG.TRAIN_CSV) df['matches'] = df.label_group.map(df.groupby('label_group').posting_id.agg('unique').to_dict()) df['matches'] = df['matches'].apply(lambda x: ' '.join(x)) gkf = GroupKFold(n_splits=CFG.N_SPLITS) df['fold'] = -1 for i, (train_idx, valid_idx) in enumerate(gkf.split(X=df, groups=df['label_group'])): df.loc[valid_idx, 'fold'] = i labelencoder= LabelEncoder() df['label_group'] = labelencoder.fit_transform(df['label_group']) train_df = df[df['fold']!=CFG.TEST_FOLD].reset_index(drop=True) train_df = train_df[train_df['fold']!=CFG.VALID_FOLD].reset_index(drop=True) valid_df = df[df['fold']==CFG.VALID_FOLD].reset_index(drop=True) test_df = df[df['fold']==CFG.TEST_FOLD].reset_index(drop=True) train_df['label_group'] = labelencoder.fit_transform(train_df['label_group']) return train_df, valid_df, test_df def precision_score(y_true, y_pred): y_true = y_true.apply(lambda x: set(x.split())) y_pred = y_pred.apply(lambda x: set(x.split())) intersection = np.array([len(x[0] & x[1]) for x in zip(y_true, y_pred)]) len_y_pred = y_pred.apply(lambda x: len(x)).values precision = intersection / len_y_pred return precision def recall_score(y_true, y_pred): y_true = y_true.apply(lambda x: set(x.split())) y_pred = y_pred.apply(lambda x: set(x.split())) intersection = np.array([len(x[0] & x[1]) for x in zip(y_true, y_pred)]) len_y_true = y_true.apply(lambda x: len(x)).values recall = intersection / len_y_true return recall def f1_score(y_true, y_pred): y_true = y_true.apply(lambda x: set(x.split())) y_pred = y_pred.apply(lambda x: set(x.split())) intersection = np.array([len(x[0] & x[1]) for x in 
zip(y_true, y_pred)]) len_y_pred = y_pred.apply(lambda x: len(x)).values len_y_true = y_true.apply(lambda x: len(x)).values f1 = 2 * intersection / (len_y_pred + len_y_true) return f1 def get_image_embeddings(df, model): image_dataset = ShopeeImageDataset(df,transform=get_test_transforms()) image_loader = torch.utils.data.DataLoader( image_dataset, batch_size=CFG.BATCH_SIZE, pin_memory=True, num_workers = CFG.NUM_WORKERS, drop_last=False ) embeds = [] with torch.no_grad(): for img,label in tqdm(image_loader): img = img.to(CFG.DEVICE) label = label.to(CFG.DEVICE) feat,_ = model(img,label) image_embeddings = feat.detach().cpu().numpy() embeds.append(image_embeddings) del model image_embeddings = np.concatenate(embeds) print(f'Our image embeddings shape is {image_embeddings.shape}') del embeds gc.collect() return image_embeddings def get_image_neighbors(df, embeddings, threshold = 0.2, min2 = False): nbrs = NearestNeighbors(n_neighbors = 50, metric = 'cosine') nbrs.fit(embeddings) distances, indices = nbrs.kneighbors(embeddings) predictions = [] for k in range(embeddings.shape[0]): if min2: idx = np.where(distances[k,] < CFG.BEST_THRESHOLD)[0] ids = indices[k,idx] if len(ids) <= 1 and distances[k,1] < threshold: ids = np.append(ids,indices[k,1]) else: idx = np.where(distances[k,] < threshold)[0] ids = indices[k,idx] posting_ids = ' '.join(df['posting_id'].iloc[ids].values) predictions.append(posting_ids) df['pred_matches'] = predictions df['f1'] = f1_score(df['matches'], df['pred_matches']) df['recall'] = recall_score(df['matches'], df['pred_matches']) df['precision'] = precision_score(df['matches'], df['pred_matches']) del nbrs, distances, indices gc.collect() return df def search_best_threshold(valid_df,model): search_space = np.arange(10, 50, 1) valid_embeddings = get_image_embeddings(valid_df, model) print("Searching best threshold...") best_f1_valid = 0. best_threshold = 0. 
for i in search_space: threshold = i / 100 valid_df = get_image_neighbors(valid_df, valid_embeddings, threshold=threshold) valid_f1 = valid_df.f1.mean() valid_recall = valid_df.recall.mean() valid_precision = valid_df.precision.mean() print(f"threshold = {threshold} -> f1 score = {valid_f1}, recall = {valid_recall}, precision = {valid_precision}") if (valid_f1 > best_f1_valid): best_f1_valid = valid_f1 best_threshold = threshold print("Best threshold =", best_threshold) print("Best f1 score =", best_f1_valid) CFG.BEST_THRESHOLD = best_threshold # phase 2 search print("Searching best min2 threshold...") search_space = np.arange(CFG.BEST_THRESHOLD * 100, CFG.BEST_THRESHOLD * 100 + 20, 0.5) best_f1_valid = 0. best_threshold = 0. for i in search_space: threshold = i / 100 valid_df = get_image_neighbors(valid_df, valid_embeddings, threshold=threshold,min2=True) valid_f1 = valid_df.f1.mean() valid_recall = valid_df.recall.mean() valid_precision = valid_df.precision.mean() print(f"min2 threshold = {threshold} -> f1 score = {valid_f1}, recall = {valid_recall}, precision = {valid_precision}") if (valid_f1 > best_f1_valid): best_f1_valid = valid_f1 best_threshold = threshold print("Best min2 threshold =", best_threshold) print("Best f1 score after min2 =", best_f1_valid) CFG.BEST_THRESHOLD_MIN2 = best_threshold def save_embeddings(): """Save valid and test image embeddings. 
""" train_df, valid_df, test_df = read_dataset() PATH_PREFIX = '../input/image-model-trained/' for i in range(len(CFG.LOSS_MODULES)): CFG.LOSS_MODULE = CFG.LOSS_MODULES[i] if 'arc' in CFG.LOSS_MODULE: CFG.USE_ARCFACE = True else: CFG.USE_ARCFACE = False for j in range(len(CFG.MODEL_NAMES)): CFG.MODEL_NAME = CFG.MODEL_NAMES[j] for k in range(len(CFG.MARGINS)): CFG.MARGIN = CFG.MARGINS[k] model = ShopeeModel(model_name = CFG.MODEL_NAME, margin = CFG.MARGIN, use_arcface = CFG.USE_ARCFACE) model.eval() model = replace_activations(model, torch.nn.SiLU, Mish()) CFG.MODEL_PATH = f'{CFG.MODEL_NAME}_{CFG.LOSS_MODULE}_face_epoch_8_bs_8_margin_{CFG.MARGIN}.pt' MODEL_PATH = PATH_PREFIX + CFG.MODEL_PATH model.load_state_dict(torch.load(MODEL_PATH)) model = model.to(CFG.DEVICE) valid_embeddings = get_image_embeddings(valid_df, model) VALID_EMB_PATH = '../input/image-embeddings/' + CFG.MODEL_PATH[:-3] + '_valid_embed.csv' np.savetxt(VALID_EMB_PATH, valid_embeddings, delimiter=',') TEST_EMB_PATH = '../input/image-embeddings/' + CFG.MODEL_PATH[:-3] + '_test_embed.csv' test_embeddings = get_image_embeddings(test_df, model) np.savetxt(TEST_EMB_PATH, test_embeddings, delimiter=',') save_embeddings() ``` ## Run test Parameters: + `CFG.MARGIN = [0.5,0.6,0.7,0.8,0.9]` + `CFG.MODEL_NAME = ['resnet50','resnext50_32x4d','densenet121','efficientnet_b3','eca_nfnet_l0']` + `CFG.LOSS_MODULE = ['arc','curricular']` ## save and load embeddings + `np.savetxt('tf_efficientnet_b5_ns.csv', image_embeddings1, delimiter=',')` + `image_embeddings4 = np.loadtxt(CFG.image_embeddings4_path, delimiter=',')` + `image_embeddings4_path = '../input/image-embeddings/efficientnet_b3.csv'`
# Lecture 7

*Contents:* Functions, a few more useful libraries (the import, from … import …, import … as syntax; time, random, math, regex (regular expressions), os, sys)

### Functions

We may already have met functions in other programming languages. But what exactly are functions?

Functions:

• are reusable pieces of code

• perform some specific task

• are flexible

• are defined once

• do nothing until they are called

How do we define a function?

``` python
def fuggveny_neve(parameter1, parameter2, stb):
    # the code goes here
    return visszatérési_érték_lista1, v_2, v_3 # optional!
```

It can be called like this:

``` python
a, b, c = fuggveny_neve(parameter1, parameter2 …)
# or, if we are not interested in all of the return values
a, _, _ = fuggveny_neve(parameter1, parameter2 …)
```

Let's start with a simple function! Write a function that prints the string `"Hello, <name>!"`:

```
def udvozlet(nev):
    print("Hello,", nev, end="!")

udvozlet("Tamás")
```

Some functions have parameters/arguments, others do not. In some cases parameterized functions are the right choice, precisely because of the flexibility the parameters provide; in other cases a parameterless function is exactly what is needed. Some functions have many return values; in others the `return` can be omitted.

Write a function `adat_generalas` that takes 3 parameters: the number of data points to generate, the step size, and optionally an offset that is added to the generated data. If the 3rd parameter, the offset, is not given, its default value should be `0`. The 3 return values are the `X` data and the 2 `Y` data sets (cosine and sine).
```
import math

def adat_generalas(darab, lepes, eltolas = 0):
    x = [x * lepes for x in range(0, darab)]
    y_sin = [math.sin(x * lepes) + eltolas for x in range(0, darab)]
    y_cos = [math.cos(x * lepes) + eltolas for x in range(0, darab)]
    return x, y_sin, y_cos
```

Let's try the function above and display the result as a `plot`.

```
import matplotlib.pyplot as plt

x1, y1, y2 = adat_generalas(80, 0.1)
x2, y3, _ = adat_generalas(400, 0.02, 0.2)

plt.plot(x1, y1, "*")
plt.plot(x1, y2, "*")
plt.plot(x2, y3, "*")
plt.show()
```

For functions with parameters the order of the parameters is very important: it matters in which order we pass them when calling the function. E.g.:

```
def upload_events(events_file, location):
    # … (presumably the code goes here) …
    return

# Incorrect call!!!
upload_events("New York", "events.csv")

# Correct call!!!
upload_events("events.csv", "New York")
```

If we dislike this rule, however, we can bypass it by telling the function which value belongs to which parameter:

```
# This is also correct!!!
upload_events(location="New York", events_file="events.csv")
```

The return value of functions is also worth mentioning. Functions can return one or more values with the **return** keyword. The "undefined" return value is **None**. Apart from that, a return value can be a *number, character, string, list, dictionary, True, False*, or any other type.

```
# Returning a single value
def product(x, y):
    return x*y

product(5,6)

# Returning multiple values
def get_attendees(filename):
    # The code that returns the "teacher", "assistants" and "students" values would go here.
    #teacher, assistants, students = get_attendees("file.csv")
    return
```

## More useful libraries:

#### The *import* syntax

To use ready-made functions in our code, we need access to them. They are usually found in modules or packages, which we can pull in with the **import** keyword.
When Python imports, say, a module named *hello*, the interpreter first goes through the built-in modules, and if it does not find it among them, it starts looking for a file named *hello.py* in the directories whose list it gets from the **sys.path** variable.

The basic import syntax consists of the **import** keyword and the name of the module to include:

**import hello**

When we import a module, we make it available in our code as a separate namespace. This means that when we call one of its functions, we must join it to the module name with a dot (.), like this: [module].[function]

```
import random
random.randint(0,5) # calls the randint function, which returns a random integer
```

Let's look at an example that prints 10 random integers:

```
import random

for i in range(10):
    print(random.randint(1, 25))
```

#### The *from … import …* syntax

We usually use this when we want to refer to one specific function of a module, thereby avoiding the dotted reference. The previous example with this solution:

```
from random import randint

for i in range(10):
    print(randint(1, 25))
```

#### Using aliases: the *import … as …* syntax

In Python, modules and their functions can be renamed with the **as** keyword; _we have used this before_. E.g.:

```
import math as m
```

For some modules with long names, using an alias is the generally accepted practice. We have long been using, for example:

```
import matplotlib.pyplot as plt
```

### The *time* and *datetime* libraries:

The **time** module contains basic time- and date-related functions. It uses two different representations, and several functions help convert back and forth between them:

- **float** number of seconds – this is the internal UNIX representation. In this representation the time elapsed between two points in time is a floating-point number.
- **struct_time** object – this has nine attributes for representing a point in time, following the Gregorian calendar. In this form there is no way to express the time elapsed between two moments; for that we must convert back and forth between **float** and **struct_time**.

The **datetime** module contains all the objects and methods needed to handle Gregorian-calendar dates correctly. **datetime** uses only one kind of time representation, but it provides four classes with which dates and times are easy to handle:

- **datetime.time** – has four main attributes: hour, minute, second and microsecond.
- **datetime.date** – has three attributes: year, month and day.
- **datetime.datetime** – this combines the **datetime.time** and **datetime.date** classes.
- **datetime.timedelta** – the time elapsed between two **date**, **time** or **datetime** values, stored as days, seconds and microseconds.

Let's look at a few examples:

```
from datetime import date
print(date.today()) # today's date

from datetime import datetime
print(datetime.now()) # the current time
```

Let's check how much time actually elapses when we wait 2 seconds between two timestamps.

```
from time import sleep

t1 = datetime.now()
print(t1)
sleep(2) # wait 2 seconds
t2 = datetime.now()
print(t2)
print("Actual elapsed time:", t2 - t1) # the difference of two datetimes is already a timedelta

import time
time.perf_counter() # floating-point seconds from a high-resolution performance counter
# (the older time.clock() was deprecated and removed in Python 3.8)
```

### The *random* library:

This library provides pseudo-random generators.
Let's look at a few concrete examples:

```
from random import random
random() # random float in [0, 1)

from random import uniform
uniform(2.5, 10.0) # random float in [2.5, 10.0)

from random import expovariate
expovariate(1/10) # exponential distribution; the argument is 1 divided by the desired non-zero mean

from random import randrange
num1=randrange(10) # random integer in [0,9]: randrange(stop)
num2=randrange(0, 101, 2) # even integer in [0,100]: randrange(start, stop, step)
num1, num2

from random import choice
choice(['win', 'lose', 'draw']) # a random element of a list

from random import shuffle
deck = 'ace two three four'.split()
shuffle(deck) # shuffle the elements
deck

from random import sample
sample([10, 20, 30, 40, 50, 60, 70], k=4) # returns k elements of the population

from random import randint
randint(1,10) # random integer in the interval [1,10]
```

### The *math* library

It provides mathematical functions following the C standard. A few concrete examples:

```
import math
math.ceil(5.6) # round a float up to an integer
math.factorial(10) # factorial
math.floor(5.6) # round a float down to an integer
math.gcd(122, 6) # returns the greatest common divisor of the two numbers
math.exp(2) # Euler's number raised to the power x
```

In Python there is little need for the next one, since exponentiation also works with the `**` operator (`2 ** 3`).

```
math.pow(2, 3) # pow(x,y): x raised to the power y; shorter: 2**3
math.sqrt(36) # square root; shorter: 36**0.5
math.cos(0.5) # cosine, with the argument in radians; math.sin(x) works the same way.
math.degrees(1) # radian-to-degree conversion; likewise math.radians(x) converts degrees to radians.
```

Mathematical constants:

```
print(math.pi) # Pi
print(math.e) # Euler's number
print(math.inf) # infinity
print(math.nan) # not a number
```

### The *os* library

The os module essentially contains functions that support operating-system-related operations.
The module is imported with the **`import os`** command. Let's look at a few of its methods:

```
import os
os.system("dir") # executes a shell command; whatever the command writes to the output does not appear here
```

As return value it only gives `0` if the operation succeeded and `1` if it failed. All output, however, appears on the console:

```python
Directory of C:\Users\herno\Documents\GitHub\sze-academic-python\eload

2018-11-03  16:42    <DIR>          .
2018-11-03  16:42    <DIR>          ..
2018-10-24  06:16    <DIR>          .ipynb_checkpoints
2018-10-24  11:02    <DIR>          data
2018-10-24  06:16             5,846 ea00.md
2018-11-03  15:30            18,775 ea01.ipynb
2018-07-31  17:41            21,838 ea02.ipynb
2018-07-31  17:41            26,484 ea03.ipynb
2018-09-10  15:23           293,223 ea04.ipynb
2018-10-24  06:16           128,088 ea05.ipynb
2018-07-17  11:01            34,838 ea06.ipynb
2018-11-03  16:42            49,489 ea07.ipynb
2018-11-03  15:30            10,384 ea08.ipynb
2018-11-03  15:30           401,267 ea10.ipynb
```

```
os.getcwd() # prints the current working directory
os.getpid() # prints the ID of the running process
```

The following examples are not executed here, only listed:

1) `os.chroot(path)` - change the root directory of the process to `'path'`
2) `os.listdir(path)` - list the entries of the given directory
3) `os.mkdir(path)` - create a directory at the `'path'` location
4) `os.remove(path)` - delete the file at `'path'`
5) `os.removedirs(path)` - remove directories recursively
6) `os.rename(src, dst)` - rename `'src'` to `'dst'`; it may be a file or a directory

### The *sys* library

The **sys** module provides various information about constants, functions and methods related to the Python interpreter.
```
import sys

sys.stderr.write('Ez egy stderr szoveg\n') # sends an error message to the stderr output
sys.stderr.flush() # flushes the buffered content to the output
sys.stdout.write('Ez egy stdout szoveg\n') # sends text to stdout

print ("script name is", sys.argv[0]) # 'argv' contains the command-line arguments;
                                      # argv[0] is the name of the script itself

print ("Az elérési útvonalon", len(sys.path), "elem van.") # query the number of entries in 'path'
sys.path # print the entries of 'path'
```

``` python
print(sys.modules.keys()) # print the names of the imported modules
```

```
dict_keys(['pkgutil', '__future__', 'filecmp', 'mpl_toolkits', 'platform', 'distutils.debug', 'jedi.evaluate.context', 'ctypes._endian', 'msvcrt', 'parso.pgen2.parse', '_stat', 'jedi.evaluate', '_pickle', 'parso.python.pep8', 'jedi.evaluate.context.module', 'distutils.log', 'collections', ......
```

```
print (sys.platform) # query the platform type
```

``` python
print("hello")
sys.exit(1) # does not exit immediately; it raises the 'SystemExit' exception.
            # See the next example for how to handle it.
print("there")

Output:
hello
SystemExit: 1
```

```
print ("hello")
try: # handle the 'SystemExit' exception with 'try'
    sys.exit(1)
except SystemExit:
    pass
print ("there")
```

### Regular expressions: *regex*

In many situations we have to process strings. These strings, however, do not always arrive in a directly interpretable form, or they follow no pattern at all, which makes processing them quite difficult. Regular expressions offer an acceptable solution for this.

A **regular expression** is thus a pattern, or sequence of patterns, that makes it easier to process data belonging to a given set whose elements are supposed to follow a common pattern.

Let's look at an example! If, say, my pattern rule is the little word "aba", then every string that matches the "aba" format belongs to the set.
Such a simple rule, however, can only handle simple strings. A more complex rule such as "ab&ast;a" offers far more possible outcomes: in essence, it can generate an infinite set of strings, such as "aa", "aba", "abba", etc. Here it is already somewhat harder to check whether a generated string originates from this rule.

#### A few basic rules for building regular expressions

- Any ordinary character, even on its own, can form a regular expression.
- The '.' character matches any single character. E.g. "x.y" matches "xay", "xby", etc., but NOT "xaby".
- Square brackets [...] define a rule by which the given position matches the set inside the brackets. E.g. "x[abc]z" matches "xaz", "xbz" and "xcz". In the pattern "x[a-z]z" the middle character can be any letter of the alphabet; such an interval is marked with a hyphen '-'.
- Square brackets modified with the '^' character, [^...], mean that anything except what is listed inside the brackets may appear. E.g. in the expression "1[^2-8]2", of the digits only '0', '1' and '9' can stand in place of the brackets, because the interval "2-8" is excluded.
- Several regular expressions can be grouped with parentheses (...). E.g. the rule "(ab)c" is the grouping of the regular expressions "ab" and 'c'.
- Regular expressions may repeat. E.g. "x&ast;" repeats 'x' zero or more times, "x+" repeats 'x' one or more times, and "x?" repeats 'x' zero or one time. A concrete example: "1(abc)&ast;2" matches "12", "1abc2", and even "1abcabcabc2".
- A rule that must match at the beginning of the line is marked with the "^" pattern; one that must match at the end of the line is marked with the "&dollar;" pattern.

#### Using regular expressions in Python

To use regular expressions we must import the **re** library.
Let's look at a few concrete tasks:

```
import re

szov1 = "2019 november 12"
print("Minden szám:", re.findall("\d+", szov1))
print("Minden egyéb:", re.findall("[^\d+]", szov1))
print("Angol a-z A-Z:", re.findall("[a-zA-Z]+", szov1))

szov2 = "<body>Ez egy példa<br></body>"
print(re.findall("<.*?>", szov2))

szov3 = "sör sört sár sír sátor Pártol Piros Sanyi Peti Pite Pete "
print("s.r      :", re.findall("s.r", szov3))
print("s.r.     :", re.findall("s.r.", szov3))
print("s[áí]r   :", re.findall("s[áí]r", szov3))
print("P.t.     :", re.findall("P.t.", szov3))
print("P.*t.    :", re.findall("P.*t.", szov3))
print("P.{0,3}t.:", re.findall("P.{0,3}t.", szov3))
```

Let's look at a more complex example:

```
fajl_lista = [
    "valami_S015_y001.png",
    "valami_S015_y001.npy",
    "valami_S014_y001.png",
    "valami_S014_y001.npy",
    "valami_S013_y001.png",
    "valami_S013_y001.npy",
    "_S999999_y999.npy"]

r1 = re.compile(r"_S\d+_y\d+\.png")
r2 = re.compile(r"_S\d+_y\d+\..*")

f1 = list(filter(r1.search, fajl_lista))
f2 = list(filter(r2.search, fajl_lista))

print("_S\d+_y\d+\.png \n---------------")
for f in f1:
    print(f)

print(""); print("_S\d+_y\d+\..* \n---------------")
for f in f2:
    print(f)
```

More information on regular expressions is available [HERE](https://www.regular-expressions.info/index.html).

## _Used sources_ / Felhasznált források:

- [Shannon Turner: Python lessons repository](https://github.com/shannonturner/python-lessons) MIT license (c) Shannon Turner 2013-2014,
- [Siki Zoltán: Python mogyoróhéjban](http://www.agt.bme.hu/gis/python/python_oktato.pdf) GNU FDL license (c) Siki Zoltán,
- [BME AUT](https://github.com/bmeaut) MIT License Copyright (c) BME AUT 2016-2018,
- [Python Software Foundation documents](https://docs.python.org/3/) Copyright (c) Python Software Foundation, 2001-2018,
- [Regular expressions](https://www.regular-expressions.info/index.html) Copyright (c) 2003-2018 Jan Goyvaerts. All rights reserved.
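The *time* section earlier notes that the module juggles two representations — float seconds and `struct_time` — with helper functions to convert between them, but shows no round trip. A short standard-library-only sketch of that conversion:

```python
import time

# float seconds since the epoch -> struct_time (broken-down local time)
now = time.time()
st = time.localtime(now)
print(st.tm_year, st.tm_mon, st.tm_mday)

# struct_time -> float seconds again; mktime inverts localtime
back = time.mktime(st)
print(abs(back - now) < 1.0)  # only the sub-second part is lost in struct_time
```

`time.gmtime` / `calendar.timegm` form the analogous UTC pair.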
<a href="https://colab.research.google.com/github/google/neural-tangents/blob/master/notebooks/phase_diagram.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Copyright 2020 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ### Imports & Utils ``` !pip install -q git+https://www.github.com/google/neural-tangents import jax.numpy as np from jax.experimental import optimizers from jax.api import grad, jit, vmap from jax import lax from jax.config import config config.update('jax_enable_x64', True) from functools import partial import neural_tangents as nt from neural_tangents import stax _Kernel = nt.utils.kernel.Kernel def Kernel(K): """Create an input Kernel object out of an np.ndarray.""" return _Kernel(cov1=np.diag(K), nngp=K, cov2=None, ntk=None, is_gaussian=True, is_reversed=False, diagonal_batch=True, diagonal_spatial=False, shape1=(K.shape[0], 1024), shape2=(K.shape[1], 1024), x1_is_x2=True, is_input=True, batch_axis=0, channel_axis=1) def fixed_point(f, initial_value, threshold): """Find fixed-points of a function f:R->R using Newton's method.""" g = lambda x: f(x) - x dg = grad(g) def cond_fn(x): x, last_x = x return np.abs(x - last_x) > threshold def body_fn(x): x, _ = x return x - g(x) / dg(x), x return lax.while_loop(cond_fn, body_fn, (initial_value, 0.0))[0] from IPython.display import set_matplotlib_formats set_matplotlib_formats('pdf', 'svg') import matplotlib.pyplot as plt import seaborn as sns 
sns.set_style(style='white')

def format_plot(x='', y='', grid=True):
  ax = plt.gca()
  plt.grid(grid)
  plt.xlabel(x, fontsize=20)
  plt.ylabel(y, fontsize=20)

def finalize_plot(shape=(1, 1)):
  plt.gcf().set_size_inches(
    shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
    shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
  plt.tight_layout()
```

# Phase Diagram

We will reproduce the phase diagram described in [Poole et al.](https://papers.nips.cc/paper/6322-exponential-expressivity-in-deep-neural-networks-through-transient-chaos) and [Schoenholz et al.](https://arxiv.org/abs/1611.01232) using Neural Tangents.

In these and subsequent papers, it was found that deep neural networks can exhibit a phase transition as a function of the variance of their weights ($\sigma_w^2$) and biases ($\sigma_b^2$). For networks with $\tanh$ activation functions, this phase transition is between an "ordered" phase and a "chaotic" phase. In the ordered phase, pairs of inputs collapse to a single point as they propagate through the network. By contrast, in the chaotic phase, nearby inputs become increasingly dissimilar in later layers of the network. This phase diagram is shown below. A number of properties of neural networks - such as trainability, mode-collapse, and maximum learning rate - have now been related to this phase diagram over many papers (recently e.g. [Yang et al.](https://arxiv.org/abs/1902.08129), [Jacot et al.](https://arxiv.org/abs/1907.05715), [Hayou et al.](https://arxiv.org/abs/1905.13654), and [Xiao et al.](https://arxiv.org/abs/1912.13053)).

\
![Phase Diagram](https://raw.githubusercontent.com/google/neural-tangents/master/notebooks/figures/pennington_phase_diagram.svg?sanitize=true)

> Phase diagram for $\tanh$ neural networks (appeared in [Pennington et al.](https://arxiv.org/abs/1802.09979)).

\
Consider two inputs to a neural network, $x_1$ and $x_2$, normalized such that $\|x_1\| = \|x_2\| = q^0$.
We can compute the cosine angle between the inputs, $c^0 = \cos\theta_{12} = \frac{x_1 \cdot x_2}{q^0}$. Additionally, we can keep track of the norm and cosine angle of the resulting pre-activations ($q^l$ and $c^l$ respectively) as the signal passes through the layers of the neural network. In the wide-network limit there are deterministic functions, called the $\mathcal Q$-map and the $\mathcal{C}$-map, such that $q^{l+1} = \mathcal Q(q^l)$ and $c^{l+1} = \mathcal C(q^l, c^l)$.

\
In fully-connected networks with $\tanh$-like activation functions, both the $\mathcal Q$-map and the $\mathcal C$-map have unique stable fixed points, $q^*$ and $c^*$, such that $q^* = \mathcal Q(q^*)$ and $c^* = \mathcal C(q^*, c^*)$. To simplify the discussion, we typically choose to normalize our inputs so that $q^0 = q^*$, which lets us restrict our study to the $\mathcal C$-map. The $\mathcal C$-map always has a fixed point at $c^* = 1$, since two identical inputs will remain identical as they pass through the network. However, this fixed point is not always stable, and two points that start out very close together will often separate. Indeed, the ordered and chaotic phases are characterized by the stability of the $c^* = 1$ fixed point. In the ordered phase $c^* = 1$ is stable and pairs of inputs converge to one another as they pass through the network. In the chaotic phase the $c^* = 1$ point is unstable and a new, stable fixed point with $c^* < 1$ emerges. The phase boundary is defined as the point where $c^* = 1$ is marginally stable.

\
To understand the stability of a fixed point $c^*$, we will use the standard technique from dynamical-systems theory and expand the $\mathcal C$-map in $\epsilon^l = c^l - c^*$, which implies that $\epsilon^{l+1} = \chi(c^*)\epsilon^l$ where $\chi = \frac{\partial\mathcal C}{\partial c}$. This implies that, sufficiently close to a fixed point of the dynamics, $\epsilon^l = \chi(c^*)^l \epsilon^0$.
If $\chi(c^*) < 1$ then the fixed point is stable and points move towards the fixed point exponentially quickly. If $\chi(c^*) > 1$ then points move away from the fixed point exponentially quickly. This implies that the phase boundary, being defined by the marginal stability of $c^* = 1$, will be where $\chi_1 = \chi(1) = 1$.

\
To reproduce these results in Neural Tangents, we notice first that the $\mathcal{C}$-map described above is intimately related to the NNGP kernel, $K^l$, of [Lee et al.](https://arxiv.org/abs/1711.00165), [Matthews et al.](https://arxiv.org/abs/1804.11271), and [Novak et al.](https://arxiv.org/abs/1810.05148). The core of Neural Tangents is a map $\mathcal T$, defined for a wide range of architectures, such that $K^{l + 1} = \mathcal T(K^l)$. Since $c^l$ can be written in terms of the NNGP kernel as $c^l = K^l_{12} / q^*$, this implies that Neural Tangents provides a way of computing the $\mathcal{C}$-map for a wide range of network architectures.

\
To produce the phase diagram above, we must compute $q^*$ and $c^*$ as well as $\chi_1$. We will use a fully-connected network with $\text{Erf}$ activation functions, since they admit an analytic kernel function and are very similar to $\tanh$ networks. We will first define the $\mathcal Q$-map by noting that the $\mathcal Q$-map will be identical to $\mathcal T$ if the covariance matrix has only a single entry. We will use Newton's method to find $q^*$ given the $\mathcal Q$-map. Next we will use the relationship above to define the $\mathcal C$-map in terms of $\mathcal T$. We will again use Newton's method to find the stable fixed point $c^*$. We can define $\chi$ by using JAX's automatic differentiation to compute the derivative of the $\mathcal C$-map. This can be written relatively concisely below.

\
Note: this particular phase diagram holds for a wide range of neural networks but, emphatically, not for ReLUs. The ReLU phase diagram is somewhat different and could be investigated using Neural Tangents.
However, we will save it for a followup notebook. ``` def c_map(W_var, b_var): W_std = np.sqrt(W_var) b_std = np.sqrt(b_var) # Create a single layer of a network as an affine transformation composed # with an Erf nonlinearity. kernel_fn = stax.serial(stax.Dense(1024, W_std, b_std), stax.Erf())[2] def q_map_fn(q): return kernel_fn(Kernel(np.array([[q]]))).nngp[0, 0] qstar = fixed_point(q_map_fn, 1.0, 1e-7) def c_map_fn(c): K = np.array([[qstar, qstar * c], [qstar * c, qstar]]) K_out = kernel_fn(Kernel(K)).nngp return K_out[1, 0] / qstar return c_map_fn c_star = lambda W_var, b_var: fixed_point(c_map(W_var, b_var), 0.1, 1e-7) chi = lambda c, W_var, b_var: grad(c_map(W_var, b_var))(c) chi_1 = partial(chi, 1.) ``` To generate the phase diagram above, we would like to compute the fixed-point correlation not only at a single value of $(\sigma_w^2,\sigma_b^2)$ but on a whole mesh. We can use JAX's `vmap` functionality to do this. Here we define vectorized versions of the above functions. ``` def vectorize_over_sw_sb(fn): # Vectorize over the weight variance. fn = vmap(fn, (0, None)) # Vectorize over the bias variance. fn = vmap(fn, (None, 0)) return fn c_star = jit(vectorize_over_sw_sb(c_star)) chi_1 = jit(vectorize_over_sw_sb(chi_1)) ``` We can use these functions to plot $c^*$ as a function of the weight and bias variance. As expected, we see a region where $c^* = 1$ and a region where $c^* < 1$. ``` W_var = np.arange(0, 3, 0.01) b_var = np.arange(0., 0.25, 0.001) plt.contourf(W_var, b_var, c_star(W_var, b_var)) plt.colorbar() plt.title('$C^*$ as a function of weight and bias variance', fontsize=14) format_plot('$\\sigma_w^2$', '$\\sigma_b^2$') finalize_plot((1.15, 1)) ``` We can, of course, threshold on $c^*$ to get a cleaner definition of the phase diagram. 
``` plt.contourf(W_var, b_var, c_star(W_var, b_var) > 0.999, levels=3, colors=[[1.0, 0.89, 0.811], [0.85, 0.85, 1]]) plt.title('Phase diagram in terms of weight and bias variance', fontsize=14) format_plot('$\\sigma_w^2$', '$\\sigma_b^2$') finalize_plot((1, 1)) ``` As described above, the boundary between the two phases should be defined by $\chi_1(\sigma_w^2, \sigma_b^2) = 1$ where $\chi_1$ is given by the derivative of the $\mathcal C$-map. ``` plt.contourf(W_var, b_var, chi_1(W_var, b_var)) plt.colorbar() plt.title(r'$\chi^1$ as a function of weight and bias variance', fontsize=14) format_plot('$\\sigma_w^2$', '$\\sigma_b^2$') finalize_plot((1.15, 1)) ``` We can see that the boundary where $\chi_1$ crosses 1 corresponds to the phase boundary we observe above. ``` plt.contourf(W_var, b_var, c_star(W_var, b_var) > 0.999, levels=3, colors=[[1.0, 0.89, 0.811], [0.85, 0.85, 1]]) plt.contourf(W_var, b_var, np.abs(chi_1(W_var, b_var) - 1) < 0.003, levels=[0.5, 1], colors=[[0, 0, 0]]) plt.title('Phase diagram in terms of weight and bias variance', fontsize=14) format_plot('$\\sigma_w^2$', '$\\sigma_b^2$') finalize_plot((1, 1)) ```
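As a sanity check of the Newton-iteration idea behind the `fixed_point` helper defined at the top of this notebook, here is a hedged, dependency-free sketch of the same scheme applied to a simple one-dimensional map — the cosine map, whose unique fixed point (the Dottie number) is approximately 0.739:

```python
import math

def fixed_point_newton(f, x0, threshold=1e-10):
    """Find x with f(x) = x via Newton's method on g(x) = f(x) - x."""
    x, last = x0, float('inf')
    while abs(x - last) > threshold:
        g = f(x) - x
        h = 1e-7
        dg = (f(x + h) - (x + h) - g) / h  # finite-difference derivative of g
        x, last = x - g / dg, x
    return x

print(fixed_point_newton(math.cos, 1.0))  # ~0.7390851
```

The JAX version above replaces the finite difference with `grad` and the Python loop with `lax.while_loop`, but the update rule is the same.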
`Course: Methods and Technologies of Machine Learning`

`Level: Bachelor's degree`

`Degree programme: 01.03.02 Applied Mathematics and Informatics`

`Semester: Fall 2021/2022`

# Lab 3: Linear Models. Cross-Validation.

The practical examples below show:

* how to use preliminary-analysis tools to look for linear relationships
* how to build and interpret linear models with logarithms
* how to assess model accuracy with cross-validation (LOOCV, k-fold validation)

*Models*: multiple linear regression

*Data*: `insurance` (source: <https://www.kaggle.com/mirichoi0218/insurance/version/1>)

```
# set the notebook page width .......................................
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
```

# Instructions

## Loading packages

```
# load packages: tools -----------------------------------------------------
# working with arrays
import numpy as np
# data frames
import pandas as pd
# plots
import matplotlib as mpl
# styles and plot templates based on matplotlib
import seaborn as sns
# encoding of categorical variables
from sklearn.preprocessing import LabelEncoder
# Shapiro-Wilk test for normality of a distribution
from scipy.stats import shapiro
# for the timer
import time

# load packages: models ----------------------------------------------------
# linear models
import sklearn.linear_model as skl_lm
# MSE computation
from sklearn.metrics import mean_squared_error
# cross-validation
from sklearn.model_selection import train_test_split, LeaveOneOut
from sklearn.model_selection import KFold, cross_val_score
# polynomial models
from sklearn.preprocessing import PolynomialFeatures

# constants
# seed for the random number generator
my_seed = 9212

# create an alias for short access to plots
plt = mpl.pyplot

# plot style and display settings
# examples of plot styles and templates:
#
http://tonysyu.github.io/raw_content/matplotlib-style-gallery/gallery.html
mpl.style.use('seaborn-whitegrid')
sns.set_palette("Set2")

# uncomment the next line to view the palette
# sns.color_palette("Set2")
```

## Loading the data

The `insurance` dataset in .csv format is available for download at: <https://raw.githubusercontent.com/aksyuk/MTML/main/Labs/data/insurance.csv>.

The codebook for the data: <https://github.com/aksyuk/MTML/blob/main/Labs/data/CodeBook_insurance.md>.

We load the data into a frame and encode the categorical variables.

```
# read the .csv table into a frame
fileURL = 'https://raw.githubusercontent.com/aksyuk/MTML/main/Labs/data/insurance.csv'
DF_raw =

# check the dimensions of the frame
print('Number of rows and columns in the dataset:\n', DF_raw.shape)

# first 5 rows of the frame


# column types of the frame

```

Let's check whether there are any missing values in the table.

```
# count the missing values in each column

```

No missing values found.

```
# encode the categorical variables
# sex
sex_dict =
DF_raw['sexFemale'] =

# smoker
yn_dict =
DF_raw['smokerYes'] =

# find the unique regions


# add dummies for region: number of dummies = number of unique values - 1
df_dummy =
df_dummy.head(5)

# merge with the original frame
DF_all =

# how many columns do we have now
DF_all.shape

# look at the first 8 columns
DF_all.iloc[:, :8].head(5)

# look at the last 5 columns
DF_all.iloc[:, 8:].head(5)

# keep only what we need in the dataset
# (plus the region labels for plots)
DF_all = DF_all[['charges', 'age', 'sexFemale', 'bmi', 'children',
                 'smokerYes', 'region_northwest', 'region_southeast',
                 'region_southwest', 'region']]

# re-encode region as a numeric factor
# to use it in plots
class_le =
DF_all['region'] =

DF_all.columns

DF_all.dtypes

# delete the source frame

```

Before moving on to data analysis, we split the frame into two parts: one (90%) will serve as the basis for training the models; for the other (10%) we will make predictions with the best model.
```
# data for model building
DF =

# data for predictions
DF_predict =
```

## Preliminary data analysis

### Computing descriptive statistics

Let's compute descriptive statistics for the continuous variables. The table below shows that the variable `charges`, which is the dependent variable of the model, differs greatly in scale from all the others. Also note that among the explanatory variables only `children` takes zero values; the remaining indicators are positive.

```
# descriptive statistics for the continuous variables

```

### Building plots

Let's look at the pairwise scatter plots of the continuous variables.

```
# scatter-plot matrix with regression lines

plt.show()
```

Judging by these plots:

* the distribution of the dependent `charges` is not normal;
* of the explanatory variables, only `bmi` is normally distributed;
* there are three levels of insurance cost, which is visible in the scatter plots of `charges` against `age`;
* the cloud of observations in the plot of `charges` against `bmi` splits into two unequal parts;
* the explanatory `children` is discrete, which is obvious from its meaning: the number of children;
* the spread of `charges` for policyholders with 5 children (the maximum in the table above) is much smaller than for the other policyholders.

The observed patterns may be explained by the influence of one or several of the dummy explanatory variables. Let's draw the plot with the points colored by the policyholder's sex.

```
# scatter-plot matrix colored by sex

plt.show()
```

Now let's use color to show the policyholders' smoking status.

```
# scatter-plot matrix colored by smokerYes

plt.show()
```

Let's show the regions with color.

```
# scatter-plot matrix colored by region

plt.show()
```

Let's draw a separate plot for `region_southeast`.

```
# scatter-plot matrix colored by the southeast region

plt.show()
```

Let's look at the correlation matrices of the continuous variables of the frame.
```
# correlation matrix over all observations
corr_mat =
corr_mat.style.background_gradient(cmap='coolwarm').set_precision(2)
```

Let's compute correlation matrices for smoking and non-smoking policyholders.

```
# correlation matrix for the smoker class
corr_mat =
corr_mat.style.background_gradient(cmap='coolwarm').set_precision(2)

# correlation matrix for the non-smoker class
corr_mat =
corr_mat.style.background_gradient(cmap='coolwarm').set_precision(2)
```

### Log-transforming the dependent variable

An important assumption of linear regression is normality of the dependent variable. To achieve a normal distribution, one uses either a log transform or the Box-Cox transformation. In this lab we will stick to the log transform.

```
# log-transform the dependent variable
DF['log_charges'] =

# descriptive statistics for the continuous indicators
DF[['charges', 'log_charges', 'age', 'bmi', 'children']].describe()
```

Let's run formal tests for normality.

```
# test for normality
for col in ['charges', 'log_charges']:
    stat, p = shapiro(DF[col])
    print(col, 'Statistics=%.2f, p=%.4f' % (stat, p))
    # interpretation
    alpha = 0.05
    if p > alpha:
        print('Distribution is normal (H0 is not rejected)\n')
    else:
        print('Distribution is not normal (H0 is rejected)\n')
```

The log transform changes the relationships between the variables.
```
# scatter-plot matrix colored by smokerYes
sns.pairplot(DF[['log_charges', 'age', 'bmi', 'children', 'smokerYes']],
             hue='smokerYes')
plt.show()

# correlation matrix for the non-smoker class
corr_mat = DF.loc[DF['smokerYes'] == 0][['log_charges', 'age', 'bmi',
                                         'children']].corr()
corr_mat.style.background_gradient(cmap='coolwarm').set_precision(2)

# correlation matrix for the smoker class
corr_mat = DF.loc[DF['smokerYes'] == 1][['log_charges', 'age', 'bmi',
                                         'children']].corr()
corr_mat.style.background_gradient(cmap='coolwarm').set_precision(2)
```

## Building regression models

### Model specification

Based on the preliminary data analysis, the following specifications of linear regression models can be proposed:

1. `fit_lm_1`: $\hat{charges} = \hat{\beta_0} + \hat{\beta_1} \cdot smokerYes + \hat{\beta_2} \cdot age + \hat{\beta_3} \cdot bmi$
1. `fit_lm_2`: $\hat{charges} = \hat{\beta_0} + \hat{\beta_1} \cdot smokerYes + \hat{\beta_2} \cdot age \cdot smokerYes + \hat{\beta_3} \cdot bmi$
1. `fit_lm_3`: $\hat{charges} = \hat{\beta_0} + \hat{\beta_1} \cdot smokerYes + \hat{\beta_2} \cdot bmi \cdot smokerYes + \hat{\beta_3} \cdot age$
1. `fit_lm_4`: $\hat{charges} = \hat{\beta_0} + \hat{\beta_1} \cdot smokerYes + \hat{\beta_2} \cdot bmi \cdot smokerYes + \hat{\beta_3} \cdot age \cdot smokerYes$
1. `fit_lm_1_log`: the same as `fit_lm_1`, but for the dependent variable $\hat{log\_charges}$
1. `fit_lm_2_log`: the same as `fit_lm_2`, but for the dependent variable $\hat{log\_charges}$
1. `fit_lm_3_log`: the same as `fit_lm_3`, but for the dependent variable $\hat{log\_charges}$
1. `fit_lm_4_log`: the same as `fit_lm_4`, but for the dependent variable $\hat{log\_charges}$

In addition, we add to the comparison models of `charges` and `log_charges` on all explanatory variables: `fit_lm_0` and `fit_lm_0_log`, respectively.

### Fitting and interpretation

We create the matrices of explanatory-variable values ( $X$ ) and the vectors of dependent-variable values ( $y$ ) for all models.
```
# data for models 1, 5
df1 = DF[['charges', 'smokerYes', 'age', 'bmi']]

# data for models 2, 6
df2 = DF[['charges', 'smokerYes', 'age', 'bmi']]
df2.loc[:, 'age_smokerYes'] =
df2 =

# data for models 3, 7
df3 = DF[['charges', 'smokerYes', 'age', 'bmi']]
df3.loc[:, 'bmi_smokerYes'] =
df3 =

# data for models 4, 8
df4 = DF[['charges', 'smokerYes', 'age', 'bmi']]
df4.loc[:, 'age_smokerYes'] =
df4.loc[:, 'bmi_smokerYes'] =
df4 =

# data for models 9, 10
df0 =
```

We fit the models on all explanatory variables using all observations of `DF` in order to interpret the parameters. In the model for the dependent variable `charges` the interpretation is standard:

1. The constant is the base level of the dependent variable when all explanatory variables equal 0.
2. The coefficient of an explanatory variable $X$ shows by how many of its own units of measurement $Y$ changes when $X$ increases by one of its units of measurement.

```
lm = skl_lm.LinearRegression()

# model with all explanatory variables, y
X =
y =
fit_lm_0 =
print('model fit_lm_0:\n',
      'intercept ', np.around(fit_lm_0.intercept_, 3),
      '\n regressors ', list(X.columns.values),
      '\n coefficients ', np.around(fit_lm_0.coef_, 3))

# estimate MSE on the training set
# predictions
y_pred =
MSE =
MSE
```

Interpreting a model for the logarithm of $Y$ is more involved:

1. The constant must first be exponentiated; after that, interpret it as in an ordinary regression model.
1. The coefficient of $X$ must be exponentiated; then subtract 1 from the resulting number and multiply by 100. The result shows by how many percent the dependent variable changes (increases if the coefficient is positive, decreases if negative) when $X$ increases by one of its units of measurement.
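As a hedged numeric illustration of this rule (the coefficient value below is invented for the example, not estimated from the data): a log-model coefficient $b$ translates into a percentage effect of $(e^{b}-1)\cdot 100$.

```python
import numpy as np

# hypothetical coefficient of age in a log(charges) model
b_age = 0.034
pct = (np.exp(b_age) - 1) * 100
# one extra year of age raises charges by about 3.46 %
print(round(pct, 2))
```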
```
# model with all explanatory variables, y_log
X = df0.drop(['charges'], axis=1)
y = np.log(df0.charges).values.reshape(-1, 1)
fit_lm_0_log = lm.fit(X, y)
print('model fit_lm_0_log:\n',
      'intercept ', np.around(fit_lm_0_log.intercept_, 3),
      '\n regressors ', list(X.columns.values),
      '\n coefficients ', np.around(fit_lm_0_log.coef_, 3))

# recompute the coefficients for their interpretation


# estimate MSE on the training set
# predictions
y_pred = fit_lm_0_log.predict(X)
MSE_log = sum((np.exp(y) - np.exp(y_pred).reshape(-1, 1))**2) / len(y)
MSE_log

print('MSE_train of the charges model is smaller than MSE_train',
      'of the log(charges) model by a factor of',
      np.around(MSE_log / MSE, 1))
```

### Accuracy assessment

#### LOOCV

Let's cross-validate the accuracy of the models leaving one observation out.

```
# LeaveOneOut CV
loo =

# models for y
scores = list()
# timer
tic =
for df in [df0, df1, df2, df3, df4]:
    X =
    y =
    score =
    scores.append(score)
# timer
toc =
print(f"The LOOCV computations took {} seconds")

# models for y_log
scores_log = list()
# timer
tic = time.perf_counter()
for df in [df0, df1, df2, df3, df4]:
    loo.get_n_splits(df)
    X = df.drop(['charges'], axis=1)
    y = np.log(df.charges)
    score = cross_val_score(lm, X, y, cv=loo, n_jobs=1,
                            scoring='neg_mean_squared_error').mean()
    scores_log.append(score)
# timer
toc = time.perf_counter()
print(f"The LOOCV computations took {toc - tic:0.2f} seconds")
```

Let's compare the errors of the models on the raw `charges` values with the errors of the models on the logarithm.

```
[np.around(-x, 2) for x in scores]

[np.around(-x, 3) for x in scores_log]
```

Let's find the most accurate models separately for `charges` and for `log_charges`.
```
# the most accurate model for charges
fits = ['fit_lm_0', 'fit_lm_1', 'fit_lm_2', 'fit_lm_3', 'fit_lm_4']
print('The smallest test error with LOOCV is for model',
      fits[scores.index(max(scores))],
      ':\nMSE_loocv =', np.around(-max(scores), 0))

# the most accurate model for log(charges)
fits = ['fit_lm_0_log', 'fit_lm_1_log', 'fit_lm_2_log',
        'fit_lm_3_log', 'fit_lm_4_log']
print('The smallest test error with LOOCV is for model',
      fits[scores_log.index(max(scores_log))],
      ':\nMSE_loocv =', np.around(-max(scores_log), 3))
```

#### k-fold cross-validation

In theory this method is computationally cheaper than LOOCV. Let's check this on our models.

```
# cross-validation over 10 folds
folds =

# seeds for the cross-validation splits
r_state =

# models for y
scores = list()
# timer
tic = time.perf_counter()
i = 0
for df in [df0, df1, df2, df3, df4]:
    X = df.drop(['charges'], axis=1)
    y = df.charges
    kf_10 =
    score = cross_val_score(lm, X, y, cv=kf_10,
                            scoring='neg_mean_squared_error').mean()
    scores.append(score)
    i += 1
# timer
toc = time.perf_counter()
print(f"The 10-fold CV computations took {toc - tic:0.2f} seconds")

# cross-validation over 10 folds
folds = 10

# seeds for the cross-validation splits
r_state = np.arange(my_seed, my_seed + 9)

# models for y_log
scores_log = list()
# timer
tic = time.perf_counter()
i = 0
for df in [df0, df1, df2, df3, df4]:
    X = df.drop(['charges'], axis=1)
    y = np.log(df.charges)
    kf_10 = KFold(n_splits=folds, random_state=r_state[i], shuffle=True)
    score = cross_val_score(lm, X, y, cv=kf_10,
                            scoring='neg_mean_squared_error').mean()
    scores_log.append(score)
    i += 1
# timer
toc = time.perf_counter()
print(f"The 10-fold CV computations took {toc - tic:0.2f} seconds")

# the most accurate model for charges
fits = ['fit_lm_0', 'fit_lm_1', 'fit_lm_2', 'fit_lm_3', 'fit_lm_4']
print('The smallest test error with k-fold10 is for model',
      fits[scores.index(max(scores))],
      ':\nMSE_kf10 =', np.around(-max(scores), 0))

# the most accurate model for log(charges)
fits = ['fit_lm_0_log',
        'fit_lm_1_log', 'fit_lm_2_log', 'fit_lm_3_log', 'fit_lm_4_log']
print('The smallest test error with k-fold10 is for model',
      fits[scores_log.index(max(scores_log))],
      ':\nMSE_kf10 =', np.around(-max(scores_log), 3))
```

We can see that estimating MSE by 10-fold cross-validation gives results practically identical to LOOCV, while with 1204 observations the former method is two orders of magnitude faster. The most accurate of the models for `charges` turned out to be `fit_lm_3`, and among the models for `log_charges` – `fit_lm_0_log`. Let's assess the prediction accuracy of these models on the held-out observations.

```
# prediction with fit_lm_3
# model fit on all training observations
X = df3.drop(['charges'], axis=1)
y = df3.charges.values.reshape(-1, 1)
fit_lm_3 =

# y values on the held-out observations
y = DF_predict[['charges']].values.reshape(-1, 1)
# matrix of explanatory variables on the held-out observations
X = DF_predict[['smokerYes', 'age', 'bmi']]
X.loc[:, 'bmi_smokerYes'] = X.loc[:, 'bmi'] * X.loc[:, 'smokerYes']
X = X.drop(['bmi'], axis=1)

# predictions
y_pred =
# error
MSE =
print('MSE of model fit_lm_3 on the held-out observations = %.2f' % MSE)

# prediction with fit_lm_0_log
# model
X = df0.drop(['charges'], axis=1)
y = np.log(df0.charges).values.reshape(-1, 1)
fit_lm_0_log =

# y values on the held-out observations
y = np.log(DF_predict[['charges']].values.reshape(-1, 1))
# matrix of explanatory variables on the held-out observations
X = DF_predict.drop(['charges', 'region'], axis=1)

# predictions
y_pred =
# error
MSE_log =
print('MSE of model fit_lm_0_log on the held-out observations = %.2f' % MSE_log)
```

Clearly, on the prediction sample the more accurate model is `fit_lm_3`:

$\hat{charges} = \hat{\beta_0} + \hat{\beta_1} \cdot smokerYes + \hat{\beta_2} \cdot bmi \cdot smokerYes + \hat{\beta_3} \cdot age$

```
print('model fit_lm_3:\n',
      'intercept ', np.around(fit_lm_3.intercept_, 3),
      '\n regressors ', list(df3.drop(['charges'], axis=1).columns.values),
      '\n coefficients ', np.around(fit_lm_3.coef_, 3))
```

# Sources

1.
*James G., Witten D., Hastie T. and Tibshirani R.* An Introduction to Statistical Learning with Applications in R. URL: [http://www-bcf.usc.edu/~gareth/ISL/ISLR%20First%20Printing.pdf](https://drive.google.com/file/d/15PdWDMf9hkfP8mrCzql_cNiX2eckLDRw/view?usp=sharing)
1. *Raschka S.* Python and Machine Learning: an essential guide to modern predictive analytics, required for a deeper understanding of machine-learning methodology / Russian translation by A.V. Logunov. – Moscow: DMK Press, 2017. – 418 pp.: ill.
1. Interpreting Log Transformations in a Linear Model / virginia.edu. URL: <https://data.library.virginia.edu/interpreting-log-transformations-in-a-linear-model/>
1. Python Timer Functions: Three Ways to Monitor Your Code / realpython.com. URL: <https://realpython.com/python-timer/>
# Lab 12: Epidemics, or the Spread of Viruses ``` %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as img import numpy as np import scipy as sp import scipy.stats as st import networkx as nx from scipy.integrate import odeint from operator import itemgetter print ('Modules Imported!') ``` ## Epidemics, or the Spread of Viruses: The study of how viruses spread through populations began well over a hundred years ago. The original studies concerned biological viruses, but the principles found application in modeling the spread of ideas or practices in social networks (such as what seeds farmers use) even before the advent of computers. More recently, computer networks, and in particular, on-line social networks, have stimulated renewed attention on the theory, to model, for example, the spread of computer viruses through networks, the adoption of new technology, and the spread of such things as ideas and emotional states through social networks. One of the simplest models for the spread of infection is the discrete-time Reed Frost model, proposed in the 1920s. It goes as follows. Suppose the individuals that can be infected are the nodes of an undirected graph. An edge between two nodes indicates a pathway for the virus to spread from one node to the other node. The Reed Frost model assumes that each node is in one of three states at each integer time $t\geq 0:$ susceptible, infected, or removed. This is thus called an SIR model. At $t=0$, each individual is either susceptible or infected. The evolution over one time step is the following. A susceptible node has a chance $\beta$ to become infected by each of its infected neighbors, with the chances from different neighbors being independent. Thus if a susceptible node has $k$ infected neighbors at time $t,$ the probability the node is *not* infected at time $t+1$ (i.e. 
it remains susceptible) is $(1-\beta)^k,$ and the probability the node is infected at time $t+1$ is $1-(1-\beta)^k.$ It is assumed that a node is removed one time step after being infected, and once a node is removed, it remains removed forever. In applications, removed could mean the node has recovered and has gained immunity, so infection is no longer spread by that node. To summarize, the model is completely determined by the graph, the initial states of the nodes, and the parameter $\beta.$ One question of interest is how likely is the virus to infect a large fraction of nodes, and how quickly will it spread. Other questions are to find the effect of well connected clusters in the graph, or the impact of nodes of large degree, on the spread of the virus. If the virus represents adoption of a new technology, the sponsoring company might be interested in finding a placement of initially infected nodes (achieved by free product placements) to maximize the chances that most nodes become infected. Below is code that uses the Networkx package to simulate the spread of a virus. A simple special case, and the one considered first historically, is if the virus can spread from any node to any other node. This corresponds to a tightly clustered population; the graph is the complete graph. For this case, the system can be modeled by a three dimensional Markov process $(X_t)$ with a state $(S,I,R),$ denoting the numbers of susceptible, infected, and removed, nodes, respectively. 
Given $X_t=(S,I,R),$ the distribution of $X_{t+1}$ is determined by generating the number of newly infected nodes, which has the binomial distribution with parameters $S$ and $p=1-(1-\beta)^I$ (because each of the susceptible nodes is independently infected with probability $p$).

```
# Simulation of Reed Frost model for fully connected graph (aka mean field model)
# X[t,:]=[number susceptible, number infected, number removed] at time t
# Since odeint wants t first in trajectories X[t,i], let's do that consistently
n=100       # Number of nodes
I_init=6    # Number of nodes initially infected, the others are initially susceptible
c=2.0       # Use a decimal point when specifying c.
beta=c/n    # Note that if n nodes each get infected with probability beta=c/n,
            # then c, on average, are infected.
T=100       # maximum simulation time
X = np.zeros((T+1,3),dtype=int)
X[0,0], X[0,1] = n-I_init, I_init
t=0
while t<T and X[t,1]>0:   # continue (up to time T) as long as there exist infected nodes
    newI=np.random.binomial(X[t,0],1.0-(1.0-beta)**X[t,1])  # number of new infected nodes
    X[t+1,0]=X[t,0]-newI
    X[t+1,1]=newI
    X[t+1,2]=X[t,1]+X[t,2]
    t=t+1

plt.figure()
plt.plot(X[0:t,0], 'g', label='susceptible')
plt.plot(X[0:t,1], 'r', label='infected')
plt.plot(X[0:t,2], 'b', label='removed')
plt.title('SIR Model for complete graph')
plt.xlabel('time')
plt.ylabel('number of nodes')
plt.legend()
```

Run the code a few times to get an idea of the variations that can occur. Then try increasing n to 1000 or 10,000 and running the simulation a few more times. Note that the code scales down the infection parameter $\beta$ inversely with respect to the population size, that is: $\beta = c/n$ for a constant $c.$ If, instead, $\beta$ didn't depend on $n,$ then the infection would spread much faster for large $n$ and the fully connected graph.
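To see why the scaling $\beta = c/n$ keeps the dynamics comparable across population sizes, note that the expected number of new infections in one step is $S\,(1-(1-\beta)^I) \approx \beta S I$, so each infected node causes about $cS/n$ new infections on average. Here is a hedged numeric sketch (the state values $S$ and $I$ below are made up for illustration):

```python
n, c = 10_000, 2.0
beta = c / n
S, I = 9_000, 50   # a hypothetical (susceptible, infected) state

p = 1.0 - (1.0 - beta) ** I     # infection probability per susceptible node
expected_new = S * p            # exact expected number of new infections
approx_new = beta * S * I       # linearization: about c*S/n per infected node
print(expected_new, approx_new) # the two agree closely when beta*I is small
```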
A key principle of the spread of viruses (or branching processes) is that the number infected will tend to increase if the mean number, $F,$ of new infections caused by each infected node satisfies $F>1.$ <br>

**<SPAN style="BACKGROUND-COLOR: #C0C0C0">Problem 1:</SPAN>** In the simulation with $n$ large you should see the number of infected nodes increase and then decrease at some *turnaround time.* Determine how the turnaround time depends on the fraction of nodes that are susceptible. How does the constant $c$ enter into this relationship? It may be helpful to change the value of $c$ a few times and see how it affects the graph. (You do not need to write code for this problem--you can write your answer in a markdown cell.)

__Answer:__ (Your answer here)

**<SPAN style="BACKGROUND-COLOR: #C0C0C0">End of Problem 1</SPAN>**

**<SPAN style="BACKGROUND-COLOR: #C0C0C0">Problem 2:</SPAN>** We have assumed that an infected node is removed from the population after a single time step. If you were modeling the spread of a tweet, this might be a good assumption: if a tweet doesn't get retweeted immediately, the probability that it is retweeted later is very close to 0. However, with something like a virus, an individual tends to be infected for more than a single day. Suppose $\gamma$ is the probability that an individual who is infected in one time step is removed from the population by the next time step. Then the number of time steps an individual stays infected has the geometric distribution with mean $1/\gamma$.
<ol>
<li> Modify the code above to include $\gamma = 0.25$.
<li> Determine how allowing nodes to remain infected for multiple time steps (according to $\gamma$) changes the answer to the previous problem.
</ol>

```
# Your code here
```

__Answer:__ (Your answer here)

**<SPAN style="BACKGROUND-COLOR: #C0C0C0">End of Problem 2</SPAN>**

If you run your previous code for $n$ larger than 1000, the output should be nearly the same on each run (depending on the choice of $c$ and $\gamma$).
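The geometric-duration claim above can be checked numerically (an illustrative aside, not a lab answer): if removal happens with probability $\gamma$ in each time step, the number of steps a node stays infected is Geometric($\gamma$), with mean $1/\gamma$.

```python
import numpy as np

# A node is removed with probability gamma in each step, so the number of
# steps it stays infected is Geometric(gamma); the sample mean should be
# close to 1/gamma.
rng = np.random.default_rng(0)
gamma = 0.25
durations = rng.geometric(gamma, size=100_000)
print(durations.mean())  # close to 1/gamma = 4.0
```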
In fact, for $n$ large the trajectory should follow the ode (based on the same principles explored in the previous lab):

$\frac{dS}{dt} = -\beta IS $

$\frac{dI}{dt} = \beta IS - \gamma I$

$\frac{dR}{dt} = \gamma I$

The ode is derived by considering the expected value of the Markov process at time $t+1,$ given the state is $(S,I,R)$ at time $t.$ Specifically, each of the $I$ infected nodes has a chance to infect each of the $S$ susceptible nodes, yielding an expected number of newly infected nodes $\beta I S.$ (This overcounts the expected number of infections, because a node can simultaneously be infected by two of its neighbors, but the extent of overcounting is small if $\beta$ is small.) Those nodes cause a decrease in $S$ and an increase in $I.$ Similarly, the expected number of infected nodes becoming removed is $\gamma I.$

**<SPAN style="BACKGROUND-COLOR: #C0C0C0">Problem 3:</SPAN>** Use the odeint method (as in Lab 11) to integrate this three-dimensional ode and graph the results vs. time. Try to match the plot you generated for the previous problem for the parameter values $n=1000,$ $\gamma = 0.25,$ and $\beta=c/n$ with $c=2.0.$ To get the best match, plot your solution for about the same length of time as the stochastic simulation takes.

```
# Your code here
```

**<SPAN style="BACKGROUND-COLOR: #C0C0C0">End of Problem 3</SPAN>**

The above simulations and calculations did not involve graph structure. The following code simulates the Reed Frost model for a geometric random graph (we encountered such graphs in Lab 7). Since each node has only a few neighbors, we no longer scale the parameter $\beta$ down with the total number of nodes.
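For reference, the general odeint pattern for a three-dimensional system such as the SIR ode above looks like the following. The parameter values here are illustrative, not the values asked for in Problem 3.

```python
import numpy as np
from scipy.integrate import odeint

# Right-hand side of the SIR ode: returns (dS/dt, dI/dt, dR/dt).
def sir(x, t, beta, gamma):
    S, I, R = x
    return [-beta * I * S, beta * I * S - gamma * I, gamma * I]

n, c, gamma = 500, 1.5, 0.5      # illustrative values only
beta = c / n
times = np.linspace(0, 60, 200)
traj = odeint(sir, [n - 5, 5, 0], times, args=(beta, gamma))
print(traj[-1])                  # final (S, I, R); S + I + R stays equal to n
```

The columns of `traj` can then be plotted against `times` exactly as in the stochastic simulation above.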
```
## Reed Frost simulation over a random geometric graph
## (used at beginning of graph section in Lab 7)
## X is no longer Markov; the state of the network is comprised of the states of all nodes
d=0.16   # distance threshold, pairs of nodes within distance d are connected by an edge
G=nx.random_geometric_graph(100,d)   # 100 nodes in unit square, distance threshold d determines edges
# position is stored as node attribute data for random_geometric_graph, pull it out for plotting
pos=nx.get_node_attributes(G,'pos')
######################################
def my_display(t, X, show):
    """
    The function puts numbers of nodes in states S,I,R into X[t,:]
    If (show==1 and no nodes are infected) or if show==2, G is plotted with node colors for S,I,R.
    Returns value 0 if no nodes are infected
    """
    susceptible=[]
    infected=[]
    removed=[]
    for u in G.nodes():
        if G.nodes[u]['state']=='S':
            susceptible.append(u)
        elif G.nodes[u]['state']=='I':
            infected.append(u)
        elif G.nodes[u]['state']=='R':
            removed.append(u)
    X[t,0] = np.size(susceptible)
    X[t,1] = np.size(infected)
    X[t,2] = np.size(removed)
    # show: 0=don't graph, 1=show graph once at end, 2=show graph after each iteration
    if (show==1 and X[t,1]==0) or show==2:
        print ("Nodes infected at time",t,":",infected)
        plt.figure(figsize=(8,8))
        nx.draw_networkx_edges(G,pos,alpha=0.4)   # All edges are drawn; alpha specifies edge transparency
        nx.draw_networkx_nodes(G,pos,nodelist=susceptible, node_size=80, node_color='g')
        nx.draw_networkx_nodes(G,pos,nodelist=infected, node_size=80, node_color='r')
        nx.draw_networkx_nodes(G,pos,nodelist=removed, node_size=80, node_color='b')
        plt.xlim(-0.05,1.05)
        plt.ylim(-0.05,1.05)
        plt.axis('off')
        plt.show()
    if X[t,1]==0:
        return 0   # No infected nodes
    else:
        return 1   # At least one node is infected
#####################################
beta=0.3
gamma=.5
T = 40
X = np.zeros((T,3))
print ("Infection probability parameter beta=", beta)
for u in G.nodes():   # Set the state of each node to susceptible
    G.nodes[u]['state']='S'
G.nodes[0]['state']='I'   # Change state of node 0 to infected
t=0
show=2   # show: 0=don't graph, 1=show graph once at end, 2=show graph after each iteration
while t<T and my_display(t, X, show):   # Plot graph, fill in X[t,:], and loop while some node is infected
    for u in G.nodes():   # This loop counts the number of infected neighbors of each node
        G.nodes[u]['num_infected_nbrs']=0
        for v in G.neighbors(u):
            if G.nodes[v]['state']=='I':
                G.nodes[u]['num_infected_nbrs']+=1
    for u in G.nodes():   # This loop updates node states
        if G.nodes[u]['state']=='I' and np.random.random() < gamma:
            G.nodes[u]['state']='R'
        elif G.nodes[u]['state']=='S' and np.random.random() > np.power(1.0-beta,G.nodes[u]['num_infected_nbrs']):
            G.nodes[u]['state']='I'
    t=t+1
plt.figure()
plt.plot(X[0:t,0], 'g', label='susceptible')
plt.plot(X[0:t,1], 'r', label='infected')
plt.plot(X[0:t,2], 'b', label='removed')
plt.title('SIR Model')
plt.xlabel('time')
plt.ylabel('number of nodes')
plt.legend()
###################################
```

Now let's try simulating the spread of a virus over a network obtained from real-world data. Think of each node as a blog. Its neighbors are all the other blogs that it links to. Additionally, each blog has a value (0 or 1) which represents a political party, so you can imagine a network with two large clusters (one for each party) and a smaller number of links going between the clusters. Specifically, we load the graph from the file polblogs.gml. (This data can be used freely, though its source should be cited: Lada A. Adamic and Natalie Glance, "The political blogosphere and the 2004 US Election", in Proceedings of the WWW-2005 Workshop on the Weblogging Ecosystem (2005).) It may take a while to load the graph, so we put the loading code in a cell by itself so that you only need to load the graph once.

```
### Load G from polblogs.gml file and convert from directed to undirected graph. May take 20 seconds.
G = nx.read_gml('polblogs.gml')   # node labels are web addresses (as unicode strings)
G=G.to_undirected(reciprocal=False)
for u in G.nodes():   # Copy node labels (i.e. the urls of websites) to url values
    G.nodes[u]['url']= u
G=nx.convert_node_labels_to_integers(G)   # Replace node labels by numbers 0,1,2, ...
print ("G loaded from polblogs.gml and converted to undirected graph")

# Here are some methods for exploring properties of the graph
# uncomment next line to see attributes of all nodes of G
#print (G.nodes(data=True))   # note, for example, that node 1 has url rightrainbow.com
# uncomment next line to see (node,degree) pairs sorted by decreasing degree
# (requires: from operator import itemgetter)
#print (sorted(G.degree(),key=itemgetter(1),reverse=True))
# uncomment to see the neighbors of node 6
#print (list(G.neighbors(6)))

######### Simulate Reed Frost dynamics over undirected graph G, assuming G is loaded
def my_count(t, X):
    """
    The function puts numbers of nodes in states S,I,R into X[t,:]
    Returns value 0 if no nodes are infected
    """
    susceptible=[]
    infected=[]
    removed=[]
    for u in G.nodes():
        if G.nodes[u]['state']=='S':
            susceptible.append(u)
        elif G.nodes[u]['state']=='I':
            infected.append(u)
        elif G.nodes[u]['state']=='R':
            removed.append(u)
    X[t,0] = np.size(susceptible)
    X[t,1] = np.size(infected)
    X[t,2] = np.size(removed)
    if X[t,1]==0:
        return 0   # No infected nodes
    else:
        return 1   # At least one node is infected
#####################################
beta=0.3
gamma=.5
T = 40
X = np.zeros((T,3))
print ("Infection probability parameter beta=", beta)
for u in G.nodes():   # Set the state of each node to susceptible
    G.nodes[u]['state']='S'
G.nodes[1]['state']='I'   # Change state of node 1 to infected
t=0
while t<T and my_count(t, X):   # Fill in X[t,:], and loop while some node is infected
    for u in G.nodes():   # This loop counts the number of infected neighbors of each node
        G.nodes[u]['num_infected_nbrs']=0
        for v in G.neighbors(u):
            if G.nodes[v]['state']=='I':
                G.nodes[u]['num_infected_nbrs']+=1
    for u in G.nodes():   # This loop updates node states
        if G.nodes[u]['state']=='I' and np.random.random() < gamma:
            G.nodes[u]['state']='R'
        elif G.nodes[u]['state']=='S' and np.random.random() > np.power(1.0-beta,G.nodes[u]['num_infected_nbrs']):
            G.nodes[u]['state']='I'
    t=t+1
plt.figure()
plt.plot(X[0:t,0], 'g', label='susceptible')
plt.plot(X[0:t,1], 'r', label='infected')
plt.plot(X[0:t,2], 'b', label='removed')
plt.title('SIR Model')
plt.xlabel('time')
plt.ylabel('number of nodes')
plt.legend()
```

Run the code above a few times to see the variation. This graph has a much larger variation in node degrees than the random geometric graphs we simulated earlier.<br><br>

**<SPAN style="BACKGROUND-COLOR: #C0C0C0">Problem 4:</SPAN>**

1. Adapt the code in the previous cell to run N=100 times and calculate the average number of susceptible nodes remaining after no infected nodes are left. Also, to get an idea of how accurately your average predicts the true mean, compute the sample standard deviation divided by sqrt(N). See <A href=http://en.wikipedia.org/wiki/Standard_deviation#Corrected_sample_standard_deviation> wikipedia </A> for definitions of the sample mean and sample standard deviation (use either the corrected or uncorrected version of the sample standard deviation), or use numpy.mean() and numpy.std(). Dividing the standard deviation of the samples by sqrt(N) estimates the standard deviation of your estimate of the mean. So if you were to increase N, your observed standard deviation wouldn't change by much, but your observed mean would become more accurate.
2. Now, you must let node 1 be initially infected, but you may remove ten carefully selected nodes before starting the simulations. Try to think of a good choice of which nodes to remove so as to best reduce the number of nodes that become infected.
(You could use the method G.remove_node(n) to remove node $n$ from the graph $G$, but it runs faster to just initialize the state variable of the removed nodes to R and leave the nodes in $G.$) Then again compute the mean and estimated accuracy as before, for the number of nodes that are susceptible at the end of the simulation runs. Explain the reasoning you used. Ideally you should be able to increase the number of remaining susceptible nodes by at least ten percent for this example.

```
# Your code here
```

__Answer:__ (Your answer here)

**<SPAN style="BACKGROUND-COLOR: #C0C0C0">End of Problem 4</SPAN>**

```
### LEFTOVER CODE: READ IF INTERESTED, BUT THERE IS NO LAB QUESTION FOR THIS
### Each node of the graph G loaded from polblogs.gml has a value,
### either 0 or 1, indicating whether the node corresponds to
### a politically liberal website or a politically conservative website.
### For fun, this code does the Reed Frost simulation (without gamma)
### and breaks down each of the S,I,R counts into the two values.
### An idea was to see if we could cluster the nodes by infecting one
### node and then seeing if other nodes in the same cluster are more
### likely to be infected. Indeed, in the simulation we see nodes
### with the same value as node 1 getting infected sooner. By the end,
### though, the number infected with the other value catches up.
#####################################
def my_print(t):
    numS=np.array([0,0])
    numI=np.array([0,0])
    numR=np.array([0,0])
    for u in G.nodes():
        if G.nodes[u]['state']=='S':
            numS[G.nodes[u]['value']]+=1
        elif G.nodes[u]['state']=='I':
            numI[G.nodes[u]['value']]+=1
        elif G.nodes[u]['state']=='R':
            numR[G.nodes[u]['value']]+=1
    print ("{0:3d}: {1:5d} {2:5d} {3:5d} {4:5d} {5:5d} {6:5d}"\
           .format(t,numS[0], numS[1], numI[0],numI[1],numR[0],numR[1]))
    if np.sum(numI)==0:
        return 0   # No infected nodes
    else:
        return 1   # At least one node is infected
#####################################
beta=0.3
print ("Infection probability parameter beta=", beta)
print ("  t  Susceptible  Infected  Removed")
for u in G.nodes():   # Set the state of each node to susceptible
    G.nodes[u]['state']='S'
G.nodes[1]['state']='I'   # Change state of node 1 to infected
t=0
while my_print(t):   # Print counts and go through the loop while some node is infected
    for u in G.nodes():   # This loop counts the number of infected neighbors of each node
        G.nodes[u]['num_infected_nbrs']=0
        for v in G.neighbors(u):
            if G.nodes[v]['state']=='I':
                G.nodes[u]['num_infected_nbrs']+=1
    for u in G.nodes():   # This loop updates node states
        if G.nodes[u]['state']=='I':
            G.nodes[u]['state']='R'
        elif G.nodes[u]['state']=='S' and np.random.random() > np.power(1.0-beta,G.nodes[u]['num_infected_nbrs']):
            G.nodes[u]['state']='I'
    t=t+1
print ("finished")
```

## Lab Questions:

For this week's lab, please answer all questions 1-4.

<div class="alert alert-block alert-warning">

## Academic Integrity Statement ##

By submitting the lab with this statement, you declare you have written up the lab entirely by yourself, including both code and markdown cells. You also agree that you should not share your code with anyone else. Any violation of the academic integrity requirement may cause an academic integrity report to be filed that could go into your student record.
See <a href="https://provost.illinois.edu/policies/policies/academic-integrity/students-quick-reference-guide-to-academic-integrity/">Students' Quick Reference Guide to Academic Integrity</a> for more information.
### Generating names with a character-level RNN

In this notebook we follow up on the previous notebook, where we classified the nationality of names with a character-level RNN. This time around we are going to generate names using a character-level RNN. Example: _given a nationality and three starting characters, we want to generate some names based on those characters_.

We will be following [this pytorch tutorial](https://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html). The difference between this notebook and the previous one is that instead of predicting the class a name belongs to, we output one letter at a time until we have generated a complete name. This can be done at the word level, but we will do it at the character level in our case.

### Data preparation

The dataset that we are going to use was downloaded [here](https://download.pytorch.org/tutorial/data.zip). In this dataset, the nationality is the file name, and inside each file are the names that belong to that nationality. I've uploaded this dataset to my google drive so that we can load it easily.

### Mounting the drive

```
from google.colab import drive
drive.mount('/content/drive')

data_path = '/content/drive/My Drive/NLP Data/names-dataset/names'
```

### Imports

```
from __future__ import unicode_literals, print_function, division
import os, string, unicodedata, random
import torch
from torch import nn
from torch.nn import functional as F
torch.__version__

all_letters = string.ascii_letters + " .,;'-"
n_letters = len(all_letters) + 1  # Plus EOS marker
```

A function that converts all unicode characters to ASCII.
```
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn' and c in all_letters
    )

def read_lines(filename):
    with open(filename, encoding='utf-8') as some_file:
        return [unicodeToAscii(line.strip()) for line in some_file]

# Build the category_lines dictionary, a list of lines per category
category_lines = {}
all_categories = []
for filename in os.listdir(data_path):
    category = filename.split(".")[0]
    all_categories.append(category)
    lines = read_lines(os.path.join(data_path, filename))
    category_lines[category] = lines

n_categories = len(all_categories)
print('# categories:', n_categories, all_categories)
```

### Creating the Network

This network extends the one from the previous notebook with an extra argument for the category tensor, which is concatenated along with the others. The category tensor is a one-hot vector, just like the letter input. We output the most probable letter and use it as the input at the next step.

![img](https://i.imgur.com/jzVrf7f.png)

```
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size)
        self.o2o = nn.Linear(hidden_size + output_size, output_size)
        self.dropout = nn.Dropout(0.1)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, category, input, hidden):
        input_combined = torch.cat((category, input, hidden), 1)
        hidden = self.i2h(input_combined)
        output = self.i2o(input_combined)
        output_combined = torch.cat((hidden, output), 1)
        output = self.o2o(output_combined)
        output = self.dropout(output)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, self.hidden_size)
```

### Training

First of all, helper functions to get random pairs of (category, line):

```
# Random item from a list
def randomChoice(l):
    return l[random.randint(0, len(l) - 1)]

# Get a random category and random line from that category
def randomTrainingPair():
    category = randomChoice(all_categories)
    line = randomChoice(category_lines[category])
    return category, line

cate, line = randomTrainingPair()  # note: the returned pair is (category, line)
cate, line
```

For each timestep (that is, for each letter in a training word) the inputs of the network will be ``(category, current letter, hidden state)`` and the outputs will be ``(next letter, next hidden state)``. So for each training set, we'll need the category, a set of input letters, and a set of output/target letters.

Since we are predicting the next letter from the current letter for each timestep, the letter pairs are groups of consecutive letters from the line - e.g. for `"ABCD<EOS>"` we would create ("A", "B"), ("B", "C"), ("C", "D"), ("D", "EOS").

![img](https://i.imgur.com/JH58tXY.png)

The category tensor is a one-hot tensor of size `<1 x n_categories>`. When training we feed it to the network at every timestep - this is a design choice; it could have been included as part of the initial hidden state or some other strategy.

```
def category_tensor(category):
    li = all_categories.index(category)
    tensor = torch.zeros(1, n_categories)
    tensor[0][li] = 1
    return tensor

def input_tensor(line):
    tensor = torch.zeros(len(line), 1, n_letters)
    for li in range(len(line)):
        letter = line[li]
        tensor[li][0][all_letters.find(letter)] = 1
    return tensor

def target_tensor(line):
    letter_indexes = [all_letters.find(line[li]) for li in range(1, len(line))]
    letter_indexes.append(n_letters - 1)  # EOS
    return torch.LongTensor(letter_indexes)
```

For convenience during training we'll make a `randomTrainingExample` function that fetches a random (category, line) pair and turns them into the required (category, input, target) tensors.
```
# Make category, input, and target tensors from a random category, line pair
def randomTrainingExample():
    category, line = randomTrainingPair()
    category_t = category_tensor(category)
    input_line_tensor = input_tensor(line)
    target_line_tensor = target_tensor(line)
    return category_t, input_line_tensor, target_line_tensor
```

### Training the Network

In contrast to classification, where only the last output is used, we are making a prediction at every step, so we are calculating loss at every step. The magic of autograd allows you to simply sum these losses at each step and call backward at the end.

```
criterion = nn.NLLLoss()
learning_rate = 0.0005

def train(category_tensor, input_line_tensor, target_line_tensor):
    target_line_tensor.unsqueeze_(-1)
    hidden = rnn.initHidden()
    rnn.zero_grad()
    loss = 0
    for i in range(input_line_tensor.size(0)):
        output, hidden = rnn(category_tensor, input_line_tensor[i], hidden)
        l = criterion(output, target_line_tensor[i])
        loss += l
    loss.backward()
    for p in rnn.parameters():
        p.data.add_(p.grad.data, alpha=-learning_rate)
    return output, loss.item() / input_line_tensor.size(0)
```

To keep track of how long training takes I am adding a `time_since(timestamp)` function which returns a human readable string:

```
import time, math

def time_since(since):
    now = time.time()
    s = now - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)
```

Training is business as usual - call train a bunch of times and wait a few minutes, printing the current time and loss every `print_every` examples, and keeping a running average loss per `plot_every` examples in `all_losses` for plotting later.
```
rnn = RNN(n_letters, 128, n_letters)

n_iters = 100000
print_every = 5000
plot_every = 500
all_losses = []
total_loss = 0  # Reset every plot_every iters

start = time.time()
for iter in range(1, n_iters + 1):
    output, loss = train(*randomTrainingExample())
    total_loss += loss
    if iter % print_every == 0:
        print('%s (%d %d%%) %.4f' % (time_since(start), iter, iter / n_iters * 100, loss))
    if iter % plot_every == 0:
        all_losses.append(total_loss / plot_every)
        total_loss = 0
```

### Plotting the losses

* Plotting the historical loss from all_losses shows the network learning:

```
import matplotlib.pyplot as plt

plt.figure()
plt.plot(all_losses)
plt.show()
```

#### Sampling the network

To sample we give the network a letter and ask what the next one is, feed that in as the next letter, and repeat until the `EOS` token.

* Create tensors for input category, starting letter, and empty hidden state
* Create a string output_name with the starting letter
* Up to a maximum output length,
  * Feed the current letter to the network
  * Get the next letter from the highest output, and the next hidden state
  * If the letter is EOS, stop here
  * If a regular letter, add it to output_name and continue
* Return the final name

> Rather than having to give it a starting letter, another strategy would have been to include a "start of string" token in training and have the network choose its own starting letter.
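A sketch of that alternative (hypothetical; the tutorial does not implement it): reserve one extra slot in the alphabet for an SOS symbol and feed its one-hot encoding as the very first input, so that the first sampled output becomes the first letter. The index layout below is an assumption of this sketch.

```python
import string
import torch

# Hypothetical SOS variant: grow the alphabet by one more slot.
all_letters = string.ascii_letters + " .,;'-"
n_letters = len(all_letters) + 2   # + EOS marker + SOS marker
SOS = n_letters - 2                # assumed index for the start-of-string symbol
EOS = n_letters - 1                # EOS stays the last index, as in the tutorial

def sos_tensor():
    # Same <1 x 1 x n_letters> shape as input_tensor applied to a single letter
    t = torch.zeros(1, 1, n_letters)
    t[0][0][SOS] = 1
    return t

# During sampling, the first forward pass would consume sos_tensor()[0]
# instead of input_tensor(start_letter)[0].
print(sos_tensor().shape)
```

Training examples would then pair SOS with the first letter, (SOS, "A"), ("A", "B"), ..., so the network learns which letters begin names in each category.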
```
max_length = 20

# Sample from a category and starting letter
def sample(category, start_letter='A'):
    with torch.no_grad():  # no need to track history in sampling
        category_t = category_tensor(category)
        input = input_tensor(start_letter)
        hidden = rnn.initHidden()
        output_name = start_letter
        for i in range(max_length):
            output, hidden = rnn(category_t, input[0], hidden)
            topv, topi = output.topk(1)
            topi = topi[0][0]
            if topi == n_letters - 1:  # EOS
                break
            else:
                letter = all_letters[topi]
                output_name += letter
                input = input_tensor(letter)
        return output_name

# Get multiple samples from one category and multiple starting letters
def samples(category, start_letters='ABC'):
    for start_letter in start_letters:
        print(sample(category, start_letter))

samples('Russian', 'RUS')
samples('German', 'GER')
samples('Spanish', 'SPA')
samples('Chinese', 'CHI')
```

### Ref

* [pytorch tutorial](https://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html)
* [Understanding LSTM Networks](https://colah.github.io/posts/2015-08-Understanding-LSTMs/)
* [The Unreasonable Effectiveness of Recurrent Neural Networks](https://karpathy.github.io/2015/05/21/rnn-effectiveness/)
# Shashank V. Sonar

## Task 5: Exploratory Data Analysis - Sports

### Step 1: Importing the required Libraries

```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
%matplotlib inline
from sklearn.cluster import KMeans
from sklearn import datasets
import warnings
warnings.filterwarnings("ignore")
import os
import mpl_toolkits
import json
print('Libraries are imported Successfully')
```

### Step 2: Importing the dataset

```
# Reading the deliveries dataset
df_deliveries=pd.read_csv('C:/Users/91814/Desktop/GRIP/Task 5/ipl/deliveries.csv',low_memory=False)
print('Data Read Successfully')
# Displaying the deliveries dataset
df_deliveries
# Reading the matches dataset
df_matches=pd.read_csv('C:/Users/91814/Desktop/GRIP/Task 5/ipl/df_matches.csv',low_memory=False)
print('Data Read Successfully')
# Displaying the matches dataset
df_matches
```

### Step 3: Pre-processing of Data

```
df_matches.head()    # displaying the first five rows of the matches dataset
df_matches.tail()    # displaying the last five rows of the matches dataset
df_matches['team1'].unique()             # displaying team 1
df_matches['team2'].unique()             # displaying team 2
df_deliveries['batting_team'].unique()   # displaying the batting teams
df_deliveries['bowling_team'].unique()   # displaying the bowling teams
# Replacing full team names with short team names in the matches dataset
# ('Rising Pune Supergiant' and 'Rising Pune Supergiants' are the same franchise, both mapped to 'RPS')
df_matches.replace(['Royal Challengers Bangalore', 'Sunrisers Hyderabad', 'Rising Pune Supergiant', 'Mumbai Indians',
                    'Kolkata Knight Riders', 'Gujarat Lions', 'Kings XI Punjab', 'Delhi Daredevils',
                    'Chennai Super Kings', 'Rajasthan Royals', 'Deccan Chargers', 'Kochi Tuskers Kerala',
                    'Pune Warriors', 'Rising Pune Supergiants', 'Delhi Capitals'],
                   ['RCB', 'SRH', 'RPS', 'MI', 'KKR', 'GL', 'KXIP', 'DD', 'CSK', 'RR', 'DC', 'KTK', 'PW', 'RPS', 'DC'],
                   inplace=True)
# Replacing full team names with short team names in the deliveries dataset
df_deliveries.replace(['Royal Challengers Bangalore', 'Sunrisers Hyderabad', 'Rising Pune Supergiant', 'Mumbai Indians',
                       'Kolkata Knight Riders', 'Gujarat Lions', 'Kings XI Punjab', 'Delhi Daredevils',
                       'Chennai Super Kings', 'Rajasthan Royals', 'Deccan Chargers', 'Kochi Tuskers Kerala',
                       'Pune Warriors', 'Rising Pune Supergiants', 'Delhi Capitals'],
                      ['RCB', 'SRH', 'RPS', 'MI', 'KKR', 'GL', 'KXIP', 'DD', 'CSK', 'RR', 'DC', 'KTK', 'PW', 'RPS', 'DC'],
                      inplace=True)
print('Total Matches played:',df_matches.shape[0])
print('\n Venues played at:', df_matches['city'].unique())
print('\n Teams:', df_matches['team1'].unique())
print('Total venues played at:', df_matches['city'].nunique())
print('\n Total umpires:', df_matches['umpire1'].nunique())
print((df_matches['player_of_match'].value_counts()).idxmax(), ': has the most man of the match awards')
print((df_matches['winner'].value_counts()).idxmax(), ': has the highest number of match wins')
df_matches.dtypes
df_matches.nunique()
```

### Full Data Summary

```
df_matches.info()
```

### Statistical Summary of Data

```
df_matches.describe()
```

### Observations

```
# The .csv file has data of IPL matches from the 2008 season to 2019.
# The biggest margin of victory for a team batting first (win by runs) is 146.
# The biggest victory of a team batting second (win by wickets) is by 10 wickets.
# 75% of the victorious teams that batted first won by a margin of 19 runs.
# 75% of the victorious teams that batted second won by a margin of 6 wickets.
# There were 756 IPL matches hosted from 2008 to 2019.
```

### Columns in the data

```
df_matches.columns
```

### Getting Unique values of each column

```
for col in df_matches:
    print(df_matches[col].unique())
```

### Finding out Null values in Each Column

```
df_matches.isnull().sum()
```

### Dropping Columns having a significant number of Null values

```
df_matches = df_matches.drop(columns=['umpire3'])   # assign the result so the drop takes effect
```

### Verification of Dropped Column

```
df_matches
df_matches.isnull().sum()
df_matches.fillna(0,inplace=True)
df_matches
df_matches.isnull().sum()
# We have successfully replaced the Null values with zeros
df_deliveries.dtypes
df_deliveries.shape
df_deliveries.info()
df_deliveries.describe()
df_deliveries.columns
```

### Counting the Null Values in the data set

```
df_deliveries.isnull().sum()
```

### Total number of null values in the dataset

```
df_deliveries.isnull().sum().sum()
```

### Step 4: Analysing the data

### Which team won by the maximum runs?

```
df_matches.iloc[df_matches['win_by_runs'].idxmax()]
```

### Which team won by the maximum wickets?

```
df_matches.iloc[df_matches['win_by_wickets'].idxmax()]
```

### Which team won by the minimum margin of runs?

```
df_matches.iloc[df_matches[df_matches['win_by_runs'].ge(1)].win_by_runs.idxmin()]
```

### Which team won by the minimum wickets?
```
df_matches.iloc[df_matches[df_matches['win_by_wickets'].ge(1)].win_by_wickets.idxmin()]
len(df_matches['season'].unique())
df_deliveries.fillna(0,inplace=True)
df_deliveries
df_deliveries.isnull().sum()
```

### The team with the most wins per season

```
teams_per_season =df_matches.groupby('season')['winner'].value_counts()
teams_per_season
"""
for i, w in teams_per_season.iteritems():
    print(i, w)
for items in teams_per_season.iteritems():
    print(items)
"""
year = 2008
win_per_season_df_matches=pd.DataFrame(columns=['year', 'team', 'wins'])
for items in teams_per_season.iteritems():
    if items[0][0]==year:   # the first entry seen for each season has the most wins
        print(items)
        win_series =pd.DataFrame({
            'year': [items[0][0]],
            'team': [items[0][1]],
            'wins': [items[1]]
        })
        win_per_season_df_matches = win_per_season_df_matches.append(win_series)
        year += 1
win_per_season_df_matches
```

### Step 5: Data Visualisation

```
venue_ser=df_matches['venue'].value_counts()
venue_df_matches =pd.DataFrame(columns=['venue', 'matches'])
for items in venue_ser.iteritems():
    temp_df_matches =pd.DataFrame({
        'venue':[items[0]],
        'matches':[items[1]]
    })
    venue_df_matches = venue_df_matches.append(temp_df_matches,ignore_index=True)
```

### IPL Venues

```
plt.title('IPL Venues')
sns.barplot(x='matches', y='venue', data=venue_df_matches);
```

### Number of Matches played per venue

```
venue_df_matches
```

### The most successful IPL team

```
team_wins_ser= df_matches['winner'].value_counts()
team_wins_df_matches=pd.DataFrame(columns=['team', 'wins'])
for items in team_wins_ser.iteritems():
    temp_df1 =pd.DataFrame({
        'team':[items[0]],
        'wins':[items[1]]
    })
    team_wins_df_matches= team_wins_df_matches.append(temp_df1,ignore_index=True)
```

### Finding the most successful IPL team

```
team_wins_df_matches
```

### IPL Victories by team

```
plt.title('Total victories of IPL Teams')
sns.barplot(x='wins', y='team', data=team_wins_df_matches, palette='ocean_r');
```

### Most valuable players

```
mvp_ser=df_matches['player_of_match'].value_counts()   # renamed from mpv_ser; the loop below uses mvp_ser
mvp_10_df_matches=pd.DataFrame(columns=['player','wins'])
count = 0
for items in mvp_ser.iteritems():
    if count>9:
        break
    else:
        temp_df2=pd.DataFrame({
            'player':[items[0]],
            'wins':[items[1]]
        })
        mvp_10_df_matches =mvp_10_df_matches.append(temp_df2,ignore_index=True)
        count += 1
```

### Top 10 Most valuable players

```
mvp_10_df_matches
plt.title("Top IPL Players")
sns.barplot(x='wins', y='player', data=mvp_10_df_matches, palette='cool')
```

### Team that won the most tosses

```
toss_ser =df_matches['toss_winner'].value_counts()
toss_df_matches=pd.DataFrame(columns=['team','wins'])
for items in toss_ser.iteritems():
    temp_df3=pd.DataFrame({
        'team':[items[0]],
        'wins':[items[1]]
    })
    toss_df_matches = toss_df_matches.append(temp_df3,ignore_index=True)
```

### Number of toss wins per team

```
toss_df_matches
```

### Teams that have won the most tosses

```
plt.title('Toss wins by team')
sns.barplot(x='wins', y='team', data=toss_df_matches, palette='Dark2')
```

### Observations

```
# Mumbai Indians has won the most tosses (till 2019) in IPL history.
```

### Number of matches won by team

```
plt.figure(figsize=(18, 10))
sns.countplot(x='winner', data=df_matches, palette='cool')
plt.title("Number of matches won by team", fontsize=20)
plt.xticks(rotation=50)
plt.xlabel("Teams", fontsize=15)
plt.ylabel("No. of wins", fontsize=15)
plt.show()

df_matches.result.value_counts()

plt.subplots(figsize=(10, 6))
sns.countplot(x='season', hue='toss_decision', data=df_matches)
plt.show()
```

### Maximum toss winner

```
plt.subplots(figsize=(10, 8))
ax = df_matches['toss_winner'].value_counts().plot.bar(width=0.9, color=sns.color_palette('RdYlGn', 20))
for p in ax.patches:
    ax.annotate(format(p.get_height()), (p.get_x() + 0.15, p.get_height() + 1))
plt.show()
```

### Matches played in each season

```
plt.subplots(figsize=(10, 8))
sns.countplot(x='season', data=df_matches, palette=sns.color_palette('winter'))
plt.show()
```

### Top 10 batsmen in the dataset

```
plt.subplots(figsize=(10, 6))
max_runs = df_deliveries.groupby(['batsman'])['batsman_runs'].sum()
ax = max_runs.sort_values(ascending=False)[:10].plot.bar(width=0.8, color=sns.color_palette('winter_r', 20))
for p in ax.patches:
    ax.annotate(format(p.get_height()), (p.get_x() + 0.1, p.get_height() + 50), fontsize=15)
plt.show()
```

### Toss winners per season

```
plt.figure(figsize=(18, 10))
sns.countplot(x='season', hue='toss_winner', data=df_matches, palette='hsv')
plt.title("Toss winners per season", fontsize=20)
plt.xlabel("Season", fontsize=15)
plt.ylabel("Count", fontsize=15)
plt.show()

df_matches

# Print the winner of each season (the last match of a season is the final)
final_matches = df_matches.drop_duplicates(subset=['season'], keep='last')
final_matches[['season', 'winner']].reset_index(drop=True).sort_values('season')

# Number of titles won by each team
final_matches["winner"].value_counts()

# Toss winner, toss decision, and winner in the final matches
final_matches[['toss_winner', 'toss_decision', 'winner']].reset_index(drop=True)

# Player of the match in each final
final_matches[['winner', 'player_of_match']].reset_index(drop=True)

# Number of fours hit by each team
season_data = df_matches[['id', 'season', 'winner']]
complete_data = df_deliveries.merge(season_data, how='inner', left_on='match_id', right_on='id')
four_data = complete_data[complete_data['batsman_runs'] == 4]
four_data.groupby('batting_team')['batsman_runs'].agg([('runs by fours', 'sum'), ('fours', 'count')])

Toss = final_matches.toss_decision.value_counts()
labels = np.array(Toss.index)
sizes = Toss.values
colors = ['#FFBF00', '#FA8072']
plt.figure(figsize=(10, 8))
plt.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90)
plt.title('Toss Result', fontsize=20)
plt.axis('equal')
plt.show()

# Plot the number of fours hit by each player
batsman_four = four_data.groupby('batsman')['batsman_runs'].agg([('four', 'count')]).reset_index().sort_values('four', ascending=0)
ax = batsman_four.iloc[:10, :].plot('batsman', 'four', kind='bar', color='black')
plt.title("Number of fours hit by players", fontsize=20)
plt.xticks(rotation=50)
plt.xlabel("Player name", fontsize=15)
plt.ylabel("No. of fours", fontsize=15)
plt.show()

# Plot the number of fours hit in each season
ax = four_data.groupby('season')['batsman_runs'].agg([('four', 'count')]).reset_index().plot('season', 'four', kind='bar', color='red')
plt.title("Number of fours hit in each season", fontsize=20)
plt.xticks(rotation=50)
plt.xlabel("Season", fontsize=15)
plt.ylabel("No. of fours", fontsize=15)
plt.show()

# Number of sixes hit by each team
six_data = complete_data[complete_data['batsman_runs'] == 6]
six_data.groupby('batting_team')['batsman_runs'].agg([('runs by six', 'sum'), ('sixes', 'count')])

# Plot the number of sixes hit by each player
batsman_six = six_data.groupby('batsman')['batsman_runs'].agg([('six', 'count')]).reset_index().sort_values('six', ascending=0)
ax = batsman_six.iloc[:10, :].plot('batsman', 'six', kind='bar', color='green')
plt.title("Number of sixes hit by players", fontsize=20)
plt.xticks(rotation=50)
plt.xlabel("Player name", fontsize=15)
plt.ylabel("No. of sixes", fontsize=15)
plt.show()

# Plot the number of sixes hit in each season
ax = six_data.groupby('season')['batsman_runs'].agg([('six', 'count')]).reset_index().plot('season', 'six', kind='bar', color='blue')
plt.title("Number of sixes hit in each season", fontsize=20)
plt.xticks(rotation=50)
plt.xlabel("Season", fontsize=15)
plt.ylabel("No. of sixes", fontsize=15)
plt.show()

# Number of matches in which each batsman was dismissed
No_Matches_player = df_deliveries[["match_id", "player_dismissed"]]
No_Matches_player = No_Matches_player.groupby("player_dismissed")["match_id"].count().reset_index().sort_values(by="match_id", ascending=False).reset_index(drop=True)
No_Matches_player.columns = ["batsman", "No_of Matches"]
No_Matches_player.head(10)

# Dismissals in the IPL
plt.figure(figsize=(18, 10))
ax = sns.countplot(x=df_deliveries.dismissal_kind)
plt.title("Dismissals in IPL", fontsize=20)
plt.xlabel("Dismissal kind", fontsize=15)
plt.ylabel("Count", fontsize=15)
plt.xticks(rotation=50)
plt.show()

wicket_data = df_deliveries.dropna(subset=['dismissal_kind'])
wicket_data = wicket_data[~wicket_data['dismissal_kind'].isin(['run out', 'retired hurt', 'obstructing the field'])]

# The bowlers with the most wickets in the IPL
wicket_data.groupby('bowler')['dismissal_kind'].agg(['count']).reset_index().sort_values('count', ascending=False).reset_index(drop=True).iloc[:10, :]
```

### Conclusion

- The seasons with the most matches were 2013, 2014, and 2015.
- Mumbai Indians won the most titles: 4 of the 12 finals.
- Teams that bowl first have a higher chance of winning than teams that bat first.
- After winning the toss, most teams choose to field first.
- In finals, teams that choose to field first win more often than teams that bat first.
- In finals, most teams that win the toss choose to field first.
- The top player-of-the-match winners are CH Gayle and AB de Villiers.
- Interestingly, in 9 of the 12 IPL finals, the team that won the toss also won the title.
- The player with the most fours is Shikhar Dhawan.
- The player with the most sixes is CH Gayle.
- The leading run scorers in the IPL are Virat Kohli, SK Raina, and RG Sharma.
- The most common dismissal kind is caught.
- The bowler with the most wickets is Harbhajan Singh.
- The players with the most matches played are SK Raina and RG Sharma.
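The toss observation about the finals can be checked directly with a couple of lines of pandas. A minimal sketch on a toy stand-in frame, since the real `final_matches` (built earlier in the notebook, one row per season's final) is not reproduced here; the column names match those used above:

```python
import pandas as pd

# Toy stand-in for `final_matches` (one row per season's final)
final_matches = pd.DataFrame({
    'season':      [2008, 2009, 2010, 2011],
    'toss_winner': ['A', 'B', 'C', 'D'],
    'winner':      ['A', 'C', 'C', 'D'],
})

# How often the toss winner also won the final
won_after_toss = (final_matches['toss_winner'] == final_matches['winner']).sum()
print(won_after_toss, 'of', len(final_matches), 'finals were won by the toss winner')
```

Running the same comparison on the real `final_matches` frame reproduces the 9-out-of-12 figure quoted in the conclusions.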
# Random Signals

*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*

## Independent Processes

Independence, like uncorrelatedness and orthogonality, is a desirable property of random signals in many applications of statistical signal processing. The concept of independence is introduced in the following, together with a discussion of its links to uncorrelatedness and orthogonality.

### Definition

Two stochastic events are said to be [independent](https://en.wikipedia.org/wiki/Independence_(probability_theory%29) if the probability of occurrence of one event is not affected by the occurrence of the other; more specifically, if their joint probability equals the product of their individual probabilities. In terms of the bivariate probability density function (PDF) of two continuous-amplitude real-valued random processes $x[k]$ and $y[k]$ this reads

\begin{equation}
p_{xy}(\theta_x, \theta_y, k_x, k_y) = p_x(\theta_x, k_x) \cdot p_y(\theta_y, k_y)
\end{equation}

where $p_x(\theta_x, k_x)$ and $p_y(\theta_y, k_y)$ denote the univariate ([marginal](https://en.wikipedia.org/wiki/Marginal_distribution)) PDFs of the random processes at the time instants $k_x$ and $k_y$, respectively. The bivariate PDF of two independent random processes is given by the multiplication of their univariate PDFs.

It follows that the [second-order ensemble average](ensemble_averages.ipynb#Second-Order-Ensemble-Averages) for a linear mapping is given as

\begin{equation}
E\{ x[k_x] \cdot y[k_y] \} = E\{ x[k_x] \} \cdot E\{ y[k_y] \}
\end{equation}

The linear second-order ensemble average of two independent random signals is equal to the multiplication of their linear first-order ensemble averages.
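The factorization of the second-order average can be checked empirically by drawing two signals from independent generators. A small numpy sketch (the distributions and sample size are arbitrary choices, not taken from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Two independently generated random signals
x = rng.normal(loc=1.0, scale=1.0, size=N)
y = rng.uniform(-1.0, 2.0, size=N)

lhs = np.mean(x * y)            # estimate of E{x y}
rhs = np.mean(x) * np.mean(y)   # estimate of E{x} E{y}
print(lhs, rhs)                 # both close to 1.0 * 0.5 = 0.5
```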
For jointly wide-sense stationary (WSS) processes, the bivariate PDF depends only on the difference $\kappa = k_x - k_y$ of the time instants. Hence, two jointly WSS random signals are independent if

\begin{equation}
\begin{split}
p_{xy}(\theta_x, \theta_y, \kappa) &= p_x(\theta_x, k_x) \cdot p_y(\theta_y, k_x - \kappa) \\
&= p_x(\theta_x) \cdot p_y(\theta_y, \kappa)
\end{split}
\end{equation}

The above bivariate PDF is rewritten using the definition of [conditional probabilities](https://en.wikipedia.org/wiki/Conditional_probability) in order to specialize the definition of independence to one WSS random signal $x[k]$

\begin{equation}
p_{xy}(\theta_x, \theta_y, \kappa) = p_{y|x}(\theta_x, \theta_y, \kappa) \cdot p_x(\theta_x)
\end{equation}

where $p_{y|x}(\theta_x, \theta_y, \kappa)$ denotes the conditional probability that $y[k - \kappa]$ takes the amplitude value $\theta_y$ under the condition that $x[k]$ takes the amplitude value $\theta_x$. Under the assumption that $y[k-\kappa] = x[k-\kappa]$, and substituting $\theta_x$ and $\theta_y$ by $\theta_1$ and $\theta_2$, independence for one random signal is defined as

\begin{equation}
p_{xx}(\theta_1, \theta_2, \kappa) = \begin{cases} p_x(\theta_1) \cdot \delta(\theta_2 - \theta_1) & \text{for } \kappa = 0 \\ p_x(\theta_1) \cdot p_x(\theta_2, \kappa) & \text{for } \kappa \neq 0 \end{cases}
\end{equation}

because for $\kappa = 0$ the conditional probability $p_{x[k]|x[k-\kappa]}(\theta_1, \theta_2, \kappa) = \delta(\theta_2 - \theta_1)$, as this represents a sure event.

The bivariate PDF of an independent random signal is equal to the product of the univariate PDFs of the signal and the time-shifted signal for $\kappa \neq 0$. A random signal for which this condition does not hold shows statistical dependencies between samples. These dependencies can be exploited, for instance, for coding or prediction.
#### Example - Comparison of bivariate PDF and product of marginal PDFs

The following example estimates the bivariate PDF $p_{xx}(\theta_1, \theta_2, \kappa)$ of a WSS random signal $x[k]$ by computing its two-dimensional histogram. The univariate PDFs $p_x(\theta_1)$ and $p_x(\theta_2, \kappa)$ are additionally estimated. Both the estimated bivariate PDF and the product of the two univariate PDFs $p_x(\theta_1) \cdot p_x(\theta_2, \kappa)$ are plotted for different $\kappa$.

```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

N = 10000000  # number of random samples
M = 50  # number of bins for bivariate/marginal histograms


def compute_plot_histograms(kappa):
    # shift signal
    x2 = np.concatenate((x1[kappa:], np.zeros(kappa)))
    # compute bivariate and marginal histograms
    pdf_xx, x1edges, x2edges = np.histogram2d(x1, x2, bins=(M, M),
                                              range=((-1.5, 1.5), (-1.5, 1.5)),
                                              density=True)
    pdf_x1, _ = np.histogram(x1, bins=M, range=(-1.5, 1.5), density=True)
    pdf_x2, _ = np.histogram(x2, bins=M, range=(-1.5, 1.5), density=True)

    # plot results
    fig = plt.figure(figsize=(10, 10))
    plt.subplot(121, aspect='equal')
    plt.pcolormesh(x1edges, x2edges, pdf_xx)
    plt.xlabel(r'$\theta_1$')
    plt.ylabel(r'$\theta_2$')
    plt.title(r'Bivariate PDF $p_{xx}(\theta_1, \theta_2, \kappa)$')
    plt.colorbar(fraction=0.046)

    plt.subplot(122, aspect='equal')
    plt.pcolormesh(x1edges, x2edges, np.outer(pdf_x1, pdf_x2))
    plt.xlabel(r'$\theta_1$')
    plt.ylabel(r'$\theta_2$')
    plt.title(r'Product of PDFs $p_x(\theta_1) \cdot p_x(\theta_2, \kappa)$')
    plt.colorbar(fraction=0.046)

    fig.suptitle(r'Shift $\kappa =$ {:<2.0f}'.format(kappa), y=0.72)
    fig.tight_layout()


# generate signal
x = np.random.normal(size=N)
x1 = np.convolve(x, [1, .5, .3, .7, .3], mode='same')

# compute and plot the PDFs for various shifts
compute_plot_histograms(0)
compute_plot_histograms(2)
compute_plot_histograms(20)
```

**Exercise**

* With the given results, how can you evaluate the independence of the random signal?
* Can the random signal be assumed to be independent?

Solution: According to the definition of independence, the bivariate PDF and the product of the univariate PDFs have to be equal for $\kappa \neq 0$. This is obviously not the case for $\kappa=2$. Hence, the random signal is not independent in a strict sense. However, for $\kappa=20$ the condition for independence is sufficiently fulfilled, considering the statistical uncertainty due to a finite number of samples.

### Independence versus Uncorrelatedness

Two continuous-amplitude real-valued jointly WSS random processes $x[k]$ and $y[k]$ are termed [uncorrelated](correlation_functions.ipynb#Properties) if their cross-correlation function (CCF) is equal to the product of their linear means, $\varphi_{xy}[\kappa] = \mu_x \cdot \mu_y$.

If two random signals are independent, then they are also uncorrelated. This can be proven by introducing the above findings for the linear second-order ensemble average of independent random signals into the definition of the CCF

\begin{equation}
\varphi_{xy}[\kappa] = E \{ x[k] \cdot y[k - \kappa] \} = E \{ x[k] \} \cdot E \{ y[k - \kappa] \} = \mu_x \cdot \mu_y
\end{equation}

where the last equality is a consequence of the assumed wide-sense stationarity. The reverse does not follow from this result: two uncorrelated signals are not, in general, also independent.

The auto-correlation function (ACF) of an [uncorrelated signal](correlation_functions.ipynb#Properties) is given as $\varphi_{xx}[\kappa] = \mu_x^2 + \sigma_x^2 \cdot \delta[\kappa]$.
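A classic numerical counterexample shows that uncorrelatedness does not imply independence: a zero-mean Gaussian signal and its square are fully dependent, yet uncorrelated. A small numpy sketch with synthetic data (not part of the original notebook):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

x = rng.normal(size=N)  # zero-mean, symmetric PDF
y = x**2                # fully determined by x, hence dependent

# Sample CCF at lag 0 compared with the product of the means
phi_xy = np.mean(x * y)
mu_prod = np.mean(x) * np.mean(y)
print(phi_xy, mu_prod)  # both close to 0: x and y are uncorrelated
```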
Introducing the definition of independence into the definition of the ACF yields \begin{equation} \begin{split} \varphi_{xx}[\kappa] &= E \{ x[k] \cdot x[k - \kappa] \} \\ &= \begin{cases} E \{ x^2[k] \} & \text{for } \kappa = 0 \\ E \{ x[k] \} \cdot E \{ x[k - \kappa] \} & \text{for } \kappa \neq 0 \end{cases} \\ &= \begin{cases} \mu_x^2 + \sigma_x^2 & \text{for } \kappa = 0 \\ \mu_x^2 & \text{for } \kappa \neq 0 \end{cases} \\ &= \mu_x^2 + \sigma_x^2 \delta[\kappa] \end{split} \end{equation} where the result for $\kappa = 0$ follows from the bivariate PDF $p_{xx}(\theta_1, \theta_2, \kappa)$ of an independent signal, as derived above. It can be concluded from this result that an independent random signal is also uncorrelated. The reverse, that an uncorrelated signal is independent does not hold in general. ### Independence versus Orthogonality In geometry, two vectors are said to be [orthogonal](https://en.wikipedia.org/wiki/Orthogonality) if their dot product equals zero. This definition is frequently applied to finite-length random signals by interpreting them as vectors. The relation between independence, correlatedness and orthogonality is derived in the following. Let's assume two continuous-amplitude real-valued jointly wide-sense ergodic random signals $x_N[k]$ and $y_M[k]$ with finite lengths $N$ and $M$, respectively. The CCF $\varphi_{xy}[\kappa]$ between both can be reformulated as follows \begin{equation} \begin{split} \varphi_{xy}[\kappa] &= \frac{1}{N} \sum_{k=0}^{N-1} x_N[k] \cdot y_M[k-\kappa] \\ &= \frac{1}{N} < \mathbf{x}_N, \mathbf{y}_M[\kappa] > \end{split} \end{equation} where $<\cdot, \cdot>$ denotes the [dot product](https://en.wikipedia.org/wiki/Dot_product). The $(N+2M-2) \times 1$ vector $\mathbf{x}_N$ is defined as $$\mathbf{x}_N = \left[ \mathbf{0}^T_{(M-1) \times 1}, x[0], x[1], \dots, x[N-1], \mathbf{0}^T_{(M-1) \times 1} \right]^T$$ where $\mathbf{0}_{(M-1) \times 1}$ denotes the zero vector of length $M-1$. 
The $(N+2M-2) \times 1$ vector $\mathbf{y}_M[\kappa]$ is defined as

$$\mathbf{y}_M[\kappa] = \left[ \mathbf{0}^T_{\kappa \times 1}, y[0], y[1], \dots, y[M-1], \mathbf{0}^T_{(N+M-2-\kappa) \times 1} \right]^T$$

It follows from the above definition of orthogonality that two finite-length random signals are orthogonal if their CCF is zero. This implies that at least one of the two signals has to be mean free. It can be concluded further that two independent random signals are also orthogonal and uncorrelated if at least one of them is mean free. The reverse, that orthogonal signals are independent, does not hold in general.

The concept of orthogonality can also be extended to one random signal by setting $\mathbf{y}_M[\kappa] = \mathbf{x}_N[\kappa]$. Since a random signal cannot be orthogonal to itself for $\kappa = 0$, the definition of orthogonality has to be extended for this case. According to the ACF of a mean-free uncorrelated random signal $x[k]$, self-orthogonality may be defined as

\begin{equation}
\frac{1}{N} < \mathbf{x}_N, \mathbf{x}_N[\kappa] > = \begin{cases} \sigma_x^2 & \text{for } \kappa = 0 \\ 0 & \text{for } \kappa \neq 0 \end{cases}
\end{equation}

An independent random signal is also orthogonal if it is zero-mean. The reverse, that an orthogonal signal is independent, does not hold in general.

#### Example - Computation of cross-correlation by dot product

This example illustrates the computation of the CCF by the dot product. First, a function is defined which computes the CCF by means of the dot product

```
def ccf_by_dotprod(x, y):
    N = len(x)
    M = len(y)
    xN = np.concatenate((np.zeros(M-1), x, np.zeros(M-1)))
    yM = np.concatenate((y, np.zeros(N+M-2)))

    return np.fromiter([np.dot(xN, np.roll(yM, kappa)) for kappa in range(N+M-1)], float)
```

Now the CCF is computed using different methods: computation by the dot product and by the built-in correlation function.
The CCF is plotted for the computation by the dot product, as well as the difference (magnitude) between both methods. The resulting difference is in the typical expected range due to numerical inaccuracies. ``` N = 32 # length of signals # generate signals np.random.seed(1) x = np.random.normal(size=N) y = np.convolve(x, [1, .5, .3, .7, .3], mode='same') # compute CCF ccf1 = 1/N * np.correlate(x, y, mode='full') ccf2 = 1/N * ccf_by_dotprod(x, y) kappa = np.arange(-N+1, N) # plot results plt.figure(figsize=(10, 4)) plt.subplot(121) plt.stem(kappa, ccf1) plt.xlabel('$\kappa$') plt.ylabel(r'$\varphi_{xy}[\kappa]$') plt.title('CCF by dot product') plt.grid() plt.subplot(122) plt.stem(kappa, np.abs(ccf1-ccf2)) plt.xlabel('$\kappa$') plt.title('Difference (magnitude)') plt.tight_layout() ``` **Copyright** This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2018*.
# Training Neural Networks

The network we built in the previous part isn't so smart; it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output. For example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and basically any function, given enough data and compute time.

<img src="assets/function_approx.png" width=500px>

At first the network is naive; it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function.

To find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a **loss function** (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems

$$
\large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2}
$$

where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels.

By minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called **gradient descent**. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.

<img src='assets/gradient_descent.png' width=350px>

## Backpropagation

For single layer networks, gradient descent is straightforward to implement.
However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks. Training multilayer networks is done through **backpropagation** which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation. <img src='assets/backprop_diagram.png' width=550px> In the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss. To train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule. $$ \large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2} $$ **Note:** I'm glossing over a few details here that require some knowledge of vector calculus, but they aren't necessary to understand what's going on. We update our weights using this gradient with some learning rate $\alpha$. $$ \large W^\prime_1 = W_1 - \alpha \frac{\partial \ell}{\partial W_1} $$ The learning rate $\alpha$ is set such that the weight update steps are small enough that the iterative method settles in a minimum. 
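The chain-rule product above can be traced by hand for a toy 1-1-1 network. A sketch with made-up numbers (one sigmoid hidden unit, squared-error loss, no biases; purely illustrative, not the notebook's network):

```python
import math

x, y = 0.5, 1.0     # input and target
w1, w2 = 0.8, -0.4  # the two weights

# Forward pass
l1 = w1 * x
s = 1 / (1 + math.exp(-l1))   # sigmoid
l2 = w2 * s
loss = 0.5 * (y - l2) ** 2

# Backward pass: multiply the local gradients along the chain
dloss_dl2 = -(y - l2)
dl2_ds = w2
ds_dl1 = s * (1 - s)          # derivative of the sigmoid
dl1_dw1 = x
dloss_dw1 = dloss_dl2 * dl2_ds * ds_dl1 * dl1_dw1
print(dloss_dw1)
```

The weight update would then be `w1 - alpha * dloss_dw1` for some learning rate `alpha`.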
## Losses in PyTorch Let's start by seeing how we calculate the loss with PyTorch. Through the `nn` module, PyTorch provides losses such as the cross-entropy loss (`nn.CrossEntropyLoss`). You'll usually see the loss assigned to `criterion`. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels. Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss), > This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class. > > The input is expected to contain scores for each class. This means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the *logits* or *scores*. We use the logits because softmax gives you probabilities which will often be very close to zero or one but floating-point numbers can't accurately represent values near zero or one ([read more here](https://docs.python.org/3/tutorial/floatingpoint.html)). It's usually best to avoid doing calculations with probabilities, typically we use log-probabilities. 
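The numerical motivation for log-probabilities can be seen with plain Python floats: a product of many small probabilities underflows to zero, while the sum of their logs stays well within range. (Toy numbers, not from the notebook.)

```python
import math

probs = [1e-8] * 50

prod = 1.0
for p in probs:
    prod *= p  # 1e-400 is below the float64 range, so this underflows

log_prod = sum(math.log(p) for p in probs)  # about -921, perfectly representable

print(prod, log_prod)
```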
```
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms

# Define a transform to normalize the data
# (MNIST images have a single channel, so mean and std are one-element tuples)
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,)),
                              ])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10))

# Define the loss
criterion = nn.CrossEntropyLoss()

# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)

# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)

print(loss)
```

In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).

>**Exercise:** Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss. Note that for `nn.LogSoftmax` and `F.log_softmax` you'll need to set the `dim` keyword argument appropriately. `dim=0` calculates softmax across the rows, so each column sums to 1, while `dim=1` calculates across the columns so each row sums to 1. Think about what you want the output to be and choose `dim` appropriately.
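The effect of the `dim` argument can be illustrated with a small numpy re-implementation of softmax, with `axis` playing the role of `dim` (toy scores, not the MNIST logits):

```python
import numpy as np

def softmax(a, axis):
    e = np.exp(a - a.max(axis=axis, keepdims=True))  # subtract the max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

scores = np.array([[1.0, 2.0, 3.0],
                   [1.0, 1.0, 1.0]])

col_sums = softmax(scores, axis=0).sum(axis=0)  # dim=0: each column sums to 1
row_sums = softmax(scores, axis=1).sum(axis=1)  # dim=1: each row sums to 1
print(col_sums, row_sums)
```

For class probabilities over a batch of shape `(batch, classes)`, you want each row to sum to 1, i.e. `dim=1`.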
```
# TODO: Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

# TODO: Define the loss
criterion = nn.NLLLoss()

### Run this to check your work
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)

# Forward pass, get our logits
logps = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logps, labels)

print(loss)
```

## Autograd

Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, `autograd`, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set `requires_grad = True` on a tensor. You can do this at creation with the `requires_grad` keyword, or at any time with `x.requires_grad_(True)`.

You can turn off gradients for a block of code with the `torch.no_grad()` context manager:

```python
x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
```

Also, you can turn on or off gradients altogether with `torch.set_grad_enabled(True|False)`.

The gradients are computed with respect to some variable `z` with `z.backward()`. This does a backward pass through the operations that created `z`.

```
x = torch.randn(2,2, requires_grad=True)
print(x)

y = x**2
print(y)
```

Below we can see the operation that created `y`, a power operation `PowBackward0`.

```
## grad_fn shows the function that generated this variable
print(y.grad_fn)
```

The autograd module keeps track of these operations and knows how to calculate the gradient for each one.
In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor `y` to a scalar value, the mean.

```
z = y.mean()
print(z)
```

You can check the gradients for `x` and `y`, but they are empty currently.

```
print(x.grad)
```

To calculate the gradients, you need to run the `.backward` method on a tensor, `z` for example. This will calculate the gradient for `z` with respect to `x`

$$
\frac{\partial z}{\partial x} = \frac{\partial}{\partial x}\left[\frac{1}{n}\sum_i^n x_i^2\right] = \frac{x}{2}
$$

```
z.backward()
print(x.grad)
print(x/2)
```

These gradient calculations are particularly useful for neural networks. For training we need the gradients of the weights with respect to the cost. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients with respect to the loss. Once we have the gradients we can make a gradient descent step.

## Loss and Autograd together

When we create a network with PyTorch, all of the parameters are initialized with `requires_grad = True`. This means that when we calculate the loss and call `loss.backward()`, the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. Below you can see an example of calculating the gradients using a backwards pass.

```
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)

logps = model(images)
loss = criterion(logps, labels)

print('Before backward pass: \n', model[0].weight.grad)

loss.backward()

print('After backward pass: \n', model[0].weight.grad)
```

## Training the network!

There's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients.
We get these from PyTorch's [`optim` package](https://pytorch.org/docs/stable/optim.html). For example we can use stochastic gradient descent with `optim.SGD`. You can see how to define an optimizer below.

```
from torch import optim

# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
```

Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:

* Make a forward pass through the network
* Use the network output to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights

Below I'll go through one training step and print out the weights and gradients so you can see how it changes. Note that I have a line of code `optimizer.zero_grad()`. When you do multiple backwards passes with the same parameters, the gradients are accumulated. This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.

```
print('Initial weights - ', model[0].weight)

images, labels = next(iter(trainloader))
images.resize_(64, 784)

# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()

# Forward pass, then backward pass, then update weights
output = model.forward(images)
loss = criterion(output, labels)
loss.backward()
print('Gradient -', model[0].weight.grad)

# Take an update step and view the new weights
optimizer.step()
print('Updated weights - ', model[0].weight)
```

### Training for real

Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature: one pass through the entire dataset is called an *epoch*. So here we're going to loop through `trainloader` to get our training batches.
For each batch, we'll do a training pass where we calculate the loss, do a backwards pass, and update the weights.

>**Exercise:** Implement the training pass for our network. If you implemented it correctly, you should see the training loss drop with each epoch.

```
## Your solution here

model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))
# model.cuda()

criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)

epochs = 5
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        # Flatten MNIST images into a 784 long vector
        images = images.view(images.shape[0], -1)
        # images, labels = images.cuda(), labels.cuda()

        # TODO: Training pass
        # Clear the gradients, do this because gradients are accumulated
        optimizer.zero_grad()

        # Forward pass, then backward pass, then update weights
        logps = model.forward(images)
        loss = criterion(logps, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    else:
        print(f"Training loss: {running_loss/len(trainloader)}")
```

With the network trained, we can check out its predictions.

```
%matplotlib inline
import helper

images, labels = next(iter(trainloader))

img = images[0].view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
    logps = model.forward(img)

# Output of the network are log-probabilities, need to take the exponential for probabilities
ps = torch.exp(logps)
helper.view_classify(img.view(1, 28, 28), ps)
```

Now our network is brilliant. It can accurately predict the digits in our images. Next up you'll write the code for training a neural network on a more complex dataset.
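As an aside, the derivative used in the autograd section above, $\frac{\partial z}{\partial x} = \frac{2x}{n}$ for $z = \mathrm{mean}(x^2)$, can be sanity-checked numerically without any autograd. A minimal NumPy sketch (the array values here are arbitrary, not the notebook's random tensor):

```python
import numpy as np

def z(x):
    # z = mean(x ** 2), the same reduction used with autograd above
    return np.mean(x ** 2)

x = np.array([0.5, -1.0, 2.0, 3.0])
n = x.size

# Analytic gradient: dz/dx_i = 2 * x_i / n
analytic = 2 * x / n

# Finite-difference approximation of the same gradient
eps = 1e-6
numeric = np.zeros_like(x)
for i in range(n):
    bumped = x.copy()
    bumped[i] += eps
    numeric[i] = (z(bumped) - z(x)) / eps

assert np.allclose(analytic, numeric, atol=1e-4)
```

With $n = 4$ elements, $2x/n$ reduces to $x/2$, which is what the `print(x/2)` comparison earlier relies on.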
# Recommendations with IBM

In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform.

You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, ensure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.**

By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.

## Table of Contents

I. [Exploratory Data Analysis](#Exploratory-Data-Analysis)<br>
II. [Rank Based Recommendations](#Rank)<br>
III. [User-User Based Collaborative Filtering](#User-User)<br>
IV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)<br>
V. [Matrix Factorization](#Matrix-Fact)<br>
VI. [Extras & Concluding](#conclusions)

At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import project_tests as t
import pickle

%matplotlib inline

df = pd.read_csv('data/user-item-interactions.csv')
df_content = pd.read_csv('data/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']

# Show df to get an idea of the data
df.head()

# Show df_content to get an idea of the data
df_content.head()
```

### <a class="anchor" id="Exploratory-Data-Analysis">Part I : Exploratory Data Analysis</a>

Use the dictionary and cells below to provide some insight into the descriptive statistics of the data.

`1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article.
```
interaction_per_user = df.groupby('email')['article_id'].count().values

counts, bins, _ = plt.hist(interaction_per_user, 50, cumulative=False)
plt.xlabel('number of interactions per user')
plt.ylabel('counts')
plt.title('counts vs interactions per user')
plt.grid(True)
plt.show()

# max number of user-article interactions by any user
print("The maximum number of user-article interactions by any user is [%d]" % interaction_per_user.max())

# median value of the array of interaction_per_user
print("50 percent of individuals interact with [%d] articles or fewer" % np.median(interaction_per_user))

# Fill in the median and maximum number of user_article interactions below
median_val = 3  # 50% of individuals interact with ____ number of articles or fewer.
max_views_by_user = 364  # The maximum number of user-article interactions by any 1 user is ______.
```

`2.` Explore and remove duplicate articles from the **df_content** dataframe.

```
# Find and explore duplicate articles
print("duplicate articles:")
df_content[df_content.duplicated("article_id", keep="first")]

# Remove any rows that have the same article_id - only keep the first
old_num_rows = df_content.shape[0]
df_content.drop_duplicates("article_id", inplace=True)
print("%d rows are dropped in df_content." % (old_num_rows - df_content.shape[0]))
```

`3.` Use the cells below to find:

**a.** The number of unique articles that have an interaction with a user.
**b.** The number of unique articles in the dataset (whether they have any interactions or not).<br>
**c.** The number of unique users in the dataset. (excluding null values) <br>
**d.** The number of user-article interactions in the dataset.
``` print("The number of unique articles that have an interaction with a user:", np.sum(df.groupby("article_id")['email'].count()>=1)) print("The number of unique articles in the dataset:", df_content.article_id.nunique()) print("The number of unique users in the dataset:", df.email.nunique()) print("The number of user-article interactions in the dataset:", df.shape[0]) unique_articles = 714 # The number of unique articles that have at least one interaction total_articles = 1051 # The number of unique articles on the IBM platform unique_users = 5148 # The number of unique users user_article_interactions = 45993 # The number of user-article interactions ``` `4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below). ``` print("The most viewed article in the dataset as a string with one value following the decimal:", df.groupby("article_id")["email"].count().idxmax()) print("The most viewed article in the dataset was viewed how many times:", df.groupby("article_id")["email"].count().max()) most_viewed_article_id = "1429.0" # The most viewed article in the dataset as a string with one value following the decimal max_views = 937 # The most viewed article in the dataset was viewed how many times? 
## No need to change the code here - this will be helpful for later parts of the notebook # Run this cell to map the user email to a user_id column and remove the email column def email_mapper(): coded_dict = dict() cter = 1 email_encoded = [] for val in df['email']: if val not in coded_dict: coded_dict[val] = cter cter+=1 email_encoded.append(coded_dict[val]) return email_encoded email_encoded = email_mapper() del df['email'] df['user_id'] = email_encoded # show header df.head() ## If you stored all your results in the variable names above, ## you shouldn't need to change anything in this cell sol_1_dict = { '`50% of individuals have _____ or fewer interactions.`': median_val, '`The total number of user-article interactions in the dataset is ______.`': user_article_interactions, '`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user, '`The most viewed article in the dataset was viewed _____ times.`': max_views, '`The article_id of the most viewed article is ______.`': most_viewed_article_id, '`The number of unique articles that have at least 1 rating ______.`': unique_articles, '`The number of unique users in the dataset is ______`': unique_users, '`The number of unique articles on the IBM platform`': total_articles } # Test your dictionary against the solution t.sol_1_test(sol_1_dict) ``` ### <a class="anchor" id="Rank">Part II: Rank-Based Recommendations</a> Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with. `1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below. 
```
def get_top_articles(n, df=df):
    '''
    INPUT:
    n - (int) the number of top articles to return
    df - (pandas dataframe) df as defined at the top of the notebook

    OUTPUT:
    top_articles - (list) A list of the top 'n' article titles

    '''
    # Your code here
    top_idx = get_top_article_ids(n, df=df)
    top_articles = [df[df.article_id == float(x)]["title"].values[0] for x in top_idx]

    return top_articles  # Return the top article titles from df (not df_content)

def get_top_article_ids(n, df=df):
    '''
    INPUT:
    n - (int) the number of top articles to return
    df - (pandas dataframe) df as defined at the top of the notebook

    OUTPUT:
    top_articles - (list) A list of the top 'n' article ids

    '''
    # Your code here
    top_articles = list(df.article_id.value_counts().index[:n])
    top_articles = [str(x) for x in top_articles]

    return top_articles  # Return the top article ids

print(get_top_articles(10))
print(get_top_article_ids(10))

# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)

# Test each of your three lists from above
t.sol_2_test(get_top_articles)
```

### <a class="anchor" id="User-User">Part III: User-User Based Collaborative Filtering</a>

`1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns.

* Each **user** should only appear in each **row** once.
* Each **article** should only show up in one **column**.
* **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column**. It does not matter how many times a user has interacted with the article, all entries where a user has interacted with an article should be a 1.
* **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column**.

Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
```
# create the user-article matrix with 1's and 0's

def create_user_item_matrix(df):
    '''
    INPUT:
    df - pandas dataframe with article_id, title, user_id columns

    OUTPUT:
    user_item - user item matrix

    Description:
    Return a matrix with user ids as rows and article ids on the columns with 1 values where a user
    interacted with an article and a 0 otherwise
    '''
    # Fill in the function here
    # Count interactions per (user, article) pair, pivot to a user x article grid,
    # then binarize: any non-missing count becomes 1, missing becomes 0
    # (note: NaN is truthy in Python, so a plain `1 if x else 0` check would not work here)
    user_item = df.groupby(["user_id", "article_id"])["title"].count().unstack()
    user_item = user_item.notnull().astype(int)

    return user_item  # return the user_item matrix

user_item = create_user_item_matrix(df)

## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
```

`2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users.

Use the tests to test your function.
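As a toy illustration (made-up vectors, not the real data) of why the dot product is a sensible similarity measure here: for binary interaction vectors, the dot product simply counts the articles two users have both interacted with.

```python
import numpy as np

# Three hypothetical users' binary interaction vectors over five articles
user_a = np.array([1, 1, 0, 1, 0])
user_b = np.array([1, 1, 0, 0, 0])  # shares two articles with user_a
user_c = np.array([0, 0, 1, 0, 1])  # shares none with user_a

# Dot product = number of co-interacted articles
assert user_a.dot(user_b) == 2
assert user_a.dot(user_c) == 0
```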
```
def find_similar_users(user_id, user_item=user_item):
    '''
    INPUT:
    user_id - (int) a user_id
    user_item - (pandas dataframe) matrix of users by articles:
                1's when a user has interacted with an article, 0 otherwise

    OUTPUT:
    similar_users - (list) an ordered list where the closest users (largest dot product users)
                    are listed first

    Description:
    Computes the similarity of every pair of users based on the dot product
    Returns an ordered list of user ids
    '''
    # compute similarity of each user to the provided user
    similarity_array = user_item.dot(user_item.loc[user_id]).values

    # sort by similarity
    sorted_idx = np.argsort(similarity_array)[::-1]

    # create list of just the ids
    most_similar_users = user_item.index[sorted_idx]

    # remove the own user's id while preserving the similarity ordering
    # (np.setdiff1d would re-sort the ids and destroy the ranking)
    most_similar_users = [u for u in most_similar_users if u != user_id]

    return most_similar_users  # return a list of the users in order from most to least similar

# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
```

`3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
``` def get_article_names(article_ids, df=df): ''' INPUT: article_ids - (list) a list of article ids df - (pandas dataframe) df as defined at the top of the notebook OUTPUT: article_names - (list) a list of article names associated with the list of article ids (this is identified by the title column) ''' # Your code here article_names = [df[df.article_id==float(x)]["title"].values[0] for x in article_ids] return article_names # Return the article names associated with list of article ids def get_user_articles(user_id, user_item=user_item): ''' INPUT: user_id - (int) a user id user_item - (pandas dataframe) matrix of users by articles: 1's when a user has interacted with an article, 0 otherwise OUTPUT: article_ids - (list) a list of the article ids seen by the user article_names - (list) a list of article names associated with the list of article ids (this is identified by the doc_full_name column in df_content) Description: Provides a list of the article_ids and article titles that have been seen by a user ''' # Your code here user_data = user_item.loc[user_id] article_ids = user_data[user_data>0].index.tolist() article_names = get_article_names(article_ids) article_ids = [str(x) for x in article_ids] return article_ids, article_names # return the ids and names def user_user_recs(user_id, m=10): ''' INPUT: user_id - (int) a user id m - (int) the number of recommendations you want for the user OUTPUT: recs - (list) a list of recommendations for the user Description: Loops through the users based on closeness to the input user_id For each user - finds articles the user hasn't seen before and provides them as recs Does this until m recommendations are found Notes: Users who are the same closeness are chosen arbitrarily as the 'next' user For the user where the number of recommended articles starts below m and ends exceeding m, the last items are chosen arbitrarily ''' # get a list of similar users similar_users = find_similar_users(user_id, user_item=user_item) # add 
    # recommended articles iteratively
    recs = np.array([])
    for similar_user in similar_users:
        # concatenate articles not seen before
        row_data = user_item.loc[similar_user]
        interacted_articles = np.array(row_data[row_data > 0].index)
        interacted_articles = np.setdiff1d(interacted_articles, recs)
        recs = np.concatenate([recs, interacted_articles])
        if len(recs) >= m:
            break
    recs = list(recs[:m])

    return recs  # return your recommendations for this user_id

# Check Results
get_article_names(user_user_recs(1, 10))  # Return 10 recommendations for user 1

# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery', 'use the cloudant-spark connector in python notebook']), "Oops! Your get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery', 'use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])

print("If this is all you see, you passed all of our tests! Nice job!")
```

`4.` Now we are going to improve the consistency of the **user_user_recs** function from above.

* Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.
* Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the **top_articles** function you wrote earlier.
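The two tie-breaking rules above amount to sorting candidates on two keys, similarity first and total interactions second, both descending. A minimal pure-Python sketch with hypothetical numbers:

```python
# (user_id, similarity, num_interactions) for hypothetical neighbors
neighbors = [(10, 5, 12), (42, 7, 3), (7, 5, 30), (3, 7, 9)]

# Sort by similarity, then by number of interactions, both descending
ranked = sorted(neighbors, key=lambda u: (u[1], u[2]), reverse=True)

ranked_ids = [u[0] for u in ranked]
# Users 3 and 42 tie at similarity 7; user 3 wins on 9 > 3 interactions.
assert ranked_ids == [3, 42, 7, 10]
```

In pandas, the same effect comes from sorting a dataframe on the two columns at once with `sort_values(["similarity", "num_interactions"], ascending=False)`.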
```
def get_top_sorted_users(user_id, df=df, user_item=user_item):
    '''
    INPUT:
    user_id - (int)
    df - (pandas dataframe) df as defined at the top of the notebook
    user_item - (pandas dataframe) matrix of users by articles:
                1's when a user has interacted with an article, 0 otherwise

    OUTPUT:
    neighbors_df - (pandas dataframe) a dataframe with:
                    neighbor_id - is a neighbor user_id
                    similarity - measure of the similarity of each user to the provided user_id
                    num_interactions - the number of articles viewed by the user

    Other Details - sort the neighbors_df by the similarity and then by the number of interactions,
    with the highest of each at the top of the dataframe

    '''
    # create data frame
    neighbors_df = pd.DataFrame({"similarity": user_item.dot(user_item.loc[user_id]),
                                 "num_interactions": user_item.apply(lambda x: sum(x), axis=1)},
                                index=user_item.index)

    # drop row at user_id
    neighbors_df = neighbors_df.drop(user_id)

    # sort by "similarity" and then "num_interactions"
    neighbors_df = neighbors_df.sort_values(["similarity", "num_interactions"], ascending=False)

    return neighbors_df  # Return the dataframe specified in the doc_string

def user_user_recs_part2(user_id, m=10):
    '''
    INPUT:
    user_id - (int) a user id
    m - (int) the number of recommendations you want for the user

    OUTPUT:
    recs - (list) a list of recommendations for the user by article id
    rec_names - (list) a list of recommendations for the user by article title

    Description:
    Loops through the users based on closeness to the input user_id
    For each user - finds articles the user hasn't seen before and provides them as recs
    Does this until m recommendations are found

    Notes:
    * Choose the users that have the most total article interactions
      before choosing those with fewer article interactions.
    * Choose the articles with the most total interactions
      before choosing those with fewer total interactions.
    '''
    # get the sorted neighbors dataframe
    neighbors_df = get_top_sorted_users(user_id, df=df, user_item=user_item)

    # get ranked user ids
    similar_users = neighbors_df.index.tolist()

    # get ranked article ids
    recs = np.array([])
    for similar_user in similar_users:
        # obtain articles not seen before
        row_data = user_item.loc[similar_user]
        interacted_articles = np.array(row_data[row_data > 0].index)
        interacted_articles = np.setdiff1d(interacted_articles, recs)

        # sort interacted_articles based on the number of interactions
        if len(interacted_articles) > 0:
            interacted_articles = np.array(df[df.article_id.isin(
                list(interacted_articles))]["article_id"].value_counts().index)

        # concatenate to recs
        recs = np.concatenate([recs, interacted_articles])
        if len(recs) >= m:
            break
    recs = list(recs[:m])

    # get article names
    rec_names = get_article_names(recs, df=df)

    return recs, rec_names

# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
```

`5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.

```
### Tests with a dictionary of results

user1_most_sim = get_top_sorted_users(1).index.tolist()[0]  # Find the user that is most similar to user 1
user131_10th_sim = get_top_sorted_users(131).index.tolist()[9]  # Find the 10th most similar user to user 131

## Dictionary Test Here
sol_5_dict = {
    'The user that is most similar to user 1.': user1_most_sim,
    'The user that is the 10th most similar to user 131': user131_10th_sim,
}

t.sol_5_test(sol_5_dict)
```

`6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain.
Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.

**Provide your response here.**

`get_top_articles()`. Since a new user has not interacted with any article, it is not possible to make recommendations with a collaborative method. What we do have is the number of interactions each article has received across all users, so we can recommend the articles with the most interactions overall. This method does not provide personalized recommendations, as we don't have any information on the new user. A better approach for a new user would be a knowledge-based recommendation that uses prior knowledge about the user, e.g., stated article preferences, so that we could select matching articles accordingly. In this way, we might make better recommendations than the purely rank-based ones.

`7.` Using your existing functions, provide the top 10 recommended articles you would provide for a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.

```
new_user = '0.0'

# What would your recommendations be for this new user '0.0'?  As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to
new_user_recs = get_top_article_ids(10)  # Your recommendations here

assert set(new_user_recs) == set(['1314.0', '1429.0', '1293.0', '1427.0', '1162.0', '1364.0', '1304.0', '1170.0', '1431.0', '1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."

print("That's right! Nice job!")
```

### <a class="anchor" id="Content-Recs">Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED)</a>

Another method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term.
You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns holds content related information.

`1.` Use the function body below to create a content based recommender. Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations.

### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.

```
def make_content_recs():
    '''
    INPUT:

    OUTPUT:

    '''
```

`2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender?

### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.

**Write an explanation of your content based recommendation system here.**

`3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.

### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
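Since this part is optional, here is only a minimal sketch of one possible direction: score articles by word overlap between titles. This is a crude stand-in for a proper TF-IDF similarity over **doc_full_name** or **doc_body**; the catalog below is made up for illustration.

```python
def tokenize(text):
    # Lowercase and split on whitespace; a real recommender would also
    # strip punctuation and remove stop words
    return set(text.lower().split())

def content_scores(query_title, catalog):
    """Rank catalog titles by word overlap with the query title."""
    query = tokenize(query_title)
    scores = {title: len(query & tokenize(title)) for title in catalog}
    return sorted(scores, key=scores.get, reverse=True)

catalog = [
    "deep learning with tensorflow",
    "visualizing data with matplotlib",
    "deep learning for audio",
]
ranked = content_scores("intro to deep learning", catalog)
assert ranked[0] in ("deep learning with tensorflow", "deep learning for audio")
```

Replacing the overlap count with TF-IDF weights and cosine similarity, and combining the content score with the popularity ranking from Part II, would be natural next steps.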
```
# make recommendations for a brand new user

# make recommendations for a user who only has interacted with article id '1427.0'
```

### <a class="anchor" id="Matrix-Fact">Part V: Matrix Factorization</a>

In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.

`1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook.

```
# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.p')

# quick look at the matrix
user_item_matrix.head()
```

`2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different from the lesson.

```
# Perform SVD on the User-Item Matrix Here

u, s, vt = np.linalg.svd(user_item_matrix)  # use the built in to get the three matrices
```

**Provide your response here.**

There are no missing values in user_item_matrix, which is why NumPy's SVD function runs without error this time.

`3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
``` num_latent_feats = np.arange(10,700+10,20) sum_errs = [] for k in num_latent_feats: # restructure with k latent features s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :] # take dot product user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new)) # compute error for each prediction to actual value diffs = np.subtract(user_item_matrix, user_item_est) # total errors and keep track of them err = np.sum(np.sum(np.abs(diffs))) sum_errs.append(err) plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]); plt.xlabel('Number of Latent Features'); plt.ylabel('Accuracy'); plt.title('Accuracy vs. Number of Latent Features'); ``` `4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below. Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below: * How many users can we make predictions for in the test set? * How many users are we not able to make predictions for because of the cold start problem? * How many articles can we make predictions for in the test set? * How many articles are we not able to make predictions for because of the cold start problem? 
``` df_train = df.head(40000) df_test = df.tail(5993) def create_test_and_train_user_item(df_train, df_test): ''' INPUT: df_train - training dataframe df_test - test dataframe OUTPUT: user_item_train - a user-item matrix of the training dataframe (unique users for each row and unique articles for each column) user_item_test - a user-item matrix of the testing dataframe (unique users for each row and unique articles for each column) test_idx - all of the test user ids test_arts - all of the test article ids ''' # user_item_train user_item_train = create_user_item_matrix(df_train) # user_item_test user_item_test = create_user_item_matrix(df_test) # test_idx test_idx = np.array(user_item_test.index) # test_arts test_arts = np.array(user_item_test.columns.tolist()) return user_item_train, user_item_test, test_idx, test_arts user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test) print("How many users can we make predictions for in the test set?", len(np.intersect1d(user_item_train.index, test_idx, assume_unique=True))) print('How many users in the test set are we not able to make predictions for because of the cold start problem?', len(test_idx) - 20) print('How many articles can we make predictions for in the test set?', len(np.intersect1d(user_item_train.columns, test_arts, assume_unique=True))) print('How many articles in the test set are we not able to make predictions for because of the cold start problem?', len(test_arts)-574) # Replace the values in the dictionary below a = 662 b = 574 c = 20 d = 0 sol_4_dict = { 'How many users can we make predictions for in the test set?': c, # letter here, 'How many users in the test set are we not able to make predictions for because of the cold start problem?': a, # letter here, 'How many articles can we make predictions for in the test set?': b, # letter here, 'How many articles in the test set are we not able to make predictions for because of the cold start problem?': d # 
letter here } t.sol_4_test(sol_4_dict) ``` `5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`. Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data. ``` # fit SVD on the user_item_train matrix u_train, s_train, vt_train = np.linalg.svd(user_item_train) # fit svd similar to above then use the cells below # Use these cells to see how well you can use the training # decomposition to predict on test data # selected user_item_test test_idx_selected = np.intersect1d(test_idx, user_item_train.index) user_item_test_selected = user_item_test.loc[test_idx_selected] # selected u_train row_idx_selected = user_item_train.index.isin(test_idx) u_train_selected = u_train[row_idx_selected, :] # selected vt_train col_idx_selected = user_item_train.columns.isin(test_arts) vt_train_selected = vt_train[:, col_idx_selected] num_latent_feats = np.arange(10, len(test_arts), 10) sum_errs = [] for k in num_latent_feats: # restructure with k latent features s_new, u_new, vt_new = np.diag(s_train[:k]), u_train_selected[:, :k], vt_train_selected[:k, :] # take dot product user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new)) # compute error for each prediction to actual value diffs = np.subtract(user_item_test_selected, user_item_est) # total errors and keep track of them err = np.sum(np.sum(np.abs(diffs))) sum_errs.append(err) plt.plot(num_latent_feats, 1 - np.array(sum_errs)/(user_item_test_selected.shape[0]* user_item_test_selected.shape[1])); plt.xlabel('Number of Latent Features'); plt.ylabel('Accuracy'); plt.title('Accuracy vs. 
Number of Latent Features on Test Data'); plt.grid(True) plt.show() print("sparsity ratio:", user_item_test_selected.sum().sum() / (user_item_test_selected.shape[0] * user_item_test_selected.shape[1])) ``` `6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles? **Your response here.** As the plot above shows, accuracy on the test data looks acceptable (>0.964) but keeps declining as we increase the number of latent features. Because only 20 users can be evaluated on the test data and user_item_test_selected has a very high sparsity ratio, the more latent features we add, the further the test predictions drift from the ground truth. Given the limited size of the test data, we might consider testing our new recommendation system online with real user data. For example, we could run an A/B test to observe whether the new recommender system performs better than the existing one by comparing results, over a period of time, between the experiment group and the control group. <a id='conclusions'></a> ### Extras Using your workbook, you could now save your recommendations for each user, develop a class to make new predictions and update your results, and make a flask app to deploy your results. These tasks are beyond what is required for this project. However, from what you learned in the lessons, you are certainly capable of taking these tasks on to improve upon your work here! ## Conclusion > Congratulations! You have reached the end of the Recommendations with IBM project! > **Tip**: Once you are satisfied with your work here, check over your report to make sure that it satisfies all the areas of the [rubric](https://review.udacity.com/#!/rubrics/2322/view). 
You should also probably remove all of the "Tips" like this one so that the presentation is as polished as possible. ## Directions to Submit > Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left). > Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button. > Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations! ``` from subprocess import call call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb']) ```
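As a closing sanity check on the question-5 workflow, the truncated-SVD reconstruction and accuracy computation can be exercised on a toy user-item matrix (a minimal, self-contained sketch — the matrix below is made up and unrelated to the project data):

```python
import numpy as np

# Toy binary user-item matrix (4 users x 5 articles); values are made up.
user_item = np.array([
    [1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=float)

u, s, vt = np.linalg.svd(user_item)

accuracies = []
for k in range(1, len(s) + 1):
    # keep k latent features, as in question 5
    u_k, s_k, vt_k = u[:, :k], np.diag(s[:k]), vt[:k, :]
    est = np.around(u_k @ s_k @ vt_k)        # rounded 0/1 predictions
    errs = np.abs(user_item - est).sum()     # total absolute error
    accuracies.append(1 - errs / user_item.size)

print(accuracies)
```

With all latent features kept, the reconstruction is exact and accuracy reaches 1.0; on the real (much sparser) test matrix, the curve instead degrades as features are added, which is what the plot above shows.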
# Model Development V1 - This is really more like scratchwork - Divide this into multiple notebooks for easier reading **Reference** - http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html ``` import json import pickle from pymongo import MongoClient import numpy as np import pandas as pd from matplotlib import pyplot as plt %matplotlib inline import nltk import os from nltk.corpus import stopwords from sklearn.utils.extmath import randomized_svd # gensim from gensim import corpora, models, similarities, matutils # sklearn from sklearn import datasets from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.model_selection import train_test_split from sklearn.cluster import KMeans from sklearn.neighbors import KNeighborsClassifier import sklearn.metrics.pairwise as smp import logging logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) ``` # NYT Corpus ## Read data in - pickle from mongod output on amazon ec2 instance scp -i ~/.ssh/aws_andrew andrew@35.166.29.151:/home/andrew/Notebooks/initial-model-df.pkl ~/ds/metis/challenges/ ``` with open('initial-model-df.pkl', 'rb') as nyt_data: df = pickle.load(nyt_data) df.shape df.columns df.head(30) df1 = df.dropna() ``` ## LSI Preprocessing ``` # docs = data['lead_paragraph'][0:100] docs = df1['lead_paragraph'] docs.shape for doc in docs: doc = doc.decode("utf8") # create a list of stopwords stopwords_set = frozenset(stopwords.words('english')) # Update iterator to remove stopwords class SentencesIterator(object): # giving 'stop' a list of stopwords would exclude them def __init__(self, dirname, stop=None): self.dirname = dirname self.stop = stop # store the stopword list instead of relying on a global def __iter__(self): # os.listdir gives us every file in the directory for fname in os.listdir(self.dirname): for line in open(os.path.join(self.dirname, fname), encoding="latin-1"): # at each step, gensim needs a list of words line = line.lower().split() if self.stop: outline = [] for word in line: if word not in self.stop: outline.append(word) yield outline else: yield line docs1 = docs.dropna() for doc in docs1: doc = SentencesIterator(doc.decode("utf8")) docs = pd.Series.tolist(docs1) tfidf = TfidfVectorizer(stop_words="english", token_pattern="\\b[a-zA-Z][a-zA-Z]+\\b", min_df=10) tfidf_vecs = tfidf.fit_transform(docs) tfidf_vecs.shape # it's too big to see in a dataframe: # pd.DataFrame(tfidf_vecs.todense(), # columns=tfidf.get_feature_names() # ).head(30) ``` ## BASELINE: Multinomial Naive Bayes Classification - language is fundamentally different - captures word choice ``` pd.DataFrame(tfidf_vecs.todense(), columns=tfidf.get_feature_names() ).head() df1.shape, tfidf_vecs.shape # Train/Test split X_train, X_test, y_train, y_test = train_test_split(tfidf_vecs, df1['source'], test_size=0.33) # Train nb = MultinomialNB() nb.fit(X_train, y_train) # Test nb.score(X_test, y_test) ``` # LSI Begin **Essentially, this has been my workflow so far:** 1. TFIDF in sklearn --> output a sparse corpus matrix DTM 2. LSI (SVD) in gensim --> output a 300 dim matrix TDM - Analyze topic vectors 3. Viewed LSI[tfidf] ``` # terms by docs instead of docs by terms tfidf_corpus = matutils.Sparse2Corpus(tfidf_vecs.transpose()) # Row indices id2word = dict((v, k) for k, v in tfidf.vocabulary_.items()) # This is a hack for Python 3! id2word = corpora.Dictionary.from_corpus(tfidf_corpus, id2word=id2word) # Build an LSI space from the input TFIDF matrix, mapping of row id to word, and num_topics # num_topics is the number of dimensions (k) to reduce to after the SVD # Analogous to "fit" in sklearn, it primes an LSI space trained to 300-500 dimensions lsi = models.LsiModel(tfidf_corpus, id2word=id2word, num_topics=300) # Retrieve vectors for the original tfidf corpus in the LSI space ("transform" in sklearn) lsi_corpus = lsi[tfidf_corpus] # pass using square brackets # what are the values given by lsi? (topic distributions) # ALSO, IT IS LAZY! 
IT WON'T ACTUALLY DO THE TRANSFORMING COMPUTATION UNTIL IT'S CALLED. IT STORES THE INSTRUCTIONS # Dump the resulting document vectors into a list so we can take a look doc_vecs = [doc for doc in lsi_corpus] doc_vecs[0] #print the first document vector for all the words ``` ## Doc-Term Cosine Similarity using LSI Corpus - cosine similarity of [docs to terms](http://localhost:8888/notebooks/ds/metis/classnotes/5.24.17%20Vector%20Space%20Models%2C%20NMF%2C%20W2V.ipynb#Toy-Example:-Conceptual-Similarity-Between-Arbitrary-Text-Blobs) ``` # Convert the gensim-style corpus vecs to a numpy array for sklearn manipulations nyt_lsi = matutils.corpus2dense(lsi_corpus, num_terms=300).transpose() nyt_lsi.shape lsi.show_topic(0) # Create an index transformer that calculates similarity based on our space index = similarities.MatrixSimilarity(lsi_corpus, num_features=len(id2word)) # all docs by 300 topic vectors (word vectors) nyt_lsi_df = pd.DataFrame(nyt_lsi) nyt_lsi_df.head() # need to transform by cosine similarity # look up if I need to change into an LDA corpus # take the mean of every topic vector (averaged across all document vectors) nyt_lsi_df.mean() # describes word usage ('meaning') across the body of documents in the nyt corpus # answers the question: what 'topics' has the nyt been talking about the most over 2005-2015? nyt_lsi_df.mean().sort_values() ``` # Sorted doc-doc cosine similarity! 
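Before reaching for gensim's `MatrixSimilarity`, the ranking it performs can be sketched directly in numpy on made-up vectors (a minimal illustration — the three "documents" below are hypothetical, not the NYT data):

```python
import numpy as np

# Three toy document vectors in a 4-dimensional "topic" space.
doc_vecs = np.array([
    [1.0, 0.0, 2.0, 0.0],
    [0.9, 0.1, 1.8, 0.0],   # nearly parallel to doc 0
    [0.0, 3.0, 0.0, 1.0],   # orthogonal to doc 0
])

def cosine_sim(a, b):
    # cosine of the angle between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank all documents by similarity to document 0, most similar first,
# just like sorted(enumerate(index[doc_vecs[0]]), key=lambda item: -item[1]).
sims = sorted(((i, cosine_sim(doc_vecs[0], doc_vecs[i]))
               for i in range(len(doc_vecs))),
              key=lambda item: -item[1])
print(sims)
```

Document 0 is perfectly similar to itself (1.0), the near-parallel vector ranks second, and the orthogonal one lands at the bottom with similarity ~0 — the same shape of output gensim returns below.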
``` # Create an index transformer that calculates similarity based on our space index = similarities.MatrixSimilarity(doc_vecs, num_features=len(id2word)) # Return the sorted list of cosine similarities to the first document sims = sorted(enumerate(index[doc_vecs[0]]), key=lambda item: -item[1]) sims # Document 1491 is very similar (.66) to document 0 # Let's take a look at how we did by analyzing syntax # sims pairs are (document id, similarity score), so unpack them directly for sim_doc_id, sim_score in sims[0:30]: print("DocumentID: {}, Similarity Score: {} ".format(sim_doc_id, sim_score)) print("Headline: " + str(df1.iloc[sim_doc_id].headline.decode('utf-8'))) print("Lead Paragraph: " + str(df1.iloc[sim_doc_id].lead_paragraph.decode('utf-8'))) print("Publish Date: " + str(df1.iloc[sim_doc_id].date)) print('\n') ``` ## Pass into KMeans Clustering ``` # Convert the gensim-style corpus vecs to a numpy array for sklearn manipulations (back to docs to terms matrix) nyt_lsi = matutils.corpus2dense(lsi_corpus, num_terms=300).transpose() nyt_lsi.shape # Create KMeans. kmeans = KMeans(n_clusters=3) # Cluster nyt_lsi_clusters = kmeans.fit_predict(nyt_lsi) # Take a look. It likely didn't do cosine distances. 
print(nyt_lsi_clusters[0:50]) print("Lead Paragraph: \n" + str(df1.iloc[0:5].lead_paragraph)) ``` ## LDA Begin ``` lda = models.LdaModel(corpus=tfidf_corpus, num_topics=20, id2word=id2word, passes=3) lda.print_topics() lda_corpus = lda[tfidf_corpus] nyt_lda = matutils.corpus2dense(lda_corpus, num_terms=20).transpose() df3 = pd.DataFrame(nyt_lda) df3.mean().sort_values(ascending=False).head(10) ``` ## Logistic Regression / Random Forest - <s>Tried KNN Classifier </s> Destroyed me - probabilistic classification on a spectrum from nyt to natl enq ``` from sklearn.neighbors import KNeighborsClassifier import sklearn.metrics.pairwise as smp # Train/Test X_train, X_test, y_train, y_test = train_test_split(nyt_lsi, df1['source'], test_size=0.33) # X_train = X_train.reshape(1,-1) # X_test = X_test.reshape(1,-1) y_train = np.reshape(y_train.values, (-1,1)) y_test = np.reshape(y_test.values, (-1,1)) X_train.shape, X_test.shape y_train.shape, y_test.shape # WARNING: This ruined me # Need pairwise Cosine for KNN # Fit KNN classifier to training set with cosine distance. 
One of the best algorithms for clustering documents # knn = KNeighborsClassifier(n_neighbors=3, metric=smp.cosine_distances) # knn.fit(X_train, y_train) # knn.score(X_test, y_test) ``` # PHASE 2: pull in natl enq data - mix in labels, source labels - pull labels (source category in nyt) - Review Nlp notes - Feature trans & Pipelines - Gensim doc2vec ``` with open('mag-model-df.pkl', 'rb') as mag_data: df1 = pickle.load(mag_data) df1.head() df1.dropna(axis=0, how='all') df1.shape docs2 = df1['lead_paragraph'] docs2 = docs2.dropna() for doc in docs2: doc = SentencesIterator(doc) docs = pd.Series.tolist(docs2) tfidf = TfidfVectorizer(stop_words="english", token_pattern="\\b[a-zA-Z][a-zA-Z]+\\b", min_df=10) tfidf_vecs = tfidf.fit_transform(docs) tfidf_vecs.shape ``` ## BASELINE: Multinomial Naive Bayes ``` pd.DataFrame(tfidf_vecs.todense(), columns=tfidf.get_feature_names() ).head() # Train/Test split X_train, X_test, y_train, y_test = train_test_split(tfidf_vecs, df1['source'], test_size=0.33) # Train nb = MultinomialNB() nb.fit(X_train, y_train) # Test nb.score(X_test, y_test) ``` ## LDA Begin 2 ``` # terms by docs instead of docs by terms tfidf_corpus = matutils.Sparse2Corpus(tfidf_vecs.transpose()) # Row indices id2word = dict((v, k) for k, v in tfidf.vocabulary_.items()) # This is a hack for Python 3! 
id2word = corpora.Dictionary.from_corpus(tfidf_corpus, id2word=id2word) lda = models.LdaModel(corpus=tfidf_corpus, num_topics=20, id2word=id2word, passes=3) lda.print_topics() lda_corpus = lda[tfidf_corpus] nyt_lda = matutils.corpus2dense(lda_corpus, num_terms=20).transpose() df3 = pd.DataFrame(nyt_lda) df3.mean().sort_values(ascending=False).head(10) ``` # Future Work ===================================== # Troubleshoot doc2vec - look into the output of this ``` from gensim.models.doc2vec import Doc2Vec, TaggedDocument from pprint import pprint import multiprocessing # Create Doc2Vec model # NOTE: Doc2Vec expects an iterable of TaggedDocument objects, not a tfidf corpus d2v = Doc2Vec(tfidf_corpus, min_count=3, workers=5) ``` # PHASE 3: Visualize clusters - [NLP visualization PyLDAvis](https://github.com/bmabey/pyLDAvis) - [Bokeh](http://bokeh.pydata.org/en/latest/) - [Bqplot](https://github.com/bloomberg/bqplot) - I'd rather not d3... ``` with open('nyt-model-df.pkl', 'rb') as nyt_data: df = pickle.load(nyt_data) with open('mag-model-df.pkl', 'rb') as mag_data: df1 = pickle.load(mag_data) # select the relevant columns in our ratings dataset nyt_df = df[['lead_paragraph', 'source']] mag_df = df1[['lead_paragraph', 'source']] # For the word cloud: https://www.jasondavies.com/wordcloud/ nyt_df['lead_paragraph'].to_csv(path='nyt-text.csv', index=False) # For the word cloud: https://www.jasondavies.com/wordcloud/ mag_df['lead_paragraph'].to_csv(path='mag-text.csv', index=False) !ls ```
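The stopword-filtering iterator used in this notebook can be checked end-to-end on a throwaway directory (a self-contained sketch — the file contents are made up, and this version stores its `stop` argument explicitly on the instance):

```python
import os
import tempfile

class SentencesIterator:
    """Stream whitespace-tokenized lines from every file in a directory,
    optionally filtering out stopwords."""
    def __init__(self, dirname, stop=None):
        self.dirname = dirname
        self.stop = set(stop) if stop else None  # keep the stopword list on the instance

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            with open(os.path.join(self.dirname, fname), encoding="latin-1") as fh:
                for line in fh:
                    words = line.lower().split()
                    if self.stop:
                        yield [w for w in words if w not in self.stop]
                    else:
                        yield words

# Demonstrate on a throwaway one-file corpus.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "doc.txt"), "w", encoding="latin-1") as fh:
    fh.write("The quick brown fox\n")

sentences = list(SentencesIterator(tmpdir, stop={"the"}))
print(sentences)
```

Because the iterator re-opens the files on every pass, gensim can stream over it multiple times (e.g. once to build the vocabulary, once to train) without holding the corpus in memory.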
``` """The file needed to run this notebook can be accessed from the following folder using a UTS email account: https://drive.google.com/drive/folders/1y6e1Z2SbLDKkmvK3-tyQ6INO5rrzT3jp """ ``` ##Task-1: Installation of Google Object Detection API and required packages ### Step 1: Import packages ``` #Mount the drive from google.colab import drive drive.mount('/content/drive') %tensorflow_version 1.x !pip install numpy==1.17.4 import os import re import tensorflow as tf print(tf.__version__) !pip install --upgrade tf_slim ``` ### Step 2: Initial Configuration to Select SSD model config file and selection of other hyperparameters ``` # If you forked the repository, you can replace the link. repo_url = 'https://github.com/Tony607/object_detection_demo' # Number of training steps. num_steps = 10000 # 200000 # Number of evaluation steps. num_eval_steps = 50 MODELS_CONFIG = { 'ssd_mobilenet_v2': { 'model_name': 'ssd_mobilenet_v2_coco_2018_03_29', 'pipeline_file': 'ssd_mobilenet_v2_coco.config', 'batch_size': 12 }, 'faster_rcnn_inception_v2': { 'model_name': 'faster_rcnn_inception_v2_coco_2018_01_28', 'pipeline_file': 'faster_rcnn_inception_v2_pets.config', 'batch_size': 12 }, 'facessd_mobilenet_v2_quantized_open_image_v4': { 'model_name': 'facessd_mobilenet_v2_quantized_320x320_open_image_v4', 'pipeline_file': 'facessd_mobilenet_v2_quantized_320x320_open_image_v4.config', 'batch_size': 32 } } # Pick the model you want to use selected_model = 'faster_rcnn_inception_v2' # Name of the object detection model to use. MODEL = MODELS_CONFIG[selected_model]['model_name'] # Name of the pipeline file in tensorflow object detection API. pipeline_file = MODELS_CONFIG[selected_model]['pipeline_file'] # Training batch size fits in Colab's Tesla K80 GPU memory for selected model. 
batch_size = MODELS_CONFIG[selected_model]['batch_size'] %cd /content repo_dir_path = os.path.abspath(os.path.join('.', os.path.basename(repo_url))) !git clone {repo_url} %cd {repo_dir_path} !git pull ``` ### Step 3: Download Google Object Detection API and other dependencies ``` %cd /content !git clone --quiet https://github.com/tensorflow/models.git !apt-get install -qq protobuf-compiler python-pil python-lxml python-tk !pip install -q Cython contextlib2 pillow lxml matplotlib !pip install -q pycocotools %cd /content/models/research !protoc object_detection/protos/*.proto --python_out=. import os os.environ['PYTHONPATH'] += ':/content/models/research/:/content/models/research/slim/' !python object_detection/builders/model_builder_test.py ``` ##Task-2: Conversion of XML annotations and images into tfrecords for training and testing datasets ### Step 4: Prepare `tfrecord` files Use the following scripts to generate the `tfrecord` files. ```bash # Convert train folder annotation xml files to a single csv file, # generate the `label_map.pbtxt` file to `data/` directory as well. python xml_to_csv.py -i data/images/train -o data/annotations/train_labels.csv -l data/annotations # Convert test folder annotation xml files to a single csv. 
python xml_to_csv.py -i data/images/test -o data/annotations/test_labels.csv # Generate `train.record` python generate_tfrecord.py --csv_input=data/annotations/train_labels.csv --output_path=data/annotations/train.record --img_path=data/images/train --label_map data/annotations/label_map.pbtxt # Generate `test.record` python generate_tfrecord.py --csv_input=data/annotations/test_labels.csv --output_path=data/annotations/test.record --img_path=data/images/test --label_map data/annotations/label_map.pbtxt ``` ``` import zipfile #Extract the zip file local_zip = '/content/drive/My Drive/Assignment 3/resized-20200512T050739Z-001.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/content/object_detection_demo/data/images/train') zip_ref.close() import zipfile #Extract the zip file local_zip = '/content/drive/My Drive/Assignment 3/resized-20200512T050739Z-001.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/content/object_detection_demo/data/images/test') zip_ref.close() #create the annotation directory %cd /content/object_detection_demo/data annotation_dir = 'annotations/' os.makedirs(annotation_dir, exist_ok=True) """Need to manually upload the label_pbtxt file and the train_labels.csv and test_labels.csv into the annotation folder using the link here https://drive.google.com/drive/folders/1NqKz2tC8I5eL5Qo4YzZiEph8W-dtI44d """ %cd {repo_dir_path} # Generate `train.record` #Only need to change the path for csv input !python generate_tfrecord.py --csv_input=/content/object_detection_demo/data/annotations/train_labels.csv --output_path=/content/object_detection_demo/data/annotations/train.record --img_path=/content/object_detection_demo/data/images/train/resized --label_map /content/object_detection_demo/data/annotations/label_map.pbtxt # Generate `test.record` #Only need to change the path for csv input !python generate_tfrecord.py --csv_input=data/annotations/test_labels.csv --output_path=data/annotations/test.record 
--img_path=/content/object_detection_demo/data/images/test/resized --label_map data/annotations/label_map.pbtxt test_record_fname = '/content/object_detection_demo/data/annotations/test.record' train_record_fname = '/content/object_detection_demo/data/annotations/train.record' label_map_pbtxt_fname = '/content/object_detection_demo/data/annotations/label_map.pbtxt' ``` ### Step 5. Download the base model for transfer learning ``` %cd /content/models/research import os import shutil import glob import urllib.request import tarfile MODEL_FILE = MODEL + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' DEST_DIR = '/content/models/research/pretrained_model' if not (os.path.exists(MODEL_FILE)): urllib.request.urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar = tarfile.open(MODEL_FILE) tar.extractall() tar.close() os.remove(MODEL_FILE) if (os.path.exists(DEST_DIR)): shutil.rmtree(DEST_DIR) os.rename(MODEL, DEST_DIR) !echo {DEST_DIR} !ls -alh {DEST_DIR} fine_tune_checkpoint = os.path.join(DEST_DIR, "model.ckpt") fine_tune_checkpoint ``` ##Task-3: Training: Transfer learning from already trained models ###Step 6: configuring a training pipeline ``` import os pipeline_fname = os.path.join('/content/models/research/object_detection/samples/configs/', pipeline_file) assert os.path.isfile(pipeline_fname), '`{}` not exist'.format(pipeline_fname) def get_num_classes(pbtxt_fname): from object_detection.utils import label_map_util label_map = label_map_util.load_labelmap(pbtxt_fname) categories = label_map_util.convert_label_map_to_categories( label_map, max_num_classes=90, use_display_name=True) category_index = label_map_util.create_category_index(categories) return len(category_index.keys()) num_classes = get_num_classes(label_map_pbtxt_fname) with open(pipeline_fname) as f: s = f.read() with open(pipeline_fname, 'w') as f: # fine_tune_checkpoint s = re.sub('fine_tune_checkpoint: ".*?"', 'fine_tune_checkpoint: 
"{}"'.format(fine_tune_checkpoint), s) # tfrecord files train and test. s = re.sub( '(input_path: ".*?)(train.record)(.*?")', 'input_path: "{}"'.format(train_record_fname), s) s = re.sub( '(input_path: ".*?)(val.record)(.*?")', 'input_path: "{}"'.format(test_record_fname), s) # label_map_path s = re.sub( 'label_map_path: ".*?"', 'label_map_path: "{}"'.format(label_map_pbtxt_fname), s) # Set training batch_size. s = re.sub('batch_size: [0-9]+', 'batch_size: {}'.format(batch_size), s) # Set training steps, num_steps s = re.sub('num_steps: [0-9]+', 'num_steps: {}'.format(num_steps), s) # Set number of classes num_classes. s = re.sub('num_classes: [0-9]+', 'num_classes: {}'.format(num_classes), s) #set image resizer """s = re.sub('initial_learning_rate: [-+]?[0-9]*\.?[0-9]+', 'initial_learning_rate: {}'.format(initial_learning_rate), s)""" f.write(s) !cat {pipeline_fname} model_dir = 'training/' # Optionally remove content in output model directory to fresh start. !rm -rf {model_dir} os.makedirs(model_dir, exist_ok=True) ``` ### Step 7. Install Tensorboard to visualize the progress of training process ``` !wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip !unzip -o ngrok-stable-linux-amd64.zip LOG_DIR = model_dir get_ipython().system_raw( 'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &' .format(LOG_DIR) ) get_ipython().system_raw('./ngrok http 6006 &') ``` ### Step: 8 Get tensorboard link ``` ! curl -s http://localhost:4040/api/tunnels | python3 -c \ "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])" ``` ### Step 9. 
Training the model ``` !python /content/models/research/object_detection/model_main.py \ --pipeline_config_path={pipeline_fname} \ --model_dir={model_dir} \ --alsologtostderr \ --num_train_steps={num_steps} \ --num_eval_steps={num_eval_steps} !ls {model_dir} ``` ##Task-4: Freezing a trained model and export it for inference ### Step: 10 Exporting a Trained Inference Graph ``` import re import numpy as np output_directory = './fine_tuned_model' lst = os.listdir(model_dir) lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l] steps=np.array([int(re.findall('\d+', l)[0]) for l in lst]) last_model = lst[steps.argmax()].replace('.meta', '') last_model_path = os.path.join(model_dir, last_model) print(last_model_path) !python /content/models/research/object_detection/export_inference_graph.py \ --input_type=image_tensor \ --pipeline_config_path={pipeline_fname} \ --output_directory={output_directory} \ --trained_checkpoint_prefix={last_model_path} !ls {output_directory} ``` ### Step 11: Use frozen model for inference. ``` import os pb_fname = os.path.join(os.path.abspath(output_directory), "frozen_inference_graph.pb") assert os.path.isfile(pb_fname), '`{}` not exist'.format(pb_fname) !ls -alh {pb_fname} import os import glob # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_CKPT = pb_fname # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = label_map_pbtxt_fname # If you want to test the code with your images, just add images files to the PATH_TO_TEST_IMAGES_DIR. 
PATH_TO_TEST_IMAGES_DIR = os.path.join(repo_dir_path, "test") assert os.path.isfile(pb_fname) assert os.path.isfile(PATH_TO_LABELS) TEST_IMAGE_PATHS = glob.glob(os.path.join(PATH_TO_TEST_IMAGES_DIR, "*.*")) assert len(TEST_IMAGE_PATHS) > 0, 'No image found in `{}`.'.format(PATH_TO_TEST_IMAGES_DIR) print(TEST_IMAGE_PATHS) %cd /content/models/research/object_detection import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops # This is needed to display the images. %matplotlib inline from object_detection.utils import label_map_util from object_detection.utils import visualization_utils as vis_util detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') label_map = label_map_util.load_labelmap(PATH_TO_LABELS) categories = label_map_util.convert_label_map_to_categories( label_map, max_num_classes=num_classes, use_display_name=True) category_index = label_map_util.create_category_index(categories) def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) # Size, in inches, of the output images. 
IMAGE_SIZE = (4, 4) def run_inference_for_single_image(image, graph): with graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = { output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze( tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze( tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. real_num_detection = tf.cast( tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [ real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [ real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int( output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = 
output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict for image_path in TEST_IMAGE_PATHS: image = Image.open(image_path) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. image_np = load_image_into_numpy_array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imshow(image_np) ```
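The latest-checkpoint selection logic in Step 10 can be verified on hypothetical filenames, with no training run required (the directory listing below is made up for illustration):

```python
import re
import numpy as np

# Hypothetical contents of the training/ directory after a run.
lst = ['checkpoint', 'model.ckpt-500.meta', 'model.ckpt-9500.meta',
       'model.ckpt-10000.meta', 'model.ckpt-10000.index']

# Same filtering and step extraction as Step 10: keep only .meta checkpoint
# files, pull the step number out of each name, and take the highest one.
lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l]
steps = np.array([int(re.findall(r'\d+', l)[0]) for l in lst])
last_model = lst[steps.argmax()].replace('.meta', '')
print(last_model)  # model.ckpt-10000
```

This is worth checking in isolation because `export_inference_graph.py` silently exports whatever checkpoint prefix it is handed; picking a stale one re-exports an under-trained model.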
# Transfer Learning Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using [VGGNet](https://arxiv.org/pdf/1409.1556.pdf) trained on the [ImageNet dataset](http://www.image-net.org/) as a feature extractor. Below is a diagram of the VGGNet architecture. <img src="assets/cnnarchitecture.jpg" width=700px> VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes. You can read more about transfer learning from [the CS231n course notes](http://cs231n.github.io/transfer-learning/#tf). ## Pretrained VGGNet We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in the 'tensorflow_vgg' directory, so you don't have to clone it. This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. **The pretrained parameters are not included, though — download the parameter file using the next cell.** 
``` from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm vgg_dir = 'tensorflow_vgg/' # Make sure vgg exists if not isdir(vgg_dir): raise Exception("VGG directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(vgg_dir + "vgg16.npy"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar: urlretrieve( 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy', vgg_dir + 'vgg16.npy', pbar.hook) else: print("Parameter file already exists!") ``` ## Flower power Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the [TensorFlow inception tutorial](https://www.tensorflow.org/tutorials/image_retraining). ``` import tarfile dataset_folder_path = 'flower_photos' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile('flower_photos.tar.gz'): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar: urlretrieve( 'http://download.tensorflow.org/example_images/flower_photos.tgz', 'flower_photos.tar.gz', pbar.hook) if not isdir(dataset_folder_path): with tarfile.open('flower_photos.tar.gz') as tar: tar.extractall() tar.close() ``` ## ConvNet Codes Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier. Here we're using the `vgg16` module from `tensorflow_vgg`. 
The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from [the source code](https://github.com/machrisaa/tensorflow-vgg/blob/master/vgg16.py)): ``` self.conv1_1 = self.conv_layer(bgr, "conv1_1") self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2") self.pool1 = self.max_pool(self.conv1_2, 'pool1') self.conv2_1 = self.conv_layer(self.pool1, "conv2_1") self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2") self.pool2 = self.max_pool(self.conv2_2, 'pool2') self.conv3_1 = self.conv_layer(self.pool2, "conv3_1") self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2") self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3") self.pool3 = self.max_pool(self.conv3_3, 'pool3') self.conv4_1 = self.conv_layer(self.pool3, "conv4_1") self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2") self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3") self.pool4 = self.max_pool(self.conv4_3, 'pool4') self.conv5_1 = self.conv_layer(self.pool4, "conv5_1") self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2") self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3") self.pool5 = self.max_pool(self.conv5_3, 'pool5') self.fc6 = self.fc_layer(self.pool5, "fc6") self.relu6 = tf.nn.relu(self.fc6) ``` So what we want are the values of the first fully connected layer, after being ReLUd (`self.relu6`). To build the network, we use ``` with tf.Session() as sess: vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) ``` This creates the `vgg` object, then builds the graph with `vgg.build(input_)`. 
Then to get the values from the layer,

```
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
```

```
import os

import numpy as np
import tensorflow as tf

from tensorflow_vgg import vgg16
from tensorflow_vgg import utils

data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
```

Below I'm running images through the VGG network in batches.

> **Exercise:** Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).

```
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []

codes = None

with tf.Session() as sess:
    vgg = vgg16.Vgg16('./tensorflow_vgg/vgg16.npy')

    for each in classes:
        print("Starting {} images".format(each))
        class_path = data_dir + each
        files = os.listdir(class_path)
        for ii, file in enumerate(files, 1):
            # Add images to the current batch
            # utils.load_image crops the input images for us, from the center
            img = utils.load_image(os.path.join(class_path, file))
            batch.append(img.reshape((1, 224, 224, 3)))
            labels.append(each)

            # Running the batch through the network to get the codes
            if ii % batch_size == 0 or ii == len(files):
                # Image batch to pass to VGG network
                images = np.concatenate(batch)

                # TODO: Get the values from the relu6 layer of the VGG network
                codes_batch =

                # Here I'm building an array of the codes
                if codes is None:
                    codes = codes_batch
                else:
                    codes = np.concatenate((codes, codes_batch))

                # Reset to start building the next batch
                batch = []
                print('{} images processed'.format(ii))

# write codes to file
with open('codes', 'w') as f:
    codes.tofile(f)

# write labels to file
import csv
with open('labels', 'w') as f:
    writer = csv.writer(f, delimiter='\n')
    writer.writerow(labels)
```

## Building the Classifier

Now that we have codes for all the images, we can build a simple classifier on top of them.
The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.

```
# read codes and labels from file
import csv

with open('labels') as f:
    reader = csv.reader(f, delimiter='\n')
    labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
    codes = np.fromfile(f, dtype=np.float32)
    codes = codes.reshape((len(labels), -1))
```

### Data prep

As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!

> **Exercise:** From scikit-learn, use [LabelBinarizer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html) to create one-hot encoded vectors from the labels.

```
labels_vecs = # Your one-hot encoded labels array here
```

Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with test sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use [`StratifiedShuffleSplit`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) from scikit-learn.

You can create the splitter like so:

```
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
```

Then split the data with

```
splitter = ss.split(x, y)
```

`ss.split` returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use `next(splitter)` to get the indices.
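The stratification idea itself is easy to see without scikit-learn: group the indices by label, shuffle each group, and take the same fraction from every class. Here is a minimal plain-Python sketch on toy labels (not the flower data, and not how you should solve the exercise; use `StratifiedShuffleSplit` for that):

```python
import random
from collections import defaultdict

def stratified_split(labels, test_size=0.2, seed=0):
    """Return (train_idx, test_idx) with the same class mix in both parts."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    train_idx, test_idx = [], []
    for indices in by_class.values():
        rng.shuffle(indices)
        n_test = int(round(len(indices) * test_size))
        test_idx.extend(indices[:n_test])
        train_idx.extend(indices[n_test:])
    return train_idx, test_idx

labels = ['roses'] * 50 + ['tulips'] * 30 + ['daisy'] * 20
train_idx, test_idx = stratified_split(labels, test_size=0.2)
print(len(train_idx), len(test_idx))  # 80 20
```

Every class contributes 20% of its samples to the test indices, which is exactly the property the exercise asks for.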
Be sure to read the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) and the [user guide](http://scikit-learn.org/stable/modules/cross_validation.html#random-permutations-cross-validation-a-k-a-shuffle-split).

> **Exercise:** Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.

```
train_x, train_y =
val_x, val_y =
test_x, test_y =

print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
```

If you did it right, you should see these sizes for the training sets:

```
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
```

### Classifier layers

Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.

> **Exercise:** With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each one is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.

```
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])

# TODO: Classifier layers and operations
logits = # output layer logits
cost = # cross entropy loss
optimizer = # training optimizer

# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
```

### Batches!

Here is just a simple way to do batches.
I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.

```
def get_batches(x, y, n_batches=10):
    """ Return a generator that yields batches from arrays x and y. """
    batch_size = len(x)//n_batches

    for ii in range(0, n_batches*batch_size, batch_size):
        # If we're not on the last batch, grab data with size batch_size
        if ii != (n_batches-1)*batch_size:
            X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
        # On the last batch, grab the rest of the data
        else:
            X, Y = x[ii:], y[ii:]
        # I love generators
        yield X, Y
```

### Training

Here, we'll train the network.

> **Exercise:** So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the `get_batches` function I wrote before to get your batches like `for x, y in get_batches(train_x, train_y)`. Or write your own!

```
saver = tf.train.Saver()
with tf.Session() as sess:

    # TODO: Your training code here

    saver.save(sess, "checkpoints/flowers.ckpt")
```

### Testing

Below you see the test accuracy. You can also see the predictions returned for images.

```
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))

    feed = {inputs_: test_x,
            labels_: test_y}
    test_acc = sess.run(accuracy, feed_dict=feed)
    print("Test accuracy: {:.4f}".format(test_acc))

%matplotlib inline

import matplotlib.pyplot as plt
from scipy.ndimage import imread
```

Below, feel free to choose images and see how the trained classifier predicts the flowers in them.

```
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)

# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
    print('"vgg" object already exists. Will not create again.')
else:
    # create vgg
    with tf.Session() as sess:
        input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
        vgg = vgg16.Vgg16()
        vgg.build(input_)

with tf.Session() as sess:
    img = utils.load_image(test_img_path)
    img = img.reshape((1, 224, 224, 3))

    feed_dict = {input_: img}
    code = sess.run(vgg.relu6, feed_dict=feed_dict)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))

    feed = {inputs_: code}
    prediction = sess.run(predicted, feed_dict=feed).squeeze()

plt.imshow(test_img)

plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
```
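The bar chart above shows all five class probabilities at once. To print a ranked list instead, a small NumPy helper is enough. This is a hypothetical convenience function, not part of the notebook's solution; `class_names` stands in for `lb.classes_`, and the probability vector is made up for illustration:

```python
import numpy as np

def top_k(probabilities, class_names, k=3):
    """Return the k (name, probability) pairs with the highest probability."""
    order = np.argsort(probabilities)[::-1][:k]  # indices, highest first
    return [(class_names[i], float(probabilities[i])) for i in order]

class_names = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
prediction = np.array([0.05, 0.02, 0.80, 0.03, 0.10])  # toy softmax output

for name, p in top_k(prediction, class_names):
    print('{:>12}: {:.2f}'.format(name, p))
```

With a real `prediction` from `sess.run(predicted, ...)` the same call ranks the classifier's guesses for the chosen image.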
```
import numpy as np
import pandas as pd

from keras.models import *
from keras.layers import Input, merge, Conv2D, MaxPooling2D, UpSampling2D, Dropout, Cropping2D
from keras.optimizers import *
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, EarlyStopping, ReduceLROnPlateau

from datetime import datetime
import time
import sys
import configparser
import json
import pickle

import matplotlib.pyplot as plt
#%matplotlib inline

from unet.generator import *
from unet.loss import *
from unet.maskprocessor import *
from unet.visualization import *
from unet.modelfactory import *
```

This notebook trains the Road Segmentation model. The exported .py script takes in the config filename via a command line parameter. To run this notebook directly in jupyter notebook, please manually set config_file to point to a configuration file (e.g. cfg/default.cfg).

```
# command line args processing "python RoadSegmentor.py cfg/your_config.cfg"
if len(sys.argv) > 1 and '.cfg' in sys.argv[1]:
    config_file = sys.argv[1]
else:
    config_file = 'cfg/default.cfg'
    #print('missing argument. please provide config file as argument. syntax: python RoadSegmentor.py <config_file>')
    #exit(0)

print('reading configurations from config file: {}'.format(config_file))

settings = configparser.ConfigParser()
settings.read(config_file)

x_data_dir = settings.get('data', 'train_x_dir')
y_data_dir = settings.get('data', 'train_y_dir')
print('x_data_dir: {}'.format(x_data_dir))
print('y_data_dir: {}'.format(y_data_dir))

data_csv_path = settings.get('data', 'train_list_csv')

print('model configuration options:', settings.options("model"))

model_dir = settings.get('model', 'model_dir')
print('model_dir: {}'.format(model_dir))

timestr = time.strftime("%Y%m%d-%H%M%S")

model_id = settings.get('model', 'id')
print('model: {}'.format(model_id))

optimizer_label = 'Adam'  # default
if settings.has_option('model', 'optimizer'):
    optimizer_label = settings.get('model', 'optimizer')

if settings.has_option('model', 'source'):
    model_file = settings.get('model', 'source')
    print('model_file: {}'.format(model_file))
else:
    model_file = None

learning_rate = settings.getfloat('model', 'learning_rate')
max_number_epoch = settings.getint('model', 'max_epoch')
print('learning rate: {}'.format(learning_rate))
print('max epoch: {}'.format(max_number_epoch))

min_learning_rate = 0.000001
if settings.has_option('model', 'min_learning_rate'):
    min_learning_rate = settings.getfloat('model', 'min_learning_rate')
print('minimum learning rate: {}'.format(min_learning_rate))

lr_reduction_factor = 0.1
if settings.has_option('model', 'lr_reduction_factor'):
    lr_reduction_factor = settings.getfloat('model', 'lr_reduction_factor')
print('lr_reduction_factor: {}'.format(lr_reduction_factor))

batch_size = settings.getint('model', 'batch_size')
print('batch size: {}'.format(batch_size))

input_width = settings.getint('model', 'input_width')
input_height = settings.getint('model', 'input_height')

img_gen = CustomImgGenerator(x_data_dir, y_data_dir, data_csv_path)

train_gen = img_gen.trainGen(batch_size=batch_size, is_Validation=False)
validation_gen = img_gen.trainGen(batch_size=batch_size, is_Validation=True)

timestr = time.strftime("%Y%m%d-%H%M%S")
model_filename = model_dir + '{}-{}.hdf5'.format(model_id, timestr)
print('model checkpoint file path: {}'.format(model_filename))

# Early stopping prevents overfitting on training data
# Make sure the patience value for EarlyStopping > patience value for ReduceLROnPlateau.
# Otherwise ReduceLROnPlateau will never be called.
early_stop = EarlyStopping(monitor='val_loss', patience=3, min_delta=0, verbose=1, mode='auto')

model_checkpoint = ModelCheckpoint(model_filename, monitor='val_loss', verbose=1, save_best_only=True)

reduceLR = ReduceLROnPlateau(monitor='val_loss',
                             factor=lr_reduction_factor,
                             patience=2,
                             verbose=1,
                             min_lr=min_learning_rate,
                             epsilon=1e-4)

training_start_time = datetime.now()

number_validations = img_gen.validation_samples_count()
samples_per_epoch = img_gen.training_samples_count()

modelFactory = ModelFactory(num_channels=3, img_rows=input_height, img_cols=input_width)

if model_file is not None:
    model = load_model(model_dir + model_file,
                       custom_objects={'dice_coef_loss': dice_coef_loss,
                                       'dice_coef': dice_coef,
                                       'binary_crossentropy_dice_loss': binary_crossentropy_dice_loss})
else:
    model = modelFactory.get_model(model_id)

print(model.summary())

if optimizer_label == 'Adam':
    optimizer = Adam(lr=learning_rate)
elif optimizer_label == 'RMSprop':
    optimizer = RMSprop(lr=learning_rate)
else:
    raise ValueError('unsupported optimizer: {}'.format(optimizer_label))

model.compile(optimizer=optimizer, loss=dice_coef_loss, metrics=['accuracy', dice_coef])

history = model.fit_generator(generator=train_gen,
                              steps_per_epoch=np.ceil(float(samples_per_epoch) / float(batch_size)),
                              validation_data=validation_gen,
                              validation_steps=np.ceil(float(number_validations) / float(batch_size)),
                              epochs=max_number_epoch,
                              verbose=1,
                              callbacks=[model_checkpoint, early_stop, reduceLR])

time_spent_training = datetime.now() - training_start_time
print('model training complete. time spent: {}'.format(time_spent_training))

print(history.history)

historyFilePath = model_dir + '{}-{}-train-history.png'.format(model_id, timestr)
trainingHistoryPlot(model_id + timestr, historyFilePath, history.history)

pickleFilePath = model_dir + '{}-{}-history-dict.pickle'.format(model_id, timestr)
with open(pickleFilePath, 'wb') as handle:
    pickle.dump(history.history, handle, protocol=pickle.HIGHEST_PROTOCOL)
```
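The `dice_coef` and `dice_coef_loss` used above are imported from `unet.loss`, which is not shown in this notebook. As a rough illustration of what a Dice-based loss measures, here is a NumPy sketch (an assumption about the general shape of such losses, not the project's actual Keras implementation, which operates on tensors and may define the loss as `-dice` rather than `1 - dice`):

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    """Overlap measure in [0, 1]: 1.0 means the two masks agree perfectly."""
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()
    intersection = np.sum(y_true * y_pred)
    # smooth avoids 0/0 for empty masks and softens the gradient near zero
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def dice_coef_loss(y_true, y_pred):
    return 1.0 - dice_coef(y_true, y_pred)

print(dice_coef([0, 1, 1, 0], [0, 1, 1, 0]))  # identical masks -> 1.0
print(dice_coef([1, 1, 0, 0], [0, 0, 1, 1]))  # disjoint masks -> 0.2 (not 0, due to smoothing)
```

For road segmentation this is a natural objective: the road pixels are a small fraction of the image, and Dice rewards overlap directly rather than per-pixel accuracy.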
```
from neo4j import GraphDatabase
import json

with open('credentials.json') as json_file:
    credentials = json.load(json_file)
username = credentials['username']
pwd = credentials['password']
```

### NOTE ❣️

* BEFORE running this, still need to run `bin\neo4j console` to enable bolt on 127.0.0.1:7687
* When the query you wrote is wrong, the error may look like a connection or credential problem. You don't really need to restart the server; once the query has been corrected, everything will run fine.

#### Useful Links

* Results can be output: https://neo4j.com/docs/api/python-driver/current/results.html

```
driver = GraphDatabase.driver("bolt://localhost:7687", auth=(username, pwd))

def delete_all(tx):
    result = tx.run("""match (n) detach delete n""").single()
    if result is None:
        print('Removed All!')

def create_entity(tx, entity_id, entity_name, entity_properties):
    query = """CREATE (""" + entity_id + """: """ + entity_name + entity_properties + """)"""
    result = tx.run(query)

def display_all(tx, query):
    results = tx.run(query)
    for record in results:
        print(record)
    return results.graph()

with driver.session() as session:
    session.write_transaction(delete_all)
    session.write_transaction(create_entity, entity_id='Alice', entity_name='Client',
                              entity_properties="{name:'Alice', ip: '1.1.1.1', shipping_address: 'a place', billing_address: 'a place'}")
    graph = session.read_transaction(display_all, query="MATCH (c:Client) RETURN c")

def create_all(tx, query):
    result = tx.run(query)

query = """
// Clients
CREATE (Alice:Client {name:'Alice', ip: '1.1.1.1', shipping_address: 'a place', billing_address: 'a place'})
CREATE (Bob:Client {name:'Bob', ip: '1.1.1.2', shipping_address: 'b place', billing_address: 'b place'})
CREATE (Cindy:Client {name:'Cindy', ip: '1.1.1.3', shipping_address: 'c place', billing_address: 'c place'})
CREATE (Diana:Client {name:'Diana', ip: '1.1.1.4', shipping_address: 'd place', billing_address: 'd place'})
CREATE (Emily:Client {name:'Emily', ip: '1.1.1.5', shipping_address: 'e place', billing_address: 'e place'})
CREATE (Fiona:Client {name:'Fiona', ip: '1.1.1.6', shipping_address: 'f place', billing_address: 'f place'})

// Products
CREATE (prod1:Product {name: 'strawberry ice-cream', category: 'ice-cream', price: 6.9, unit: 'box'})
CREATE (prod2:Product {name: 'mint ice-cream', category: 'ice-cream', price: 6.9, unit: 'box'})
CREATE (prod3:Product {name: 'mango ice-cream', category: 'ice-cream', price: 6.9, unit: 'box'})
CREATE (prod4:Product {name: 'cheesecake ice-cream', category: 'ice-cream', price: 7.9, unit: 'box'})
CREATE (prod5:Product {name: 'orange', category: 'fruit', price: 2.6, unit: 'lb'})
CREATE (prod6:Product {name: 'dragon fruit', category: 'fruit', price: 4.8, unit: 'lb'})
CREATE (prod7:Product {name: 'kiwi', category: 'fruit', price: 5.3, unit: 'lb'})
CREATE (prod8:Product {name: 'cherry', category: 'fruit', price: 4.8, unit: 'lb'})
CREATE (prod9:Product {name: 'strawberry', category: 'fruit', price: 3.9, unit: 'lb'})

// Orders
CREATE (d1:Order {id:'d1', name:'d1', deliverdate:'20190410', status:'delivered'})
CREATE (d2:Order {id:'d2', name:'d2', deliverdate:'20130708', status:'delivered'})
CREATE (d3:Order {id:'d3', name:'d3', deliverdate:'20021201', status:'delivered'})
CREATE (d4:Order {id:'d4', name:'d4', deliverdate:'20040612', status:'delivered'})
CREATE (d5:Order {id:'d5', name:'d5', deliverdate:'20110801', status:'delivered'})
CREATE (d6:Order {id:'d6', name:'d6', deliverdate:'20171212', status:'delivered'})

// Link Clients, Orders and Products
CREATE (Alice)-[:PLACED]->(d1)-[:CONTAINS {quantity:1}]->(prod1),
       (d1)-[:CONTAINS {quantity:2}]->(prod2),
       (Bob)-[:PLACED]->(d2)-[:CONTAINS {quantity:2}]->(prod1),
       (d2)-[:CONTAINS {quantity:6}]->(prod7),
       (Cindy)-[:PLACED]->(d3)-[:CONTAINS {quantity:1}]->(prod9),
       (Alice)-[:PLACED]->(d4)-[:CONTAINS {quantity:100}]->(prod4),
       (Alice)-[:PLACED]->(d5)-[:CONTAINS {quantity:10}]->(prod8),
       (Alice)-[:PLACED]->(d6)-[:CONTAINS {quantity:1}]->(prod7);
"""

with driver.session() as session:
    session.write_transaction(delete_all)
    session.write_transaction(create_all, query)
    graph = session.read_transaction(display_all,
                                     query="""MATCH (c:Client)-[:PLACED]-(o)-[:CONTAINS]->(p) return c, o, p;""")

graph

with driver.session() as session:
    graph = session.read_transaction(display_all, query="""MATCH (c:Client {name:'Alice'})-[:PLACED]->(o)-[cts:CONTAINS]->(p)
WITH c, o, SUM(cts.quantity * p.price) as order_price
ORDER BY o.deliverdate
WITH c.name AS name, COLLECT(o) AS os, COLLECT(order_price) as ops
UNWIND [i IN RANGE(0, SIZE(os)-1) | {name: name, id: os[i].id,
        current_order_cost: round(ops[i]),
        other_orders: [x IN os[0..i] + os[i+1..SIZE(os)] | x.id],
        other_orders_costs: [x IN ops[0..i] + ops[i+1..SIZE(os)] | round(x)]
        }] AS result
WITH result.name as name, result.id as order_id,
     result.current_order_cost as current_order_cost,
     result.other_orders as other_orders,
     result.other_orders_costs as other_orders_costs
UNWIND(other_orders_costs) as unwind_other_orders_costs
return name, order_id, current_order_cost, other_orders, other_orders_costs,
       round(stDev(unwind_other_orders_costs)) as other_costs_std;""")

graph
```
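A Cypher aggregation like the one above is easy to get subtly wrong, so it can help to sanity-check the numbers in plain Python. The sketch below recomputes Alice's per-order costs (quantity × price, from the sample graph) and the standard deviation of the *other* orders' costs; this assumes Cypher's `stDev` is the sample standard deviation, matching `statistics.stdev`:

```python
from statistics import stdev

# Alice's orders, cost = sum(quantity * price), taken from the CREATE statements above
order_costs = {'d1': 1 * 6.9 + 2 * 6.9,  # strawberry + mint ice-cream
               'd4': 100 * 7.9,          # cheesecake ice-cream
               'd5': 10 * 4.8,           # cherry
               'd6': 1 * 5.3}            # kiwi

for order_id, cost in order_costs.items():
    others = [c for oid, c in order_costs.items() if oid != order_id]
    # statistics.stdev is the sample standard deviation, like Cypher's stDev
    print(order_id, round(cost), round(stdev(others)))
```

If these values disagree with the query's output, the mistake is almost certainly in the list-slicing (`os[0..i] + os[i+1..SIZE(os)]`) part of the Cypher.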
```
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.layers import MaxPool2D, Flatten, Dense
from tensorflow.keras import Model
```

![](https://lh3.googleusercontent.com/-WuPVxynI_ss/X_6DG-R179I/AAAAAAAAsSc/S0rVDJtOW_Q7bbPdOC2xnvRn3DpRbbe6wCK8BGAsYHg/s0/2021-01-12.png)

```
input = Input(shape=(224, 224, 3))  # Input is an RGB image, so 3 channels
```

## Conv Block-1

```
x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(input)
x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
```

## Conv Block-2

```
x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
```

### Conv Block-3

```
x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
```

## Conv Block-4

```
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
```

## Conv Block-5

```
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)

x = Flatten()(x)
x = Dense(units=4096, activation='relu')(x)
x = Dense(units=4096, activation='relu')(x)
output = Dense(units=1000, activation='softmax')(x)

model = Model(inputs=input, outputs=output)
model.summary()
```

### Keras pretrained weights

```
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16
import numpy as np
import cv2

model = VGG16(weights='imagenet', include_top=True)
model.compile(optimizer='sgd', loss='categorical_crossentropy')

im = cv2.resize(cv2.imread('/home/hemanth/Documents/DeepLearning/CNN/2021-01-12.jpeg'), (224, 224))
im = np.expand_dims(im, axis=0)
im = im.astype(np.float32)

out = model.predict(im)
index = np.argmax(out)
print(index)
plt.plot(out.ravel())
plt.show()
```

## Feature extraction in VGG16

```
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras import models

base_model = VGG16(weights='imagenet', include_top=True)
print(base_model)

for i, layer in enumerate(base_model.layers):
    print(i, layer.name, layer.output_shape)

model = models.Model(inputs=base_model.input, outputs=base_model.get_layer('block1_pool').output)

imge_path = '/home/hemanth/Documents/DeepLearning/CNN/cats_and_dogs_filtered/train/cats/cat.0.jpg'
img = image.load_img(imge_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
x

features = model.predict(x)
print(features)
plt.plot(features.ravel())
```

## VGG16 model built layer by layer with explicit zero padding

```
import tensorflow as tf
from tensorflow.keras import layers, models
import cv2, numpy as np
import os

def VGG_16(weights_path=None):
    model = models.Sequential()
    model.add(layers.ZeroPadding2D((1, 1), input_shape=(224, 224, 3)))
    model.add(layers.Convolution2D(64, (3, 3), activation='relu'))
    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(128, (3, 3), activation='relu'))
    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(128, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(256, (3, 3), activation='relu'))
    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(256, (3, 3), activation='relu'))
    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(256, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(512, (3, 3), activation='relu'))
    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(512, (3, 3), activation='relu'))
    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(512, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(512, (3, 3), activation='relu'))
    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(512, (3, 3), activation='relu'))
    model.add(layers.ZeroPadding2D((1, 1)))
    model.add(layers.Convolution2D(512, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2), strides=(2, 2)))

    model.add(layers.Flatten())

    # top layer of the VGG net
    model.add(layers.Dense(4096, activation='relu'))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(4096, activation='relu'))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(1000, activation='softmax'))

    if weights_path:
        model.load_weights(weights_path)

    return model

im = cv2.resize(cv2.imread('2021-01-12.jpeg'), (224, 224)).astype(np.float32)
#im = im.transpose((2,0,1))
im = np.expand_dims(im, axis=0)

# Test pretrained model
path_file = os.path.join(os.path.expanduser("~"), '.keras/models/vgg16_weights_tf_dim_ordering_tf_kernels.h5')
model = VGG_16(path_file)
model.summary()
model.compile(optimizer='sgd', loss='categorical_crossentropy')
out = model.predict(im)
print(np.argmax(out))
```
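Both variants keep the spatial size constant through the 3×3 convolutions (via `padding='same'` or an explicit one-pixel `ZeroPadding2D`) and halve it at each 2×2/stride-2 pooling. A small helper makes that arithmetic explicit (plain Python, just for checking shapes, not part of the model):

```python
def conv_out(size, kernel=3, padding=1, stride=1):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

size = 224
for block in range(5):      # five conv blocks, each ending in max pooling
    size = conv_out(size)   # 3x3 conv with 1px padding keeps the size
    size = pool_out(size)   # 2x2/stride-2 pooling halves it
print(size)                 # 224 -> 112 -> 56 -> 28 -> 14 -> 7

flattened = size * size * 512
print(flattened)            # 7 * 7 * 512 = 25088 inputs to the first Dense(4096)
```

This is why `model.summary()` shows a 25088-wide flatten feeding the fully connected head.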
This notebook shows the MEP quickstart sample, which also exists as a non-notebook version at: https://bitbucket.org/vitotap/python-spark-quickstart

It shows how to use Spark (http://spark.apache.org/) for distributed processing on the PROBA-V Mission Exploitation Platform (https://proba-v-mep.esa.int/). The sample intentionally implements a very simple computation: for each PROBA-V tile in a given bounding box and time range, a histogram is computed. The results are then summed and printed. Computation of the histograms runs in parallel.

## First step: get file paths

A catalog API is available to easily retrieve paths to PROBA-V files: https://readthedocs.org/projects/mep-catalogclient/

```
from catalogclient import catalog

cat = catalog.Catalog()
cat.get_producttypes()

date = "2016-01-01"
products = cat.get_products('PROBAV_L3_S1_TOC_333M',
                            fileformat='GEOTIFF',
                            startdate=date,
                            enddate=date,
                            min_lon=0, max_lon=10, min_lat=36, max_lat=53)

# extract NDVI geotiff files from product metadata
files = [p.file('NDVI')[5:] for p in products]
print('Found ' + str(len(files)) + ' files.')
print(files[0])

# check if file exists
!file {files[0]}
```

## Second step: define function to apply

Define the histogram function. This can also be done inline, which allows for a faster feedback loop when writing the code, but here we want to clearly separate the processing 'algorithm' from the parallelization code.

```
# Calculates the histogram for a given (single band) image file.
def histogram(image_file):
    import numpy as np
    import gdal

    # Open image file
    img = gdal.Open(image_file)
    if img is None:
        raise IOError('Unable to open image file "%s"' % image_file)

    # Open raster band (first band)
    raster = img.GetRasterBand(1)
    xSize = img.RasterXSize
    ySize = img.RasterYSize

    # Read raster data
    data = raster.ReadAsArray(0, 0, xSize, ySize)

    # Calculate histogram
    hist, _ = np.histogram(data, bins=256)
    return hist
```

## Third step: setup Spark

To work on the processing cluster, we need to specify the resources we want:

* spark.executor.cores: Number of cores per executor. Usually our tasks are single threaded, so 1 is a good default.
* spark.executor.memory: memory to assign per executor. For the Java/Spark processing, not the Python part.
* spark.yarn.executor.memoryOverhead: memory available for Python in each executor.

We set up the SparkConf with these parameters, and create a SparkContext sc, which will be our access point to the cluster.

```
%%time
# ================================================================
# === Calculate the histogram for a given number of files. The ===
# === processing is performed by spreading them over a cluster ===
# === of Spark nodes.                                          ===
# ================================================================

from datetime import datetime
from operator import add
import pyspark
import os

# Setup the Spark cluster
conf = pyspark.SparkConf()
conf.set('spark.yarn.executor.memoryOverhead', 512)
conf.set('spark.executor.memory', '512m')
sc = pyspark.SparkContext(conf=conf)
```

## Fourth step: compute histograms

We use a couple of Spark functions to run our job on the cluster. Comments are provided in the code.

```
%%time
# Distribute the local file list over the cluster.
filesRDD = sc.parallelize(files, len(files))

# Apply the 'histogram' function to each filename using 'map', keep the result in memory using 'cache'.
hists = filesRDD.map(histogram).cache()

count = hists.count()

# Combine distributed histograms into a single result
total = list(hists.reduce(lambda h, i: map(add, h, i)))
hists.unpersist()

print("Sum of %i histograms: %s" % (count, total))

# stop spark session if we no longer need it
sc.stop()
```

## Fifth step: plot our result

Plot the array of values as a simple line chart using matplotlib. This is the most basic Python plotting library. More advanced options such as bokeh, mpld3 and seaborn are also available.

```
%matplotlib inline
import matplotlib.pyplot as plt

plt.plot(total)
plt.show()
```
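The `reduce(lambda h, i: map(add, h, i))` step combines the per-file histograms bin by bin. The same pattern works on plain lists, which is an easy way to check the combining logic locally without a Spark cluster (toy 4-bin histograms standing in for the 256-bin ones):

```python
from functools import reduce
from operator import add

# Three toy "histograms", one per image file
hists = [[1, 0, 2, 3],
         [0, 1, 1, 1],
         [2, 2, 0, 0]]

# Same lambda as in the Spark job; map() pairs the bins, add() sums each pair
total = list(reduce(lambda h, i: map(add, h, i), hists))
print(total)  # [3, 3, 3, 4]
```

Because element-wise addition is associative and commutative, Spark is free to combine partial results in any order across the cluster and still produce this same total.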
``` CRISP_DM = "C:/Users/kaivl/data_science_covid-19/CRISP_DM.png" from PIL import Image import glob Image.open(CRISP_DM) import pandas as pd import subprocess import os import ntpath pd.set_option('display.max_rows', 500) ``` ### Data Understanding * RKI, webscrape (webscraping) https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Fallzahlen.html * John Hopkins (GITHUB) https://github.com/CSSEGISandData/COVID-19.git * Rest API services to retreive data https://npgeo-corona-npgeo-de.hub.arcgis.com/ ### GITHUB csv data ``` git_pull = subprocess.Popen( "git pull", cwd = os.path.dirname('C:\\Users\\kaivl\\data_science_covid-19\\data\\raw\\COVID-19'), shell = True, stdout = subprocess.PIPE, stderr = subprocess.PIPE ) (out, error) = git_pull.communicate() print("Error : " + str(error)) print("out : " + str(out)) data_path='../data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv' pd_raw=pd.read_csv(data_path) pd_raw.head() ``` ## Webscrapping ``` import requests from bs4 import BeautifulSoup page = requests.get("https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Fallzahlen.html") soup = BeautifulSoup(page.content, 'html.parser') html_table=soup.find('table') all_rows=html_table.find_all('tr') final_data_list=[] for pos,rows in enumerate(all_rows): col_list=[each_col.get_text(strip=True) for each_col in rows.find_all('td')] #tag td for each data element in raw final_data_list.append(col_list) pd_daily_status=pd.DataFrame(final_data_list).dropna().rename(columns={0:'state', 1:'cases', 2:'changes', 3:'cases in the past 7 days', 4:'7-day incidence', 5:'deaths'}) # Note: All the data is in german standard (decimals: ',' and tousands: '.') pd_daily_status.head() pd_daily_status.to_csv('../data/raw/RKI/RKI_data.csv',sep=';', index = False) # Data will be prepared in notebook 'data_preparation' ``` ## REST API calls ``` 
data=requests.get('https://services7.arcgis.com/mOBPykOjAyBO2ZKk/arcgis/rest/services/Coronaf%C3%A4lle_in_den_Bundesl%C3%A4ndern/FeatureServer/0/query?where=1%3D1&outFields=*&outSR=4326&f=json') import json json_object=json.loads(data.content) type(json_object) json_object.keys() # Extract the data from the dict. full_list=[] for pos,each_dict in enumerate (json_object['features'][:]): full_list.append(each_dict['attributes']) pd.DataFrame(full_list) pd_full_list=pd.DataFrame(full_list) pd_full_list.head() pd_full_list.to_csv('../data/raw/NPGEO/GER_state_data.csv',sep=';') ``` ## API access via REST service, e.g. USA data example of a REST confirm interface (attention registration mandatory) www.smartable.ai ``` import requests url_endpoint='https://api.smartable.ai/coronavirus/stats/US' headers = { 'Cache-Control': 'no-cache', 'Subscription-Key': '28ee4219700f48718be78b057beb7eb4', } response = requests.get('https://api.smartable.ai/coronavirus/stats/US', headers=headers) print(response) response.content US_dict=json.loads(response.content) with open('../data/raw/SMARTABLE/US_data.json', 'w') as outfile: json.dump(US_dict, outfile,indent=2) US_dict['stats']['breakdowns'][0] # Extract data of interest from dict. 
full_list_US_country = []
for pos, each_dict in enumerate(US_dict['stats']['breakdowns'][:]):
    flatten_dict = each_dict['location']
    flatten_dict.update(dict(list(US_dict['stats']['breakdowns'][pos].items())[1:7]))
    full_list_US_country.append(flatten_dict)

pd.DataFrame(full_list_US_country).to_csv('../data/raw/SMARTABLE/full_list_US_country.csv', sep=';', index=False)

url_endpoint = 'https://api.smartable.ai/coronavirus/stats/IN'
headers = {
    'Cache-Control': 'no-cache',
    'Subscription-Key': '79fead24cb57472d820cb56d1b451d7c',
}

response = requests.get(url_endpoint, headers=headers)
IN_dict = json.loads(response.content)

# Dump the India dataset dict into a JSON file (stored with a .txt extension)
with open('../data/raw/IN_data.txt', 'w') as outfile:
    json.dump(IN_dict, outfile, indent=2)

print(json.dumps(IN_dict, indent=2))  # string dump

# Put all dictionary-type data for India into a DataFrame
df_4 = pd.DataFrame(IN_dict)
df_4.head()
```

## Individual states in India

```
full_list_IN_country = []
for pos, each_dict in enumerate(IN_dict['stats']['breakdowns'][:]):
    flatten_dict = each_dict['location']
    flatten_dict.update(dict(list(IN_dict['stats']['breakdowns'][pos].items())[1:7]))
    full_list_IN_country.append(flatten_dict)

df_india = pd.DataFrame(full_list_IN_country)
pd.DataFrame(full_list_IN_country).to_csv('../data/raw/SMARTABLE/full_list_IN_country.csv', sep=';', index=False)
```
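As noted in the web-scraping cell, the RKI table uses German number formatting (decimal ',' and thousands '.'). A minimal helper for converting such strings before analysis — a sketch only; pandas' `read_csv` can also handle this directly via its `decimal` and `thousands` parameters:

```python
def parse_german_number(s: str) -> float:
    """Convert a German-formatted number ('.' as thousands separator, ',' as decimal) to float."""
    return float(s.replace(".", "").replace(",", "."))

print(parse_german_number("1.234.567,89"))  # 1234567.89
print(parse_german_number("0,5"))           # 0.5
```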
``` import torch import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split import random %matplotlib inline import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import Dataset, DataLoader import glob import os.path from tqdm import tqdm predictionDB = pd.read_csv("../data/processed/predictionDB.csv",lineterminator='\n') embeddings = [None]*len(predictionDB) i=0 k=0 #for np_name in glob.glob('./../data/processed/embeddings/*.np[yz]'): # embeddings[i] = np.load(np_name) # i = i + 1 for x in predictionDB["COMMIT_HASH"]: embeddings[i] = np.load("./../data/processed/embeddings2/"+x+".npy") i = i + 1 embeddings print(len(predictionDB),i) ''' clean = '' for i,s in enumerate(predictionDB["COMMIT_MESSAGE"][3].split()[:-3]): if i!=0: clean+= ' ' clean+=s print(clean) ''' ''' clean_column = [None]*len(predictionDB["COMMIT_MESSAGE"]) for i in range(len(predictionDB["COMMIT_MESSAGE"])): clean = '' for j,s in enumerate(predictionDB["COMMIT_MESSAGE"][i].split()[:-3]): if j!=0: clean+= ' ' clean+=s clean_column[i] = clean if not len(clean): clean_column[i] = predictionDB["COMMIT_MESSAGE"] predictionDB["CLEAN_CMS"] = clean_column ''' # predictionDB.to_csv("../data/processed/predictionDB2.csv", index='False') #export! 
#a = predictionDB["CLEAN_CMS"].to_frame()
#np.where(a.applymap(lambda x: x == ''))
predictionDB["CLEAN_CMS"][4355]

from scipy import spatial

a = predictionDB["CLEAN_CMS"][2]
b = predictionDB["CLEAN_CMS"][3]
c = predictionDB["CLEAN_CMS"][62912]
emb_a = embeddings[2]
emb_b = embeddings[3]
emb_c = embeddings[62912]

print("a =", a)
print("b =", b)
print("c =", c[:73])
print("")
print("Similarity {emb(a),emb(b)} = %.2f" % (1 - spatial.distance.cosine(emb_a, emb_b)))
print("Similarity {emb(a),emb(c)} = %.2f" % (1 - spatial.distance.cosine(emb_a, emb_c)))
print("Similarity {emb(b),emb(c)} = %.2f" % (1 - spatial.distance.cosine(emb_b, emb_c)))
print("_______________________________________________________________________________________")
print("")

a = predictionDB["CLEAN_CMS"][6788]
b = predictionDB["CLEAN_CMS"][6787]
c = predictionDB["CLEAN_CMS"][4444]
emb_a = embeddings[6788]
emb_b = embeddings[6787]
emb_c = embeddings[4444]

print("a =", a)
print("b =", b)
print("c =", c)
print("")
print("Similarity {emb(a),emb(b)} = %.2f" % (1 - spatial.distance.cosine(emb_a, emb_b)))
print("Similarity {emb(a),emb(c)} = %.2f" % (1 - spatial.distance.cosine(emb_a, emb_c)))
print("Similarity {emb(b),emb(c)} = %.2f" % (1 - spatial.distance.cosine(emb_b, emb_c)))

print("Commit msg a =", a)
print("Commit msg b =", b)
print("Commit msg c =", c)
print("")
print("Cosine similarity (a,b) = ", 1 - spatial.distance.cosine(emb_a, emb_b))
print("Cosine similarity (a,c) = ", 1 - spatial.distance.cosine(emb_a, emb_c))
print("Cosine similarity (b,c) = ", 1 - spatial.distance.cosine(emb_b, emb_c))

print(spatial.distance.cosine(emb_a, emb_c))

embeddings2 = pd.Series((v for v in embeddings))
embeddings2

#data = embeddings
labels = predictionDB["inc_complexity"]

# Binarize the target: did the commit increase complexity?
for i in range(len(labels)):
    if labels[i] <= 0:
        labels[i] = 0
    else:
        labels[i] = 1
labels

from sklearn.model_selection import train_test_split

data_train, data_test, labels_train, labels_test = train_test_split(embeddings2, labels, test_size=0.20, random_state=42)
labels_train.shape
type(embeddings2)

class commits_dataset(Dataset):
    def __init__(self, X, y):
        self.X = X
        self.y = y

    def __len__(self):
        return len(self.X.index)

    def __getitem__(self, index):
        return torch.Tensor(self.X.iloc[index]), torch.as_tensor(self.y.iloc[index]).float()

commits_dataset_train = commits_dataset(X=data_train, y=labels_train)
commits_dataset_test = commits_dataset(X=data_test, y=labels_test)
# print(commits_dataset_train[0])

train_loader = DataLoader(dataset=commits_dataset_train, batch_size=32, shuffle=True)
valid_loader = DataLoader(dataset=commits_dataset_test, batch_size=32, shuffle=False)

#dls = DataLoaders(train_loader,valid_loader)
#predictionDB["is_valid"] = np.zeros(len(predictionDB))
#for i in range(len(predictionDB["is_valid"])):
#    predictionDB["is_valid"][i] = 1 if random.random()<0.2 else 0
#from fastai.text.all import *
#dls = TextDataLoaders.from_df(predictionDB, text_col='COMMIT_MESSAGE', label_col='inc_complexity', valid_col='is_valid')
#dls.show_batch(max_n=3)

# Multilayer perceptron
class MultilayerPerceptron(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(384, 512, bias=True)
        self.lin2 = nn.Linear(512, 256, bias=True)
        self.lin3 = nn.Linear(256, 1, bias=True)

    def forward(self, xb):
        x = xb.float()
        #x = xb.view(250, -1)
        x = F.relu(self.lin1(x))
        x = F.relu(self.lin2(x))
        return self.lin3(x)  # raw logits, no final activation

#mlp_learner = Learner(data=data, model=MultilayerPerceptron(), loss_func=nn.CrossEntropyLoss(),metrics=accuracy)
#mlp_learner.fine_tune(20)

model = MultilayerPerceptron()
print(model)

optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
# The model outputs raw logits, so use the logits variant of BCE
# (plain nn.BCELoss expects probabilities in [0, 1]).
loss_fn = nn.BCEWithLogitsLoss()

mean_train_losses = []
mean_valid_losses = []
valid_acc_list = []
epochs = 100

for epoch in range(epochs):
    model.train()
    train_losses = []
    valid_losses = []
    for i, (embeddings, labels) in tqdm(enumerate(train_loader)):
        optimizer.zero_grad()
        outputs = model(embeddings)
        loss = loss_fn(outputs.squeeze(1), labels)  # squeeze the feature dim, not the batch dim
        loss.backward()
        optimizer.step()
        train_losses.append(loss.item())

    model.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for i, (embeddings, labels) in enumerate(valid_loader):
            outputs = model(embeddings)
            loss = loss_fn(outputs.squeeze(1), labels)
            valid_losses.append(loss.item())
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)

    mean_train_losses.append(np.mean(train_losses))
    mean_valid_losses.append(np.mean(valid_losses))
    print('epoch : {}, train loss : {:.4f}, valid loss : {:.4f}'\
         .format(epoch+1, mean_train_losses[-1], mean_valid_losses[-1]))
```
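Since the MLP's final layer returns raw logits, `nn.BCEWithLogitsLoss` — which folds the sigmoid into the loss — is the numerically stable choice for this binary target. A pure-Python sketch (scalar values only, purely illustrative) of what that loss computes:

```python
import math

def bce_with_logits(logit: float, target: float) -> float:
    # Numerically stable form: max(x, 0) - x*t + log(1 + exp(-|x|))
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))

def bce(prob: float, target: float) -> float:
    # Plain binary cross-entropy on a probability in (0, 1)
    return -(target * math.log(prob) + (1 - target) * math.log(1 - prob))

logit, target = 1.3, 1.0
prob = 1 / (1 + math.exp(-logit))  # sigmoid
print(abs(bce_with_logits(logit, target) - bce(prob, target)) < 1e-9)  # True
```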
```
from google.colab import drive
ROOT = "/content/drive"
drive.mount(ROOT)
%cd "/content/drive/My Drive/Learning/deep-learning-v2-pytorch/intro-to-pytorch"
```

# Loading Image Data

So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.

We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:

<img src='assets/dog_cat.png'>

We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.

```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torchvision import datasets, transforms

import helper
```

The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:

```python
dataset = datasets.ImageFolder('path/to/data', transform=transform)
```

where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:

```
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png

root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
```

where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`.
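The directory-to-label rule can be sketched in plain Python: `ImageFolder` lists the subdirectories of `root`, sorts them alphabetically, and maps each name to an integer label. A simplified sketch of that rule (not torchvision's actual code):

```python
import os
import tempfile

def find_classes(root: str):
    # Classes are the subdirectory names, sorted alphabetically,
    # each mapped to an integer label.
    classes = sorted(entry.name for entry in os.scandir(root) if entry.is_dir())
    return classes, {name: idx for idx, name in enumerate(classes)}

# Demo on a throwaway directory tree
root = tempfile.mkdtemp()
for name in ("dog", "cat"):
    os.makedirs(os.path.join(root, name))

classes, class_to_idx = find_classes(root)
print(classes)       # ['cat', 'dog']
print(class_to_idx)  # {'cat': 0, 'dog': 1}
```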
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. ### Transforms When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor: ```python transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) ``` There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). ### Data Loaders With the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch. ```python dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) ``` Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`. 
```python # Looping through it, get a batch on each loop for images, labels in dataloader: pass # Get one batch images, labels = next(iter(dataloader)) ``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ``` data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform = transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size = 63, shuffle = True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ``` If you loaded the data correctly, you should see something like this (your image will be different): <img src='assets/cat_cropped.png' width=244> ## Data Augmentation A common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale and crop, then flip your images you would define your transforms like this: ```python train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) ``` You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and a list of standard deviations, and the color channels are then normalized like so:

```input[channel] = (input[channel] - mean[channel]) / std[channel]```

Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.

You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.

>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now.

```
data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([.5, .5, .5], [.5, .5, .5])])

test_transforms = transforms.Compose([transforms.Resize(225),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize([.5, .5, .5], [.5, .5, .5])])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)
```

Your transformed
images should look something like this.

<center>Training examples:</center>
<img src='assets/train_examples.png' width=500px>

<center>Testing examples:</center>
<img src='assets/test_examples.png' width=500px>

At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and a much higher resolution (so far you've seen 28x28 images, which are tiny).

In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.

```
from torch import nn
import torch.nn.functional as F
from torch import optim

fake_list = [100, 200, 300, 500]

# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
class MyNet(nn.Module):
    def __init__(self, input_size, fc1, fc2, fc3, classes):
        super().__init__()
        self.fc1 = nn.Linear(input_size, fc1)
        self.fc2 = nn.Linear(fc1, fc2)
        self.fc3 = nn.Linear(fc2, fc3)
        self.output = nn.Linear(fc3, classes)
        self.dropout = nn.Dropout(p=.2)

    def forward(self, x):
        x = x.view(x.shape[0], -1)
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))
        x = F.log_softmax(self.output(x), dim=1)
        return x

classes = len(trainloader.dataset.classes)
input_size = 224*224*3
fc1 = 512
fc2 = 256
fc3 = 128

model = MyNet(input_size, fc1, fc2, fc3, classes)
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters())

epochs = 10
acc_history, val_acc_history, train_losses, test_losses = [], [], [], []
for e in range(epochs):
    acc = 0
    running_loss = 0
    for images, labels in trainloader:
        optimizer.zero_grad()
        log_ps = model(images)
        ps = torch.exp(log_ps)
        loss = criterion(log_ps, labels)
        _, top_class = ps.topk(1, dim=1)
        equals = top_class == labels.view(*top_class.shape)
        acc += torch.mean(equals.type(torch.FloatTensor))  # accumulate, then average over batches
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        val_acc = 0
        val_loss = 0
        with torch.no_grad():
            model.eval()
            for images, labels in testloader:
                log_ps = model(images)
                ps = torch.exp(log_ps)
                loss = criterion(log_ps, labels)
                _, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                val_acc += torch.mean(equals.type(torch.FloatTensor))  # accumulate, then average
                val_loss += loss.item()
        model.train()
        acc = acc/len(trainloader)
        val_acc = val_acc/len(testloader)
        acc_history.append(acc.item())
        val_acc_history.append(val_acc.item())
        train_losses.append(running_loss/len(trainloader))
        test_losses.append(val_loss/len(testloader))
        print("Epoch: {}/{}".format(e+1, epochs))
        print("loss: {}, val_loss: {}, acc: {}, val_acc: {}".format(running_loss/len(trainloader),
                                                                    val_loss/len(testloader),
                                                                    acc.item(),
                                                                    val_acc.item()))
```

The images are too big for fully-connected layers.
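The closing remark — fully-connected layers don't scale to images this size — is easy to quantify. With 224×224 RGB inputs flattened to a vector, the first `Linear` layer alone dominates the parameter count (layer sizes taken from the network sketched above, with 2 output classes assumed):

```python
input_size = 224 * 224 * 3                    # flattened RGB image -> 150528 features
layer_sizes = [input_size, 512, 256, 128, 2]

# Each Linear layer holds (n_in * n_out) weights plus n_out biases.
params = sum(n_in * n_out + n_out
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(input_size)  # 150528
print(params)      # 77235330 -- ~77 million, almost all in the first layer
```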
```
import numpy as np
import pandas as pd
import statsmodels.api as sm

from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.model_selection import KFold
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_score

%matplotlib inline
%config InlineBackend.figure_formats = {'png', 'retina'}

pd.options.mode.chained_assignment = None  # default='warn'

# data_key DataFrame
data_key = pd.read_csv('key.csv')
# data_train DataFrame
data_train = pd.read_csv('train.csv')
# data_weather DataFrame
data_weather = pd.read_csv('weather.csv')

rain_text = ['FC', 'TS', 'GR', 'RA', 'DZ', 'SN', 'SG', 'GS', 'PL', 'IC', 'FG', 'BR', 'UP', 'FG+']
other_text = ['HZ', 'FU', 'VA', 'DU', 'DS', 'PO', 'SA', 'SS', 'PY', 'SQ', 'DR', 'SH', 'FZ', 'MI', 'PR', 'BC', 'BL', 'VC']

data_weather['codesum'].replace("+", "")

# Split each codesum string into its weather codes; 4-character codes are
# also split into their two 2-character components.
a = []
for i in range(len(data_weather['codesum'])):
    a.append(data_weather['codesum'].values[i].split(" "))
    for i_text in a[i]:
        if len(i_text) == 4:
            a[i].append(i_text[:2])
            a[i].append(i_text[2:])

# Flag each day as 'rain', 'other' weather, or 'nothing'
data_weather["nothing"] = 1
data_weather["rain"] = 0
data_weather["other"] = 0
b = -1
for ls in a:
    b += 1
    for text in ls:
        if text in rain_text:
            data_weather.loc[b, 'rain'] = 1
            data_weather.loc[b, 'nothing'] = 0
        elif text in other_text:
            data_weather.loc[b, 'other'] = 1
            data_weather.loc[b, 'nothing'] = 0

# Merge all the data
df = pd.merge(data_weather, data_key)
station_nbr = df['station_nbr']
df.drop('station_nbr', axis=1, inplace=True)
df['station_nbr'] = station_nbr
df = pd.merge(df, data_train)

# Handle trace ('T') values. Still to handle: 'M' and '-'
df['snowfall'][df['snowfall'] == ' T'] = 0.05
df['preciptotal'][df['preciptotal'] == ' T'] = 0.005

# Distinguish weekends from weekdays
df['date'] = pd.to_datetime(df['date'])
df['week7'] = df['date'].dt.dayofweek
df['weekend'] = 0
df.loc[df['week7'] == 5, 'weekend'] = 1
df.loc[df['week7'] == 6, 'weekend'] = 1

df1 = df[df['station_nbr'] == 1];   df11 = df[df['station_nbr'] == 11]
df2 = df[df['station_nbr'] == 2];   df12 = df[df['station_nbr'] == 12]
df3 = df[df['station_nbr'] == 3];   df13 = df[df['station_nbr'] == 13]
df4 = df[df['station_nbr'] == 4];   df14 = df[df['station_nbr'] == 14]
df5 = df[df['station_nbr'] == 5];   df15 = df[df['station_nbr'] == 15]
df6 = df[df['station_nbr'] == 6];   df16 = df[df['station_nbr'] == 16]
df7 = df[df['station_nbr'] == 7];   df17 = df[df['station_nbr'] == 17]
df8 = df[df['station_nbr'] == 8];   df18 = df[df['station_nbr'] == 18]
df9 = df[df['station_nbr'] == 9];   df19 = df[df['station_nbr'] == 19]
df10 = df[df['station_nbr'] == 10]; df20 = df[df['station_nbr'] == 20]

df4 = df4.apply(pd.to_numeric, errors='coerce')
df4.describe().iloc[:, 14:]

df4['store_nbr'].unique()  # inspect before the column is dropped below

# Columns not needed: codesum, station_nbr, date, store_nbr
df4_drop_columns = ['date', 'station_nbr', 'codesum', 'store_nbr']
df4 = df4.drop(columns=df4_drop_columns)

# Find the columns that contain np.nan and fill them so that every value in the frame is populated.
df4_columns = df4.columns

# Fill np.nan with the mode for the categorical column and with the mean for real-valued columns.
for i in df4_columns:
    if i == 'resultdir':
        df4[i].fillna(df4[i].mode()[0], inplace=True)
        print(df4[i].mode()[0])
    else:
        # Possible now that every column is numeric.
        df4[i].fillna(df4[i].mean(), inplace=True)

# Add relative humidity (used by the model formula below)
df4['relative_humidity'] = 100*(np.exp((17.625*((df4['dewpoint']-32)/1.8))/(243.04+((df4['dewpoint']-32)/1.8)))/np.exp((17.625*((df4['tavg']-32)/1.8))/(243.04+((df4['tavg']-32)/1.8))))

# Compute wind chill (feels-like temperature)
df4["windchill"] = 35.74 + 0.6215*df4["tavg"] - 35.75*(df4["avgspeed"]**0.16) + 0.4275*df4["tavg"]*(df4["avgspeed"]**0.16)

df4 = df4[df4['units'] != 0]

model_df4 = sm.OLS.from_formula('np.log1p(units) ~ tmax + tmin + tavg + dewpoint + wetbulb + heat + cool + preciptotal + stnpressure + \
                                 sealevel + resultspeed + resultdir + avgspeed + C(nothing) + C(rain) + C(other) + C(item_nbr) + C(week7) + \
                                 C(weekend) + relative_humidity + windchill + 0', data=df4)
result_df4 = model_df4.fit()
print(result_df4.summary())

anova_result_df4 = sm.stats.anova_lm(result_df4, typ=2).sort_values(by=['PR(>F)'], ascending=False)
anova_result_df4[anova_result_df4['PR(>F)'] <= 0.05]

vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(df4.values, i) for i in range(df4.shape[1])]
vif["features"] = df4.columns
vif = vif.sort_values("VIF Factor").reset_index(drop=True)
vif

# Keep only the predictors that appear in both top-10 rankings:
# item_nbr, weekend, week7, preciptotal
model_df4 = sm.OLS.from_formula('np.log1p(units) ~ C(item_nbr) + C(week7) + C(weekend) + scale(preciptotal) + 0', data=df4)
result_df4 = model_df4.fit()
print(result_df4.summary())

X4 = df4[['week7', 'weekend', 'item_nbr', 'preciptotal']]
y4 = df4['units']

model4 = LinearRegression()
cv4 = KFold(n_splits=10, shuffle=True, random_state=0)
cross_val_score(model4, X4, y4, scoring="r2", cv=cv4)
```
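The wind-chill expression above is the standard NWS formula (temperature in °F, wind speed in mph). A standalone version is handy for sanity-checking the derived column:

```python
def wind_chill_f(t_avg: float, wind_mph: float) -> float:
    # NWS wind-chill formula, matching the 'windchill' column computed above.
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * t_avg - 35.75 * v + 0.4275 * t_avg * v

# 30 degF with a 10 mph wind should feel like roughly 21 degF
print(round(wind_chill_f(30.0, 10.0), 1))
```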
# Title **Exercise: B.1 - MLP by Hand** # Description In this exercise, we will **construct a neural network** to classify 3 species of iris. The classification is based on 4 measurement predictor variables: sepal length & width, and petal length & width in the given dataset. <img src="../img/image5.jpeg" style="width: 500px;"> # Instructions: The Neural Network will be built from scratch using pre-trained weights and biases. Hence, we will only be doing the forward (i.e., prediction) pass. - Load the iris dataset from sklearn standard datasets. - Assign the predictor and response variables appropriately. - One hot encode the categorical labels of the predictor variable. - Load and inspect the pre-trained weights and biases. - Construct the MLP: - Augment X with a column of ones to create the augmented design matrix X - Create the first layer weight matrix by vertically stacking the bias vector on top of the weight vector - Perform the affine transformation - Activate the output of the affine transformation using ReLU - Repeat the first 3 steps for the hidden layer (augment, vertical stack, affine) - Use softmax on the final layer - Finally, predict y # Hints: This will further develop our intuition for the architecture of a deep neural network. This diagram shows the structure of our network. You may find it useful to refer to it during the exercise. <img src="../img/image6.png" style="width: 500px;"> This is our first encounter with a multi-class classification problem and also the softmax activation on the output layer. Note: $f_1()$ above is the ReLU activation and $f_2()$ is the softmax. <a href="https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical" target="_blank">to_categorical(y, num_classes=None, dtype='float32')</a> : Converts a class vector (integers) to the binary class matrix. 
<a href="https://numpy.org/doc/stable/reference/generated/numpy.vstack.html" target="_blank">np.vstack(tup)</a> : Stack arrays in sequence vertically (row-wise).

<a href="https://numpy.org/doc/stable/reference/generated/numpy.dot.html" target="_blank">numpy.dot(a, b, out=None)</a> : Returns the dot product of two arrays.

<a href="https://numpy.org/doc/stable/reference/generated/numpy.argmax.html" target="_blank">numpy.argmax(a, axis=None, out=None)</a> : Returns the indices of the maximum values along an axis.

Note: This exercise is **auto-graded and you can try multiple attempts.**

```
#Import libraries
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.datasets import load_iris
from tensorflow.keras.utils import to_categorical
%matplotlib inline

#Load the iris data
iris_data = load_iris()

#Get the predictor and response variables
X = iris_data.data
y = iris_data.target

#See the shape of the data
print(f'X shape: {X.shape}')
print(f'y shape: {y.shape}')

#One-hot encode target labels
Y = to_categorical(y)
print(f'Y shape: {Y.shape}')
```

Load and inspect the pre-trained weights and biases. Compare their shapes to the NN diagram.

```
#Load and inspect the pre-trained weights and biases
weights = np.load('data/weights.npy', allow_pickle=True)

# weights for hidden (1st) layer
w1 = weights[0]
# biases for hidden (1st) layer
b1 = weights[1]
# weights for output (2nd) layer
w2 = weights[2]
# biases for output (2nd) layer
b2 = weights[3]

#Compare their shapes to those in the NN diagram.
for arr, name in zip([w1,b1,w2,b2], ['w1','b1','w2','b2']):
    print(f'{name} - shape: {arr.shape}')
    print(arr)
    print()
```

For the first affine transformation we need to multiply the augmented input by the first weight matrix (i.e., layer).
$$
\begin{bmatrix}
1 & X_{11} & X_{12} & X_{13} & X_{14}\\
1 & X_{21} & X_{22} & X_{23} & X_{24}\\
\vdots & \vdots & \vdots & \vdots & \vdots \\
1 & X_{n1} & X_{n2} & X_{n3} & X_{n4}\\
\end{bmatrix}
\begin{bmatrix}
b_{1}^1 & b_{2}^1 & b_{3}^1\\
W_{11}^1 & W_{12}^1 & W_{13}^1\\
W_{21}^1 & W_{22}^1 & W_{23}^1\\
W_{31}^1 & W_{32}^1 & W_{33}^1\\
W_{41}^1 & W_{42}^1 & W_{43}^1\\
\end{bmatrix}
=
\begin{bmatrix}
z_{11}^1 & z_{12}^1 & z_{13}^1\\
z_{21}^1 & z_{22}^1 & z_{23}^1\\
\vdots & \vdots & \vdots \\
z_{n1}^1 & z_{n2}^1 & z_{n3}^1\\
\end{bmatrix}
= \textbf{Z}^{(1)}
$$

<span style='color:gray'>About the notation: the superscript refers to the layer and the subscript refers to the index within the particular matrix. So $W_{23}^1$ is the weight in the 1st layer connecting the 2nd input to the 3rd hidden node. Compare this matrix representation to the slide image. Also note the bias terms and the ones that have been added to 'augment' certain matrices. You could consider $b_1^1$ to be $W_{01}^1$.</span><div></div>

<span style='color:blue'>1. Augment X with a column of ones to create `X_aug`</span><div></div><span style='color:blue'>2. Create the first layer weight matrix `W1` by vertically stacking the bias vector `b1` on top of `w1` (consult `add_ones_col` for ideas. Don't forget your `Tab` and `Shift+Tab` tricks!)</span><div></div><span style='color:blue'>3. Do the matrix multiplication to find `Z1`</span>

```
def add_ones_col(X):
    '''Augment matrix with a column of ones'''
    X_aug = np.hstack((np.ones((X.shape[0],1)), X))
    return X_aug

#Use add_ones_col()
X_aug = add_ones_col(___)

#Use np.vstack to add biases to the weight matrix
W1 = np.vstack((___,___))

#Use np.dot() to multiply X_aug and W1
Z1 = np.dot(___,___)
```

Next, we use our non-linearity

$$
\textit{a}_{\text{relu}}(\textbf{Z}^{(1)})=
\begin{bmatrix}
h_{11} & h_{12} & h_{13}\\
h_{21} & h_{22} & h_{23}\\
\vdots & \vdots & \vdots \\
h_{n1} & h_{n2} & h_{n3}\\
\end{bmatrix}=
\textbf{H}
$$

<span style='color:blue'>1.
Define the ReLU activation</span><div></div>
<span style='color:blue'>2. Use `plot_activation_func` to confirm your implementation</span><div></div>
<span style='color:blue'>3. Use relu on `Z1` to create `H`</span>

```
def relu(z: np.array) -> np.array:
    # hint:
    # relu(z) = 0 when z < 0
    # otherwise relu(z) = z
    # your code here
    h = np.maximum(___,___) # np.maximum() will help
    return h

#Helper code to plot the activation function
def plot_activation_func(f, name):
    lin_x = np.linspace(-10,10,200)
    h = f(lin_x)
    plt.plot(lin_x, h)
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title(f'{name} Activation Function')

plot_activation_func(relu, name='RELU')

# use your relu activation function on Z1
H = relu(___)
```

The next step is very similar to the first, and so we've filled it in for you.

$$
\begin{bmatrix}
1 & h_{11} & h_{12} & h_{13}\\
1 & h_{21} & h_{22} & h_{23}\\
\vdots & \vdots & \vdots & \vdots \\
1 & h_{n1} & h_{n2} & h_{n3}\\
\end{bmatrix}
\begin{bmatrix}
b_{1}^2 & b_{2}^2 & b_{3}^2\\
W_{11}^2 & W_{12}^2 & W_{13}^2\\
W_{21}^2 & W_{22}^2 & W_{23}^2\\
W_{31}^2 & W_{32}^2 & W_{33}^2\\
\end{bmatrix}=
\begin{bmatrix}
z_{11}^2 & z_{12}^2 & z_{13}^2\\
z_{21}^2 & z_{22}^2 & z_{23}^2\\
\vdots & \vdots & \vdots \\
z_{n1}^2 & z_{n2}^2 & z_{n3}^2\\
\end{bmatrix}
= \textbf{Z}^{(2)}
$$

<span style='color:blue'>1. Augment `H` with ones to create `H_aug`</span><div></div>
<span style='color:blue'>2. Combine `w2` and `b2` to create the output weight matrix `W2`</span><div></div>
<span style='color:blue'>3. Perform the matrix multiplication to produce `Z2`</span><div></div>

```
#Use add_ones_col()
H_aug = ___

#Use np.vstack to add biases to the weight matrix
W2 = ___

#Use np.dot()
Z2 = np.dot(H_aug,W2)
```

Finally we use the softmax activation on `Z2`. Now for each observation we have an output vector of length 3, which can be interpreted as a probability (the entries sum to 1).
$$
\textit{a}_{\text{softmax}}(\textbf{Z}^2)=
\begin{bmatrix}
\hat{y}_{11} & \hat{y}_{12} & \hat{y}_{13}\\
\hat{y}_{21} & \hat{y}_{22} & \hat{y}_{23}\\
\vdots & \vdots & \vdots \\
\hat{y}_{n1} & \hat{y}_{n2} & \hat{y}_{n3}\\
\end{bmatrix}
= \hat{\textbf{Y}}
$$

<span style='color:blue'>1. Define softmax</span><div></div>
<span style='color:blue'>2. Use `softmax` on `Z2` to create `Y_hat`</span><div></div>

```
def softmax(z: np.array) -> np.array:
    '''
    Input: z - 2D numpy array of logits
           rows are observations, classes are columns
    Returns: y_hat - 2D numpy array of probabilities
             rows are observations, classes are columns
    '''
    # hint: we are summing across the columns
    y_hat = np.exp(___)/np.sum(np.exp(___), axis=___, keepdims=True)
    return y_hat

#Calling the softmax function
Y_hat = softmax(___)
```

<span style='color:blue'>Now let's see how accurate the model's predictions are! Use `np.argmax` to collapse the columns of `Y_hat` to create `y_hat`, a vector of class labels like the original `y` before one-hot encoding.</span><div></div>

```
### edTest(test_acc) ###
# Compute the accuracy
y_hat = np.argmax(___, axis=___)
acc = sum(y == y_hat)/len(y)
print(f'accuracy: {acc:.2%}')
```
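For intuition, here is a filled-in, pure-Python version of the row-wise softmax (illustrative only — the graded exercise above should still be completed with NumPy):

```python
import math

def softmax_row(z):
    # Subtract the row max before exponentiating for numerical stability;
    # this does not change the result.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

row = softmax_row([2.0, 1.0, 0.1])
print([round(p, 3) for p in row])   # three probabilities
print(abs(sum(row) - 1.0) < 1e-12)  # True: they sum to 1
print(row.index(max(row)))          # 0: the argmax is the predicted class
```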
```
import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
from pathlib import Path
import os
import random

import matplotlib.pyplot as plt
import seaborn as sns

# classification metric
from scipy.stats import spearmanr

model_type = 'roberta'
pretrained_model_name = 'roberta-base'  # 'roberta-base-openai-detector'

DATA_ROOT = Path("../input/google-quest-challenge/")
MODEL_ROOT = Path("../input/"+pretrained_model_name)

train = pd.read_csv(DATA_ROOT / 'train.csv')
test = pd.read_csv(DATA_ROOT / 'test.csv')
sample_sub = pd.read_csv(DATA_ROOT / 'sample_submission.csv')
real_sub = pd.read_csv(Path("~/Downloads/submission.csv"))
print(train.shape, test.shape)

download_model = False

train.head()

# matplotlib histogram of the 'question_well_written' target
plt.hist(train['question_well_written'], color='blue', edgecolor='black',
         bins=int(180/20))

# Density plot and histogram of 'question_well_written' in the training data
sns.distplot(train['question_well_written'], hist=True, kde=True,
             bins=int(180/20), color='darkblue',
             hist_kws={'edgecolor': 'black'},
             kde_kws={'linewidth': 4})

# Density plot and histogram of 'question_well_written' in the submission
sns.distplot(real_sub['question_well_written'], hist=True, kde=True,
             bins=int(180/20), color='darkblue',
             hist_kws={'edgecolor': 'black'},
             kde_kws={'linewidth': 4})

train['question_well_written'].unique()

labels = list(sample_sub.columns[1:].values)
for label in labels:
    print(train[label].value_counts(normalize=True))
    print()

for label in labels:
    print(real_sub[label].value_counts(normalize=True))
    print()

import pdb
from bisect import bisect

def make_intervals(train_df, labels):
    # For each label, collect its unique training values (sorted) and the
    # midpoints between neighbouring values, which serve as bucket boundaries.
    boundaries = {}
    unique_values = {}
    for label in labels:
        unique_values[label] = np.sort(train_df[label].unique())
        boundaries[label] = [(unique_values[label][i+1]+unique_values[label][i])/2
                             for i in range(len(unique_values[label])-1)]
    return unique_values, boundaries

unique_values, boundaries = make_intervals(train, labels)
train["question_asker_intent_understanding"][2], boundaries["question_asker_intent_understanding"]

real_sub["question_asker_intent_understanding"][2]

def return_categorical_value(df_column, col_unique_values, col_boundaries):
    # pdb.set_trace()
    # snap each continuous prediction to the nearest value seen in training
    return df_column.apply(lambda row: col_unique_values[bisect(col_boundaries, row)])

real_sub2 = real_sub.copy()
real_sub2.head()

for label in labels:
    real_sub2[label] = return_categorical_value(real_sub[label],
                                                unique_values[label],
                                                boundaries[label])

real_sub.head(20)
real_sub2

for label in labels:
    print(train[label].value_counts(normalize=True))
    print()

for label in labels:
    print(real_sub2[label].value_counts(normalize=True))

def categorical_adjust(df_column):
    # sketch: snap every label column to its nearest training value
    for label in labels:
        real_sub[label] = return_categorical_value(real_sub[label],
                                                   unique_values[label],
                                                   boundaries[label])
```
github_jupyter
# Code for Chapter 1

In this chapter we will review some basic functions and coding paradigms we will use throughout this book. This includes loading, viewing, and cleaning raw data, as well as some basic visualization. In this specific case we will use data from reported UFO sightings to investigate what, if any, seasonal trends exist in the data.

## Load data

```
import pandas as pd

# error_bad_lines=False: in some lines the last column, which contains a
# description, also contains stray \t characters (our separator). We skip them.
df = pd.read_csv('data/ufo/ufo_awesome.tsv', sep='\t',
                 error_bad_lines=False, header=None)
df.shape
df.head()
```

## Set column names

```
df.columns = ["DateOccurred", "DateReported", "Location", "ShortDescription",
              "Duration", "LongDescription"]
```

## Check the string lengths of the DateOccurred and DateReported columns

```
import matplotlib.pyplot as plt
%matplotlib inline

df['DateOccurred'].astype(str).str.len().plot(kind='hist')
df['DateReported'].astype(str).str.len().plot(kind='hist')
```

## Remove rows with incorrect dates (length not equal to 8)

```
mask = (df['DateReported'].astype('str').str.len() == 8) & (df['DateOccurred'].astype('str').str.len() == 8)
df = df.loc[mask]
df.shape
```

## Convert the DateReported and DateOccurred columns to Date type

```
df['DateReported'] = pd.to_datetime(df['DateReported'], format='%Y%m%d', errors='coerce')
df['DateOccurred'] = pd.to_datetime(df['DateOccurred'], format='%Y%m%d', errors='coerce')
```

## Split the 'Location' column into two new columns: 'City' and 'State'

```
df_location = df['Location'].str.partition(', ')[[0, 2]]
df_location.columns = ['USCity', 'USState']
df['USCity'] = df_location['USCity']
df['USState'] = df_location['USState']
df.head()
```

## Keep only rows with correct US states

```
USStates = ["AK","AL","AR","AZ","CA","CO","CT","DE","FL","GA","HI","IA","ID","IL",
            "IN","KS","KY","LA","MA","MD","ME","MI","MN","MO","MS","MT","NC","ND","NE","NH",
            "NJ","NM","NV","NY","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VA","VT",
            "WA","WI","WV","WY"]
df = df[df['USState'].isin(USStates)]
df.head()
```

## Creating a histogram of frequencies for UFO sightings over time

```
df['DateOccurred'].dt.year.plot(kind='hist', bins=15,
                                title='Exploratory histogram of UFO data over time')
```

## We will only look at incidents that occurred from 1990 to the most recent

```
df = df[(df['DateOccurred'] >= '1990-01-01')]
df['DateOccurred'].dt.year.plot(kind='hist', bins=15,
                                title='Histogram of subset UFO data over time (1990 - 2010)')
```

## Finally, create a histogram of the subset UFO data over time (1990 - 2010) by US state

```
import matplotlib.pyplot as plt
plt.style.use('ggplot')

# set up figure & axes
fig, axes = plt.subplots(nrows=10, ncols=5, sharex=True, sharey=True,
                         figsize=(18, 12), dpi=80)

# drop sharex, sharey, layout & add ax=axes
df['YearOccurred'] = df['DateOccurred'].dt.year
df.hist(column='YearOccurred', by='USState', ax=axes)

# set title and axis labels
plt.suptitle('Number of UFO sightings by Month-Year and U.S. State (1990-2010)',
             x=0.5, y=1, ha='center', fontsize='xx-large')
fig.text(0.5, 0.06, 'Times', ha='center', fontsize='x-large')
fig.text(0.05, 0.5, 'Number of Sightings', va='center', rotation='vertical', fontsize='x-large')
plt.savefig('images/ufo_sightings.pdf', format='pdf')
```
github_jupyter
# Attempting to load higher order ASPECT elements

An initial attempt at loading higher order element output from ASPECT. The VTU files have elements with a VTU type of `VTK_LAGRANGE_HEXAHEDRON` (VTK ID number 72, https://vtk.org/doc/nightly/html/classvtkLagrangeHexahedron.html#details), corresponding to a 2nd order (quadratic) hexahedron with 27 nodes.

Some useful links about this type of FEM output:

* https://blog.kitware.com/modeling-arbitrary-order-lagrange-finite-elements-in-the-visualization-toolkit/
* https://github.com/Kitware/VTK/blob/0ce0d74e67927fd964a27c045d68e2f32b5f65f7/Common/DataModel/vtkCellType.h#L112
* https://github.com/ju-kreber/paraview-scripts
* https://doi.org/10.1016/B978-1-85617-633-0.00006-X
* https://discourse.paraview.org/t/about-high-order-non-traditional-lagrange-finite-element/1577/4
* https://gitlab.kitware.com/vtk/vtk/-/blob/7a0b92864c96680b1f42ee84920df556fc6ebaa3/Common/DataModel/vtkHigherOrderInterpolation.cxx

At present, this notebook requires the `vtu72` branch on the `meshio` fork at https://github.com/chrishavlin/meshio/pull/new/vtu72 to attempt to load the `VTK_LAGRANGE_HEXAHEDRON` output. As seen below, the data can be loaded with the general `unstructured_mesh_loader`, but `yt` cannot presently handle higher order output.
```
import os, yt, numpy as np
import xmltodict, meshio

DataDir = os.path.join(os.environ.get('ASPECTdatadir', '../'), 'litho_defo_sample', 'data')
pFile = os.path.join(DataDir, 'solution-00002.pvtu')
if os.path.isfile(pFile) is False:
    print("data file not found")

class pvuFile(object):
    def __init__(self, file, **kwargs):
        self.file = file
        self.dataDir = kwargs.get('dataDir', os.path.split(file)[0])
        with open(file) as data:
            self.pXML = xmltodict.parse(data.read())
        # store fields for convenience
        self.fields = self.pXML['VTKFile']['PUnstructuredGrid']['PPointData']['PDataArray']

    def load(self):
        conlist = []       # list of 2D connectivity arrays
        coordlist = []     # global, concatenated coordinate array
        nodeDictList = []  # list of node_data dicts, same length as conlist
        con_offset = -1
        for mesh_id, src in enumerate(self.pXML['VTKFile']['PUnstructuredGrid']['Piece']):
            mesh_name = "connect{meshnum}".format(meshnum=mesh_id+1)  # connect1, connect2, etc.
            srcFi = os.path.join(self.dataDir, src['@Source'])  # full path to .vtu file
            [con, coord, node_d] = self.loadPiece(srcFi, mesh_name, con_offset+1)
            con_offset = con.max()
            conlist.append(con.astype("i8"))
            coordlist.extend(coord.astype("f8"))
            nodeDictList.append(node_d)
        self.connectivity = conlist
        self.coordinates = np.array(coordlist)
        self.node_data = nodeDictList

    def loadPiece(self, srcFi, mesh_name, connectivity_offset=0):
        # print(srcFi)
        meshPiece = meshio.read(srcFi)  # read it in with meshio
        coords = meshPiece.points  # coords and node_data are already global
        connectivity = meshPiece.cells_dict['lagrange_hexahedron']  # 2D connectivity array
        # parse node data
        node_data = self.parseNodeData(meshPiece.point_data, connectivity, mesh_name)
        # offset the connectivity matrix to global value
        connectivity = np.array(connectivity) + connectivity_offset
        return [connectivity, coords, node_data]

    def parseNodeData(self, point_data, connectivity, mesh_name):
        # for each field, evaluate field data by index, reshape to match connectivity
        con1d = connectivity.ravel()
        conn_shp = connectivity.shape
        comp_hash = {0: 'cx', 1: 'cy', 2: 'cz'}

        def rshpData(data1d):
            return np.reshape(data1d[con1d], conn_shp)

        node_data = {}
        for fld in self.fields:
            nm = fld['@Name']
            if nm in point_data.keys():
                if '@NumberOfComponents' in fld.keys() and int(fld['@NumberOfComponents']) > 1:
                    # we have a vector, deal with components
                    for component in range(int(fld['@NumberOfComponents'])):
                        comp_name = nm + '_' + comp_hash[component]  # e.g., velocity_cx
                        m_F = (mesh_name, comp_name)  # e.g., ('connect1','velocity_cx')
                        node_data[m_F] = rshpData(point_data[nm][:, component])
                else:
                    # just a scalar!
                    m_F = (mesh_name, nm)  # e.g., ('connect1','T')
                    node_data[m_F] = rshpData(point_data[nm])
        return node_data

pvuData = pvuFile(pFile)
pvuData.load()
```

So it loads... `meshio`'s treatment of high order elements is not complicated: it assumes the same number of nodes per element and just reshapes the 1D connectivity array appropriately. In this case, a single element has 27 nodes:

```
pvuData.connectivity[0].shape
```

And yes, it can load:

```
ds4 = yt.load_unstructured_mesh(
    pvuData.connectivity,
    pvuData.coordinates,
    node_data=pvuData.node_data
)
```

but the plots don't actually take advantage of all the data, as noted by the warning when slicing: "High order elements not yet supported, dropping to 1st order."

```
p = yt.SlicePlot(ds4, "x", ("all", "T"))
p.set_log("T", False)
p.show()
```

This run is a very high aspect ratio cartesian simulation, so let's rescale the coords first and then reload (**TO DO**: look up how to do this with *yt* after loading the data...)

```
def minmax(x):
    return [x.min(), x.max()]

for idim in range(0, 3):
    print([idim, minmax(pvuData.coordinates[:, idim])])

# some artificial rescaling
for idim in range(0, 3):
    pvuData.coordinates[:, idim] = pvuData.coordinates[:, idim] / pvuData.coordinates[:, idim].max()

ds4 = yt.load_unstructured_mesh(
    pvuData.connectivity,
    pvuData.coordinates,
    node_data=pvuData.node_data
)

p = yt.SlicePlot(ds4, "x", ("all", "T"))
p.set_log("T", False)
p.show()
```

To use all the data, we need to add a new element mapping for sampling these elements (see `yt/utilities/lib/element_mappings.pyx`). These element mappings can be automatically generated using a symbolic math library, e.g., `sympy`. See `ASPECT_VTK_quad_hex_mapping.ipynb`.
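As a toy illustration of the interpolation these element mappings encode, the 1D quadratic Lagrange shape functions can be written down and checked directly; the 27-node `VTK_LAGRANGE_HEXAHEDRON` basis is the tensor product of three of these. This is a minimal standalone sketch, independent of `yt`/`meshio`:

```python
import numpy as np

# 1D quadratic Lagrange basis on reference nodes x = -1, 0, 1.
# The 27-node hexahedron basis is the tensor product N_i(x)*N_j(y)*N_k(z).
def shape_1d(x):
    x = np.asarray(x, dtype=float)
    return np.stack([0.5 * x * (x - 1.0),   # N0: equals 1 at x = -1
                     1.0 - x ** 2,          # N1: equals 1 at x = 0
                     0.5 * x * (x + 1.0)])  # N2: equals 1 at x = +1

nodes = np.array([-1.0, 0.0, 1.0])
N = shape_1d(nodes)
print(N)                    # identity: N_i(node_j) = delta_ij
print(shape_1d(0.3).sum())  # partition of unity: the basis sums to 1 anywhere
```

A sampling routine would evaluate a field at an arbitrary reference point as the shape-function-weighted sum of the 27 nodal values, which is exactly what dropping to 1st order throws away.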
github_jupyter
<img src="resources/titanic_sinking.gif" alt="Titanic sinking gif" style="margin: 10px auto 20px; border-radius: 15px;" width="100%"/>

<a id="project-overview"></a>
_**The sinking of the Titanic** is one of the most infamous shipwrecks in history._

_On April 15, 1912, during her maiden voyage, the widely considered "unsinkable" RMS Titanic sank after colliding with an iceberg in the North Atlantic Ocean. Unfortunately, there were not enough lifeboats for everyone on board, resulting in the deaths of 1502 of the 2224 passengers and crew._

_While some element of luck was involved in surviving, it seems that some groups of people were more likely to survive than others._

The goal of this project is a predictive model that answers the question: _"what sorts of people were more likely to survive?"_ using the available passenger data (i.e. name, age, sex, socio-economic class, etc.).

<hr>

<a id="table-of-contents"></a>
### Table of contents:
* [Project overview](#project-overview)
* [Loading and describing the datasets](#loading-datasets)
* [Checking column relevance for the model](#column-relevance)
* [Creating the training and test datasets](#train-test-split)
* [Evaluating/scoring the classification algorithms](#model-scoring)
* [Model and prediction](#model-prediction)

<hr>

```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize, scale
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.svm import SVC

%matplotlib inline
```

### Loading and describing the datasets <a id="loading-datasets"></a>
<a href="#table-of-contents"> Back to table of contents </a>

<div style="display: inline-block;">

| Variable | Definition | Key |
|----------|------------------------------------------|------------------------------------------------|
| Survived | Yes or No | 1 = Survived, 0 = Died |
| Pclass | A proxy of socio-economic status (SES) | 1 = Upper, 2 = Middle, 3 = Lower |
| Name | Passenger name | e.g. Allen, Mr. William Henry |
| Sex | Passenger sex | male or female |
| Age | Passenger age | integer |
| SibSp | # of siblings/spouses aboard the Titanic | integer |
| Parch | # of parents/children aboard the Titanic | integer |
| Ticket | Ticket number | e.g. A/5 21171 |
| Fare | Passenger fare | float |
| Cabin | Cabin number (str) | e.g. B42 |
| Embarked | Port of Embarkation | C = Cherbourg, Q = Queenstown, S = Southampton |

</div>

- **train.csv** contains details about a subset of the passengers on board (891 passengers to be exact; each passenger is a separate row in the table).
- **test.csv** contains details about a subset of the passengers on board (418 passengers to be exact), **without the 'Survived' column**.

```
# load the datasets
train_df = pd.read_csv("datasets/train.csv").dropna(subset=["Age"])
test_df = pd.read_csv("datasets/test.csv")

train_df.describe()
```

### Checking column relevance for the model<a id="column-relevance"></a>

We need to check which of the columns affect a given passenger's chance of survival.

- Sex - By computing the mean we established that a passenger's **sex** played an important role in survival (the majority of survivors were female).
- Pclass - By computing the mean we established that a passenger's **socio-economic class** played an important role in survival.
- SibSp - From the histogram we can conclude that most survivors had fewer than two siblings/spouses aboard the Titanic, so the **number of siblings/spouses** played an important role in survival.
- Age - From the histogram we can conclude that most survivors were 20-40 years old.
- Parch - From the histogram we can conclude that most survivors had fewer than two children/parents aboard the Titanic, so the **number of children/parents** played an important role in survival.
- The 'Cabin' column contains duplicates, probably because some passengers shared a cabin; it also has a lot of incomplete data.
- Most passengers embarked at Southampton.
- A passenger's name and passenger ID have no influence on survival.

<a href="#table-of-contents"> Back to table of contents </a>

```
# Sex
sns.catplot(x="Sex", y="Survived", data=train_df, kind="bar", height=4)
plt.show()

men = np.mean(train_df.loc[train_df.Sex == 'male']["Survived"])
women = np.mean(train_df.loc[train_df.Sex == 'female']["Survived"])
print(f'The percentage of men who survived is {men * 100:.2f}%, and of women {women * 100:.2f}%.\n')

# Pclass
sns.catplot(x="Pclass", y="Survived", data=train_df, kind="bar", height=4)
plt.show()

upper = np.mean(train_df.loc[train_df.Pclass == 1]["Survived"])
middle = np.mean(train_df.loc[train_df.Pclass == 2]["Survived"])
lower = np.mean(train_df.loc[train_df.Pclass == 3]["Survived"])
print(f'The percentage of survivors from the upper socio-economic class is {upper * 100:.2f}%,',
      f'from the middle class {middle * 100:.2f}%,',
      f'and from the lower class {lower * 100:.2f}%.\n')

# SibSp
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'SibSp', bins=20)
plt.show()

# Age
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
plt.show()

# Parch
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Parch', bins=20)
plt.show()
```

### Creating the training and test datasets <a id="train-test-split"></a>
<a href="#table-of-contents"> Back to table of contents </a>

```
features = ["Pclass", "Sex", "SibSp", "Parch"]

train_x = pd.get_dummies(train_df[features])
train_y = train_df.Survived.to_numpy()
test_x = pd.get_dummies(test_df[features])
```

### Evaluating/scoring the classification algorithms <a id="model-scoring"></a>
<a href="#table-of-contents"> Back to table of contents </a>

By scoring 5 classification algorithms we will determine which one suits us best for building the predictive model.

Algorithms scored/evaluated:
- Random Forest ❌
- K Nearest Neighbor (KNN) ❌
- Support Vector Machine (SVM) ✔️
- Logistic Regression ❌
- Perceptron ❌

The evaluation was done with the '**cross_val_score**' method, which is a way to use the training data to get good estimates of how the model will perform on the test data.

```
""" Random Forest """
random_forest = RandomForestClassifier(n_estimators=200, max_depth=5, random_state=1)
score = cross_val_score(random_forest, train_x, train_y, scoring='accuracy', cv=10).mean()
print(f'Random Forest Score: {score * 100:.2f}%')

""" KNN """
knn = KNeighborsClassifier(n_neighbors=25)
score = cross_val_score(knn, train_x, train_y, scoring='accuracy', cv=10).mean()
print(f'KNN Score: {score * 100:.2f}%')

""" SVM """
svm = SVC(kernel="rbf")
score = cross_val_score(svm, train_x, train_y, scoring='accuracy', cv=10).mean()
print(f'SVM Score: {score * 100:.2f}%')

""" Logistic Regression """
log_reg = LogisticRegression(max_iter=2000)
score = cross_val_score(log_reg, train_x, train_y, scoring='accuracy', cv=10).mean()
print(f'Logistic Regression Score: {score * 100:.2f}%')

""" Perceptron """
perceptron = Perceptron(random_state=1)
score = cross_val_score(perceptron, train_x, train_y, scoring='accuracy', cv=10).mean()
print(f'Perceptron Score: {score * 100:.2f}%')
```

### Model and prediction <a id="model-prediction"></a>
<a href="#table-of-contents"> Back to table of contents </a>

Since the evaluation of the 5 classification algorithms showed that the **Support Vector Machine** (_SVM_) classifier gives the best results, we will choose it to build the predictive model.
```
svm.fit(train_x, train_y)
predictions = svm.predict(test_x)

output = pd.DataFrame({'PassengerId': test_df.PassengerId, 'Survived': predictions})
output.to_csv('survival_prediction.csv', index=False)
output
```

### Score

![Kaggle Score](attachment:4fb97ef8-ca3c-489f-9744-758ab04bef43.png)
github_jupyter
# Search Index

```
import numpy as np
import pandas as pd
import datetime

import matplotlib
from matplotlib import pyplot as plt
import matplotlib.patches as mpatches
matplotlib.style.use('ggplot')
%matplotlib inline
```

### Description:

The index is built from crisis descriptors taken from [Stolbov's paper.](https://yadi.sk/i/T24TXCw2Jzy8oQ) The author looked at what was googled most actively in the "Finance and insurance" category at the peak of the 2008 crisis. He selected those descriptors and padded them with a few additional terms.

### Technical details:

We download the search queries for all of Stolbov's descriptors. This can be done by hand, but for large-scale search downloads there is a [recursive parser.](https://nbviewer.jupyter.org/github/FUlyankin/Parsers/blob/master/Parsers%20/google_trends_selenium_parser.ipynb)

We will build the index in two ways:

- by weighting all the words with the coefficients

$$ w_i = \frac{\sum_{j} r_{ij}}{\sum_{i,j} r_{ij}} $$

- by taking one of the components of a PCA decomposition. We will take not the first component, but the component that captures the "peaks". In our case this is the second component.
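The weighting scheme can be checked on a toy correlation matrix before running it on the real query data (the matrix values below are made up for illustration):

```python
import numpy as np

# Toy 3x3 correlation matrix R between three search terms (illustrative values)
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# w_i = sum_j r_ij / sum_ij r_ij: terms more correlated with the rest get more weight
w = R.sum(axis=1) / R.sum()
print(w, w.sum())  # the weights sum to 1
```

By construction the weights always sum to one, so the index is a convex combination of the (min-max scaled) series.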
```
!ls ../01_Google_trends_parser

path = '../01_Google_trends_parser/krizis_poisk_odinar_month.tsv'
df_poisk = pd.read_csv(path, sep='\t')
df_poisk.set_index('Месяц', inplace=True)
print(df_poisk.shape)
df_poisk.head()

def index_make(df_term):
    corr_matrix = df_term.corr()
    w = np.array(corr_matrix.sum() / corr_matrix.sum().sum())
    print(w)
    index = (np.array(df_term).T * w.reshape(len(w), 1)).sum(axis=0)
    mx = index.max()
    mn = index.min()
    return 100 * (index - mn) / (mx - mn)

def min_max_scaler(df, col):
    mx = df[col].max()
    mn = df[col].min()
    df[col] = 100 * (df[col] - mn) / (mx - mn)
    pass

index_poisk = index_make(df_poisk)

df_pi = pd.DataFrame()
df_pi['fielddate'] = df_poisk.index
df_pi['poiskInd_ind_corr'] = index_poisk
df_pi.set_index('fielddate').plot(legend=True, figsize=(12, 6));

from sklearn.decomposition import PCA
model_pca = PCA(n_components=15)
model_pca.fit(df_poisk)
df_pi_pca = model_pca.transform(df_poisk)

plt.plot(model_pca.explained_variance_, label='Component variances')
plt.xlabel('n components')
plt.ylabel('variance')
plt.legend(loc='upper right');

df_pi['poiskInd_ind_pca'] = list(df_pi_pca[:, 1])
min_max_scaler(df_pi, 'poiskInd_ind_pca')
df_pi.plot(legend=True, figsize=(12, 6));
```

--------

```
df_pi.to_csv('../Индексы/data_simple_index_v2/poisk_krizis_index_month.tsv', sep="\t", index=None)
```

-------------
github_jupyter
```
import os
import json
import pickle
import random
from collections import defaultdict, Counter

from indra.literature.adeft_tools import universal_extract_text
from indra.databases.hgnc_client import get_hgnc_name, get_hgnc_id

from adeft.discover import AdeftMiner
from adeft.gui import ground_with_gui
from adeft.modeling.label import AdeftLabeler
from adeft.modeling.classify import AdeftClassifier
from adeft.disambiguate import AdeftDisambiguator

from adeft_indra.ground.ground import AdeftGrounder
from adeft_indra.model_building.s3 import model_to_s3
from adeft_indra.model_building.escape import escape_filename
from adeft_indra.db.content import get_pmids_for_agent_text, get_pmids_for_entity, \
    get_plaintexts_for_pmids

adeft_grounder = AdeftGrounder()

shortforms = ['VSV']
model_name = ':'.join(sorted(escape_filename(shortform) for shortform in shortforms))
results_path = os.path.abspath(os.path.join('../..', 'results', model_name))

miners = dict()
all_texts = {}
for shortform in shortforms:
    pmids = get_pmids_for_agent_text(shortform)
    if len(pmids) > 10000:
        pmids = random.choices(pmids, k=10000)
    text_dict = get_plaintexts_for_pmids(pmids, contains=shortforms)
    text_dict = {pmid: text for pmid, text in text_dict.items() if len(text) > 5}
    miners[shortform] = AdeftMiner(shortform)
    miners[shortform].process_texts(text_dict.values())
    all_texts.update(text_dict)

longform_dict = {}
for shortform in shortforms:
    longforms = miners[shortform].get_longforms()
    longforms = [(longform, count, score) for longform, count, score in longforms
                 if count*score > 2]
    longform_dict[shortform] = longforms

combined_longforms = Counter()
for longform_rows in longform_dict.values():
    combined_longforms.update({longform: count for longform, count, score
                               in longform_rows})

grounding_map = {}
names = {}
for longform in combined_longforms:
    groundings = adeft_grounder.ground(longform)
    if groundings:
        grounding = groundings[0]['grounding']
        grounding_map[longform] = grounding
        names[grounding] = groundings[0]['name']

longforms, counts = zip(*combined_longforms.most_common())
pos_labels = []

list(zip(longforms, counts))

grounding_map, names, pos_labels = ground_with_gui(longforms, counts,
                                                   grounding_map=grounding_map,
                                                   names=names, pos_labels=pos_labels,
                                                   no_browser=True, port=8890)

result = [grounding_map, names, pos_labels]
result

grounding_map, names, pos_labels = [{'vesicular stomatis virus': 'Taxonomy:11276',
                                     'vesicular stomatitis virus': 'Taxonomy:11276'},
                                    {'Taxonomy:11276': 'Vesicular stomatitis virus'},
                                    ['Taxonomy:11276']]

excluded_longforms = []

grounding_dict = {shortform: {longform: grounding_map[longform]
                              for longform, _, _ in longforms
                              if longform in grounding_map
                              and longform not in excluded_longforms}
                  for shortform, longforms in longform_dict.items()}
result = [grounding_dict, names, pos_labels]

if not os.path.exists(results_path):
    os.mkdir(results_path)
with open(os.path.join(results_path, f'{model_name}_preliminary_grounding_info.json'), 'w') as f:
    json.dump(result, f)

additional_entities = {'HGNC:5413': 'IFITM2'}

unambiguous_agent_texts = {}

labeler = AdeftLabeler(grounding_dict)
corpus = labeler.build_from_texts((text, pmid) for pmid, text in all_texts.items())
agent_text_pmid_map = defaultdict(list)
for text, label, id_ in corpus:
    agent_text_pmid_map[label].append(id_)

entity_pmid_map = {entity: set(get_pmids_for_entity(*entity.split(':', maxsplit=1),
                                                    major_topic=True))
                   for entity in additional_entities}

intersection1 = []
for entity1, pmids1 in entity_pmid_map.items():
    for entity2, pmids2 in entity_pmid_map.items():
        intersection1.append((entity1, entity2, len(pmids1 & pmids2)))

intersection2 = []
for entity1, pmids1 in agent_text_pmid_map.items():
    for entity2, pmids2 in entity_pmid_map.items():
        intersection2.append((entity1, entity2, len(set(pmids1) & pmids2)))

intersection1
intersection2

all_used_pmids = set()
for entity, agent_texts in unambiguous_agent_texts.items():
    used_pmids = set()
    for agent_text in agent_texts:
        pmids = set(get_pmids_for_agent_text(agent_text))
        new_pmids = list(pmids - all_texts.keys() - used_pmids)
        text_dict = get_plaintexts_for_pmids(new_pmids, contains=agent_texts)
        corpus.extend([(text, entity, pmid) for pmid, text in text_dict.items()])
        used_pmids.update(new_pmids)
    all_used_pmids.update(used_pmids)

for entity, pmids in entity_pmid_map.items():
    new_pmids = list(set(pmids) - all_texts.keys() - all_used_pmids)
    if len(new_pmids) > 10000:
        new_pmids = random.choices(new_pmids, k=10000)
    text_dict = get_plaintexts_for_pmids(new_pmids)
    corpus.extend([(text, entity, pmid) for pmid, text in text_dict.items()])

names.update(additional_entities)
pos_labels.extend(additional_entities.keys())

%%capture
classifier = AdeftClassifier(shortforms, pos_labels=pos_labels, random_state=1729)
param_grid = {'C': [100.0], 'max_features': [10000]}
texts, labels, pmids = zip(*corpus)
classifier.cv(texts, labels, param_grid, cv=5, n_jobs=5)

classifier.stats

disamb = AdeftDisambiguator(classifier, grounding_dict, names)
disamb.dump(model_name, results_path)

print(disamb.info())

preds = disamb.disambiguate(all_texts.values())

a = [text for text, pred in zip(all_texts.values(), preds) if pred[0].startswith('HGNC')]
a
```
github_jupyter
# 1. Deep learning: Pre-requisites

#### Understanding gradient descent, autodiff, and softmax

### Gradient Descent

An optimization technique for finding the best (model) parameters for a given problem, using a cost function to reduce the error, as in the image below. The neural network is trained with gradient descent to find the best solution.

There is room for improvement, such as applying "momentum" to the descent: as it descends it gains speed, and as it approaches a minimum it slows down, so the minimum is reached faster. Local minima can be a problem, but there are known remedies; in practice, local minima are rarely an issue in real-world problems.

<img src="../course_imgs/gradientdescent.png" alt="Gradient descent" width="200"/>

### Autodiff

To apply gradient descent, you need to know the gradient of the cost function, i.e. the slope of the curve. The gradient is obtained from partial derivatives, and computing them symbolically is hard and inefficient for computers. **Autodiff** is a technique to make this tractable, specifically **reverse-mode autodiff**. It is still a calculus trick (complicated, but it works) and <u>Tensorflow</u> uses it. It is optimized for many inputs and few outputs, computing all partial derivatives by traversing your graph a number of times equal to the number of outputs plus one. This works well in neural networks because neurons usually have many inputs but just one output.

### Softmax

The raw outputs of a NN are scores, and softmax converts them into probabilities. Each probability represents the chance that a sample belongs to a class; the class with the highest probability is the answer. For two classes, this reduces to the sigmoid (logistic) function shown below.

![Sigmoid softmax](../course_imgs/sigmoid.png "Sigmoid")

# 2. Introducing Artificial Neural Networks

### Biological inspiration

Neurons are connected to each other via axons.
Neurons communicate with the neurons they are connected to via electrical signals, and sufficient input to a neuron will activate it.

- It is simple at an individual level, but layers of billions of neurons, each with thousands of connections to others, yield a mind.

![neuron](../course_imgs/neuron.jpg "Neuron")

### Cortical columns

Neurons seem to be arranged into stacks/columns that process information in parallel. "Mini-columns" of around 100 neurons are organized into larger "hyper-columns", and there are around 100 million mini-columns in our brain. This is similar to how a GPU works.

![Cortical columns](../course_imgs/cortical.jpg "Cortical columns")

### First artificial neurons

Back in 1943: artificial neurons implementing logical expressions such as AND, OR, and NOT. Two inputs determine whether the output is on or off.

<img src="../course_imgs/artificialneuron.png" alt="Artificial Neuron" width="200"/>

### Linear Threshold Unit (LTU)

Back in 1957: weights are applied to the inputs, and the output is given by a step function.

<img src="../course_imgs/ltu.png" alt="LTU" width="500"/>

### The perceptron

A layer of LTUs. A perceptron can learn by reinforcing the weights that lead to correct behavior during training. There is also a bias neuron to make things work.

<img src="../course_imgs/perceptron.png" alt="Perceptron" width="200"/>

### Multi-Layer perceptrons

The addition of hidden layers. This is a Deep Neural Network, and training it is trickier. It began with a simple concept, and it all stacked up: now we have a lot of neurons connected to each other, and that is very complex, all from a simple concept.

<img src="../course_imgs/multilayerperceptrons.PNG" alt="Multi-Layer Perceptron" width="200"/>

### A Modern Deep Neural Network

Replace the step function with another type of activation function, apply softmax to the output, and train the network using gradient descent or another optimization method.

<img src="../course_imgs/deepneuralnet.PNG " alt="Modern Deep Neural Network" width="200"/>
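The gradient descent procedure described at the start of this chapter can be sketched in a few lines. This is a toy example minimizing a 1D quadratic cost; the cost function and learning rate are illustrative:

```python
# Minimize cost(w) = (w - 3)^2 by repeatedly stepping against the gradient.
def gradient_descent(grad, w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # step downhill along the slope
    return w

grad = lambda w: 2.0 * (w - 3.0)  # d/dw of (w - 3)^2
w_opt = gradient_descent(grad, w0=0.0)
print(round(w_opt, 4))  # converges to 3.0, the minimum
```

A momentum variant would keep a running velocity and add a fraction of the previous step to each update, which is the speed-up mentioned above.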
github_jupyter
# Generating random observations from a probability distribution

The first stage of simulation is **random number generation**. Random numbers serve as the building block of simulation. The second stage of simulation is **the generation of random variables based on random numbers**. This includes generating <font color ='red'> discrete and continuous random variables from known distributions </font>.

In this class we will study techniques for generating random variables. We will try to answer the following question:

>Given a sequence of random numbers, how can one generate a sequence of random observations from a given probability distribution?

Several different approaches are available, depending on the nature of the distribution.

Given the random number generation studied previously, we will assume that we have available a sequence $U_1,U_2,\cdots$ of independent random variables, each satisfying:

$$ P(U_i\leq u) = \begin{cases}0,& u<0\\ u,&0\leq u \leq 1\\ 1,& u>1 \end{cases} $$

that is, each variable is uniformly distributed between 0 and 1.

**Recall:** In previous classes, we saw how to transform a pseudo-random number uniformly distributed between 0 and 1 into a normally distributed variable with parameters $(\mu,\sigma^2)\longrightarrow$ <font color='red'> [Box-Muller method](http://www.lmpt.univ-tours.fr/~nicolis/Licence_NEW/08-09/boxmuller.pdf) </font> as a particular case.

In this session, we will present two of the most widely used techniques for generating random variables from a probability distribution.

## 1. Inverse transform method

This method can sometimes be used to generate a random observation.
Taking $X$ as the random variable involved, we denote its cumulative distribution function by

$$F(x)=P(X\leq x),\quad \forall x$$

<font color ='blue'> Sketch this situation on the board</font>

The inverse transform method sets

$$X = F^{-1}(U),\quad U \sim \text{Uniform}[0,1]$$

where $F^{-1}$ is the inverse of $F$. Recall that $F^{-1}$ is well defined if $F$ is strictly increasing; otherwise we need a rule to resolve the cases where this does not hold. For example, we could take

$$F^{-1}(u)=\inf\{x:F(x)\geq u\}$$

If there are many values of $x$ for which $F(x)=u$, this rule picks the smallest one. Observe this situation in the following example:

![imagen.png](attachment:imagen.png)

Note that on the interval $(a,b]$, if $X$ has distribution $F$, then

$$P(a<X\leq b)=F(b)-F(a)=0\longrightarrow \text{flat sections}$$

Therefore, if $F$ has a continuous density, then $F$ is strictly increasing and its inverse is well defined.

Now let us look at what happens when we have the following functions:

![imagen.png](attachment:imagen.png)

Observe what happens at $x_0$:

$$\lim_{x \to x_0^-} F(x)\equiv F(x^-)<F(x^+)\equiv \lim_{x\to x_0^+}F(x)$$

Under this distribution the outcome $x_0$ has probability $F(x^+)-F(x^-)$. On the other hand, all values of $u$ between $[u_2,u_1]$ will be mapped to $x_0$.

The following examples show a direct implementation of this method.
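A minimal sketch of the inverse transform for a discrete distribution, applying the rule $F^{-1}(u)=\inf\{x:F(x)\geq u\}$ with `bisect` (the probabilities used here are illustrative):

```python
import numpy as np
from bisect import bisect_left

# Discrete distribution: P(X=0)=0.2, P(X=1)=0.5, P(X=2)=0.3
values = np.array([0, 1, 2])
cdf = np.cumsum([0.2, 0.5, 0.3])  # F = [0.2, 0.7, 1.0]

def inverse_transform_discrete(u):
    # smallest x with F(x) >= u, via binary search in the CDF
    return values[[bisect_left(cdf, ui) for ui in u]]

u = np.random.rand(100_000)
x = inverse_transform_discrete(u)
freqs = np.bincount(x, minlength=3) / len(x)
print(freqs)  # close to [0.2, 0.5, 0.3]
```

Each uniform draw lands in one of the flat steps of the discrete CDF, so the jump heights $F(x^+)-F(x^-)$ become the class probabilities, exactly as described above.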
### Example 1: Exponential distribution

The exponential distribution with mean $\theta$ has CDF

$$F(x)=1-e^{-x/\theta}, \quad x\geq 0$$

> Exponential distribution: https://en.wikipedia.org/wiki/Exponential_distribution

>### <font color= blue> Show the derivation on the board

```
# Import the main libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Function that draws exponentially distributed samples via the inverse transform
def D_exponential(theta,N):
    return -np.log(np.random.random(N))*theta

# Mean
theta = 4
# Number of samples
N = 10**6
# Exponential samples using numpy's built-in generator
x = np.random.exponential(theta,N)
# Exponential samples using the function defined above
x2 = D_exponential(theta,N)

# Histogram for x
plt.hist(x,100,density=True)
plt.xlabel('random values')
plt.ylabel('probability')
plt.title('histogram: numpy function')
print(np.mean(x))
plt.show()

plt.hist(x2,100,density=True)
plt.xlabel('random values')
plt.ylabel('probability')
plt.title('histogram: our function')
print(np.mean(x2))
plt.show()
```

### Example 2

Recall that the Erlang distribution results from the sum of $k$ independent exponentially distributed variables, each with mean $\theta$; the resulting variable has an Erlang distribution with shape $k$ and scale $\theta$.
> Erlang distribution: https://en.wikipedia.org/wiki/Erlang_distribution

```
N = 10**4
# Exponential variables
x1 = np.random.exponential(4,N)
x2 = np.random.exponential(4,N)
x3 = np.random.exponential(4,N)
x4 = np.random.exponential(4,N)
x5 = np.random.exponential(4,N)

# Erlang variables as sums of exponentials
e0 = x1
e1 = (x1+x2)
e2 = (x3+x4+x5)
e3 = (x1+x2+x3+x4)
e4 = x1+x2+x3+x4+x5

plt.hist(e0,100,density=True,label='1 exponential')
plt.hist(e1,100,density=True,label='sum of 2 exp')
plt.hist(e2,100,density=True,label='sum of 3 exp')
plt.hist(e3,100,density=True,label='sum of 4 exp')
plt.hist(e4,100,density=True,label='sum of 5 exp')
plt.legend()
plt.show()
```

>### <font color= blue> Show the derivation on the board

```
N, k = 10, 2
theta = 4
U = np.random.rand(N, k)
-theta * np.log(np.prod(U, axis=1))

# Function to create Erlang random variates
def D_erlang(theta:'mean of each exponential',k,N):
    # An N*k matrix of uniform random variables speeds up the algorithm
    U = np.random.rand(N, k)
    y = -theta * np.log(np.prod(U, axis=1))
    return y

# Test of the function
# Number of samples
N = 10**4
# Parameters of the Erlang distribution
ks = [1,2,3,4,5]
theta = 4
y = [D_erlang(theta, ks[i], N) for i in range(len(ks))]
[plt.hist(y[i], bins=50, density=True, label=f'sum of {ks[i]} exp') for i in range(len(ks))]
plt.legend()
plt.show()
```

### Density function of Erlang variables

$$p(x)=x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)}\equiv x^{k-1}\frac{e^{-x/\theta}}{\theta^k(k-1)!}$$

```
# scipy.special provides the gamma function and math provides the factorial,
# so we can show the equivalence between the two forms of the density
import scipy.special as sps
from math import factorial as fac

k = 4
theta = 4
x = np.arange(0,60,0.01)

# Compare the gamma-function and factorial forms of the density
y = x**(k-1)*(np.exp(-x/theta) /(sps.gamma(k)*theta**k))
y2 = x**(k-1)*(np.exp(-x/theta) /(fac(k-1)*theta**k))
plt.plot(x,y,'r')
plt.plot(x,y2,'b--')

# Draw Erlang random variates and overlay their histogram on the same plot
N = 10**4
r1 = D_erlang(theta,k,N)
plt.hist(r1,bins=50,density=True)
plt.show()
```

To avoid repeating ourselves, let us write a function that produces the same plot but lets us vary the parameters `k` and $\theta$ of the distribution.

```
# Function that plots the histogram of an Erlang sample against its density
def histograma_erlang(signal:'sample to plot', k:'shape parameter of the Erlang distribution'):
    plt.figure(figsize=(8,3))
    count, x, _ = plt.hist(signal,100,density=True,label='k=%d'%k)
    y = x**(k-1)*(np.exp(-x/theta) /(sps.gamma(k)*theta**k))
    plt.plot(x, y, linewidth=2,color='k')
    plt.ylabel('Probability')
    plt.xlabel('Samples')
    plt.legend()
    plt.show()
```

Using the function above, plot the Erlang density for $\theta = 4$ and `Ks = [1,8,3,6]`:

```
theta = 4       # mean
N = 10**5       # number of samples
Ks = [1,8,3,6]  # different values of k for the Erlang distribution

a_erlang = list(map(lambda k: D_erlang(theta, k, N), Ks))
[histograma_erlang(erlang_i, k) for erlang_i, k in zip(a_erlang, Ks)]
```

### Example 4: Rayleigh-type distribution

$$F(x)=1-e^{-2x(x-b)},\quad x\geq b $$

> Source: https://en.wikipedia.org/wiki/Rayleigh_distribution

```
# Function for Example 4 (inverse transform of the CDF above)
def D_rayleigh(b,N):
    return (b/2)+np.sqrt(b**2-2*np.log(np.random.rand(N)))/2

np.random.rayleigh?

# Rayleigh generator matching numpy's parameterization
def D_rayleigh2(sigma,N):
    return np.sqrt(-2*sigma**2*np.log(np.random.rand(N)))

b = 0.5; N = 10**6; sigma = 2
r = D_rayleigh(b,N)               # function from the example
r2 = np.random.rayleigh(sigma,N)  # numpy's built-in function
r3 = D_rayleigh2(sigma,N)         # our function following numpy's parameterization

plt.figure(1,figsize=(10,8))
plt.subplot(311)
plt.hist(r3,100,density=True)
plt.xlabel('random values')
plt.ylabel('probability')
plt.title('histogram: D_rayleigh2')
plt.subplot(312)
plt.hist(r2,100,density=True)
plt.xlabel('random values')
plt.ylabel('probability')
plt.title('histogram: numpy function')
plt.subplot(313)
plt.hist(r,100,density=True)
plt.xlabel('random values')
plt.ylabel('probability')
plt.title('histogram: D_rayleigh')
plt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=.5, wspace=0)
plt.show()
```

> ### <font color='red'> [Box–Muller method](http://www.lmpt.univ-tours.fr/~nicolis/Licence_NEW/08-09/boxmuller.pdf) </font> $\longrightarrow$ an application of the inverse transform method

## Discrete distributions

For a discrete variable, evaluating $F^{-1}$ reduces to a table lookup. Consider, for example, a discrete random variable whose possible values are $c_1<c_2<\cdots<c_n$. Let $p_i$ be the probability assigned to $c_i$, $i=1,\cdots,n$, and set $q_0=0$, where $q_i$ denotes the **cumulative probability associated with $c_i$**, defined as:

$$q_i=\sum_{j=1}^{i}p_j,\quad i=1,\cdots,n \longrightarrow q_i=F(c_i)$$

To sample from this distribution, perform the following steps:

1. Generate a uniform number $U$ on (0,1).
2. Find $k\in\{1,\cdots,n\}$ such that $q_{k-1}<U\leq q_k$.
3. Set $X=c_k$.
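The three-step recipe can be vectorized: since the cumulative probabilities are sorted, step 2 is a binary search. A minimal sketch (the function name `discrete_inverse_transform` is our own; the notebook's `Gen_distr_discreta` below implements the same idea with a pandas `Series`):

```python
import numpy as np

def discrete_inverse_transform(values, probs, n):
    """Sample n draws from a discrete distribution via the inverse transform.
    values: the outcomes c_1 < ... < c_n; probs: their probabilities p_i."""
    q = np.cumsum(probs)              # cumulative probabilities q_i = F(c_i)
    u = np.random.rand(n)             # step 1: U ~ Uniform(0, 1)
    k = np.searchsorted(q, u)         # step 2: smallest k with q_k >= U
    return np.asarray(values)[k]      # step 3: X = c_k

vals = [1, 2, 3, 4, 5]
p = [0.1, 0.2, 0.4, 0.2, 0.1]
sample = discrete_inverse_transform(vals, p, 10**5)
print(np.bincount(sample)[1:] / len(sample))  # relative frequencies, close to p
```

`np.searchsorted` with its default `side='left'` returns the first index at which `q[k] >= u`, which is exactly the rule $q_{k-1}<U\leq q_k$.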
### Numerical example

```
val = [1,2,3,4,5]
p_ocur = [.1,.2,.4,.2,.1]
p_acum = np.cumsum(p_ocur)

df = pd.DataFrame(index=val,columns=['Probability','Cumulative probability'], dtype='float')
df.index.name = "Values (indices)"
df.loc[val,'Probability'] = p_ocur
df.loc[val,'Cumulative probability'] = p_acum
df
```

### Illustration of the method

```
u = .01
sum([1 for p in p_acum if p < u])

indices = val
N = 10
U = np.random.rand(N)
# Dictionary mapping positions to the actual values
rand2reales = {i: idx for i, idx in enumerate(indices)}
# Series of the generated random values
y = pd.Series([sum([1 for p in p_acum if p < ui]) for ui in U]).map(rand2reales)
y

def Gen_distr_discreta(p_acum: 'cumulative probabilities of the distribution to generate',
                       indices: 'actual values to generate randomly',
                       N: 'number of random values to generate'):
    U = np.random.rand(N)
    # Dictionary mapping positions to the actual values
    rand2reales = {i: idx for i, idx in enumerate(indices)}
    # Series of the generated random values
    y = pd.Series([sum([1 for p in p_acum if p < ui]) for ui in U]).map(rand2reales)
    return y
```

# What not to do when plotting the histogram of a discrete distribution

```
N = 10**4
u = np.random.rand(N)
v = Gen_distr_discreta(p_acum, val, N)
plt.hist(v,bins = len(set(val)))
plt.show()

N = 10**4
v = Gen_distr_discreta(p_acum, val, N)

# Method 1 (correct)
y, bins = np.histogram(v,bins=len(set(val)))
plt.bar(val, y)
plt.title('CORRECT METHOD')
plt.xlabel('values (indices)')
plt.ylabel('frequencies')
plt.show()

# Method 2 (incorrect)
y,x,_ = plt.hist(v,bins=len(val))
plt.title('INCORRECT METHOD')
plt.xlabel('values (indices)')
plt.ylabel('frequencies')
plt.legend(['incorrect'])
plt.show()

def plot_histogram_discrete(distribucion:'distribution whose histogram to plot',
                            label:'legend label'):
    # len(set(distribucion)) counts the number of distinct values in 'distribucion'
    plt.figure(figsize=[8,4])
    y,x = np.histogram(distribucion,bins = len(set(distribucion)))
    plt.bar(list(set(distribucion)),y,label=label)
    plt.legend()
    plt.show()
```

># <font color ='red'> **Homework 5**
> For the following two functions, generate random samples distributed according to the given function using the inverse transform method, plot the histogram of 1000 samples generated with this method, and compare it with the function $f(x)$ **(recall that $f(x)$ is the probability density and $F(x)$ is the cumulative distribution function)** [see this link for more information](https://es.wikipedia.org/wiki/Funci%C3%B3n_de_distribuci%C3%B3n). This comparison serves to validate that the procedure and the results are correct.
> 1. Generating a continuous random variable
>The time at which a Brownian motion attains its maximum on the interval [0,1] has distribution
>$$F(x)=\frac{2}{\pi}\sin^{-1}(\sqrt x),\quad 0\leq x\leq 1$$ </font>
> 2. Generating a discrete random variable
> The binomial distribution models the number of successes in $n$ independent trials, each with success probability $p$.
> Generate a binomial random variable with parameters $n=10$ and $p=0.7$. Recall that $$X\sim \text{binomial}(n,p) \longrightarrow p_i=P(X=i)=\frac{n!}{i!(n-i)!}p^i(1-p)^{n-i},\quad i=0,1,\cdots,n$$
> By properties of the factorial, the above $p_i$ can be written recursively as:
> $$p_{i+1}=\frac{n-i}{i+1}\frac{p}{1-p} p_i $$
> **Note:** Recall that in the continuous case $f(x)$ is the probability density (PDF), while $F(x)$ is the cumulative distribution function (CDF). In the discrete case, $P(X=i)$ is the probability mass function (PMF) and $ F_{X}(x)=\operatorname {P} (X\leq x)=\sum _{x_{i}\leq x}\operatorname {P} (X=x_{i})=\sum _{x_{i}\leq x}p(x_{i})$ is the cumulative distribution function (CDF).
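As a hint for part 2 (a sketch, not the full solution), the recursion builds the whole PMF without ever evaluating a factorial; the function name `binomial_pmf_recursive` is our own:

```python
import numpy as np

def binomial_pmf_recursive(n, p):
    """Build the binomial PMF p_0, ..., p_n using the recursion
    p_{i+1} = (n - i)/(i + 1) * p/(1 - p) * p_i, starting from p_0 = (1-p)^n."""
    pmf = np.empty(n + 1)
    pmf[0] = (1 - p) ** n
    for i in range(n):
        pmf[i + 1] = (n - i) / (i + 1) * p / (1 - p) * pmf[i]
    return pmf

pmf = binomial_pmf_recursive(10, 0.7)
print(pmf.sum())        # sums to 1 (up to floating-point error)
print(np.cumsum(pmf))   # cumulative table q_i for the inverse transform
```

Feeding the resulting cumulative table and the values $0,\dots,n$ into `Gen_distr_discreta` completes the exercise.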
<footer id="attribution" style="float:right; color:#808080; background:#fff;"> Created with Jupyter by Oscar David Jaramillo Z. </footer>
# Amount Spent by a User on an E-Commerce Website

In this machine learning project, I have collected the dataset from Kaggle (https://www.kaggle.com/iyadavvaibhav/ecommerce-customer-device-usage?select=Ecommerce+Customers) and I will be using machine learning to predict the amount spent by a user on an e-commerce website. Later, in order to apply the findings of this study in real life, a GUI application was built using Python's tkinter, which takes the required parameters from the user and then displays the predicted amount spent by the user on the e-commerce website.

## Importing Libraries

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.compose import TransformedTargetRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
```

## Loading Dataset

Now I will load the dataset, which I have stored in the local directory under the name 'Ecommerce_customers'.

```
#Loading dataset
dataset = pd.read_csv("Ecommerce_customers")
```

## Data Exploration

```
print(f"Dimensions of dataset: {dataset.shape}")
dataset.head()
dataset.info()
print("Statistics of dataset: ")
dataset.describe()
```

## Data Visualization

In order to better understand the data, I will be using data visualization techniques such as histograms and heatmaps.

```
#Histogram
plt.figure(figsize = (20,20))
dataset.hist()

#Heatmap
correlation_matrix = dataset.corr().round(2)
sns.heatmap(data = correlation_matrix, annot = True)
```

## Data Pre-Processing

As the email, address and avatar columns do not affect the target variable, we do not include them in the list of independent variables.
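Selecting columns by position, as the `values[:,3:7]` slice below does, silently breaks if the column order ever changes. A sketch of the same split by column name (the tiny frame and its column names mirror the Kaggle file's layout and are assumptions for illustration):

```python
import pandas as pd

# Hypothetical one-row frame with the same column layout as the Kaggle file
df = pd.DataFrame({
    'Email': ['a@x.com'], 'Address': ['1 Main St'], 'Avatar': ['Blue'],
    'Avg. Session Length': [34.5], 'Time on App': [12.7],
    'Time on Website': [39.6], 'Length of Membership': [4.1],
    'Yearly Amount Spent': [587.9],
})

features = ['Avg. Session Length', 'Time on App',
            'Time on Website', 'Length of Membership']
X = df[features].values                   # independent variables
Y = df[['Yearly Amount Spent']].values    # dependent variable
print(X.shape, Y.shape)  # (1, 4) (1, 1)
```

Name-based selection also documents which features feed the model, which matters once the GUI at the end has to assemble its input frame with matching column names.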
Here, we will split the dataset into independent and dependent variables and then split it into train and test data.

```
X = dataset.values[:,3:7]
Y = dataset.values[:, 7:]
print(f"Shape of X: {X.shape}")
print(f"Shape of Y: {Y.shape}")

#Splitting the dataset into train and test data
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state=0)
```

## Machine Learning Models

#### Linear Regression Model

```
lrm = TransformedTargetRegressor(regressor = LinearRegression(), transformer = StandardScaler())
lrm.fit(X_train, Y_train)
predict = lrm.predict(X_test)
lrm_score = mean_squared_error(Y_test, predict)
#out = np.concatenate((Y_test, predict), axis=1)
#print(out)
print("The mean square error for Linear Regression Model is %.2f." %(lrm_score.round(2)))
```

#### Polynomial Regression Model

```
polynomial_scores = []
for i in range(2,6):
    polynomial = PolynomialFeatures(degree = i)
    X_poly_train = polynomial.fit_transform(X_train)
    X_poly_test = polynomial.transform(X_test)  # transform only: the expansion is already fitted on the train data
    prm = TransformedTargetRegressor(regressor = LinearRegression(), transformer = StandardScaler())
    prm.fit(X_poly_train, Y_train)
    predict = prm.predict(X_poly_test)
    polynomial_scores.append(mean_squared_error(Y_test, predict))
pmr_score = polynomial_scores[0]

#Plotting the graph of degree of polynomial vs mean square error
plt.figure(figsize = (10, 10))
plt.plot([i for i in range(2,6)], polynomial_scores, color = 'red')
for i in range(2,6):
    plt.text(i, polynomial_scores[i-2], (i, polynomial_scores[i-2].round(2)))
plt.xticks([i for i in range(1,6)])
plt.xlabel("Degree of Polynomial")
plt.ylabel("Mean Square Error")
plt.title("Polynomial Regression Model Scores for different degree of polynomial")
plt.show()

print("The mean square error for Polynomial Regression Model is %.2f." %(polynomial_scores[0].round(2)))
```

#### Support Vector Regression Model

```
svr_scores = []
kernels = ['linear', 'poly', 'rbf', 'sigmoid']
for i in range(len(kernels)):
    svr = TransformedTargetRegressor(regressor = SVR(kernel = kernels[i]), transformer = StandardScaler())
    svr.fit(X_train, Y_train)
    predict = svr.predict(X_test)
    svr_scores.append(mean_squared_error(Y_test, predict))

#Plotting the bar graph of Support Vector Regression scores for different kernels
plt.figure(figsize=(10,10))
plt.bar(kernels, svr_scores, color=['black', 'yellow', 'red', 'grey'])
for i in range(len(kernels)):
    plt.text(i, svr_scores[i], (svr_scores[i].round(2)))
plt.title('Support Vector Regression Model scores for different Kernels')
plt.xlabel('Kernels')
plt.ylabel('Mean Square Error')

svr_score = svr_scores[0]
print("The mean square error for Support Vector Regression Model is {} with {} kernel.".format(svr_scores[0].round(2), 'linear'))
```

#### Random Forest Regression Model

```
rf_scores = []
estimators = [1, 10, 50, 100, 500, 1000, 5000]
for i in range(len(estimators)):
    rf = TransformedTargetRegressor(regressor = RandomForestRegressor(n_estimators = estimators[i]), transformer = StandardScaler())
    rf.fit(X_train, Y_train)
    predict = rf.predict(X_test)
    rf_scores.append(mean_squared_error(Y_test, predict))

#Plotting the bar graph of Random Forest Regression Model scores for different numbers of estimators
plt.figure(figsize=(10,10))
plt.bar([i for i in range(len(estimators))], rf_scores, color=['grey', 'maroon', 'purple', 'red', 'yellow', 'blue', 'black'], width=0.8)
for i in range(len(estimators)):
    plt.text(i, rf_scores[i], (estimators[i], rf_scores[i].round(2)))
plt.xticks([i for i in range(len(estimators))], labels=[str(e) for e in estimators])
plt.title('Random Forest Regression Model score for different number of estimators')
plt.xlabel('Number of Estimators')
plt.ylabel('Mean Square Error')

rf_score = rf_scores[6]
print("The mean square error for Random Forest Regression Model is {} with {} estimators.".format(rf_scores[6].round(2), 5000))
```

## Final Scores

```
print("The mean square error for Linear Regression Model is %.2f." %(lrm_score.round(2)))
print("The mean square error for Polynomial Regression Model is %.2f." %(polynomial_scores[0].round(2)))
print("The mean square error for Support Vector Regression Model is {} with {} kernel.".format(svr_scores[0].round(2), 'linear'))
print("The mean square error for Random Forest Regression Model is {} with {} estimators.".format(rf_scores[6].round(2), 5000))
```

## Conclusion

In this project, I applied machine learning models to predict the amount spent by a user on an e-commerce website. After importing the data, I used data visualization techniques for data exploration. I then applied four machine learning models, namely Linear Regression, Polynomial Regression, Support Vector Regression and Random Forest Regression, and varied the parameters in each model to achieve the best scores. Finally, the Linear Regression Model achieved the lowest mean square error of 92.89.

#### Final Model

```
final_model = TransformedTargetRegressor(regressor = LinearRegression(), transformer = StandardScaler())
final_model.fit(X_train, Y_train)
predict = final_model.predict(X_test)
final_model_score = mean_squared_error(Y_test, predict)
print(f"Model Mean Square Error: {final_model_score.round(2)}")
```

#### Predictor Function

Creating a function that will later be used to predict the amount spent by a user on an e-commerce website using the Linear Regression model.

```
def predict_now(lis):
    lis = pd.DataFrame(lis)
    return final_model.predict(lis).item()

#lis = {'Avg. Session Length': [34.497268], 'Time on App': [12.655651], 'Time on Website': [39.577668], 'Length of Membership': [4.082621]}
#predict_now(lis)
```

## Amount Spent on E-Commerce Website Predictor Application

In order to use the machine learning model created above to predict the amount spent by a user on an e-commerce website, we need to build a Graphical User Interface (GUI) in which the user can enter the required data and get a prediction. I have done this using Python's tkinter.

#### Importing Libraries

```
import tkinter as tk
from tkinter import ttk
```

#### Defining Required Functions

```
def submit(*args):
    try:
        final = {
            'Avg. Session Length': [float(user4.get())],
            'Time on App': [float(user5.get())],
            'Time on Website': [float(user6.get())],
            'Length of Membership': [float(user7.get())],
        }
        result = predict_now(final)
        this.set(f"Predicted amount spent by the user on E-Commerce Website is {result}")
    except ValueError:
        pass
```

#### Setting Up the Window

```
try:
    from ctypes import windll
    windll.shcore.SetProcessDPIAwareness(1)
except:
    pass

root = tk.Tk()
root.title("Amount Spent Predictor")

user1 = tk.StringVar()
user2 = tk.StringVar()
user3 = tk.StringVar()
user4 = tk.StringVar()
user5 = tk.StringVar()
user6 = tk.StringVar()
user7 = tk.StringVar()
this = tk.StringVar(value = "Submit to get the result.")

main = ttk.Frame(root, padding = (30, 10))
main.grid()
```

#### Declaring Widgets

```
label1 = ttk.Label(main, text = "Enter the following details: ")
label1.configure(font=("helvetica", 15))
label1.grid(row = 0, column = 0, sticky = "w")

entry1 = ttk.Label(main, text = "Email: ")
user_input1 = ttk.Entry(main, width = 20, textvariable = user1)
entry2 = ttk.Label(main, text = "Address: ")
user_input2 = ttk.Entry(main, width = 50, textvariable = user2)
entry3 = ttk.Label(main, text = "Avatar: ")
user_input3 = ttk.Entry(main, width = 20, textvariable = user3)
entry4 = ttk.Label(main, text = "Avg. Session Length: ")
user_input4 = ttk.Entry(main, width = 10, textvariable = user4)
entry5 = ttk.Label(main, text = "Time on App: ")
user_input5 = ttk.Entry(main, width = 10, textvariable = user5)
entry6 = ttk.Label(main, text = "Time on Website: ")
user_input6 = ttk.Entry(main, width = 10, textvariable = user6)
entry7 = ttk.Label(main, text = "Length of Membership: ")
user_input7 = ttk.Entry(main, width = 10, textvariable = user7)
button = ttk.Button(main, text = "Submit", command = submit)
out = ttk.Label(main, textvariable = this)
```

#### Displaying the Widgets

```
entry1.grid(row = 1, column = 0, sticky = "w", padx = 10, pady = 10)
user_input1.grid(row = 1, column = 1, sticky = "ew", padx = 10, pady = 10, columnspan = 4)
user_input1.focus()
entry2.grid(row = 2, column = 0, sticky = "w", padx = 10, pady = 10)
user_input2.grid(row = 2, column = 1, sticky = "ew", padx = 10, pady = 10)
entry3.grid(row = 3, column = 0, sticky = "w", padx = 10, pady = 10)
user_input3.grid(row = 3, column = 1, sticky = "ew", padx = 10, pady = 10)
entry4.grid(row = 4, column = 0, sticky = "w", padx = 10, pady = 10)
user_input4.grid(row = 4, column = 1, sticky = "ew", padx = 10, pady = 10, columnspan = 4)
entry5.grid(row = 5, column = 0, sticky = "w", padx = 10, pady = 10)
user_input5.grid(row = 5, column = 1, sticky = "ew", padx = 10, pady = 10, columnspan = 4)
entry6.grid(row = 6, column = 0, sticky = "w", padx = 10, pady = 10)
user_input6.grid(row = 6, column = 1, sticky = "ew", padx = 10, pady = 10)
entry7.grid(row = 7, column = 0, sticky = "w", padx = 10, pady = 10)
user_input7.grid(row = 7, column = 1, sticky = "ew", padx = 10, pady = 10)
button.grid(row = 8, column = 0, columnspan = 2, sticky = "ew", padx = 10, pady = 10)
out.grid(row = 9, column = 0, columnspan = 2)

root.mainloop()
```
``` import torch import torchvision import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import matplotlib.pyplot as plt import random import backwardcompatibilityml.loss as bcloss import backwardcompatibilityml.scores as scores # Initialize random seed random.seed(123) torch.manual_seed(456) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False %matplotlib inline n_epochs = 3 batch_size_train = 64 batch_size_test = 1000 learning_rate = 0.01 momentum = 0.5 log_interval = 10 torch.backends.cudnn.enabled = False train_loader = list(torch.utils.data.DataLoader( torchvision.datasets.MNIST('datasets/', train=True, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_train, shuffle=True)) test_loader = list(torch.utils.data.DataLoader( torchvision.datasets.MNIST('datasets/', train=False, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_test, shuffle=True)) train_loader_a = train_loader[:int(len(train_loader)/2)] train_loader_b = train_loader[int(len(train_loader)/2):] fig = plt.figure() for i in range(6): plt.subplot(2,3,i+1) plt.tight_layout() plt.imshow(train_loader_a[0][0][i][0], cmap='gray', interpolation='none') plt.title("Ground Truth: {}".format(train_loader_a[0][1][i])) plt.xticks([]) plt.yticks([]) fig class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = 
self.fc2(x) return x, F.softmax(x, dim=1), F.log_softmax(x, dim=1) network = Net() optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum) train_losses = [] train_counter = [] test_losses = [] test_counter = [i*len(train_loader_a)*batch_size_train for i in range(n_epochs + 1)] def train(epoch): network.train() for batch_idx, (data, target) in enumerate(train_loader_a): optimizer.zero_grad() _, _, output = network(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader_a)*batch_size_train, 100. * batch_idx / len(train_loader_a), loss.item())) train_losses.append(loss.item()) train_counter.append( (batch_idx*64) + ((epoch-1)*len(train_loader_a)*batch_size_train)) def test(): network.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: _, _, output = network(data) test_loss += F.nll_loss(output, target, reduction="sum").item() pred = output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).sum() # Normalize by the size of the test set, not the training set test_loss /= len(test_loader)*batch_size_test test_losses.append(test_loss) print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader)*batch_size_test, 100. * correct / (len(test_loader)*batch_size_test))) test() for epoch in range(1, n_epochs + 1): train(epoch) test() fig = plt.figure() plt.plot(train_counter, train_losses, color='blue') plt.scatter(test_counter, test_losses, color='red') plt.legend(['Train Loss', 'Test Loss'], loc='upper right') plt.xlabel('number of training examples seen') plt.ylabel('negative log likelihood loss') fig with torch.no_grad(): _, _, output = network(test_loader[0][0]) fig = plt.figure() for i in range(6): plt.subplot(2,3,i+1) plt.tight_layout() plt.imshow(test_loader[0][0][i][0], cmap='gray', interpolation='none') plt.title("Prediction: {}".format( output.data.max(1, keepdim=True)[1][i].item())) plt.xticks([]) plt.yticks([]) fig import copy h1 = copy.deepcopy(network) h2 = copy.deepcopy(network) h1.eval() new_optimizer = optim.SGD(h2.parameters(), lr=learning_rate, momentum=momentum) lambda_c = 1.0 bc_loss = bcloss.BCNLLLoss(h1, h2, lambda_c) update_train_losses = [] update_train_counter = [] update_test_losses = [] update_test_counter = [i*len(train_loader_b)*batch_size_train for i in range(n_epochs + 1)] def train_update(epoch): for batch_idx, (data, target) in enumerate(train_loader_b): new_optimizer.zero_grad() loss = bc_loss(data, target) loss.backward() new_optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader_b)*batch_size_train, 100.
* batch_idx / len(train_loader_b), loss.item())) update_train_losses.append(loss.item()) update_train_counter.append( (batch_idx*64) + ((epoch-1)*len(train_loader_b)*batch_size_train)) def test_update(): h2.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: _, _, output = h2(data) test_loss += F.nll_loss(output, target, reduction="sum").item() pred = output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).sum() # Normalize by the size of the test set, not the training set test_loss /= len(test_loader)*batch_size_test update_test_losses.append(test_loss) print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader)*batch_size_test, 100. * correct / (len(test_loader)*batch_size_test))) test_update() for epoch in range(1, n_epochs + 1): train_update(epoch) test_update() fig = plt.figure() plt.plot(update_train_counter, update_train_losses, color='blue') plt.scatter(update_test_counter, update_test_losses, color='red') plt.legend(['Train Loss', 'Test Loss'], loc='upper right') plt.xlabel('number of training examples seen') plt.ylabel('negative log likelihood loss') fig h2.eval() h1.eval() test_index = 2 with torch.no_grad(): _, _, h1_output = h1(test_loader[test_index][0]) _, _, h2_output = h2(test_loader[test_index][0]) h1_labels = h1_output.data.max(1)[1] h2_labels = h2_output.data.max(1)[1] expected_labels = test_loader[test_index][1] fig = plt.figure() for i in range(6): plt.subplot(2,3,i+1) plt.tight_layout() plt.imshow(test_loader[test_index][0][i][0], cmap='gray', interpolation='none') plt.title("Prediction: {}".format( h2_labels[i].item())) plt.xticks([]) plt.yticks([]) fig trust_compatibility = scores.trust_compatibility_score(h1_labels, h2_labels, expected_labels) error_compatibility = scores.error_compatibility_score(h1_labels, h2_labels, expected_labels) print(f"Error Compatibility Score: {error_compatibility}") print(f"Trust Compatibility Score: {trust_compatibility}") ```
Find out how many orders, how many products and how many sellers are in the data. How many products have been sold at least once? Which is the product contained in more orders? ``` import pyspark from pyspark.context import SparkContext from pyspark.sql import SparkSession spark = SparkSession.builder \ .master("local") \ .config("spark.sql.autoBroadcastJoinThreshold", -1) \ .config("spark.executor.memory", "500mb") \ .appName("Exercise1") \ .getOrCreate() sellers_table = spark.read.parquet("./sellers_parquet") products_table = spark.read.parquet("./products_parquet") sales_table = spark.read.parquet("./sales_parquet") print('num of sellers: ', sellers_table.count()) print('num of products: ', products_table.count()) print('num of sales: ', sales_table.count()) sales_table.select("order_id").show(5) sales_table.filter(sales_table["num_pieces_sold"] > 55).count() from pyspark.sql.functions import countDistinct from pyspark.sql.functions import col from pyspark.sql.functions import count from pyspark.sql.functions import avg print('num of products sold at least once') sales_table.agg(countDistinct(col("product_id"))).show() sales_table.groupBy(col("product_id")).agg(count("*").alias("cnt")).orderBy(col("cnt").desc()).show(10) #sales_table.agg(col(sales_table.distinct())).show() distinct_products = sales_table.select('product_id').distinct().collect() #df.select('column1').distinct().collect() len(distinct_products) ``` How many distinct products have been sold in each day? ``` sales_table.groupBy(col("date")).agg(countDistinct(col("product_id")).alias("distinct_products_sold")).orderBy( col("distinct_products_sold").desc()).show() ``` What is the average revenue of the orders? 
```
print('average num of pieces sold')
sales_table.agg(avg(col("num_pieces_sold"))).show()

# Create a new column: revenue_of_one_product = price * num_pieces_sold
sales_table.join(products_table, sales_table["product_id"] == products_table["product_id"], "inner")\
    .agg(avg(products_table["price"] * sales_table["num_pieces_sold"])).show()
```

For each seller, what is the average % contribution of an order to the seller's daily quota?

### Example

If Seller_0 with `quota=250` has 3 orders:

Order 1: 10 products sold
Order 2: 8 products sold
Order 3: 7 products sold

The average % contribution of orders to the seller's quota would be:

Order 1: 10/250 = 0.04
Order 2: 8/250 = 0.032
Order 3: 7/250 = 0.028

Average % Contribution = (0.04+0.032+0.028)/3 = 0.03333

```
# For each seller, fetch the daily quota (daily_target) and the seller's orders from sales_table.
# Each order contributes num_pieces_sold / daily_target; average that ratio per seller.
sellers_table.filter(sellers_table["daily_target"] > 0).count()

print(sales_table.join(sellers_table, sales_table["seller_id"] == sellers_table["seller_id"], "inner").withColumn(
    "ratio", sales_table["num_pieces_sold"]/sellers_table["daily_target"]
).groupBy(sales_table["seller_id"]).agg(avg("ratio")).show())
```
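The worked example's arithmetic can be checked in plain Python, independently of Spark:

```python
# Seller_0's quota and the pieces sold in each of its three orders, from the example
quota = 250
orders = [10, 8, 7]

# Per-order contribution to the daily quota, then the average over orders
contributions = [pieces / quota for pieces in orders]
avg_contribution = sum(contributions) / len(contributions)
print(round(avg_contribution, 5))  # 0.03333
```

This is the same quantity the `avg("ratio")` aggregation computes per seller over the full dataset.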
# ICLV This demonstrates an integrated choice and latent variable in [biogeme](http://biogeme.epfl.ch), using the biogeme example data. ``` %matplotlib inline %config InlineBackend.figure_format = 'retina' import pandas as pd import numpy as np import matplotlib.pyplot as plt import biogeme.database as bdb import biogeme.biogeme as bio from biogeme.expressions import * import biogeme.distributions import biogeme.loglikelihood import biogeme.models import pickle av = { 0: Variable('av'), 1: Variable('av'), 2: Variable('av') } # estimate ICLV # use data from biogeme mnld = pd.read_csv('../data/optima.dat', sep='\t').sort_values('ID') mnld['Choice'] = mnld.Choice.replace({-1: np.nan}) mnld['age'] = mnld.age.replace({-1: np.nan}) mnld['college'] = (mnld.Education >= 6).astype('int64') mnld['incomeChf'] = mnld.Income.map({ 1: 1250, 2: 3250, 3: 5000, 4: 7000, 5: 9000, 6: 11000, -1: np.nan }) / 10000 mnld['TimeCar'] /= 60 mnld['TimePT'] /= 60 mnld['TimeWalk'] = mnld.TimeCar * 35 / 3 # assume average car trip at 35 mph, average walk at 3 mph mnld['av'] = 1 db = bdb.Database('iclv', mnld.dropna(subset=['incomeChf', 'Choice', 'age'])) betas = { 'ascCar': Beta('ascCar', 0, None, None, 0), 'ascPt': Beta('ascPt', 0, None, None, 0), 'travelTime': Beta('travelTime', 0, None, None, 0), 'incCar': Beta('incCar', 0, None, None, 0), 'incPt': Beta('incPt', 0, None, None, 0), # Latent variable influences on utility 'envirWalk': Beta('envirWalk', 0, None, None, 0), 'envirPt': Beta('envirPt', 0, None, None, 0), # Latent variable measurement equations 'alphaGlobalWarming': Beta('alphaGlobalWarming', 0, None, None, 1), 'betaGlobalWarming': Beta('betaGlobalWarming', 1, None, None, 1), 'sigmaGlobalWarming': Beta('sigmaGlobalWarming', 1, None, None, 1), 'alphaEconomy': Beta('alphaEconomy', 0, None, None, 0), 'betaEconomy': Beta('betaEconomy', 0, None, None, 0), 'sigmaEconomy': Beta('sigmaEconomy', 1, None, None, 0), # Latent variable equation 'alphaEnvir': Beta('alphaEnvir', 0, None, None, 0), 
'ageEnvir': Beta('ageEnvir', 0, None, None, 0), 'collegeEnvir': Beta('collegeEnvir', 0, None, None, 0), 'envirSigma': Beta('envirSigma', 1, None, None, 0) } omega = RandomVariable('omega') density = biogeme.distributions.normalpdf(omega) # Workaround for biogeme bug: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!searchin/biogeme/bioNormalPdf%7Csort:date/biogeme/SeWFrgN74Zk/gNfM-PCsAwAJ def bioNormalPdf(x): return -x*x/2 - np.log((2*np.pi) ** 0.5) def loglikelihoodregression(meas, model, sigma): t = (meas - model) / sigma f = bioNormalPdf(t) - log(sigma) return f latentEnvironment = betas['alphaEnvir'] + betas['ageEnvir'] * Variable('age') +\ betas['collegeEnvir'] * Variable('college') + betas['envirSigma'] * omega globalWarming = betas['alphaGlobalWarming'] + betas['betaGlobalWarming'] * latentEnvironment economy = betas['alphaEconomy'] + betas['betaEconomy'] * latentEnvironment globalWarmingLikelihood = loglikelihoodregression(Variable('Envir05'), globalWarming, betas['sigmaGlobalWarming']) economyLikelihood = loglikelihoodregression(Variable('Envir03'), economy, betas['sigmaEconomy']) # 0 = pt, 1 = car, 2 = walk utilities = { 0: betas['ascPt'] + betas['travelTime'] * Variable('TimePT') + betas['incPt'] * Variable('incomeChf') + betas['envirPt'] * latentEnvironment, 1: betas['ascCar'] + betas['travelTime'] * Variable('TimeCar') + betas['incCar'] * Variable('incomeChf'), 2: betas['travelTime'] * Variable('TimeWalk') + betas['envirWalk'] * latentEnvironment } condprob = biogeme.models.logit(utilities, av, Variable('Choice')) condlike = log(condprob) + globalWarmingLikelihood + economyLikelihood loglike = log(Integrate(exp(condlike) * density, 'omega')) iclv = bio.BIOGEME(db, loglike) iclv.modelName = 'iclv' iclvRes = iclv.estimate() pd.DataFrame(iclvRes.getGeneralStatistics()).transpose()[[0]] res = iclvRes.getEstimatedParameters() res.loc['ageEnvir',['Value', 'Std err']] *= 10 # convert to tens of years, lazily res print(res[['Value', 'Std err',
't-test', 'p-value']].round(2).to_latex()) ```
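Read as equations, the model assembled above has the standard three-part ICLV structure: a structural equation for the latent environmental attitude, linear measurement equations for the two attitudinal indicators (`Envir05`, `Envir03`), and a logit choice kernel, all integrated over the standard-normal disturbance ω. A sketch in the notation of the code — for orientation only, not a transcription of biogeme internals:

```latex
% Structural equation (latentEnvironment in the code)
\mathrm{Env}_n = \alpha_{\mathrm{Env}}
  + \beta_{\mathrm{age}}\,\mathrm{age}_n
  + \beta_{\mathrm{college}}\,\mathrm{college}_n
  + \sigma_{\mathrm{Env}}\,\omega_n,
  \qquad \omega_n \sim \mathcal{N}(0,1)

% Measurement equations (globalWarming / economy in the code)
y^{I}_n = \alpha_I + \beta_I\,\mathrm{Env}_n + \sigma_I\,\varepsilon^{I}_n,
  \qquad I \in \{\mathrm{GlobalWarming}, \mathrm{Economy}\}

% Per-respondent likelihood: logit choice probability times the two
% normal measurement densities, integrated over the latent disturbance
L_n = \int P\!\left(c_n \mid \mathrm{Env}_n(\omega)\right)
  \prod_{I} \frac{1}{\sigma_I}\,
  \phi\!\left(\frac{y^{I}_n - \alpha_I - \beta_I\,\mathrm{Env}_n(\omega)}{\sigma_I}\right)
  \phi(\omega)\, d\omega
```

The sample log-likelihood maximized by biogeme is then $\sum_n \log L_n$.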
<a href="https://colab.research.google.com/github/NichaRoj/cubems-data-pipeline/blob/master/colab/example_ltsm.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> #Pre Step ``` import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import datetime as dt from tensorflow.python.keras.layers import Dense from tensorflow.python.keras.models import Sequential from sklearn.preprocessing import MinMaxScaler np.random.seed(7) from tensorflow.python.keras.layers import LSTM from sklearn.metrics import mean_squared_error from google.colab import drive drive.mount('/content/drive') ``` #Load Concatenated Dataframe ``` path = "/content/drive/My Drive/Senior Project- Data Pipeline & Data Analytic/Cham5 Data/Bld_Load_Sum_All_Weather.csv" df = pd.read_csv(path) df.head() df.info() #Change to datetime dtype df['Date']=pd.to_datetime(df['Date']) #set index to DateTime df.set_index('Date', inplace=True) df.info() df.describe() ``` **Plot Graph Light Load** ``` #The pink color indicates missing values num_ticks=19 #tick number for Y-axis plt.figure(figsize = (10,5)) ax = sns.heatmap(df.isnull(),cbar=False, yticklabels=['Jul18','Aug18','Sept18','Oct18','Nov18','Dec18','Jan19','Feb19','Mar19','Apr19','May19','Jun19','Jul19','Aug19','Sept19','Oct19','Nov19','Dec19','']) ax.set_yticks(np.linspace(0,len(df),num_ticks,dtype=int)) ax.set_title('Dataframe missing values') plt.show() plt.figure(figsize=(10,5)) df['Light_Load(kW)'].plot() df['temp(degF)'].plot() plt.title('Light_Load vs Temp(F)') plt.legend(['Light_Load(kW)', 'temp(degF)'], loc='upper right') ``` #Prepare Dataframe for Light Load ``` df['LLd-1'] = df['Light_Load(kW)'].shift(1) df['W']= df.index.dayofweek df['H']= df.index.hour df.head() ``` **Normalize Data** ``` # normalize the dataset scaler = MinMaxScaler(feature_range=(0,1)) data_ll=scaler.fit_transform(df) plt.figure(figsize=(15,5)) plt.title('Normalized: Light_Load(kW)')
plt.plot(data_ll[:,1]) #Light_Load(kW) plt.plot(data_ll[:,4]) #temp(degF) plt.legend(['Light_Load(kW)', 'temp(degF)'], loc='upper right') ``` **Split Train & Test: Light_Load(kW)** ``` n = int(len(data_ll)*0.8) trainY_ll = data_ll[1:n,1] #(starting from row1 to remove nan value) trainX_ll = data_ll[1:n,4:] testY_ll = data_ll[n:len(data_ll),1] #Light_Load(kW) testX_ll = data_ll[n:len(data_ll),4:] #starting from temp(F) col print(trainY_ll.shape) print(trainX_ll.shape) print('== temp(degF) === dew(degF) === humidity(%) === windspeed(mph) === solar(W/m2 )=== ACLd-1 === W === H === Light_Load(kW) ======') for i in range(5): print(trainX_ll[i], trainY_ll[i]) print(testY_ll.shape) print(testX_ll.shape) print('== temp(degF) === dew(degF) === humidity(%) === windspeed(mph) === solar(W/m2 )=== ACLd-1 === W === H === Light_Load(kW) ======') for i in range(5): print(testX_ll[i,:], testY_ll[i]) ``` **Reshape data before input into LSTM** ``` # reshape input to be [samples, time steps, features] trainX_ll = trainX_ll.reshape((trainX_ll.shape[0], 1, 8)) testX_ll = testX_ll.reshape((testX_ll.shape[0], 1, 8)) print('trainX_ll.shape=', trainX_ll.shape) print('testX_ll.shape=', testX_ll.shape) from tensorflow.keras.optimizers import Adam ``` #Create LSTM: Light Load lr=0.005 ``` opt = Adam(learning_rate=0.005) model = Sequential() model.add(LSTM(6, activation='sigmoid',return_sequences=True, input_shape=(1, 8))) model.add(LSTM(6, activation='sigmoid', input_shape=(1, 8))) model.add(Dense(1, activation='sigmoid')) model.compile(loss='mean_squared_error', optimizer=opt) history = model.fit(trainX_ll, trainY_ll, validation_split=0.1, epochs=100, batch_size=24, verbose=1) print(model.summary()) ``` #Predict & Calculate RMSE lr=0.005 ``` #predict #Light_Load(kW) testPredict_ll = model.predict(testX_ll) #denormalize the test set testY_ll = testY_ll*(df['Light_Load(kW)'].max()-df['Light_Load(kW)'].min())+df['Light_Load(kW)'].min() #denormalize the prediction testPredict_ll =
testPredict_ll*(df['Light_Load(kW)'].max()-df['Light_Load(kW)'].min())+df['Light_Load(kW)'].min() print(testY_ll.shape) print(testPredict_ll.shape) testPredict_ll.ravel().shape #plot testY vs testPredict plt.figure(figsize=(15,8)) plt.plot(testY_ll, label='testY_ll') plt.plot(testPredict_ll, label='testPredict_ll') plt.legend(loc='upper right') #calculate RMSE and MAPE RMSE = np.sqrt(np.mean(np.square(testY_ll-testPredict_ll.ravel()))) MAPE = np.mean(np.abs((testY_ll-testPredict_ll.ravel())/testY_ll))*100 print('RMSE=',RMSE) print('MAPE=',MAPE) ``` **Check model & Validation** ``` #Check model loss and validation loss plt.figure(figsize=(13,8)) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Model loss: Light Load lr=0.005 epoch=100') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Validation'], loc='upper right') plt.show() ```
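The two denormalization lines above invert the MinMax scaling by hand for the single target column. A minimal self-contained sketch of that identity, with toy numbers standing in for the building data (not the actual Light_Load values):

```python
import numpy as np

# MinMaxScaler(feature_range=(0, 1)) maps x -> (x - min) / (max - min);
# the notebook undoes this for one column with x_scaled * (max - min) + min.
raw = np.array([10.0, 25.0, 40.0])      # hypothetical load values (kW)
lo, hi = raw.min(), raw.max()

scaled = (raw - lo) / (hi - lo)         # forward transform, now in [0, 1]
restored = scaled * (hi - lo) + lo      # manual inverse, as in the cell above

print(restored)                          # recovers the original values
```

`scaler.inverse_transform` would undo all columns at once, but it expects the full feature matrix the scaler was fitted on, which is why inverting just the target column manually is convenient here.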
``` import numpy as np import os from astropy.table import Table from astropy.cosmology import FlatLambdaCDM from matplotlib import pyplot as plt from astropy.io import ascii from astropy.coordinates import SkyCoord import healpy import astropy.units as u import pandas as pd import matplotlib import pyccl from scipy import stats os.environ['CLMM_MODELING_BACKEND'] = 'ccl' # here you may choose ccl, nc (NumCosmo) or ct (cluster_toolkit) import clmm from clmm.support.sampler import fitters from importlib import reload import sys sys.path.append('../../') from magnification_library import * clmm.__version__ matplotlib.rcParams.update({'font.size': 16}) #define cosmology #astropy object cosmo = FlatLambdaCDM(H0=71, Om0=0.265, Tcmb0=0 , Neff=3.04, m_nu=None, Ob0=0.0448) #ccl object cosmo_ccl = pyccl.Cosmology(Omega_c=cosmo.Om0-cosmo.Ob0, Omega_b=cosmo.Ob0, h=cosmo.h, sigma8= 0.80, n_s=0.963) #clmm object cosmo_clmm = clmm.Cosmology(be_cosmo=cosmo_ccl) path_file = '../../../' key = 'LBGp' ``` ## **Profiles measured with TreeCorr** ``` quant = np.load(path_file + "output_data/binned_correlation_fct_Mpc_"+key+".npy", allow_pickle=True) quant_NK = np.load(path_file + "output_data/binned_correlation_fct_NK_Mpc_"+key+".npy", allow_pickle=True) ``` ## **Measuring profiles with astropy and CLMM** ## Open data ``` gal_cat_raw = pd.read_hdf(path_file+'input_data/cat_'+key+'.h5', key=key) dat = np.load(path_file+"input_data/source_sample_properties.npy", allow_pickle=True) mag_cut, alpha_cut, alpha_cut_err, mag_null, gal_dens, zmean = dat[np.where(dat[:,0]==key)][0][1:] print (alpha_cut) mag_cut selection_source = (gal_cat_raw['ra']>50) & (gal_cat_raw['ra']<73.1) & (gal_cat_raw['dec']<-27.) & (gal_cat_raw['dec']>-45.) 
selection = selection_source & (gal_cat_raw['mag_i_lsst']<mag_cut) & (gal_cat_raw['redshift']>1.5) gal_cat = gal_cat_raw[selection] [z_cl, mass_cl, n_halo] = np.load(path_file + "output_data/halo_bin_properties.npy", allow_pickle=True) np.sum(n_halo) ``` ## **Magnification profiles prediction** ``` def Mpc_to_arcmin(x_Mpc, z, cosmo=cosmo): return x_Mpc * cosmo.arcsec_per_kpc_proper(z).to(u.arcmin/u.Mpc).value def arcmin_to_Mpc(x_arc, z, cosmo=cosmo): return x_arc * cosmo.kpc_proper_per_arcmin(z).to(u.Mpc/u.arcmin).value def magnification_biais_model(rproj, mass_lens, z_lens, alpha, z_source, cosmo_clmm, delta_so='200', massdef='mean', Mc_relation ='Diemer15'): conc = get_halo_concentration(mass_lens, z_lens, cosmo_clmm.be_cosmo, Mc_relation, mdef[0], delta_so ) magnification = np.zeros(len(rproj)) for k in range(len(rproj)): magnification[k] = np.mean(clmm.theory.compute_magnification(rproj[k], mdelta=mass_lens, cdelta=conc, z_cluster=z_lens, z_source=z_source, cosmo=cosmo_clmm, delta_mdef=delta_so, massdef = massdef, halo_profile_model='NFW', z_src_model='single_plane')) model = mu_bias(magnification, alpha) - 1. 
return model, magnification def get_halo_concentration(mass_lens, z_lens, cosmo_ccl, relation="Diemer15", mdef="matter", delta_so=200): mdef = pyccl.halos.massdef.MassDef(delta_so, mdef, c_m_relation=relation) concdef = pyccl.halos.concentration.concentration_from_name(relation)() conc = concdef.get_concentration(cosmo=cosmo_clmm.be_cosmo, M=mass_lens, a=cosmo_clmm.get_a_from_z(z=z_lens), mdef_other=mdef) return conc hist = plt.hist(gal_cat['redshift'][selection], bins=100, range=[1.8,3.1], density=True, stacked=True); pdf_zsource = zpdf_from_hist(hist, zmin=0, zmax=10) plt.plot(pdf_zsource.x, pdf_zsource.y, 'r') plt.xlim(1,3.4) plt.xlabel('z source') plt.ylabel('pdf') zint = np.linspace(0, 3.5, 1000) zrand = np.random.choice(zint, 1000, p=pdf_zsource(zint)/np.sum(pdf_zsource(zint))) Mc_relation = "Diemer15" mdef = ["matter", "mean"] #differet terminology for ccl and clmm delta_so=200 #model with the full redshift distribution rp_Mpc = np.logspace(-2, 3, 100) model_mbias = np.zeros((rp_Mpc.size, len(z_cl), len(mass_cl))) model_magnification = np.zeros((rp_Mpc.size, len(z_cl), len(mass_cl))) for i in np.arange(z_cl.shape[0]): for j in np.arange(mass_cl.shape[1]): #rp_Mpc = arcmin_to_Mpc(rp, z_cl[i,j], cosmo) models = magnification_biais_model(rp_Mpc, mass_cl[i,j], z_cl[i,j], alpha_cut, zrand, cosmo_clmm, delta_so, massdef=mdef[1], Mc_relation=Mc_relation) model_mbias[:,i,j] = models[0] model_magnification[:,i,j] = models[1] ``` ## **Plotting figures** ## Example for one mass/z bin ``` i,j = 1,2 corr = np.mean(gal_cat['magnification']) - 1 plt.fill_between(quant[i,j][0], y1= quant[i,j][1] - np.sqrt(np.diag(quant[i,j][2])),\ y2 = quant[i,j][1] + np.sqrt(np.diag(quant[i,j][2])),color = 'grey', alpha=0.4, label='measured') expected_mu_bias = mu_bias(quant_NK[i,j][1] - corr, alpha_cut) - 1. 
expected_mu_bias_err = expected_mu_bias * (alpha_cut -1 ) * np.sqrt(np.diag(quant_NK[i,j][2])) /(quant_NK[i,j][1]) plt.errorbar(quant_NK[i,j][0], expected_mu_bias, yerr = expected_mu_bias_err, fmt='r.', label = 'predicted from meas. $\mu$') plt.plot(rp_Mpc, model_mbias[:,i,j],'k', lw=2, label='model (1 halo term)') plt.axvline(cosmo.kpc_proper_per_arcmin(z_cl[i,j]).to(u.Mpc/u.arcmin).value*healpy.nside2resol(4096, arcmin = True), linestyle="dotted", color='grey', label ='healpix resol.') plt.xscale('log') plt.xlim(0.1,8) plt.ylim(-0.25,1) plt.grid() plt.xlabel('$\\theta$ [Mpc]') plt.ylabel('$\delta_{\mu}$') plt.legend(fontsize='small', ncol=1) ``` ## Magnification biais profiles for cluster in mass/z bins ``` fig, axes = plt.subplots(5,5, figsize=[20,15], sharex=True) corr = np.mean(gal_cat['magnification']) - 1 for i,h in zip([0,1,2,3,4],range(5)): for j,k in zip([0,1,2,3,4],range(5)): ax = axes[5-1-k,h] ax.fill_between(quant[i,j][0], y1= quant[i,j][1] - np.sqrt(np.diag(quant[i,j][2])),\ y2 = quant[i,j][1] + np.sqrt(np.diag(quant[i,j][2])),color = 'grey', alpha=0.4) expected_mu_bias = mu_bias(quant_NK[i,j][1] - corr, alpha_cut) - 1. expected_mu_bias_err = expected_mu_bias * (alpha_cut -1 ) * np.sqrt(np.diag(quant_NK[i,j][2])) /(quant_NK[i,j][1]) ax.errorbar(quant_NK[i,j][0], expected_mu_bias, yerr = expected_mu_bias_err, fmt='r.', label = 'predicted from meas. 
$\mu$') ax.axvline(cosmo.kpc_proper_per_arcmin(z_cl[i,j]).to(u.Mpc/u.arcmin).value*healpy.nside2resol(4096, arcmin = True), linestyle="dotted", color='grey', label ='healpix resol.') ax.text(0.5, 0.80, "<z>="+str(round(z_cl[i,j],2)), transform=ax.transAxes, fontsize='x-small') ax.text(0.5, 0.90, "<M/1e14>="+str(round(mass_cl[i,j]/1e14,2)), transform=ax.transAxes, fontsize='x-small'); ax.plot(rp_Mpc, model_mbias[:,i,j],'k--') ax.axvline(0, color='black') [axes[4,j].set_xlabel('$\\theta$ [Mpc]') for j in range(5)] [axes[i,0].set_ylabel('$\delta_{\mu}$') for i in range(5)] plt.tight_layout() axes[0,0].set_xscale('log') axes[0,0].set_xlim(0.1,8) for i in range(axes.shape[0]): axes[4,i].set_ylim(-0.2,0.6) axes[3,i].set_ylim(-0.2,1.3) axes[2,i].set_ylim(-0.2,1.3) axes[1,i].set_ylim(-0.2,2.0) axes[0,i].set_ylim(-0.2,2.5) ``` ## Fitting the mass from the magnification biais profiles using the NFW model ``` def predict_function(radius_Mpc, logM, z_cl): mass_guess = 10**logM return magnification_biais_model(radius_Mpc, mass_guess, z_cl, alpha_cut, zrand, cosmo_clmm, delta_so, massdef=mdef[1], Mc_relation=Mc_relation)[0] def fit_mass(predict_function, data_for_fit, z): popt, pcov = fitters['curve_fit'](lambda radius_Mpc, logM: predict_function(radius_Mpc, logM, z), data_for_fit[0], data_for_fit[1], np.sqrt(np.diag(data_for_fit[2])), bounds=[10.,17.], absolute_sigma=True, p0=(13.)) logm, logm_err = popt[0], np.sqrt(pcov[0][0]) return {'logm':logm, 'logm_err':logm_err, 'm': 10**logm, 'm_err': (10**logm)*logm_err*np.log(10)} fit_mass_magnification = np.zeros(z_cl.shape, dtype=object) mass_eval = np.zeros((z_cl.shape)) mass_min = np.zeros((z_cl.shape)) mass_max = np.zeros((z_cl.shape)) for i in range(5): for j in range(5): fit_mass_magnification[i,j] = fit_mass(predict_function, quant[i,j], z_cl[i,j]) mass_eval[i,j] = fit_mass_magnification[i,j]['m'] mass_min[i,j] = fit_mass_magnification[i,j]['m'] - fit_mass_magnification[i,j]['m_err'] mass_max[i,j] = 
fit_mass_magnification[i,j]['m'] + fit_mass_magnification[i,j]['m_err'] fig, ax = plt.subplots(1, 3, figsize=(18,4)) ax[0].errorbar(mass_cl[0,:]*0.90, mass_eval[0,:],\ yerr = (mass_eval[0,:] - mass_min[0,:], mass_max[0,:] - mass_eval[0,:]),fmt='-o', label ="z="+str(round(z_cl[0,0],2))) ax[0].errorbar(mass_cl[1,:]*0.95, mass_eval[1,:],\ yerr = (mass_eval[1,:] - mass_min[1,:], mass_max[1,:] - mass_eval[1,:]),fmt='-o', label ="z="+str(round(z_cl[1,0],2))) ax[0].errorbar(mass_cl[2,:]*1.00, mass_eval[2,:],\ yerr = (mass_eval[2,:] - mass_min[2,:], mass_max[2,:] - mass_eval[2,:]),fmt='-o', label ="z="+str(round(z_cl[2,0],2))) ax[0].errorbar(mass_cl[3,:]*1.05, mass_eval[3,:],\ yerr = (mass_eval[3,:] - mass_min[3,:], mass_max[3,:] - mass_eval[3,:]),fmt='-o', label ="z="+str(round(z_cl[3,0],2))) ax[0].errorbar(mass_cl[4,:]*1.10, mass_eval[4,:],\ yerr = (mass_eval[4,:] - mass_min[4,:], mass_max[4,:] - mass_eval[4,:]),fmt='-o', label ="z="+str(round(z_cl[4,0],2))) ax[0].set_xscale('log') ax[0].set_yscale('log') ax[0].plot((4e13, 5e14),(4e13,5e14), color='black', lw=2) ax[0].legend(fontsize = 'small', ncol=1) ax[0].set_xlabel("$M_{FoF}$ true [$M_{\odot}$]") ax[0].set_ylabel("$M_{200,m}$ eval [$M_{\odot}$]") ax[0].grid() ax[1].errorbar(mass_cl[0,:]*0.96, mass_eval[0,:]/mass_cl[0,:],\ yerr = (mass_eval[0,:] - mass_min[0,:], mass_max[0,:] - mass_eval[0,:])/mass_cl[0,:],fmt='-o', label ="z="+str(round(z_cl[0,0],2))) ax[1].errorbar(mass_cl[1,:]*0.98, mass_eval[1,:]/mass_cl[1,:],\ yerr = (mass_eval[1,:] - mass_min[1,:], mass_max[1,:] - mass_eval[1,:])/mass_cl[1,:],fmt='-o', label ="z="+str(round(z_cl[1,0],2))) ax[1].errorbar(mass_cl[2,:]*1.00, mass_eval[2,:]/mass_cl[2,:],\ yerr = (mass_eval[2,:] - mass_min[2,:], mass_max[2,:] - mass_eval[2,:])/mass_cl[2,:],fmt='-o', label ="z="+str(round(z_cl[2,0],2))) ax[1].errorbar(mass_cl[3,:]*1.02, mass_eval[3,:]/mass_cl[3,:],\ yerr = (mass_eval[3,:] - mass_min[3,:], mass_max[3,:] - mass_eval[3,:])/mass_cl[3,:],fmt='-o', label 
="z="+str(round(z_cl[3,0],2))) ax[1].errorbar(mass_cl[4,:]*1.04, mass_eval[4,:]/mass_cl[4,:],\ yerr = (mass_eval[4,:] - mass_min[4,:], mass_max[4,:] - mass_eval[4,:])/mass_cl[4,:],fmt='-o', label ="z="+str(round(z_cl[4,0],2))) ax[1].set_xlim(4e13, 5e14) ax[1].set_xscale('log') ax[1].axhline(1, color='black') #ax[1].legend() ax[1].set_xlabel("$M_{FoF}$ true [$M_{\odot}$]") ax[1].set_ylabel("$M_{200,m}$ eval/$M_{FoF}$ true") ax[2].errorbar(z_cl[0,:]*0.96, mass_eval[0,:]/mass_cl[0,:],\ yerr = (mass_eval[0,:] - mass_min[0,:], mass_max[0,:] - mass_eval[0,:])/mass_cl[0,:],fmt='-o', label ="z="+str(round(z_cl[0,0],2))) ax[2].errorbar(z_cl[1,:]*0.98, mass_eval[1,:]/mass_cl[1,:],\ yerr = (mass_eval[1,:] - mass_min[1,:], mass_max[1,:] - mass_eval[1,:])/mass_cl[1,:],fmt='-o', label ="z="+str(round(z_cl[1,0],2))) ax[2].errorbar(z_cl[2,:]*1.00, mass_eval[2,:]/mass_cl[2,:],\ yerr = (mass_eval[2,:] - mass_min[2,:], mass_max[2,:] - mass_eval[2,:])/mass_cl[2,:],fmt='-o', label ="z="+str(round(z_cl[2,0],2))) ax[2].errorbar(z_cl[3,:]*1.02, mass_eval[3,:]/mass_cl[3,:],\ yerr = (mass_eval[3,:] - mass_min[3,:], mass_max[3,:] - mass_eval[3,:])/mass_cl[3,:],fmt='-o', label ="z="+str(round(z_cl[3,0],2))) ax[2].errorbar(z_cl[4,:]*1.04, mass_eval[4,:]/mass_cl[4,:],\ yerr = (mass_eval[4,:] - mass_min[4,:], mass_max[4,:] - mass_eval[4,:])/mass_cl[4,:],fmt='-o', label ="z="+str(round(z_cl[4,0],2))) ax[2].axhline(1, color='black') ax[2].set_ylabel("$M_{200,m}$ eval/$M_{FoF}$ true") ax[2].set_xlabel('z') plt.tight_layout() np.save(path_file + "output_data/fitted_mass_from_magnification_bias_"+key+"_"+mdef[0]+str(delta_so)+"_cM_"+Mc_relation,[mass_eval, mass_min, mass_max]) ``` ## Comparison to the mass fitted from the magnification profile ``` mass_eval_mag, mass_min_mag, mass_max_mag = np.load(path_file + "output_data/fitted_mass_from_magnification_"+key+"_"+mdef[0]+str(delta_so)+"_cM_"+Mc_relation+".npy") fig, ax = plt.subplots(1, 3, figsize=(18,4))# sharex=True )#,sharey=True) colors = 
["blue", "green" , "orange", "red", "purple"] ax[0].errorbar(mass_eval_mag[0,:], mass_eval[0,:],xerr = (mass_eval_mag[0,:] - mass_min_mag[0,:], mass_max_mag[0,:] - mass_eval_mag[0,:]),\ yerr = (mass_eval[0,:] - mass_min[0,:], mass_max[0,:] - mass_eval[0,:]),\ fmt='.', color = colors[0], mfc='none', label ="z="+str(round(z_cl[0,0],2))) ax[0].errorbar(mass_eval_mag[1,:], mass_eval[1,:],xerr = (mass_eval_mag[1,:] - mass_min_mag[1,:], mass_max_mag[1,:] - mass_eval_mag[1,:]),\ yerr = (mass_eval[1,:] - mass_min[1,:], mass_max[1,:] - mass_eval[1,:]),\ fmt='.', color = colors[1], mfc='none', label ="z="+str(round(z_cl[1,0],2))) ax[0].errorbar(mass_eval_mag[2,:], mass_eval[2,:],xerr = (mass_eval_mag[2,:] - mass_min_mag[2,:], mass_max_mag[2,:] - mass_eval_mag[2,:]),\ yerr = (mass_eval[2,:] - mass_min[2,:], mass_max[2,:] - mass_eval[2,:]),\ fmt='.', color = colors[2], mfc='none', label ="z="+str(round(z_cl[2,0],2))) ax[0].errorbar(mass_eval_mag[3,:], mass_eval[3,:],xerr = (mass_eval_mag[3,:] - mass_min_mag[3,:], mass_max_mag[3,:] - mass_eval_mag[3,:]),\ yerr = (mass_eval[3,:] - mass_min[3,:], mass_max[3,:] - mass_eval[3,:]),\ fmt='.', color = colors[3], mfc='none', label ="z="+str(round(z_cl[3,0],2))) ax[0].errorbar(mass_eval_mag[4,:], mass_eval[4,:],xerr = (mass_eval_mag[4,:] - mass_min_mag[4,:], mass_max_mag[4,:] - mass_eval_mag[4,:]),\ yerr = (mass_eval[4,:] - mass_min[4,:], mass_max[4,:] - mass_eval[4,:]),\ fmt='.', color = colors[4], mfc='none', label ="z="+str(round(z_cl[4,0],2))) ax[0].set_xscale('log') ax[0].set_yscale('log') ax[0].plot((4e13, 5e14),(4e13,5e14), color='black', lw=2) ax[0].legend(fontsize='small') ax[0].set_xlabel("$M_{200,m}~eval~from~\mu$[$M_{\odot}$]") ax[0].set_ylabel("$M_{200,m}~eval~from~\delta_{\mu}$[$M_{\odot}$]") ax[0].grid() ratio = mass_eval/mass_eval_mag ratio_err = ratio *( (0.5*(mass_max - mass_min))/mass_eval + (0.5*(mass_max_mag - mass_min_mag))/mass_eval_mag ) ax[1].errorbar(mass_cl[0,:]*0.96, ratio[0], yerr = ratio_err[0],fmt = 'o', 
color = colors[0]) ax[1].errorbar(mass_cl[1,:]*0.98, ratio[1], yerr = ratio_err[1],fmt = 'o', color = colors[1]) ax[1].errorbar(mass_cl[2,:]*1.00, ratio[2], yerr = ratio_err[2],fmt = 'o', color = colors[2]) ax[1].errorbar(mass_cl[3,:]*1.02, ratio[3], yerr = ratio_err[3],fmt = 'o', color = colors[3]) ax[1].errorbar(mass_cl[4,:]*1.04, ratio[4], yerr = ratio_err[4],fmt = 'o', color = colors[4]) ax[1].axhline(1, color='black') ax[1].set_xlabel("$M_{FoF}$ true [$M_{\odot}$]") ax[1].set_ylabel("$\\frac{M_{200,m}~eval~from~\delta_{\mu}}{M_{200,m}~eval~from~\mu}$") ax[1].set_xlim(4e13, 5e14) ax[1].set_xscale('log') ax[2].errorbar(z_cl[0,:]*0.96, ratio[0], yerr = ratio_err[0], fmt = 'o', color = colors[0]) ax[2].errorbar(z_cl[1,:]*0.98, ratio[1], yerr = ratio_err[1], fmt = 'o', color = colors[1]) ax[2].errorbar(z_cl[2,:]*1.00, ratio[2], yerr = ratio_err[2], fmt = 'o', color = colors[2]) ax[2].errorbar(z_cl[3,:]*1.02, ratio[3], yerr = ratio_err[3], fmt = 'o', color = colors[3]) ax[2].errorbar(z_cl[4,:]*1.04, ratio[4], yerr = ratio_err[4], fmt = 'o', color = colors[4]) ax[2].axhline(1, color='black') ax[2].set_ylabel("$\\frac{M_{200,m}~eval~from~\mu}{M_{200,m}~eval~from~\delta_{\mu}}$") ax[2].set_xlabel('z') plt.tight_layout() diff = (mass_eval - mass_eval_mag)/1e14 diff_err = (1/1e14) * np.sqrt((0.5*(mass_max - mass_min))**2 + (0.5*(mass_max_mag - mass_min_mag))**2) plt.hist((diff/diff_err).flatten()); plt.xlabel('$\chi$') plt.axvline(0, color='black') plt.axvline(-1, color='black', ls='--') plt.axvline(1, color='black', ls='--') plt.axvline(np.mean((diff/diff_err).flatten()), color='red') plt.axvline(np.mean((diff/diff_err).flatten()) - np.std((diff/diff_err).flatten()), color='red', ls=':') plt.axvline(np.mean((diff/diff_err).flatten()) + np.std((diff/diff_err).flatten()), color='red', ls=':') print("$\chi$ stats \n", \ "mean",np.round(np.mean((diff/diff_err).flatten()),2),\ ", mean err", np.round(np.std((diff/diff_err).flatten())/np.sqrt(25),2),\ ", std", 
np.round(np.std((diff/diff_err).flatten()),2),\ ", std approx err", np.round(np.std((diff/diff_err).flatten())/np.sqrt(2*(25-1)),2)) ``` ## Profile plot with the model corresponding to the fitted mass ``` model_for_fitted_mass = np.zeros(z_cl.shape,dtype=object) model_for_fitted_mass_min = np.zeros(z_cl.shape,dtype=object) model_for_fitted_mass_max = np.zeros(z_cl.shape,dtype=object) for i in range(z_cl.shape[0]): for j in range(z_cl.shape[1]): model_for_fitted_mass[i,j] = magnification_biais_model(rp_Mpc, mass_eval[i,j], z_cl[i,j], alpha_cut, zrand, cosmo_clmm, delta_so, massdef=mdef[1], Mc_relation=Mc_relation)[0] model_for_fitted_mass_min[i,j] = magnification_biais_model(rp_Mpc, mass_min[i,j], z_cl[i,j], alpha_cut, zrand, cosmo_clmm, delta_so, massdef=mdef[1], Mc_relation=Mc_relation)[0] model_for_fitted_mass_max[i,j] = magnification_biais_model(rp_Mpc, mass_max[i,j], z_cl[i,j], alpha_cut, zrand, cosmo_clmm, delta_so, massdef=mdef[1], Mc_relation=Mc_relation)[0] fig, axes = plt.subplots(5,5, figsize=[20,15], sharex=True) corr = np.mean(gal_cat['magnification']) - 1 for i,h in zip([0,1,2,3,4],range(5)): for j,k in zip([0,1,2,3,4],range(5)): ax = axes[5-1-k,h] ax.fill_between(quant[i,j][0], y1= quant[i,j][1] - np.sqrt(np.diag(quant[i,j][2])),\ y2 = quant[i,j][1] + np.sqrt(np.diag(quant[i,j][2])),color = 'grey', alpha=0.4) expected_mu_bias = mu_bias(quant_NK[i,j][1] - corr, alpha_cut) - 1. expected_mu_bias_err = expected_mu_bias * (alpha_cut -1 ) * np.sqrt(np.diag(quant_NK[i,j][2])) /(quant_NK[i,j][1]) ax.errorbar(quant_NK[i,j][0], expected_mu_bias, yerr = expected_mu_bias_err, fmt='r.', label = 'predicted from meas. 
$\mu$') ax.axvline(cosmo.kpc_proper_per_arcmin(z_cl[i,j]).to(u.Mpc/u.arcmin).value*healpy.nside2resol(4096, arcmin = True), linestyle="dotted", color='grey', label ='healpix resol.') ax.text(0.55, 0.80, "<z>="+str(round(z_cl[i,j],2)), transform=ax.transAxes, fontsize='x-small') ax.text(0.55, 0.90, "<M/1e14>="+str(round(mass_cl[i,j]/1e14,2)), transform=ax.transAxes, fontsize='x-small'); ax.set_xlabel('$\\theta$ [Mpc]') ax.set_ylabel('$\delta_{\mu}$') ax.plot(rp_Mpc, model_mbias[:,i,j],'k--') ax.fill_between(rp_Mpc, y1 = model_for_fitted_mass_min[i,j], y2 = model_for_fitted_mass_max[i,j],color='red', alpha=0.5) plt.tight_layout() axes[0,0].set_xscale('log') axes[0,0].set_xlim(0.1,8) for i in range(axes.shape[0]): axes[4,i].set_ylim(-0.2,0.6) axes[3,i].set_ylim(-0.2,1.3) axes[2,i].set_ylim(-0.2,1.3) axes[1,i].set_ylim(-0.2,2.0) axes[0,i].set_ylim(-0.2,2.5) ```
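The error bars above propagate through a linearized magnification bias. Assuming `mu_bias` (defined in `magnification_library`, which is not shown in this notebook) implements the usual weak-lensing number-count form $\mu^{\alpha-1}$, the exact signal and its first-order expansion agree closely for weak magnification, as this toy sketch shows:

```python
# delta_mu = mu**(alpha - 1) - 1 versus its linearization
# (alpha - 1) * (mu - 1), valid when mu is close to 1.
# alpha and mu below are illustrative toy values, not fitted quantities.
alpha = 2.5            # slope of the source number counts (toy value)
mu = 1.02              # weak magnification (toy value)

exact = mu ** (alpha - 1) - 1
linear = (alpha - 1) * (mu - 1)

print(exact, linear)   # the two differ by only ~1.5e-4 here
```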
[View in Colaboratory](https://colab.research.google.com/github/nikolay-bushkov/kaist_internship/blob/master/active_learning_for_question_classification.ipynb) # Predicting question type on the Yahoo dataset Based on https://developers.google.com/machine-learning/guides/text-classification/, https://github.com/cosmic-cortex/modAL/blob/f8df6021a1343d511d4c9b4c108ec5b683ce5487/examples/ranked_batch_mode.ipynb and https://doi.org/10.1145/3159652.3159733 (https://www.researchgate.net/publication/322488294_Identifying_Informational_vs_Conversational_Questions_on_Community_Question_Answering_Archives) ``` !pip3 install -q https://github.com/nikolay-bushkov/modAL/archive/feature/sparse_matrix_support.zip import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline from sklearn.preprocessing import FunctionTransformer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import f_classif from sklearn.pipeline import Pipeline from sklearn.neural_network import MLPClassifier ``` ## Data preparation ``` !wget -q http://files.deeppavlov.ai/datasets/yahoo_answers_data/train.csv !wget -q http://files.deeppavlov.ai/datasets/yahoo_answers_data/valid.csv train_ds = pd.read_csv('train.csv') test_ds = pd.read_csv('valid.csv') X_train_full_raw, y_train_full = train_ds['Title'].values.astype(np.unicode), train_ds['Label'].values X_test_raw, y_test = test_ds['Title'].values.astype(np.unicode), test_ds['Label'].values for text, target in zip(X_train_full_raw[2:4], y_train_full[2:4]): print("Target: {}".format(target)) print(text) transformation_pipe = Pipeline([ ('featurizer', TfidfVectorizer(stop_words='english', ngram_range=(1, 3))), ('feature_selector', SelectKBest(f_classif, k=20000)) ]) X_train_full = transformation_pipe.fit_transform(X_train_full_raw, y_train_full) X_test = transformation_pipe.transform(X_test_raw) ``` ## Baseline model ``` from sklearn.metrics import 
roc_auc_score class FCClassifierROCAUC(MLPClassifier): """Replace default initialization parameters and scoring (ROC-AUC instead of accuracy).""" def __init__(self, hidden_layer_sizes=(32,32), alpha=0.01, batch_size=32, max_iter=1000, early_stopping=True, **kwargs): super(FCClassifierROCAUC, self).__init__( hidden_layer_sizes=hidden_layer_sizes, alpha=alpha, batch_size=batch_size, max_iter=max_iter, early_stopping=early_stopping, **kwargs) def score(self, X, y, sample_weight=None): if np.unique(self.classes_).shape[0] == np.unique(y).shape[0] == 2: # hack for beginning of training return roc_auc_score(y, self.predict(X), average='macro', sample_weight=sample_weight) else: return 0.5 clf = FCClassifierROCAUC() %time clf.fit(X_train_full, y_train_full) clf.score(X_train_full, y_train_full) clf.score(X_test, y_test) ``` ## Active learning part ``` from modAL.models import ActiveLearner def random_sampling(classifier, X_pool, n_instances=1, **uncertainty_measure_kwargs): n_samples = X_pool.shape[0] query_idx = np.random.choice(range(n_samples), size=n_instances) return query_idx, X_pool[query_idx] from sklearn.model_selection import train_test_split SEED_SIZE = 0 QUERY_SIZE = 16 QUERY_STEPS = (X_train_full.shape[0] - SEED_SIZE) // QUERY_SIZE X_seed, X_pool, y_seed, y_pool = train_test_split(X_train_full, y_train_full, train_size=SEED_SIZE, random_state=42) from modAL.batch import uncertainty_batch_sampling from tqdm import tqdm queries = range(1, QUERY_STEPS + 1) ``` ### Random sampling and uncertainty sampling (ranked-batch mode) ``` X_pool_rs, X_pool_us, y_pool_rs, y_pool_us = X_pool[:], X_pool[:], y_pool[:], y_pool[:] rs_scores = list() us_scores = list() rs_mask = np.ones(X_pool.shape[0], np.bool) us_mask = np.ones(X_pool.shape[0], np.bool) rs_learner = ActiveLearner( estimator=FCClassifierROCAUC(), query_strategy=random_sampling ) us_learner = ActiveLearner( estimator=FCClassifierROCAUC(), query_strategy=uncertainty_batch_sampling ) for query_step in
tqdm(queries): #random query_idx_rs, query_inst_rs = rs_learner.query(X_pool_rs, n_instances=QUERY_SIZE) rs_learner.teach(query_inst_rs, y_pool_rs[query_idx_rs]) rs_mask[query_idx_rs] = 0 X_pool_rs, y_pool_rs = X_pool_rs[rs_mask], y_pool_rs[rs_mask] rs_mask = np.ones(X_pool_rs.shape[0], np.bool) rs_scores.append(rs_learner.score(X_test, y_test)) #uncertainty (batch) with cosine distances query_idx_us, query_inst_us = us_learner.query(X_pool_us, n_instances=QUERY_SIZE, metric='cosine', n_jobs=-2) us_learner.teach(query_inst_us, y_pool_us[query_idx_us]) us_mask[query_idx_us] = 0 X_pool_us, y_pool_us = X_pool_us[us_mask], y_pool_us[us_mask] us_mask = np.ones(X_pool_us.shape[0], np.bool) us_scores.append(us_learner.score(X_test, y_test)) plt.plot(rs_scores, label='Random sampling') plt.plot(us_scores, label='Uncertainty sampling (ranked-batch mode)') plt.title('Yahoo Question Classification') plt.xlabel('Query step') plt.ylabel('Test AUC-ROC') plt.legend() X_pool_ubs, y_pool_ubs = X_pool[:], y_pool[:] ubs_learner = ActiveLearner( estimator=FCClassifierROCAUC(), query_strategy=uncertainty_batch_sampling ) ubs_scores = list() ubs_mask = np.ones(X_pool.shape[0], np.bool) for query_step in tqdm(queries): #uncertainty (batch) with euclidean query_idx_ubs, query_inst_ubs = ubs_learner.query(X_pool_ubs, n_instances=QUERY_SIZE, n_jobs=-2) ubs_learner.teach(query_inst_ubs, y_pool_ubs[query_idx_ubs]) ubs_mask[query_idx_ubs] = 0 X_pool_ubs, y_pool_ubs = X_pool_ubs[ubs_mask], y_pool_ubs[ubs_mask] ubs_mask = np.ones(X_pool_ubs.shape[0], np.bool) ubs_scores.append(ubs_learner.score(X_test, y_test)) plt.plot(rs_scores, label='Random sampling') plt.plot(ubs_scores, label='Uncertainty sampling (ranked-batch mode, euclidean metric)') plt.title('Yahoo Question Classification') plt.xlabel('Query step') plt.ylabel('Test AUC-ROC') plt.legend() ```
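Both query loops above shrink the pool with a boolean mask after each teaching step, so that already-labelled samples cannot be selected again. A self-contained sketch of that bookkeeping pattern on a toy pool (plain `bool` is used in place of the `np.bool` alias, which recent NumPy versions deprecate):

```python
import numpy as np

rng = np.random.default_rng(42)
X_pool = np.arange(20).reshape(10, 2)   # toy pool of 10 two-feature samples
QUERY_SIZE = 3

for _ in range(3):                      # three query steps
    # pick a batch of indices (random here; the learner's query strategy
    # would choose them in the notebook)
    query_idx = rng.choice(X_pool.shape[0], size=QUERY_SIZE, replace=False)
    # ...teach the learner with X_pool[query_idx] here...
    mask = np.ones(X_pool.shape[0], dtype=bool)
    mask[query_idx] = False             # drop the queried rows from the pool
    X_pool = X_pool[mask]

print(X_pool.shape)                     # (1, 2): 10 - 3*3 samples remain
```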
## Imports and Setting Up Jupyter

```
import os
from bs4 import BeautifulSoup as bs
import requests
import pymongo
from splinter import Browser
from webdriver_manager.chrome import ChromeDriverManager
import pandas as pd

# I am having a lot of issues with chromedriver, so I'm including both ways to get it
# executable_path = {'executable_path': ChromeDriverManager().install()}
# browser = Browser('chrome', **executable_path, headless=False)

executable_path = {'executable_path': r"C:\Users\danza\.wdm\drivers\chromedriver\win32\88.0.4324.96\chromedriver.exe"}
browser = Browser('chrome', **executable_path, headless=False)
```

## Connecting to the NASA URL

```
nasaurl = "https://mars.nasa.gov/news/"
browser.visit(nasaurl)
html = browser.html
nsoup = bs(html, 'html.parser')

# A quick check that parsing works. After confirming, this stays commented out
# since there is no further need for nsoup output.
# nsoup
```

## Things identified before obtaining the information we need through code

### Inspect the webpage

#### `item_list` is where the titles and paragraphs are located
#### `slide` gets each individual slide (news article)
#### `content_title` is the class we need to get a title
#### `article_teaser_body` is the class to get a paragraph

```
# Get the latest 5 titles on the page. A loop would be preferable, but for now
# this will do. I went through many variations of this code, and none of them
# went the way I wanted them.
Ntitle0 = nsoup.find_all("div", class_="content_title")[0].text
Ntitle1 = nsoup.find_all("div", class_="content_title")[1].text
Ntitle2 = nsoup.find_all("div", class_="content_title")[2].text
Ntitle3 = nsoup.find_all("div", class_="content_title")[3].text
Ntitle4 = nsoup.find_all("div", class_="content_title")[4].text

# Get the 5 latest paragraphs on the page. A loop would be preferable here too.
Npar0 = nsoup.find_all('div', class_="article_teaser_body")[0].text
Npar1 = nsoup.find_all('div', class_="article_teaser_body")[1].text
Npar2 = nsoup.find_all('div', class_="article_teaser_body")[2].text
Npar3 = nsoup.find_all('div', class_="article_teaser_body")[3].text
Npar4 = nsoup.find_all('div', class_="article_teaser_body")[4].text

# Ideally a single for loop would capture the title and paragraph together,
# but I couldn't figure it out, so this will do. Either way the result would
# have been the same.

# Print the captured variables in a readable format. print(" ") just adds an
# empty line as a separator between each article/summary group.
print("News Title: " + Ntitle0)
print("Brief Summary: " + Npar0)
print(" ")
print("News Title: " + Ntitle1)
print("Brief Summary: " + Npar1)
print(" ")
print("News Title: " + Ntitle2)
print("Brief Summary: " + Npar2)
print(" ")
print("News Title: " + Ntitle3)
print("Brief Summary: " + Npar3)
print(" ")
print("News Title: " + Ntitle4)
print("Brief Summary: " + Npar4)
```

# Images Scraping

## Interesting information before coding

### Link to the featured image: 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/image/featured/mars3.jpg'
### overall class = header
### class = header-image

```
# Setting up the scrape by visiting the website that holds the image
marspicurl = "https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html"
browser.visit(marspicurl)
phtml = browser.html
psoup = bs(phtml, 'html.parser')

# Since I already identified the link to the featured image, all I have to do
# is code it. Technically I don't have to do anything beyond this.
featured_image_url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/image/featured/mars3.jpg'
print(featured_image_url)
```

# Mars Data Frame

## Scraping the page to obtain the diameter, weight, and mass

### All of it is located in the table with id = "tablepress-comp-mars"

```
# Connect to the Space Facts / Mars page
marsfacturl = "https://space-facts.com/mars/"

# Read the website through pandas
mars_read = pd.read_html(marsfacturl)
mars_read

# The item we want is 0
MarsFact_df = mars_read[0]
MarsFact_df

# Rename the columns to give them a better description
MarsFact_df.columns = ["Data Types", "Data Values"]
MarsFact_df

# I just wanted to bring up the comparison with Earth
MarsEarthFact_df = mars_read[1]
MarsEarthFact_df
# Both tables 0 and 1 would have answered the questions, with about a 4-row
# difference; 2 of the rows in table 1 are irrelevant to Earth.
```

# Hemisphere Link Scraping

# Not going to link to the website through this markdown like last time. I am purposely keeping that error to remind me that this memo actually links things automatically.

```
# Set up the page to scrape
hemiurl = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
browser.visit(hemiurl)
hhtml = browser.html
hsoup = bs(hhtml, 'html.parser')

# Obtain the links and put them all in a dictionary
MarsHemisphere = {
    "Cerberus": "https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/cerberus_enhanced.tif/full.jpg",
    "Schiaparelli": "https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/schiaparelli_enhanced.tif/full.jpg",
    "Syrtis Major": "https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/syrtis_major_enhanced.tif/full.jpg",
    "Valles Marineris": "https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/valles_marineris_enhanced.tif/full.jpg"}

MarsHemisphere
```
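For reference, the single-loop version the comments above wish for can be sketched by iterating over each `slide` element once and pulling the title and teaser out of it together. This is a sketch against a tiny stand-in HTML snippet; the class names match the ones inspected above, but the snippet content itself is made up:

```python
from bs4 import BeautifulSoup as bs

# A tiny stand-in for the NASA news page (hypothetical markup, same classes
# the notebook scrapes).
html = """
<ul class="item_list">
  <li class="slide">
    <div class="content_title">Mars Rover Lands</div>
    <div class="article_teaser_body">The rover touched down safely.</div>
  </li>
  <li class="slide">
    <div class="content_title">Dust Storm Season</div>
    <div class="article_teaser_body">A planet-wide storm is brewing.</div>
  </li>
</ul>
"""

soup = bs(html, "html.parser")
articles = []
# One pass per slide: grab the title and its teaser together.
for slide in soup.find_all("li", class_="slide"):
    title = slide.find("div", class_="content_title").text
    teaser = slide.find("div", class_="article_teaser_body").text
    articles.append((title, teaser))
    print("News Title: " + title)
    print("Brief Summary: " + teaser)
    print(" ")
```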
```
from __future__ import print_function
import sys
import numpy as np
from time import time
import matplotlib.pyplot as plt
from tqdm import tqdm
import math
import struct
import binascii

sys.path.append('/home/xilinx')
from pynq import Overlay
from pynq import allocate

def float2bytes(fp):
    packNo = struct.pack('>f', np.float32(fp))
    return int(binascii.b2a_hex(packNo), 16)

print("Entry:", sys.argv[0])
print("System argument(s):", len(sys.argv))
print("Start of \"" + sys.argv[0] + "\"")

# Overlay and IP
ol = Overlay("/home/xilinx/xrwang/SQA_Opt3.bit")
ipSQA = ol.QuantumMonteCarloOpt3_0
ipDMAIn = ol.axi_dma_0

# Number of Spins, Number of Trotters, and Write Trotters
numSpins = 1024
numTrotters = 8
trotters = np.random.randint(2, size=(numTrotters*numSpins))
k = 0
for addr in tqdm(range(0x400, 0x2400, 0x04)):  # 8 * 1024 * 1 Byte (it extends boolean into uint8)
    tmp = (trotters[k+3] << 24) + (trotters[k+2] << 16) + (trotters[k+1] << 8) + (trotters[k])
    ipSQA.write(addr, int(tmp))
    k += 4

# Generate Random Numbers
rndNum = np.ndarray(shape=(numSpins), dtype=np.float32)
for i in tqdm(range(numSpins)):
    # rndNum[i] = np.random.randn()
    rndNum[i] = i+1
rndNum /= numSpins
rndNum

# Generate J coupling
inBuffer0 = allocate(shape=(numSpins, numSpins), dtype=np.float32)
for i in tqdm(range(numSpins)):
    for j in range(numSpins):
        inBuffer0[i][j] = - rndNum[i] * rndNum[j]

# Some Constant Kernel Arguments
ipSQA.write(0x10, numTrotters)  # nTrot
ipSQA.write(0x18, numSpins)     # nSpin
for addr in range(0x2400, 0x3400, 0x04):
    ipSQA.write(addr, 0)  # h[i]

# Iteration Parameters
iter = 500
maxBeta = 8.0
Beta = 1.0 / 4096.0
G0 = 8.0
dBeta = (maxBeta-Beta) / iter

# Iterations
timeList = []
trottersList = []
for i in tqdm(range(iter)):
    # # Write Random Numbers (8*1024*4Bytes)
    # rn = np.random.uniform(0.0, 1.0, size=numTrotters*numSpins)
    # rn = np.log(rn) * numTrotters

    # Generate Jperp
    Gamma = G0 * (1.0 - i/iter)
    Jperp = -0.5 * np.log(np.tanh((Gamma/numTrotters) * Beta)) / Beta

    # # Write Random Numbers
    # k = 0
    # for addr in range(0x4000, 0xC000, 0x04):
    #     ipSQA.write(addr, float2bytes(rn[k]))
    #     k += 1

    # Write Beta & Jperp
    ipSQA.write(0x3400, float2bytes(Jperp))
    ipSQA.write(0x3408, float2bytes(Beta))

    timeKernelStart = time()

    # Start Kernel
    ipSQA.write(0x00, 0x01)

    # Write Jcoup Stream
    ipDMAIn.sendchannel.transfer(inBuffer0)  # Stream of Jcoup
    ipDMAIn.sendchannel.wait()               # Wait at Here

    while (ipSQA.read(0x00) & 0x4) == 0x0:
        continue

    timeKernelEnd = time()
    timeList.append(timeKernelEnd - timeKernelStart)

    # Beta Incremental
    Beta += dBeta

    k = 0
    newTrotters = np.ndarray(shape=numTrotters*numSpins)
    for addr in range(0x400, 0x2400, 0x04):  # 8 * 1024 * 1 Byte (it extends boolean into uint8)
        tmp = ipSQA.read(addr)
        newTrotters[k]   = (tmp)      & 0x01
        newTrotters[k+1] = (tmp >> 8)  & 0x01
        newTrotters[k+2] = (tmp >> 16) & 0x01
        newTrotters[k+3] = (tmp >> 24) & 0x01
        k += 4
    trottersList.append(newTrotters)

print("Kernel execution time: " + str(np.sum(timeList)) + " s")

best = (0, 0, 0, 0, 10e22)
sumEnergy = []
k = 0
for trotters in tqdm(trottersList):
    a = 0
    b = 0
    sumE = 0
    k += 1
    for t in range(numTrotters):
        for i in range(numSpins):
            if trotters[t*numSpins+i] == 0:
                a += rndNum[i]
            else:
                b += rndNum[i]
        E = (a-b)**2
        sumE += E
        if best[4] > E:
            best = (k, t, a, b, E)
    sumEnergy.append(sumE)

plt.figure(figsize=(30, 10))
plt.plot(sumEnergy)
best
```
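The register loops above pack four spins into each 32-bit word on the way in and mask them back out on the way out — one spin per byte, since the IP extends booleans to uint8. That round trip, isolated as a plain-Python sketch:

```python
def pack_spins(s0, s1, s2, s3):
    """Pack four 0/1 spins into one 32-bit word, one spin per byte (s0 in the LSB byte)."""
    return (s3 << 24) | (s2 << 16) | (s1 << 8) | s0

def unpack_spins(word):
    """Recover the four spins from a packed word, LSB byte first."""
    return [(word >> shift) & 0x01 for shift in (0, 8, 16, 24)]

word = pack_spins(1, 0, 1, 1)
print(hex(word))           # → 0x1010001
print(unpack_spins(word))  # → [1, 0, 1, 1]
```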
**NOTE: A version of this post is on the PyMC3 [examples](https://docs.pymc.io/notebooks/blackbox_external_likelihood.html) page.**

<!-- PELICAN_BEGIN_SUMMARY -->

[PyMC3](https://docs.pymc.io/index.html) is a great tool for doing Bayesian inference and parameter estimation. It has a load of [in-built probability distributions](https://docs.pymc.io/api/distributions.html) that you can use to set up priors and likelihood functions for your particular model. You can even create your own [custom distributions](https://docs.pymc.io/prob_dists.html#custom-distributions).

However, this is not necessarily that simple if you have a model function, or probability distribution, that, for example, relies on an external code that you have little/no control over (and may even be, for example, wrapped `C` code rather than Python). This can be problematic when you need to pass parameters set as PyMC3 distributions to these external functions; your external function probably wants you to pass it floating point numbers rather than PyMC3 distributions!

<!-- PELICAN_END_SUMMARY -->

```python
import pymc3 as pm
from external_module import my_external_func  # your external function!

# set up your model
with pm.Model():
    # your external function takes two parameters, a and b, with Uniform priors
    a = pm.Uniform('a', lower=0., upper=1.)
    b = pm.Uniform('b', lower=0., upper=1.)

    m = my_external_func(a, b)  # <--- this is not going to work!
```

Another issue is that if you want to be able to use the gradient-based step samplers like [NUTS](https://docs.pymc.io/api/inference.html#module-pymc3.step_methods.hmc.nuts) and [Hamiltonian Monte Carlo (HMC)](https://docs.pymc.io/api/inference.html#hamiltonian-monte-carlo) then your model/likelihood needs a gradient to be defined.
If you have a model that is defined as a set of Theano operators then this is no problem - internally it will be able to do automatic differentiation - but if your model is essentially a "black box" then you won't necessarily know what the gradients are.

Defining a model/likelihood that PyMC3 can use that calls your "black box" function is possible, but it relies on creating a [custom Theano Op](https://docs.pymc.io/advanced_theano.html#writing-custom-theano-ops). There are many [threads](https://discourse.pymc.io/search?q=as_op) on the PyMC3 [discussion forum](https://discourse.pymc.io/) about this (e.g., [here](https://discourse.pymc.io/t/custom-theano-op-to-do-numerical-integration/734), [here](https://discourse.pymc.io/t/using-pm-densitydist-and-customized-likelihood-with-a-black-box-function/1760) and [here](https://discourse.pymc.io/t/connecting-pymc3-to-external-code-help-with-understanding-theano-custom-ops/670)), but I couldn't find any clear example that described doing what I mention above. So, thanks to a very nice example [sent](https://discourse.pymc.io/t/connecting-pymc3-to-external-code-help-with-understanding-theano-custom-ops/670/7?u=mattpitkin) to me by [Jørgen Midtbø](https://github.com/jorgenem/), I have created what I hope is a clear description. Do let [me](https://twitter.com/matt_pitkin) know if you have any questions/spot any mistakes.

In the examples below, I'm going to create a very simple model and log-likelihood function in [Cython](http://cython.org/). I use Cython just as an example to show what you might need if calling external `C` codes, but you could in fact be using pure Python codes. The log-likelihood function I use is actually just a [Normal distribution](https://en.wikipedia.org/wiki/Normal_distribution), so this is obviously overkill (and I'll compare it to doing the same thing purely with PyMC3 distributions), but it should provide a simple-to-follow demonstration.
```
%matplotlib inline
%load_ext Cython

import numpy as np
import pymc3 as pm
import theano
import theano.tensor as tt

# for reproducibility here's some version info for modules used in this notebook
import platform
import cython
import IPython
import matplotlib
import emcee
import corner
import os

print("Python version: {}".format(platform.python_version()))
print("IPython version: {}".format(IPython.__version__))
print("Cython version: {}".format(cython.__version__))
print("GSL version: {}".format(os.popen('gsl-config --version').read().strip()))
print("Numpy version: {}".format(np.__version__))
print("Theano version: {}".format(theano.__version__))
print("PyMC3 version: {}".format(pm.__version__))
print("Matplotlib version: {}".format(matplotlib.__version__))
print("emcee version: {}".format(emcee.__version__))
print("corner version: {}".format(corner.__version__))
```

First, I'll define my "_super-complicated_"&trade; model (a straight line!), which is parameterised by two variables (a gradient `m` and a y-intercept `c`) and calculated at a vector of points `x`. I'll define the model in [Cython](http://cython.org/) and call [GSL](https://www.gnu.org/software/gsl/) functions just to show that you could be calling some other `C` library that you need. In this case, the model parameters are all packed into a list/array/tuple called `theta`.

I'll also define my "_really-complicated_"&trade; log-likelihood function (a Normal log-likelihood that ignores the normalisation), which takes in the list/array/tuple of model parameter values `theta`, the points at which to calculate the model `x`, the vector of "observed" data points `data`, and the standard deviation of the noise in the data `sigma`.

```
%%cython -I/usr/include -L/usr/lib/x86_64-linux-gnu -lgsl -lgslcblas -lm

import cython
cimport cython

import numpy as np
cimport numpy as np

### STUFF FOR USING GSL (FEEL FREE TO IGNORE!)
###

# declare GSL vector structure and functions
cdef extern from "gsl/gsl_block.h":
    cdef struct gsl_block:
        size_t size
        double * data

cdef extern from "gsl/gsl_vector.h":
    cdef struct gsl_vector:
        size_t size
        size_t stride
        double * data
        gsl_block * block
        int owner

    ctypedef struct gsl_vector_view:
        gsl_vector vector

    int gsl_vector_scale (gsl_vector * a, const double x) nogil
    int gsl_vector_add_constant (gsl_vector * a, const double x) nogil
    gsl_vector_view gsl_vector_view_array (double * base, size_t n) nogil

###################################################

# define your super-complicated model that uses loads of external codes
cpdef my_model(theta, np.ndarray[np.float64_t, ndim=1] x):
    """
    A straight line!

    Note:
        This function could simply be:

            m, c = theta
            return m*x + c

        but I've made it more complicated for demonstration purposes
    """
    m, c = theta  # unpack line gradient and y-intercept

    cdef size_t length = len(x)  # length of x

    cdef np.ndarray line = np.copy(x)  # make copy of x vector

    # create a view of the vector
    cdef gsl_vector_view lineview
    lineview = gsl_vector_view_array(<double *>line.data, length)

    # multiply x by m
    gsl_vector_scale(&lineview.vector, <double>m)

    # add c
    gsl_vector_add_constant(&lineview.vector, <double>c)

    # return the numpy array
    return line

# define your really-complicated likelihood function that uses loads of external codes
cpdef my_loglike(theta, np.ndarray[np.float64_t, ndim=1] x,
                 np.ndarray[np.float64_t, ndim=1] data, sigma):
    """
    A Gaussian log-likelihood function for a model with parameters given in theta
    """

    model = my_model(theta, x)

    return -(0.5/sigma**2)*np.sum((data - model)**2)
```

Now, as things are, if we wanted to sample from this log-likelihood function, using certain prior distributions for the model parameters (gradient and y-intercept) using PyMC3, we might try something like this (using a [PyMC3 `DensityDist`](https://docs.pymc.io/prob_dists.html#custom-distributions)):

```python
import pymc3 as pm

# create/read in our "data"
# (I'll show this in the real example below)
x = ...
sigma = ...
data = ...

with pm.Model():
    # set priors on model gradient and y-intercept
    m = pm.Uniform('m', lower=-10., upper=10.)
    c = pm.Uniform('c', lower=-10., upper=10.)

    # create custom distribution
    pm.DensityDist('likelihood', my_loglike,
                   observed={'theta': (m, c), 'x': x, 'data': data, 'sigma': sigma})

    # sample from the distribution
    trace = pm.sample(1000)
```

But, this will give an error like:

```
ValueError: setting an array element with a sequence.
```

This is because `m` and `c` are Theano tensor-type objects.

So, what we actually need to do is create a [Theano Op](http://deeplearning.net/software/theano/extending/extending_theano.html). This will be a new class that wraps our log-likelihood function (or just our model function, if that is all that is required) into something that can take in Theano tensor objects, but internally can cast them as floating point values that can be passed to our log-likelihood function. I will do this below, initially without defining a [`grad()` method](http://deeplearning.net/software/theano/extending/op.html#grad) for the Op.

```
# define a theano Op for our likelihood function
class LogLike(tt.Op):
    """
    Specify what type of object will be passed and returned to the Op when it is
    called. In our case we will be passing it a vector of values (the parameters
    that define our model) and returning a single "scalar" value (the
    log-likelihood)
    """

    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dscalar]  # outputs a single scalar value (the log likelihood)

    def __init__(self, loglike, data, x, sigma):
        """
        Initialise the Op with various things that our log-likelihood function
        requires. Below are the things that are needed in this particular
        example.
        Parameters
        ----------
        loglike:
            The log-likelihood (or whatever) function we've defined
        data:
            The "observed" data that our log-likelihood function takes in
        x:
            The dependent variable (aka 'x') that our model requires
        sigma:
            The noise standard deviation that our function requires.
        """

        # add inputs as class attributes
        self.likelihood = loglike
        self.data = data
        self.x = x
        self.sigma = sigma

    def perform(self, node, inputs, outputs):
        # the method that is used when calling the Op
        theta, = inputs  # this will contain my variables

        # call the log-likelihood function
        logl = self.likelihood(theta, self.x, self.data, self.sigma)

        outputs[0][0] = np.array(logl)  # output the log-likelihood
```

Now, let's use this Op to repeat the example shown above. To do this I'll create some data containing a straight line with additive Gaussian noise (with a mean of zero and a standard deviation of `sigma`). I'll set uniform prior distributions on the gradient and y-intercept. As I've not set the `grad()` method of the Op, PyMC3 will not be able to use the gradient-based samplers, so will fall back to using the [Slice](https://docs.pymc.io/api/inference.html#module-pymc3.step_methods.slicer) sampler.

```
# set up our data
N = 10  # number of data points
sigma = 1.  # standard deviation of noise
x = np.linspace(0., 9., N)

mtrue = 0.4  # true gradient
ctrue = 3.  # true y-intercept

truemodel = my_model([mtrue, ctrue], x)

# make data
data = sigma*np.random.randn(N) + truemodel

ndraws = 3000  # number of draws from the distribution
nburn = 1000  # number of "burn-in points" (which we'll discard)

# create our Op
logl = LogLike(my_loglike, data, x, sigma)

# use PyMC3 to sample from the log-likelihood
with pm.Model():
    # uniform priors on m and c
    m = pm.Uniform('m', lower=-10., upper=10.)
    c = pm.Uniform('c', lower=-10., upper=10.)
    # convert m and c to a tensor vector
    theta = tt.as_tensor_variable([m, c])

    # use a DensityDist (use a lambda function to "call" the Op)
    pm.DensityDist('likelihood', lambda v: logl(v), observed={'v': theta})

    trace = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True)

# plot the traces
_ = pm.traceplot(trace, lines=(('m', {}, [mtrue]), ('c', {}, [ctrue])))

# put the chains in an array (for later!)
samples_pymc3 = np.vstack((trace['m'], trace['c'])).T
```

What if we wanted to use NUTS or HMC? If we knew the analytical derivatives of the model/likelihood function then we could add a [`grad()` method](http://deeplearning.net/software/theano/extending/op.html#grad) to the Op using that analytical form. But, what if we don't know the analytical form?

If our model/likelihood is purely Python and made up of standard maths operators and Numpy functions, then the [autograd](https://github.com/HIPS/autograd) module could potentially be used to find gradients (also, see [here](https://github.com/ActiveState/code/blob/master/recipes/Python/580610_Auto_differentiation/recipe-580610.py) for a nice Python example of automatic differentiation). But, if our model/likelihood truly is a "black box" then we can just use the good-old-fashioned [finite difference](https://en.wikipedia.org/wiki/Finite_difference) to find the gradients - this can be slow, especially if there are a large number of variables, or the model takes a long time to evaluate.

Below, I've written a function that uses finite difference (the central difference) to find gradients - it uses an iterative method with successively smaller step sizes to check that the gradient converges. But, you could do something far simpler and just use, for example, the SciPy [`approx_fprime`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.approx_fprime.html) function.
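To make the idea concrete before the full iterative version, here is what the "far simpler" alternative might look like: a one-shot central difference with a fixed step size (the function name here is my own; `scipy.optimize.approx_fprime` does essentially this, with a forward difference):

```python
import numpy as np

def simple_gradients(vals, func, eps=1e-6):
    """One-shot central-difference gradient of func at vals (no convergence checks)."""
    vals = np.asarray(vals, dtype=float)
    grads = np.zeros_like(vals)
    for i in range(len(vals)):
        fvals, bvals = vals.copy(), vals.copy()
        fvals[i] += 0.5 * eps  # step forwards by half eps
        bvals[i] -= 0.5 * eps  # step backwards by half eps
        grads[i] = (func(fvals) - func(bvals)) / eps
    return grads

# sanity check on f(m, c) = m**2 + 3*c, whose gradient at (2, 1) is (4, 3)
g = simple_gradients([2.0, 1.0], lambda v: v[0]**2 + 3.0*v[1])
```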
```
import warnings

def gradients(vals, func, releps=1e-3, abseps=None, mineps=1e-9, reltol=1e-3,
              epsscale=0.5):
    """
    Calculate the partial derivatives of a function at a set of values. The
    derivatives are calculated using the central difference, using an iterative
    method to check that the values converge as step size decreases.

    Parameters
    ----------
    vals: array_like
        A set of values, that are passed to a function, at which to calculate
        the gradient of that function
    func:
        A function that takes in an array of values.
    releps: float, array_like, 1e-3
        The initial relative step size for calculating the derivative.
    abseps: float, array_like, None
        The initial absolute step size for calculating the derivative.
        This overrides `releps` if set.
    mineps: float, 1e-9
        The minimum relative step size at which to stop iterations if no
        convergence is achieved.
    reltol: float, 1e-3
        The relative tolerance within which successive gradient estimates must
        agree to count as converged.
    epsscale: float, 0.5
        The factor by which releps is scaled in each iteration.

    Returns
    -------
    grads: array_like
        An array of gradients for each non-fixed value.
    """

    grads = np.zeros(len(vals))

    # maximum number of times the gradient can change sign
    flipflopmax = 10.

    # set steps
    if abseps is None:
        if isinstance(releps, float):
            eps = np.abs(vals)*releps
            eps[eps == 0.] = releps  # if any values are zero set eps to releps
            teps = releps*np.ones(len(vals))
        elif isinstance(releps, (list, np.ndarray)):
            if len(releps) != len(vals):
                raise ValueError("Problem with input relative step sizes")
            eps = np.multiply(np.abs(vals), releps)
            eps[eps == 0.] = np.array(releps)[eps == 0.]
            teps = releps
        else:
            raise RuntimeError("Relative step sizes are not a recognised type!")
    else:
        if isinstance(abseps, float):
            eps = abseps*np.ones(len(vals))
        elif isinstance(abseps, (list, np.ndarray)):
            if len(abseps) != len(vals):
                raise ValueError("Problem with input absolute step sizes")
            eps = np.array(abseps)
        else:
            raise RuntimeError("Absolute step sizes are not a recognised type!")
        teps = eps

    # for each value in vals calculate the gradient
    count = 0
    for i in range(len(vals)):
        # initial parameter diffs
        leps = eps[i]
        cureps = teps[i]

        flipflop = 0

        # get central finite difference
        fvals = np.copy(vals)
        bvals = np.copy(vals)

        # central difference
        fvals[i] += 0.5*leps  # change forwards distance to half eps
        bvals[i] -= 0.5*leps  # change backwards distance to half eps
        cdiff = (func(fvals)-func(bvals))/leps

        while 1:
            fvals[i] -= 0.5*leps  # remove old step
            bvals[i] += 0.5*leps

            # change the difference by a factor of two
            cureps *= epsscale
            if cureps < mineps or flipflop > flipflopmax:
                # if no convergence set flat derivative (TODO: check if there is a better thing to do instead)
                warnings.warn("Derivative calculation did not converge: setting flat derivative.")
                grads[count] = 0.
                break
            leps *= epsscale

            # central difference
            fvals[i] += 0.5*leps  # change forwards distance to half eps
            bvals[i] -= 0.5*leps  # change backwards distance to half eps
            cdiffnew = (func(fvals)-func(bvals))/leps

            if cdiffnew == cdiff:
                grads[count] = cdiff
                break

            # check whether previous diff and current diff are the same within reltol
            rat = (cdiff/cdiffnew)
            if np.isfinite(rat) and rat > 0.:
                # gradient has not changed sign
                if np.abs(1.-rat) < reltol:
                    grads[count] = cdiffnew
                    break
                else:
                    cdiff = cdiffnew
                    continue
            else:
                cdiff = cdiffnew
                flipflop += 1
                continue

        count += 1

    return grads
```

So, now we can just redefine our Op with a `grad()` method, right? It's not quite so simple!
The `grad()` method itself requires that its inputs are Theano tensor variables, whereas our `gradients` function above, like our `my_loglike` function, wants a list of floating point values. So, we need to define another Op that calculates the gradients. Below, I define a new version of the `LogLike` Op, called `LogLikeWithGrad` this time, that has a `grad()` method. This is followed by another Op called `LogLikeGrad` that, when called with a vector of Theano tensor variables, returns another vector of values that are the gradients (i.e., the [Jacobian](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant)) of our log-likelihood function at those values. Note that the `grad()` method itself does not return the gradients directly, but instead returns the [Jacobian](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant)-vector product (you can hopefully just copy what I've done and not worry about what this means too much!).

```
# define a theano Op for our likelihood function
class LogLikeWithGrad(tt.Op):

    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dscalar]  # outputs a single scalar value (the log likelihood)

    def __init__(self, loglike, data, x, sigma):
        """
        Initialise with various things that the function requires. Below
        are the things that are needed in this particular example.

        Parameters
        ----------
        loglike:
            The log-likelihood (or whatever) function we've defined
        data:
            The "observed" data that our log-likelihood function takes in
        x:
            The dependent variable (aka 'x') that our model requires
        sigma:
            The noise standard deviation that our function requires.
        """

        # add inputs as class attributes
        self.likelihood = loglike
        self.data = data
        self.x = x
        self.sigma = sigma

        # initialise the gradient Op (below)
        self.logpgrad = LogLikeGrad(self.likelihood, self.data, self.x, self.sigma)

    def perform(self, node, inputs, outputs):
        # the method that is used when calling the Op
        theta, = inputs  # this will contain my variables

        # call the log-likelihood function
        logl = self.likelihood(theta, self.x, self.data, self.sigma)

        outputs[0][0] = np.array(logl)  # output the log-likelihood

    def grad(self, inputs, g):
        # the method that calculates the gradients - it actually returns the
        # vector-Jacobian product - g[0] is a vector of parameter values
        theta, = inputs  # our parameters
        return [g[0]*self.logpgrad(theta)]


class LogLikeGrad(tt.Op):
    """
    This Op will be called with a vector of values and also return a vector of
    values - the gradients in each dimension.
    """

    itypes = [tt.dvector]
    otypes = [tt.dvector]

    def __init__(self, loglike, data, x, sigma):
        """
        Initialise with various things that the function requires. Below
        are the things that are needed in this particular example.

        Parameters
        ----------
        loglike:
            The log-likelihood (or whatever) function we've defined
        data:
            The "observed" data that our log-likelihood function takes in
        x:
            The dependent variable (aka 'x') that our model requires
        sigma:
            The noise standard deviation that our function requires.
        """

        # add inputs as class attributes
        self.likelihood = loglike
        self.data = data
        self.x = x
        self.sigma = sigma

    def perform(self, node, inputs, outputs):
        theta, = inputs

        # define version of likelihood function to pass to derivative function
        def lnlike(values):
            return self.likelihood(values, self.x, self.data, self.sigma)

        # calculate gradients
        grads = gradients(theta, lnlike)

        outputs[0][0] = grads
```

Now, let's re-run PyMC3 with our new "grad"-ed Op. This time it will be able to automatically use NUTS.
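For the curious, the Jacobian-vector product contract can be written out explicitly. For a scalar output like our log-likelihood $\log L(\theta)$, the "Jacobian" is just the gradient vector, and Theano hands `grad()` an upstream scalar $g$ to multiply through:

$$
\texttt{grad}(\theta, g) = g \, \frac{\partial \log L}{\partial \theta}
= \left[ g \, \frac{\partial \log L}{\partial \theta_1}, \ldots, g \, \frac{\partial \log L}{\partial \theta_n} \right],
$$

which is exactly the `[g[0]*self.logpgrad(theta)]` returned in the code.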
_Aside: As an addition, I've also defined a `my_model_random` function (note that, in this case, it requires that the `x` variable needed by the model function is global). This is used by the `random` argument of [`DensityDist`](https://docs.pymc.io/api/distributions/utilities.html?highlight=densitydist#pymc3.distributions.DensityDist) to define a function to use to draw instances of the model using the sampled parameters for [posterior predictive](https://docs.pymc.io/api/inference.html?highlight=sample_posterior_predictive#pymc3.sampling.sample_posterior_predictive) checks. This is only really needed if you want to do posterior predictive checks, and otherwise can be left out._

```
# create our Op
logl = LogLikeWithGrad(my_loglike, data, x, sigma)

def my_model_random(point=None, size=None):
    """
    Draw posterior predictive samples from model.
    """

    return my_model((point["m"], point["c"]), x)

# use PyMC3 to sample from the log-likelihood
with pm.Model() as opmodel:
    # uniform priors on m and c
    m = pm.Uniform('m', lower=-10., upper=10.)
    c = pm.Uniform('c', lower=-10., upper=10.)

    # convert m and c to a tensor vector
    theta = tt.as_tensor_variable([m, c])

    # use a DensityDist
    pm.DensityDist(
        'likelihood',
        lambda v: logl(v),
        observed={'v': theta},
        random=my_model_random,
    )

    trace = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True)

# plot the traces
_ = pm.traceplot(trace, lines=(('m', {}, [mtrue]), ('c', {}, [ctrue])))

# put the chains in an array (for later!)
samples_pymc3_2 = np.vstack((trace['m'], trace['c'])).T

# just because we can, let's draw posterior predictive samples of the model
ppc = pm.sample_posterior_predictive(trace, samples=250, model=opmodel)

for vals in ppc['likelihood']:
    matplotlib.pyplot.plot(x, vals, color='b', alpha=0.05, lw=3)
matplotlib.pyplot.plot(x, my_model((mtrue, ctrue), x), 'k--', lw=2)
```

Now, finally, just to check things actually worked as we might expect, let's do the same thing purely using PyMC3 distributions (because in this simple example we can!)

```
with pm.Model() as pymodel:
    # uniform priors on m and c
    m = pm.Uniform('m', lower=-10., upper=10.)
    c = pm.Uniform('c', lower=-10., upper=10.)

    # convert m and c to a tensor vector
    theta = tt.as_tensor_variable([m, c])

    # use a Normal distribution
    pm.Normal('likelihood', mu=(m*x + c), sd=sigma, observed=data)

    trace = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True)

# plot the traces
_ = pm.traceplot(trace, lines=(('m', {}, [mtrue]), ('c', {}, [ctrue])))

# put the chains in an array (for later!)
samples_pymc3_3 = np.vstack((trace['m'], trace['c'])).T
```

To check that they match let's plot all the examples together and also find the autocorrelation lengths.
```
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)  # suppress emcee autocorr FutureWarning

matplotlib.rcParams['font.size'] = 22

hist2dkwargs = {'plot_datapoints': False,
                'plot_density': False,
                'levels': 1.0 - np.exp(-0.5 * np.arange(1.5, 2.1, 0.5) ** 2)}  # roughly 1 and 2 sigma

colors = ['r', 'g', 'b']
labels = ['Theano Op (no grad)', 'Theano Op (with grad)', 'Pure PyMC3']

for i, samples in enumerate([samples_pymc3, samples_pymc3_2, samples_pymc3_3]):
    # get maximum chain autocorrelation length
    autocorrlen = int(np.max(emcee.autocorr.integrated_time(samples, c=3)))
    print('Auto-correlation length ({}): {}'.format(labels[i], autocorrlen))

    if i == 0:
        fig = corner.corner(samples, labels=[r"$m$", r"$c$"],
                            color=colors[i], hist_kwargs={'density': True},
                            **hist2dkwargs, truths=[mtrue, ctrue])
    else:
        corner.corner(samples, color=colors[i], hist_kwargs={'density': True},
                      fig=fig, **hist2dkwargs)

fig.set_size_inches(9, 9)
```

We can now check that the gradient Op works as we expect it to. First, just create and call the `LogLikeGrad` class, which should return the gradient directly (note that we have to create a [Theano function](http://deeplearning.net/software/theano/library/compile/function.html) to convert the output of the Op to an array). Secondly, we call the gradient from `LogLikeWithGrad` by using the [Theano tensor gradient](http://deeplearning.net/software/theano/library/gradient.html#theano.gradient.grad) function. Finally, we will check the gradient returned by the PyMC3 model for a Normal distribution, which should be the same as the log-likelihood function we defined. In all cases we evaluate the gradients at the true values of the model function (the straight line) that was created.
```
# test the gradient Op by direct call
theano.config.compute_test_value = "ignore"
theano.config.exception_verbosity = "high"

var = tt.dvector()
test_grad_op = LogLikeGrad(my_loglike, data, x, sigma)
test_grad_op_func = theano.function([var], test_grad_op(var))
grad_vals = test_grad_op_func([mtrue, ctrue])

print('Gradient returned by "LogLikeGrad": {}'.format(grad_vals))

# test the gradient called through LogLikeWithGrad
test_gradded_op = LogLikeWithGrad(my_loglike, data, x, sigma)
test_gradded_op_grad = tt.grad(test_gradded_op(var), var)
test_gradded_op_grad_func = theano.function([var], test_gradded_op_grad)
grad_vals_2 = test_gradded_op_grad_func([mtrue, ctrue])

print('Gradient returned by "LogLikeWithGrad": {}'.format(grad_vals_2))

# test the gradient that PyMC3 uses for the Normal log likelihood
test_model = pm.Model()
with test_model:
    m = pm.Uniform('m', lower=-10., upper=10.)
    c = pm.Uniform('c', lower=-10., upper=10.)

    pm.Normal('likelihood', mu=(m*x + c), sigma=sigma, observed=data)

gradfunc = test_model.logp_dlogp_function([m, c], dtype=None)
gradfunc.set_extra_values({'m_interval__': mtrue, 'c_interval__': ctrue})
grad_vals_pymc3 = gradfunc(np.array([mtrue, ctrue]))[1]  # get dlogp values

print('Gradient returned by PyMC3 "Normal" distribution: {}'.format(grad_vals_pymc3))
```

We can also do some [profiling](http://docs.pymc.io/notebooks/profiling.html) of the Op, as used within a PyMC3 Model, to check performance. First, we'll profile using the `LogLikeWithGrad` Op, and then do the same thing purely using PyMC3 distributions.

```
# profile logpt using our Op
opmodel.profile(opmodel.logpt).summary()

# profile using our PyMC3 distribution
pymodel.profile(pymodel.logpt).summary()
```

The Jupyter notebook used to produce this page can be downloaded from [here](http://mattpitkin.github.io/samplers-demo/downloads/notebooks/PyMC3CustomExternalLikelihood.ipynb).
![Panel HighCharts Logo](https://raw.githubusercontent.com/MarcSkovMadsen/panel-highcharts/main/assets/images/panel-highcharts-logo.png)

# 📈 Panel HighMap Reference Guide

The [Panel](https://panel.holoviz.org) `HighMap` pane allows you to use the powerful [HighCharts](https://www.highcharts.com/) [Maps](https://www.highcharts.com/products/maps/) from within the comfort of Python 🐍 and Panel ❤️.

## License

The `panel-highcharts` python package and repository is open source and free to use (MIT License); however, the **Highcharts js library requires a license for commercial use**. For more info see the Highcharts license [FAQs](https://shop.highsoft.com/faq).

## Parameters

For layout- and styling-related parameters see the [Panel Customization Guide](https://panel.holoviz.org/user_guide/Customization.html).

* **``object``** (dict): The initial user `configuration` of the `chart`.
* **``object_update``** (dict): An incremental update to the existing `configuration` of the `chart`.
* **``event``** (dict): Events like `click` and `mouseOver`, if subscribed to using the `@` notation.

## Methods

* **``add_series``**: Adds a new series to the chart. Takes the `options`, `redraw` and `animation` arguments.

___

# Usage

## Imports

You must import something from `panel_highcharts` before you run `pn.extension('highmap')`.

```
import panel_highcharts as ph
```

Additionally, you can specify extra Highcharts `js_files` to include; `mapdata` can be supplied as a list. See the full list at [https://code.highcharts.com](https://code.highcharts.com).

```
ph.config.js_files(mapdata=["custom/europe"])  # imports https://code.highcharts.com/mapdata/custom/europe.js

import panel as pn
pn.extension('highmap')
```

## Configuration

The `HighMap` pane is configured by providing a simple `dict` to the `object` parameter. For examples see the HighCharts [demos](https://www.highcharts.com/demo).
```
configuration = {
    "chart": {"map": "custom/europe", "borderWidth": 1},
    "title": {"text": "Nordic countries"},
    "subtitle": {"text": "Demo of drawing all areas in the map, only highlighting partial data"},
    "legend": {"enabled": False},
    "series": [
        {
            "name": "Country",
            "data": [["is", 1], ["no", 1], ["se", 1], ["dk", 1], ["fi", 1]],
            "dataLabels": {
                "enabled": True,
                "color": "#FFFFFF",
                "formatter": """function () {
                    if (this.point.value) {
                        return this.point.name;
                    }
                }""",
            },
            "tooltip": {"headerFormat": "", "pointFormat": "{point.name}"},
        }
    ],
}

chart = ph.HighMap(object=configuration, sizing_mode="stretch_both", min_height=600)
chart
```

## Layout

```
settings = pn.WidgetBox(
    pn.Param(
        chart,
        parameters=["height", "width", "sizing_mode", "margin", "object", "object_update", "event"],
        widgets={
            "object": pn.widgets.LiteralInput,
            "object_update": pn.widgets.LiteralInput,
            "event": pn.widgets.StaticText,
        },
        sizing_mode="fixed",
        show_name=False,
        width=250,
    )
)
pn.Row(settings, chart, sizing_mode="stretch_both")
```

Try changing the `sizing_mode` to `fixed` and the `width` to `400`.

## Updates

You can *update* the chart by providing a partial `configuration` to the `object_update` parameter.

```
object_update = {
    "title": {"text": "Panel HighMap - Nordic countries"},
}
chart.object_update = object_update
```

Verify that the `title` was updated in the chart above.

## Events

You can subscribe to chart events using the `@` notation as shown below. If you add a string like `@name`, then the key-value pair `'channel': 'name'` will be added to the `event` dictionary.
```
event_update = {
    "series": [
        {
            "allowPointSelect": "true",
            "point": {
                "events": {
                    "click": "@click;}",
                    "mouseOver": "@mouseOverFun",
                    "select": "@select",
                    "unselect": "@unselect",
                }
            },
            "events": {
                "mouseOut": "@mouseOutFun",
            },
        }
    ]
}
chart.object_update = event_update
```

Verify that you can trigger the `click`, `mouseOver`, `select`, `unselect` and `mouseOut` events in the chart above and that the relevant `channel` value is used.

## Javascript

You can use Javascript in the configuration via the `function() {...}` notation.

```
js_update = {
    "series": [
        {
            "dataLabels": {
                "formatter": """function () {
                    if (this.point.value) {
                        if (this.point.name == "Denmark") {
                            return "❤️ " + this.point.name;
                        } else {
                            return this.point.name;
                        }
                    }
                }""",
            }
        }
    ],
}
chart.object_update = js_update
```

Verify that the data label for Denmark now has a ❤️ prepended in the chart above.

# App

Finally we can wrap it up into a nice app template.

```
chart.object = configuration = {
    "chart": {"map": "custom/europe", "borderWidth": 1},
    "title": {"text": "Nordic countries"},
    "subtitle": {"text": "Demo of drawing all areas in the map, only highlighting partial data"},
    "legend": {"enabled": False},
    "series": [
        {
            "name": "Country",
            "data": [["is", 1], ["no", 1], ["se", 1], ["dk", 1], ["fi", 1]],
            "dataLabels": {
                "enabled": True,
                "color": "#FFFFFF",
                "formatter": """function () {
                    if (this.point.value) {
                        if (this.point.name == "Denmark") {
                            return "❤️ " + this.point.name;
                        } else {
                            return this.point.name;
                        }
                    }
                }""",
            },
            "tooltip": {"headerFormat": "", "pointFormat": "{point.name}"},
            "allowPointSelect": "true",
            "point": {
                "events": {
                    "click": "@click;}",
                    "mouseOver": "@mouseOverFun",
                    "select": "@select",
                    "unselect": "@unselect",
                }
            },
            "events": {
                "mouseOut": "@mouseOutFun",
            },
        }
    ],
}

app = pn.template.FastListTemplate(
    site="Panel Highcharts",
    title="HighMap Reference Example",
    sidebar=[settings],
    main=[chart],
).servable()
```

You can serve with `panel serve HighMap.ipynb` and explore the app at
http://localhost:5006/HighMap. Add the `--autoreload` flag to get *hot reloading* when you save the notebook. ![HighMap Reference Guide](https://raw.githubusercontent.com/MarcSkovMadsen/panel-highcharts/main/assets/images/HighMapApp.gif)
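To react to the subscribed events from Python, you can watch the `event` parameter and branch on its `channel` key (which holds the name given after `@`). The following is a minimal sketch: the helper `handle_chart_event` and its handlers are ours, not part of the package, and the assumption that a `click` payload carries a `point` entry with a `name` is illustrative only.

```python
def handle_chart_event(event_data):
    """Dispatch a HighMap event dict (as stored in `chart.event`)
    based on its 'channel' key; return None for unhandled channels."""
    handlers = {
        # assumes the click payload includes a 'point' dict with a 'name'
        "click": lambda e: "clicked " + e.get("point", {}).get("name", "?"),
        "select": lambda e: "point selected",
    }
    channel = (event_data or {}).get("channel")
    handler = handlers.get(channel)
    return handler(event_data) if handler else None
```

In a live app you would wire this up with `chart.param.watch(lambda e: handle_chart_event(e.new), 'event')`.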