๋ฏธ์ง€์ˆ˜์˜ ๊ฐฏ์ˆ˜์™€ ์ •๋ฐฉํ–‰๋ ฌ์˜ ๊ณ„์ˆ˜๊ฐ€ ๊ฐ™๋‹ค๋Š” ๊ฒƒ์€ ์ด ์„ ํ˜• ์—ฐ๋ฆฝ ๋ฐฉ์ •์‹์˜ ํ•ด๋ฅผ ๊ตฌํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ๋œป์ด๋‹ค.The number of unknowns and the rank of the matrix are the same; we can find a root of this system of linear equations. ์šฐ๋ณ€์„ ์ค€๋น„ํ•ด ๋ณด์ž.Let's prepare for the right side.
vector = py.matrix([[0, 0, 0, 0, 0, 100, 0, 0]]).T
BSD-3-Clause
60_linear_algebra_2/015_System_Linear_Eq_Four_Node_Truss.ipynb
cv2316eca19a/nmisp
ํŒŒ์ด์ฌ์˜ ํ™•์žฅ ๊ธฐ๋Šฅ ๊ฐ€์šด๋ฐ ํ•˜๋‚˜์ธ NumPy ์˜ ์„ ํ˜• ๋Œ€์ˆ˜ ๊ธฐ๋Šฅ `solve()` ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•ด๋ฅผ ๊ตฌํ•ด ๋ณด์ž.Using `solve()` of linear algebra subpackage of `NumPy`, a Python package, let's find a solution.
sol = nl.solve(matrix, vector)
sol
![Triangular Truss](triangular_truss.svg)

Final Bell
# stackoverflow.com/a/24634221
import os
os.system("printf '\a'");
![](../graphics/solutions-microsoft-logo-small.png)

# Python for Data Professionals

## 02 Programming Basics

### Course Outline

1 - Overview and Course Setup
2 - Programming Basics (This section)
  - 2.1 - Getting help
  - 2.2 - Code Syntax and Structure
  - 2.3 - Variables
  - 2.4 - Operations and Functions
3 - Working with Data
4 - Deployment and Environments

### Programming Basics Overview

From here on out, you'll focus on using Python in programming mode - you'll write code that you run from an IDE or a calling environment, not interactively from the command line. As you work through this explanation, copy the code you see and run it to see the results. After you work through these copy-and-paste examples, you'll create your own code in the Activities that follow each section.

### 2.1 - Getting help

The very first thing you should learn in any language is how to get help. You can [find the help documents on-line](https://docs.python.org/3/index.html), or simply type `help()` in your code. For help on a specific topic, put the topic in the parentheses: `help(str)`. To see a list of topics, type `help('topics')`.
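As a quick illustration, a minimal sketch using only built-ins (each call prints the relevant documentation to the console):

```python
help(str)          # help on a specific type
help(len)          # help on a built-in function
help('symbols')    # help on a named topic
# help()           # with no argument, starts the interactive help utility
```

Note that `help()` with no argument drops you into an interactive prompt, so it is commented out here; the other calls work in any script or cell.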
# Try it:
MIT
PythonForDataProfessionals/Python for Data Professionals/notebooks/.ipynb_checkpoints/02 Programming Basics-checkpoint.ipynb
fratei/sqlworkshops
### 2.2 - Code Syntax and Structure

Let's cover a few basics about how Python code is written. (For a full discussion, check out the [Style Guide for Python, called PEP 8](https://www.python.org/dev/peps/pep-0008/).) Let's use the "Zen of Python" rules from Tim Peters for this course:

> Beautiful is better than ugly.
> Explicit is better than implicit.
> Simple is better than complex.
> Complex is better than complicated.
> Flat is better than nested.
> Sparse is better than dense.
> Readability counts.
> Special cases aren't special enough to break the rules.
> Although practicality beats purity.
> Errors should never pass silently.
> Unless explicitly silenced.
> In the face of ambiguity, refuse the temptation to guess.
> There should be one-- and preferably only one --obvious way to do it.
> Although that way may not be obvious at first unless you're Dutch.
> Now is better than never.
> Although never is often better than right now.
> If the implementation is hard to explain, it's a bad idea.
> If the implementation is easy to explain, it may be a good idea.
> Namespaces are one honking great idea -- let's do more of those!
>
> --Tim Peters

In general, use standard coding practices: don't use keywords for variables, be consistent in your naming (camel-case, lower-case, etc.), comment your code clearly, understand the general syntax of your language, and follow the principles above. But the most important tip is to at least read PEP 8 and decide for yourself how well it fits into your Zen.

There is one hard-and-fast rule in Python that you *do* need to be aware of: indentation. You **must** indent your code for classes, functions (or methods), loops, conditions, and lists. You can use a tab or four spaces (spaces are the accepted way to do it), but in any case you have to be consistent: if you use tabs, always use tabs; if you use spaces, use spaces throughout. It's best to set your IDE to handle that for you, whichever way you go.

Python code files have an extension of `.py`.
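A tiny sketch of the indentation rule in action (four spaces per level, applied consistently; the function name is made up for illustration):

```python
def greet(name):
    # The function body is indented four spaces.
    if name:
        # Nested blocks indent another four spaces.
        message = "Hello, " + name
    else:
        message = "Hello, world"
    return message

print(greet("Buck"))   # Hello, Buck
```

If the `if`/`else` bodies were not indented, Python would raise an `IndentationError` rather than guess your intent.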
Comments in Python start with the hash character: `#`. There are no block comments (and this makes us all sad), so each line you want to comment must have a `#` in front of it. Keep the lines short (80 characters or so) so that they don't fall off a single-line display like at the command line.

### 2.3 - Variables

Variables stand in for replaceable values. Python is dynamically typed, meaning you can just declare a variable name and set it to a value at the same time, and Python will infer the data type for you. You use an `=` sign to assign values, and `==` to compare things. Quotes `"` or ticks `'` are fine for strings; just be consistent. There are some keywords to be aware of, but `x` and `y` are always good choices.

`x = "Buck"  # I'm a string.`
`type(x)`
`y = 10  # I'm an integer.`
`type(y)`

To change the type of a value, just re-enter something else:

`x = "Buck"  # I'm a string.`
`type(x)`
`x = 10  # Now I'm an integer.`
`type(x)`

Or cast it by explicitly declaring the conversion:

`x = "10"`
`type(x)`
`print(int(x))`

To concatenate string values, use the `+` sign:

`x = "Buck"`
`y = " Woody"`
`print(x + y)`
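Putting those snippets together into one runnable sketch (the variable names are the ones from the examples above):

```python
x = "Buck"            # x starts as a string
print(type(x))        # <class 'str'>

x = 10                # re-assigning changes the type to int
print(type(x))        # <class 'int'>

x = "10"              # a string that happens to look like a number
print(int(x) + 5)     # explicit cast to int, then arithmetic: 15

first = "Buck"
last = " Woody"
print(first + last)   # concatenation with +: Buck Woody
```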
# Try it:
### 2.4 - Operations and Functions

Python has the following operators:

- Arithmetic Operators
- Comparison (Relational) Operators
- Assignment Operators
- Logical Operators
- Bitwise Operators
- Membership Operators
- Identity Operators

You have the standard operators and functions from most every language. Here are some of the tokens:

```
!=   %    %=   &    &=   *    **   **=  *=   +    +=   ,
-    -=   .    ...  /    //   //=  /=   :    <    <<   <<=
<=   ==   >    >=   >>   >>=  @    [    \    ]    ^    ^=
|    |=   ~    `    __   '    '''  "    """  (    )
b'   b"   r'   r"   j    J
```

Wait...that's it? That's all you're going to tell me? *(Hint: use what you've learned):*

`help('symbols')`

Walk through each of these operators carefully - you'll use them when you work with data in the next module.
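A short sampler of the operator families listed above (the values are chosen arbitrarily for illustration):

```python
x, y = 7, 3

print(x + y, x - y, x * y)   # arithmetic: 10 4 21
print(x / y, x // y, x % y)  # true division, floor division, remainder
print(x ** y)                # exponentiation: 343
print(x > y, x == y)         # comparison: True False
print(x > 0 and y > 0)       # logical: True
x += 1                       # assignment operator: x is now 8
print(x in [7, 8, 9])        # membership: True
print(x is not None)         # identity: True
```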
# Try it:
### Activity - Programming Basics

Open the **02_ProgrammingBasics.py** file and run the code you see there. The exercises will be marked out using comments: `# <TODO> - Section Number`
# 02_ProgrammingBasics.py
# Purpose: General Programming exercises for Python
# Author: Buck Woody
# Credits and Sources: Inline
# Last Updated: 27 June 2018

# 2.1 Getting Help
help()
help(str)
# <TODO> - Write code to find help on help

# 2.2 Code Syntax and Structure
# <TODO> - Python uses spaces to indicate code blocks. Fix the code below:
x=10
y=5
if x > y:
print(str(x) + " is greater than " + str(y))

# <TODO> - Arguments on first line are forbidden when not using vertical alignment. Fix this code:
foo = long_function_name(var_one, var_two,
    var_three, var_four)

# <TODO> - Operators sit far away from their operands. Fix this code:
income = (gross_wages +
          taxable_interest +
          (dividends - qualified_dividends) -
          ira_deduction -
          student_loan_interest)

# <TODO> - The import statement should use separate lines for each effort. You can fix the code below
# using separate lines or by using the "from" statement:
import sys, os

# <TODO> - The following code has extra spaces in the wrong places. Fix this code:
i=i+1
submitted +=1
x = x * 2 - 1
hypot2 = x * x + y * y
c = (a + b) * (a - b)

# 2.3 Variables
# <TODO> - Add a line below x=3 that changes the variable x from int to a string
x=3
type(x)

# <TODO> - Write code that prints the string "This class is awesome" using variables:
x="is awesome"
y="This Class"

# 2.4 Operations and Functions
# <TODO> - Use some basic operators to write the following code:
# Assign two variables
# Add them
# Subtract 20 from each, add those values together, save that to a new variable
# Create a new string variable with the text "The result of my operations are: "
# Print out a single string on the screen with the result of the variables
# showing that result.

# EOF: 02_ProgrammingBasics.py
Manual Jupyter Notebook: https://athena.brynmawr.edu/jupyter/hub/dblank/public/Jupyter%20Notebook%20Users%20Manual.ipynb

# Jupyter Notebook Users Manual

This page describes the functionality of the [Jupyter](http://jupyter.org) electronic document system. Jupyter documents are called "notebooks" and can be seen as many things at once. For example, notebooks allow:

* creation in a **standard web browser**
* direct **sharing**
* using **text with styles** (such as italics and titles) to be explicitly marked using a [wikitext language](http://en.wikipedia.org/wiki/Wiki_markup)
* easy creation and display of beautiful **equations**
* creation and execution of interactive embedded **computer programs**
* easy creation and display of **interactive visualizations**

Jupyter notebooks (previously called "IPython notebooks") are thus interesting and useful to different groups of people:

* readers who want to view and execute computer programs
* authors who want to create executable documents or documents with visualizations

## Table of Contents

* [1. Getting to Know your Jupyter Notebook's Toolbar](#1.-Getting-to-Know-your-Jupyter-Notebook's-Toolbar)
* [2. Different Kinds of Cells](#2.-Different-Kinds-of-Cells)
  * [2.1 Code Cells](#2.1-Code-Cells)
    * [2.1.1 Code Cell Layout](#2.1.1-Code-Cell-Layout)
      * [2.1.1.1 Row Configuration (Default Setting)](#2.1.1.1-Row-Configuration-%28Default-Setting%29)
      * [2.1.1.2 Cell Tabbing](#2.1.1.2-Cell-Tabbing)
      * [2.1.1.3 Column Configuration](#2.1.1.3-Column-Configuration)
  * [2.2 Markdown Cells](#2.2-Markdown-Cells)
  * [2.3 Raw Cells](#2.3-Raw-Cells)
  * [2.4 Header Cells](#2.4-Header-Cells)
    * [2.4.1 Linking](#2.4.1-Linking)
    * [2.4.2 Automatic Section Numbering and Table of Contents Support](#2.4.2-Automatic-Section-Numbering-and-Table-of-Contents-Support)
      * [2.4.2.1 Automatic Section Numbering](#2.4.2.1-Automatic-Section-Numbering)
      * [2.4.2.2 Table of Contents Support](#2.4.2.2-Table-of-Contents-Support)
      * [2.4.2.3 Using Both Automatic Section Numbering and Table of Contents Support](#2.4.2.3-Using-Both-Automatic-Section-Numbering-and-Table-of-Contents-Support)
* [3. Keyboard Shortcuts](#3.-Keyboard-Shortcuts)
* [4. Using Markdown Cells for Writing](#4.-Using-Markdown-Cells-for-Writing)
  * [4.1 Block Elements](#4.1-Block-Elements)
    * [4.1.1 Paragraph Breaks](#4.1.1-Paragraph-Breaks)
    * [4.1.2 Line Breaks](#4.1.2-Line-Breaks)
      * [4.1.2.1 Hard-Wrapping and Soft-Wrapping](#4.1.2.1-Hard-Wrapping-and-Soft-Wrapping)
      * [4.1.2.2 Soft-Wrapping](#4.1.2.2-Soft-Wrapping)
      * [4.1.2.3 Hard-Wrapping](#4.1.2.3-Hard-Wrapping)
    * [4.1.3 Headers](#4.1.3-Headers)
    * [4.1.4 Block Quotes](#4.1.4-Block-Quotes)
      * [4.1.4.1 Standard Block Quoting](#4.1.4.1-Standard-Block-Quoting)
      * [4.1.4.2 Nested Block Quoting](#4.1.4.2-Nested-Block-Quoting)
    * [4.1.5 Lists](#4.1.5-Lists)
      * [4.1.5.1 Ordered Lists](#4.1.5.1-Ordered-Lists)
      * [4.1.5.2 Bulleted Lists](#4.1.5.2-Bulleted-Lists)
    * [4.1.6 Section Breaks](#4.1.6-Section-Breaks)
  * [4.2 Backslash Escape](#4.2-Backslash-Escape)
  * [4.3 Hyperlinks](#4.3-Hyperlinks)
    * [4.3.1 Automatic Links](#4.3.1-Automatic-Links)
    * [4.3.2 Standard Links](#4.3.2-Standard-Links)
    * [4.3.3 Standard Links With Mouse-Over Titles](#4.3.3-Standard-Links-With-Mouse-Over-Titles)
    * [4.3.4 Reference Links](#4.3.4-Reference-Links)
    * [4.3.5 Notebook-Internal Links](#4.3.5-Notebook-Internal-Links)
      * [4.3.5.1 Standard Notebook-Internal Links Without Mouse-Over Titles](#4.3.5.1-Standard-Notebook-Internal-Links-Without-Mouse-Over-Titles)
      * [4.3.5.2 Standard Notebook-Internal Links With Mouse-Over Titles](#4.3.5.2-Standard-Notebook-Internal-Links-With-Mouse-Over-Titles)
      * [4.3.5.3 Reference-Style Notebook-Internal Links](#4.3.5.3-Reference-Style-Notebook-Internal-Links)
  * [4.4 Tables](#4.4-Tables)
    * [4.4.1 Cell Justification](#4.4.1-Cell-Justification)
  * [4.5 Style and Emphasis](#4.5-Style-and-Emphasis)
  * [4.6 Other Characters](#4.6-Other-Characters)
  * [4.7 Including Code Examples](#4.7-Including-Code-Examples)
  * [4.8 Images](#4.8-Images)
    * [4.8.1 Images from the Internet](#4.8.1-Images-from-the-Internet)
      * [4.8.1.1 Reference-Style Images from the Internet](#4.8.1.1-Reference-Style-Images-from-the-Internet)
  * [4.9 LaTeX Math](#4.9-LaTeX-Math)
* [5. Bibliographic Support](#5.-Bibliographic-Support)
  * [5.1 Creating a Bibtex Database](#5.1-Creating-a-Bibtex-Database)
    * [5.1.1 External Bibliographic Databases](#5.1.1-External-Bibliographic-Databases)
    * [5.1.2 Internal Bibliographic Databases](#5.1.2-Internal-Bibliographic-Databases)
      * [5.1.2.1 Hiding Your Internal Database](#5.1.2.1-Hiding-Your-Internal-Database)
    * [5.1.3 Formatting Bibtex Entries](#5.1.3-Formatting-Bibtex-Entries)
  * [5.2 Cite Commands and Citation IDs](#5.2-Cite-Commands-and-Citation-IDs)
* [6. Turning Your Jupyter Notebook into a Slideshow](#6.-Turning-Your-Jupyter-Notebook-into-a-Slideshow)

## 1. Getting to Know your Jupyter Notebook's Toolbar

At the top of your Jupyter Notebook window there is a toolbar. It looks like this:

![](images/jupytertoolbar.png)

Below is a table which helpfully pairs a picture of each of the items in your toolbar with a corresponding explanation of its function.
Button|Function
-|-
![](images/jupytertoolbarsave.png)|This is your save button. You can click this button to save your notebook at any time, though keep in mind that Jupyter Notebooks automatically save your progress very frequently.
![](images/jupytertoolbarnewcell.png)|This is the new cell button. You can click this button any time you want a new cell in your Jupyter Notebook.
![](images/jupytertoolbarcutcell.png)|This is the cut cell button. If you click this button, the cell you currently have selected will be deleted from your Notebook.
![](images/jupytertoolbarcopycell.png)|This is the copy cell button. If you click this button, the currently selected cell will be duplicated and stored in your clipboard.
![](images/jupytertoolbarpastecell.png)|This is the paste button. It allows you to paste the duplicated cell from your clipboard into your notebook.
![](images/jupytertoolbarupdown.png)|These buttons allow you to move the location of a selected cell within a Notebook. Simply select the cell you wish to move and click either the up or down button until the cell is in the location you want it to be.
![](images/jupytertoolbarrun.png)|This button will "run" your cell, meaning that it will interpret your input and render the output in a way that depends on [what kind of cell][cell kind] you're using.
![](images/jupytertoolbarstop.png)|This is the stop button. Clicking this button will stop your cell from continuing to run. This tool can be useful if you are trying to execute more complicated code, which can sometimes take a while, and you want to edit the cell before waiting for it to finish rendering.
![](images/jupytertoolbarrestartkernel.png)|This is the restart kernel button. See your kernel documentation for more information.
![](images/jupytertoolbarcellkind.png)|This is a drop-down menu which allows you to tell your Notebook how you want it to interpret any given cell. You can read more about the [different kinds of cells][cell kind] in the following section.
![](images/jupytertoolbartoolbartype.png)|Individual cells can have their own toolbars. This is a drop-down menu from which you can select the type of toolbar that you'd like to use with the cells in your Notebook. Some of the options in the cell toolbar menu will only work in [certain kinds of cells][cell kind]. "None," which is how you specify that you do not want any cell toolbars, is the default setting. If you select "Edit Metadata," a toolbar that allows you to edit data about [Code Cells][code cells] directly will appear in the corner of all the Code cells in your notebook. If you select "Raw Cell Format," a toolbar that gives you several formatting options will appear in the corner of all your [Raw Cells][raw cells]. If you want to view and present your notebook as a slideshow, you can select "Slideshow" and a toolbar that enables you to organize your cells into slides, sub-slides, and slide fragments will appear in the corner of every cell. Go to [this section][slideshow] for more information on how to create a slideshow out of your Jupyter Notebook.
![](images/jupytertoolbarsectionmove.png)|These buttons allow you to move the location of an entire section within a Notebook. Simply select the Header Cell for the section or subsection you wish to move and click either the up or down button until the section is in the location you want it to be. If you have used [Automatic Section Numbering][section numbering] or [Table of Contents Support][table of contents], remember to rerun those tools so that your section numbers or table of contents reflects your Notebook's new organization.
![](images/jupytertoolbarsectionnumbering.png)|Clicking this button will automatically number your Notebook's sections. For more information, check out the Reference Guide's [section on Automatic Section Numbering][section numbering].
![](images/jupytertoolbartableofcontents.png)|Clicking this button will generate a table of contents using the titles you've given your Notebook's sections. For more information, check out the Reference Guide's [section on Table of Contents Support][table of contents].
![](images/jupytertoolbarbib.png)|Clicking this button will search your document for [cite commands][] and automatically generate in-text citations as well as a references cell at the end of your Notebook. For more information, you can read the Reference Guide's [section on Bibliographic Support][bib support].
![](images/jupytertoolbartab.png)|Clicking this button will toggle [cell tabbing][], which you can learn more about in the Reference Guide's [section on the layout options for Code Cells][cell layout].
![](images/jupytertoolbarcollumn.png)|Clicking this button will toggle the [collumn configuration][] for Code Cells, which you can learn more about in the Reference Guide's [section on the layout options for Code Cells][cell layout].
![](images/jupytertoolbarspellcheck.png)|Clicking this button will toggle spell checking. Spell checking only works in unrendered [Markdown Cells][] and [Header Cells][]. When spell checking is on, all incorrectly spelled words will be underlined with a red squiggle. Keep in mind that the dictionary cannot tell what are [Markdown][md writing] commands and what aren't, so it will occasionally underline a correctly spelled word surrounded by asterisks, brackets, or other symbols that have specific meaning in Markdown.
[cell kind]: #2.-Different-Kinds-of-Cells "Different Kinds of Cells"
[code cells]: #2.1-Code-Cells "Code Cells"
[raw cells]: #2.3-Raw-Cells "Raw Cells"
[slideshow]: #6.-Turning-Your-Jupyter-Notebook-into-a-Slideshow "Turning Your Jupyter Notebook Into a Slideshow"
[section numbering]: #2.4.2.1-Automatic-Section-Numbering
[table of contents]: #2.4.2.2-Table-of-Contents-Support
[cell tabbing]: #2.1.1.2-Cell-Tabbing
[cell layout]: #2.1.1-Code-Cell-Layout
[bib support]: #5.-Bibliographic-Support
[cite commands]: #5.2-Cite-Commands-and-Citation-IDs
[md writing]: #4.-Using-Markdown-Cells-for-Writing
[collumn configuration]: #2.1.1.3-Column-Configuration
[Markdown Cells]: #2.2-Markdown-Cells
[Header Cells]: #2.4-Header-Cells

## 2. Different Kinds of Cells

There are essentially four kinds of cells in your Jupyter notebook: Code Cells, Markdown Cells, Raw Cells, and Header Cells, though there are six levels of Header Cells.

### 2.1 Code Cells

By default, Jupyter Notebooks' Code Cells will execute Python. Jupyter Notebooks generally also support JavaScript, Python, HTML, and Bash commands. For a more comprehensive list, see your Kernel's documentation.

#### 2.1.1 Code Cell Layout

Code cells have both an input and an output component. You can view these components in three different ways.

##### 2.1.1.1 Row Configuration (Default Setting)

Unless you specify otherwise, your Code Cells will always be configured this way, with both the input and output components appearing as horizontal rows and with the input above the output. Below is an example of a Code Cell in this default setting:
2 + 3
MIT
Data Science Academy/Python Fundamentos/Cap01/JupyterNotebook-ManualUsuario.ipynb
tobraga/Cursos
##### 2.1.1.2 Cell Tabbing

Cell tabbing allows you to look at the input and output components of a cell separately. It also allows you to hide either component behind the other, which can be useful when creating visualizations of data. Below is an example of a tabbed Code Cell:
2+3
##### 2.1.1.3 Column Configuration

Like the row configuration, the column layout option allows you to look at both the input and the output components at once. In the column layout, however, the two components appear beside one another, with the input on the left and the output on the right. Below is an example of a Code Cell in the column configuration:
2+3
**Assignment 1 Day 3**
n = int(input("Enter the altitude"))
if n <= 1000:
    print("Safe to land")
elif 1000 < n <= 5000:
    print("Bring Down to 1000")
else:
    print("Turn Around")
Apache-2.0
Day_3_Assignment.ipynb
ratikeshbajpai/Letsupgrade-Python
# Recommendations with IBM

In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform. You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, ensure that your code passes the project [RUBRIC](https://review.udacity.com/!/rubrics/2322/view). **Please save regularly.**

By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.

## Table of Contents

I. [Exploratory Data Analysis](#Exploratory-Data-Analysis)
II. [Rank Based Recommendations](#Rank)
III. [User-User Based Collaborative Filtering](#User-User)
IV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)
V. [Matrix Factorization](#Matrix-Fact)
VI. [Extras & Concluding](#conclusions)

At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import project_tests as t
import pickle
from matplotlib.pyplot import figure
%matplotlib inline

df = pd.read_csv('data/user-item-interactions.csv')
df_content = pd.read_csv('data/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']

# Show df to get an idea of the data
df.head()

# Show df_content to get an idea of the data
df_content.head()
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
## Part I: Exploratory Data Analysis

Use the dictionary and cells below to provide some insight into the descriptive statistics of the data.

`1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article.
# Count interactions per user, sorted
interactions = df.groupby('email').count().drop(['title'], axis=1)
interactions.columns = ['nb_articles']
interactions_sorted = interactions.sort_values(['nb_articles'])
interactions_sorted.head()
interactions_sorted.describe()

# plt.figure(figsize=(10,30))
plt.style.use('ggplot')
interactions_plot = interactions_sorted.reset_index().groupby('nb_articles').count()
interactions_plot.plot.bar(figsize=(20, 10))
plt.title('Number of users per number of interactions')
plt.xlabel('Interactions')
plt.ylabel('Users')
plt.legend(('Number of users',), prop={"size": 10})
plt.show()

# Fill in the median and maximum number of user_article interactions below
median_val = 3            # 50% of individuals interact with ____ number of articles or fewer.
max_views_by_user = 364   # The maximum number of user-article interactions by any 1 user is ______.
`2.` Explore and remove duplicate articles from the **df_content** dataframe.
row_per_article = df_content.groupby('article_id').count()
duplicates = row_per_article[row_per_article['doc_full_name'] > 1].index
df_content[df_content['article_id'].isin(duplicates)].sort_values('article_id')

# Remove any rows that have the same article_id - only keep the first
df_content_no_duplicates = df_content.drop_duplicates('article_id')
df_content_no_duplicates[df_content_no_duplicates['article_id'].isin(duplicates)].sort_values('article_id')
`3.` Use the cells below to find:

**a.** The number of unique articles that have an interaction with a user.
**b.** The number of unique articles in the dataset (whether they have any interactions or not).
**c.** The number of unique users in the dataset (excluding null values).
**d.** The number of user-article interactions in the dataset.
# Articles with an interaction
len(df['article_id'].unique())

# Total articles
len(df_content_no_duplicates['article_id'].unique())

# Unique users
len(df[df['email'].isnull() == False]['email'].unique())

# Unique interactions
len(df)

unique_articles = 714               # The number of unique articles that have at least one interaction
total_articles = 1051               # The number of unique articles on the IBM platform
unique_users = 5148                 # The number of unique users
user_article_interactions = 45993   # The number of user-article interactions
`4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).
df.groupby('article_id').count().sort_values(by='email', ascending=False).head(1)

most_viewed_article_id = str(1429.0)  # The most viewed article in the dataset as a string with one value following the decimal
max_views = 937                       # The most viewed article in the dataset was viewed how many times?

## No need to change the code here - this will be helpful for later parts of the notebook
# Run this cell to map the user email to a user_id column and remove the email column
def email_mapper():
    coded_dict = dict()
    cter = 1
    email_encoded = []

    for val in df['email']:
        if val not in coded_dict:
            coded_dict[val] = cter
            cter += 1
        email_encoded.append(coded_dict[val])
    return email_encoded

email_encoded = email_mapper()
del df['email']
df['user_id'] = email_encoded

# show header
df.head()

## If you stored all your results in the variable names above,
## you shouldn't need to change anything in this cell
sol_1_dict = {
    '`50% of individuals have _____ or fewer interactions.`': median_val,
    '`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,
    '`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,
    '`The most viewed article in the dataset was viewed _____ times.`': max_views,
    '`The article_id of the most viewed article is ______.`': most_viewed_article_id,
    '`The number of unique articles that have at least 1 rating ______.`': unique_articles,
    '`The number of unique users in the dataset is ______`': unique_users,
    '`The number of unique articles on the IBM platform`': total_articles,
}

# Test your dictionary against the solution
t.sol_1_test(sol_1_dict)
It looks like you have everything right here! Nice job!
## Part II: Rank-Based Recommendations

Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.

`1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below.
def get_top_articles(n, df=df):
    '''
    INPUT:
    n - (int) the number of top articles to return
    df - (pandas dataframe) df as defined at the top of the notebook

    OUTPUT:
    top_articles - (list) A list of the top 'n' article titles
    '''
    top_articles = list(df.groupby('title').count().sort_values(by='user_id', ascending=False).head(n).index)

    return top_articles  # Return the top article titles from df (not df_content)


def get_top_article_ids(n, df=df):
    '''
    INPUT:
    n - (int) the number of top articles to return
    df - (pandas dataframe) df as defined at the top of the notebook

    OUTPUT:
    top_articles - (list) A list of the top 'n' article ids
    '''
    top_articles = list(df.groupby('article_id').count().sort_values(by='user_id', ascending=False).head(n).index.astype(str))

    return top_articles  # Return the top article ids


print(get_top_articles(10))
print(get_top_article_ids(10))

# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)

# Test each of your three lists from above
t.sol_2_test(get_top_articles)
Your top_5 looks like the solution list! Nice job. Your top_10 looks like the solution list! Nice job. Your top_20 looks like the solution list! Nice job.
## Part III: User-User Based Collaborative Filtering

`1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns.

* Each **user** should only appear in each **row** once.
* Each **article** should only show up in one **column**.
* **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column.** It does not matter how many times a user has interacted with the article; all entries where a user has interacted with an article should be a 1.
* **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column.**

Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
# create the user-article matrix with 1's and 0's
def create_user_item_matrix(df):
    '''
    INPUT:
    df - pandas dataframe with article_id, title, user_id columns

    OUTPUT:
    user_item - user item matrix

    Description:
    Return a matrix with user ids as rows and article ids on the columns
    with 1 values where a user interacted with an article and a 0 otherwise
    '''
    # The second groupby/count collapses every (user_id, article_id) group to a
    # single row, so each cell becomes a 1 rather than an interaction count.
    user_item = df.groupby(['user_id', 'article_id']).count().groupby(['user_id', 'article_id']).count().unstack()
    user_item = user_item.fillna(0)

    return user_item  # return the user_item matrix

user_item = create_user_item_matrix(df)

## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
You have passed our quick tests! Please proceed!
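As an aside, the same binary user-item matrix can be built more directly with `pd.crosstab` clipped to 1. This is a minimal sketch on a toy interaction log, not the notebook's **df**; the `log` dataframe and its values are illustrative only.

```python
import pandas as pd

# Toy interaction log: one row per (user, article) view, with repeat views.
log = pd.DataFrame({
    'user_id':    [1, 1, 1, 2, 2, 3],
    'article_id': [10, 10, 20, 20, 30, 10],
})

# crosstab counts interactions; clip(upper=1) collapses repeat views to a single 1.
user_item = pd.crosstab(log['user_id'], log['article_id']).clip(upper=1)
```

Missing (user, article) pairs come out as 0 automatically, so no `fillna` step is needed.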
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
`2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users. Use the tests to test your function.
def find_similar_users(user_id, user_item=user_item):
    '''
    INPUT:
    user_id - (int) a user_id
    user_item - (pandas dataframe) matrix of users by articles:
                1's when a user has interacted with an article, 0 otherwise

    OUTPUT:
    similar_users - (list) an ordered list where the closest users (largest dot product users)
                    are listed first

    Description:
    Computes the similarity of every pair of users based on the dot product
    Returns an ordered
    '''
    # compute similarity of each user to the provided user
    similarities_matrix = user_item.dot(np.transpose(user_item))
    similarities_user = similarities_matrix[similarities_matrix.index == user_id].transpose()
    similarities_user.columns = ['similarities']

    # sort by similarity
    similarities_sorted = similarities_user.sort_values(by='similarities', ascending=False)

    # create list of just the ids
    most_similar_users = list(similarities_sorted.index)

    # remove the own user's id
    most_similar_users.remove(user_id)

    return most_similar_users # return a list of the users in order from most to least similar

# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
The 10 most similar users to user 1 are: [3933, 23, 3782, 203, 4459, 3870, 131, 4201, 46, 5041] The 5 most similar users to user 3933 are: [1, 23, 3782, 203, 4459] The 3 most similar users to user 46 are: [4201, 3782, 23]
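The dot-product similarity above reduces to counting shared articles when the matrix is binary. This is a sketch of the same idea in plain NumPy on a tiny hand-made matrix; `M` and `most_similar` are illustrative names, not part of the notebook.

```python
import numpy as np

# Binary user-item matrix: rows = users, columns = articles.
M = np.array([
    [1, 1, 0, 1],   # user 0
    [1, 1, 0, 0],   # user 1
    [0, 0, 1, 1],   # user 2
])

# Dot product of two user rows = number of articles both users have seen.
sims = M @ M.T

def most_similar(user, sims):
    """Users ordered by similarity to `user`, excluding the user itself."""
    order = np.argsort(-sims[user], kind='stable')
    return [u for u in order if u != user]
```

`sims[0, 1]` is 2 because users 0 and 1 share two articles; the diagonal holds each user's own article count.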
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
`3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
def get_article_names(article_ids, df=df):
    '''
    INPUT:
    article_ids - (list) a list of article ids
    df - (pandas dataframe) df as defined at the top of the notebook

    OUTPUT:
    article_names - (list) a list of article names associated with the list of article ids
                    (this is identified by the title column)
    '''
    article_names = df[df['article_id'].isin(article_ids)]['title'].unique().tolist()

    return article_names # Return the article names associated with list of article ids


def get_user_articles(user_id, user_item=user_item):
    '''
    INPUT:
    user_id - (int) a user id
    user_item - (pandas dataframe) matrix of users by articles:
                1's when a user has interacted with an article, 0 otherwise

    OUTPUT:
    article_ids - (list) a list of the article ids seen by the user
    article_names - (list) a list of article names associated with the list of article ids
                    (this is identified by the doc_full_name column in df_content)

    Description:
    Provides a list of the article_ids and article titles that have been seen by a user
    '''
    user_transpose = user_item[user_item.index == user_id].transpose()
    user_transpose.columns = ['seen']
    article_ids = list(user_transpose[user_transpose['seen'] == 1].reset_index()['article_id'].astype(str))
    article_names = get_article_names(article_ids, df)

    return article_ids, article_names # return the ids and names


def user_user_recs(user_id, m=10):
    '''
    INPUT:
    user_id - (int) a user id
    m - (int) the number of recommendations you want for the user

    OUTPUT:
    recs - (list) a list of recommendations for the user

    Description:
    Loops through the users based on closeness to the input user_id
    For each user - finds articles the user hasn't seen before and provides them as recs
    Does this until m recommendations are found

    Notes:
    Users who are the same closeness are chosen arbitrarily as the 'next' user

    For the user where the number of recommended articles starts below m
    and ends exceeding m, the last items are chosen arbitrarily
    '''
    articles_seen = get_user_articles(user_id)
    closest_users = find_similar_users(user_id)

    # Keep the recommended articles here
    recs = np.array([])

    # Go through the users and identify articles they have seen the user hasn't seen
    for user in closest_users:
        users_articles_seen_id, users_articles_seen_name = get_user_articles(user)

        # Obtain recommendations for each neighbor
        new_recs = np.setdiff1d(users_articles_seen_id, articles_seen, assume_unique=True)

        # Update recs with new recs
        recs = np.unique(np.concatenate([new_recs, recs], axis=0))

        # If we have enough recommendations exit the loop
        if len(recs) > m-1:
            break

    # Pick the first m
    recs = recs[0:m]

    return recs # return your recommendations for this user_id

# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery', 'use the cloudant-spark connector in python notebook']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery', 'use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])

print("If this is all you see, you passed all of our tests!  Nice job!")
If this is all you see, you passed all of our tests! Nice job!
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
`4.` Now we are going to improve the consistency of the **user_user_recs** function from above.

* Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.
* Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the **top_articles** function you wrote earlier.
def get_top_sorted_users(user_id, df=df, user_item=user_item):
    '''
    INPUT:
    user_id - (int)
    df - (pandas dataframe) df as defined at the top of the notebook
    user_item - (pandas dataframe) matrix of users by articles:
                1's when a user has interacted with an article, 0 otherwise

    OUTPUT:
    neighbors_df - (pandas dataframe) a dataframe with:
                    neighbor_id - is a neighbor user_id
                    similarity - measure of the similarity of each user to the provided user_id
                    num_interactions - the number of articles viewed by the user - if a u

    Other Details - sort the neighbors_df by the similarity and then by number of interactions where
                    highest of each is higher in the dataframe
    '''
    # Get similarities
    similarities_matrix = user_item.dot(np.transpose(user_item))
    similarities_user = similarities_matrix[similarities_matrix.index == user_id].transpose()
    similarities_user.columns = ['similarities']

    # Get interactions
    interactions = df.groupby('user_id').count().drop(['title'], axis=1)
    interactions.columns = ['interactions']

    # Merge similarities with interactions
    neighbors_df_not_sorted = similarities_user.join(interactions, how='left')
    neighbors_df = neighbors_df_not_sorted.sort_values(by=['similarities', 'interactions'], ascending=False)

    return neighbors_df # Return the dataframe specified in the doc_string


def user_user_recs_part2(user_id, m=10):
    '''
    INPUT:
    user_id - (int) a user id
    m - (int) the number of recommendations you want for the user

    OUTPUT:
    recs - (list) a list of recommendations for the user by article id
    rec_names - (list) a list of recommendations for the user by article title

    Description:
    Loops through the users based on closeness to the input user_id
    For each user - finds articles the user hasn't seen before and provides them as recs
    Does this until m recommendations are found

    Notes:
    * Choose the users that have the most total article interactions
    before choosing those with fewer article interactions.

    * Choose the articles with the most total interactions
    before choosing those with fewer total interactions.
    '''
    articles_seen = get_user_articles(user_id)
    closest_users = get_top_sorted_users(user_id).index.tolist()
    closest_users.remove(user_id)
    top_articles_all = get_top_article_ids(len(df))

    # Keep the recommended articles here
    recs = np.array([])

    # Go through the users and identify articles they have seen the user hasn't seen
    for user in closest_users:
        users_articles_seen_id, users_articles_seen_name = get_user_articles(user)

        # Sort articles according to the number of interactions
        users_articles_seen_id = sorted(users_articles_seen_id, key=lambda x: top_articles_all.index(x))

        # Obtain recommendations for each neighbor
        new_recs = np.setdiff1d(users_articles_seen_id, articles_seen, assume_unique=True)

        # Update recs with new recs
        recs = np.unique(np.concatenate([new_recs, recs], axis=0))

        # If we have enough recommendations exit the loop
        if len(recs) > m-1:
            break

    # Pick the first m
    recs = recs[0:m].tolist()

    # Get rec names
    rec_names = get_article_names(recs)

    return recs, rec_names

# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
The top 10 recommendations for user 20 are the following article ids: ['1024.0', '1085.0', '109.0', '1150.0', '1151.0', '1152.0', '1153.0', '1154.0', '1157.0', '1160.0'] The top 10 recommendations for user 20 are the following article names: ['airbnb data for analytics: washington d.c. listings', 'analyze accident reports on amazon emr spark', 'tensorflow quick tips', 'airbnb data for analytics: venice listings', 'airbnb data for analytics: venice calendar', 'airbnb data for analytics: venice reviews', 'using deep learning to reconstruct high-resolution audio', 'airbnb data for analytics: vienna listings', 'airbnb data for analytics: vienna calendar', 'airbnb data for analytics: chicago listings']
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
`5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.
### Tests with a dictionary of results
user1_most_sim = get_top_sorted_users(1).iloc[1].name      # Find the user that is most similar to user 1
user131_10th_sim = get_top_sorted_users(131).iloc[10].name # Find the 10th most similar user to user 131

## Dictionary Test Here
sol_5_dict = {
    'The user that is most similar to user 1.': user1_most_sim,
    'The user that is the 10th most similar to user 131': user131_10th_sim,
}

t.sol_5_test(sol_5_dict)
This all looks good! Nice job!
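The two-key sort used in `get_top_sorted_users` (similarity first, activity second, both descending) can be sketched in isolation. The `neighbors` dataframe below is hypothetical toy data, not the notebook's output.

```python
import pandas as pd

# Hypothetical neighbor table: users 5 and 7 are tied on similarity.
neighbors = pd.DataFrame({
    'neighbor_id':      [5, 7, 9],
    'similarity':       [4, 4, 2],
    'num_interactions': [10, 30, 50],
})

# Sort by similarity first, then by activity, both descending,
# so ties in similarity favour the more active user.
ranked = neighbors.sort_values(['similarity', 'num_interactions'],
                               ascending=False).reset_index(drop=True)
```

User 7 now ranks ahead of user 5 despite equal similarity, because it has more total interactions; user 9's higher activity cannot overcome its lower similarity.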
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
`6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.

A new user has no interaction history, so the user-user functions cannot be applied; we would instead provide the top articles across all users (a rank-based recommendation).

`7.` Using your existing functions, provide the top 10 recommended articles you would provide for a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.
new_user = '0.0'

# What would your recommendations be for this new user '0.0'?  As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to
new_user_recs = get_top_article_ids(10)

assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops!  It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."

print("That's right!  Nice job!")
That's right! Nice job!
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED)

Another method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns holds content related information.

`1.` Use the function body below to create a content based recommender. Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations.

This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
def make_content_recs():
    '''
    INPUT:

    OUTPUT:

    '''
_____no_output_____
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
`2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender?

This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.

**Write an explanation of your content based recommendation system here.**

`3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again, no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.

This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
# make recommendations for a brand new user

# make recommendations for a user who has only interacted with article id '1427.0'
_____no_output_____
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
Part V: Matrix Factorization

In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.

`1.` You should have already created a **user_item** matrix in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook.
# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.p')

# quick look at the matrix
user_item_matrix.head()
_____no_output_____
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
`2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
# Perform SVD on the User-Item Matrix Here
u, s, vt = np.linalg.svd(user_item_matrix)
s.shape, u.shape, vt.shape

# Change the dimensions of u, s, and vt as necessary
# update the shape of u and store in u_new
u_new = u[:, :len(s)]

# update the shape of s and store in s_new
s_new = np.zeros((len(s), len(s)))
s_new[:len(s), :len(s)] = np.diag(s)

# s has one singular value per article, so vt already has the
# right shape and vt_new is simply vt
vt_new = vt

s_new.shape, u_new.shape, vt_new.shape
_____no_output_____
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
There are no null values in the matrix, since we are recording whether a user has seen an article rather than a rating. Therefore plain SVD is enough; we do not need FunkSVD, which is required when the matrix contains missing values.

`3.` Now for the tricky part: how do we choose the number of latent features to use? Running the cell below, you can see that as the number of latent features increases, we obtain a lower error rate when predicting the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
num_latent_feats = np.arange(10, 700+10, 20)
sum_errs = []

for k in num_latent_feats:
    # restructure with k latent features
    s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]

    # take dot product
    user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))

    # compute error for each prediction to actual value
    diffs = np.subtract(user_item_matrix, user_item_est)

    # total errors and keep track of them
    err = np.sum(np.sum(np.abs(diffs)))
    sum_errs.append(err)

plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
_____no_output_____
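The truncation step inside the loop above (keep the k largest singular values, drop the rest) can be sketched on a tiny matrix. This is a toy illustration, not the notebook's user-item matrix; `A` and `k` are made-up values.

```python
import numpy as np

# Small dense 0/1 matrix: full SVD applies because there are no missing entries.
A = np.array([[1., 1., 0.],
              [1., 0., 0.],
              [0., 1., 1.]])

u, s, vt = np.linalg.svd(A)

# Rank-k reconstruction: keep only the k largest singular values.
k = 2
A_k = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

# Keeping all singular values recovers A exactly (up to float error).
A_full = u[:, :3] @ np.diag(s) @ vt
```

By the Eckart-Young theorem, the rank-2 reconstruction error cannot exceed the rank-1 error, which is why training accuracy in the plot rises monotonically with the number of latent features.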
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
`4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below.

Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below:

* How many users can we make predictions for in the test set?
* How many users are we not able to make predictions for because of the cold start problem?
* How many articles can we make predictions for in the test set?
* How many articles are we not able to make predictions for because of the cold start problem?
df_train = df.head(40000)
df_test = df.tail(5993)

def create_test_and_train_user_item(df_train, df_test):
    '''
    INPUT:
    df_train - training dataframe
    df_test - test dataframe

    OUTPUT:
    user_item_train - a user-item matrix of the training dataframe
                      (unique users for each row and unique articles for each column)
    user_item_test - a user-item matrix of the testing dataframe
                     (unique users for each row and unique articles for each column)
    test_idx - all of the test user ids
    test_arts - all of the test article ids
    '''
    # Get user_item_matrices
    user_item_train = create_user_item_matrix(df_train)
    user_item_test = create_user_item_matrix(df_test)

    # Get user ids
    test_idx = user_item_test.index.tolist()

    # Get article ids
    test_arts = user_item_test.columns.droplevel().tolist()

    return user_item_train, user_item_test, test_idx, test_arts

user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)

print('1. How many users can we make predictions for in the test set?')
qst_1 = len(np.intersect1d(test_idx, user_item_train.index.tolist(), assume_unique=True))
print(qst_1)
print('')

print('2. How many users in the test set are we not able to make predictions for because of the cold start problem?')
print(len(test_idx) - qst_1)
print('')

print('3. How many movies can we make predictions for in the test set?')
qst_3 = len(np.intersect1d(test_arts, user_item_train.columns.droplevel().tolist(), assume_unique=True))
print(qst_3)
print('')

print('4. How many movies in the test set are we not able to make predictions for because of the cold start problem')
print(len(test_arts) - qst_3)
print('')

# Replace the values in the dictionary below
a = 662
b = 574
c = 20
d = 0

sol_4_dict = {
    'How many users can we make predictions for in the test set?': c,
    'How many users in the test set are we not able to make predictions for because of the cold start problem?': a,
    'How many movies can we make predictions for in the test set?': b,
    'How many movies in the test set are we not able to make predictions for because of the cold start problem?': d
}

t.sol_4_dict = sol_4_dict if False else None  # (no-op)
t.sol_4_test(sol_4_dict)
Awesome job! That's right! All of the test movies are in the training data, but there are only 20 test users that were also in the training set. All of the other users that are in the test set we have no data on. Therefore, we cannot make predictions for these users using SVD.
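The warm/cold-start split above hinges on two set operations. This is a sketch of the same counting on hand-made id arrays; `train_users` and `test_users` are illustrative, not the project's data.

```python
import numpy as np

train_users = np.array([1, 2, 3, 4, 5])
test_users  = np.array([4, 5, 6, 7])

# Users we can predict for: present in both train and test.
warm = np.intersect1d(test_users, train_users)

# Cold-start users: in the test set but never seen in training.
cold = np.setdiff1d(test_users, train_users)
```

`len(warm)` and `len(cold)` correspond to answers `c` and `a` in the solution dictionary; both results come back sorted.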
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
Please note that I had to modify 'articles' to 'movies', otherwise the test function would not accept the result. However, we are talking about articles here, not movies.

`5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`.

Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data.
# fit SVD on the user_item_train matrix
u_train, s_train, vt_train = np.linalg.svd(user_item_train) # fit svd similar to above, then use the cells below
s_train.shape, u_train.shape, vt_train.shape

# Find users to predict in test matrix
users_to_predict = np.intersect1d(test_idx, user_item_train.index.tolist(), assume_unique=True).tolist()

# Get filtered test matrix
user_item_test_f = user_item_test[user_item_test.index.isin(users_to_predict)]

# Get position of the users to predict in the train matrix
users_train_pos = user_item_train.reset_index()[user_item_train.reset_index()['user_id'].isin(users_to_predict)].index.tolist()

# Find articles to predict in test matrix
articles_to_predict = np.intersect1d(test_arts, user_item_test.columns.droplevel().tolist(), assume_unique=True).tolist()

# Get position of the articles to predict in the train matrix
articles_train_pos = user_item_train.columns.droplevel().isin(articles_to_predict)

# Get u and vt matrices for train
u_test = u_train[users_train_pos, :]
vt_test = vt_train[:, articles_train_pos]

from sklearn.metrics import accuracy_score

# Use these cells to see how well you can use the training
# decomposition to predict on test data
num_latent_feats = np.arange(10, 700+10, 20)
sum_errs_train = []
sum_errs_test = []
train_errs = []
test_errs = []

for k in num_latent_feats:
    # restructure with k latent features
    s_train_new, u_train_new, vt_train_new = np.diag(s_train[:k]), u_train[:, :k], vt_train[:k, :]
    u_test_new, vt_test_new = u_test[:, :k], vt_test[:k, :]

    # take dot product
    user_item_est_train = np.around(np.dot(np.dot(u_train_new, s_train_new), vt_train_new))
    user_item_est_test = np.around(np.dot(np.dot(u_test_new, s_train_new), vt_test_new))

    # compute error for each prediction to actual value
    diffs_train = np.subtract(user_item_train, user_item_est_train)
    diffs_test = np.subtract(user_item_test_f, user_item_est_test)

    # total errors and keep track of them
    err_train = np.sum(np.sum(np.abs(diffs_train)))
    sum_errs_train.append(err_train)
    err_test = np.sum(np.sum(np.abs(diffs_test)))
    sum_errs_test.append(err_test)

# number of interactions
nb_interactions_train = user_item_est_train.shape[0] * user_item_est_train.shape[1]
nb_interactions_test = user_item_est_test.shape[0] * user_item_est_test.shape[1]

plt.plot(num_latent_feats, 1 - np.array(sum_errs_train)/nb_interactions_train, label='Train');
plt.plot(num_latent_feats, 1 - np.array(sum_errs_test)/nb_interactions_test, label='Test');
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
plt.legend()
plt.show()
_____no_output_____
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
`6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles? When using SVD on the test dataset, the accuracy decreases with the number of latent factors. This is due to the fact that only a small amount of users (20) are common between the train and test dataset. This makes that our data (0s and 1s) is imbalanced, that there is a big disproportionate ratio of 0s in the dataset compared to the 1s.Moreover, when increasing the number of latent factors, we are increasing the overfitting on the training data (accuracy increases), which also explains why the accuracy on the testing dataset decreases.In order to understand if our results are working in practice, I would conduct an experiment. I would split my users into three groups with different treatments:- Group 1: do not receive any recommendation- Group 2: receives recommendations from a mix of collaborative filtering and top ranked- Group 3: receives recommendations from matrix factorizationWe would split the users on a cookie-based, so that they see the same experience everytime they check the website. I would do the following tests:- Group 1 vs. Group 2, where the null hypothesis is that there is no difference between not providing recommendations to users and providing collaborative + top ranked-based recommendations- Group 1 vs. Group 3, where the null hypothesis that there is no difference between not providing recommendations to suers and providing matrix factorization-based recommendationsThe success metric will be the number of clicks on articles per user per session on the website. This would mainly focus on the novelty effect of the recommendations, as we would assume users would click on articles if they have not seen them before. 
It could also be beneficial if the users could rate the article (even with just thumbs up or down), in order to know if the article was interesting for them, and therefore if the recommendations are relevant.This success metric would need to be statistically significant to go ahead with implementing the recommendations engine, and which method. If it is not, we'd need to understand if other factors would justify the implementation of the recommendations engine.We would also check the invariant metric within our groups to be sure that they are equivalent (e.g. one group contains a high share of users who have already seen more than 200 articles (for example) while another group contains a high share of users who have only been through 10 articles).
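The group-vs-group comparisons described above would typically be evaluated with a two-proportion z-test on click-through rates. This is a minimal sketch with made-up click counts; the function name and the numbers are illustrative only, not experiment results.

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z statistic for H0: click rates of groups A and B are equal (pooled estimate)."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    p = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: control (no recommendations) vs. recommendation group.
z = two_proportion_z(clicks_a=120, n_a=1000, clicks_b=160, n_b=1000)
```

At the usual 5% level, |z| > 1.96 would lead us to reject the null hypothesis of equal click rates between the two groups.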
from subprocess import call
call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])
_____no_output_____
IBM-pibs
Recommendations_with_IBM.ipynb
julie-data/recommendations-ibm-watson
0.0. IMPORTS
import inflection
import math
import datetime

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

from IPython.core.display import HTML
from IPython.display import Image
_____no_output_____
FSFAP
notebooks/c0.2-sg-feature-engineering.ipynb
sindolfoGomes/rossmann-stores-sales
0.1. Helper Functions
def load_csv(path):
    df = pd.read_csv(path, low_memory=False)
    return df

def rename_columns(df, old_columns):
    snakecase = lambda x: inflection.underscore(x)
    cols_new = list(map(snakecase, old_columns))
    print(f"Old columns: {df.columns.to_list()}")

    # Rename
    df.columns = cols_new
    print(f"\nNew columns: {df.columns.to_list()}")
    print('\n', df.columns)
    return df

def show_dimensions(df):
    # note: uses the df argument, not the global df1
    print(f"Number of Rows: {df.shape[0]}")
    print(f"Number of Columns: {df.shape[1]}")
    print(f"Shape: {df.shape}")
    return None

def show_data_types(df):
    print(df.dtypes)
    return None

def check_na(df):
    print(df.isna().sum())
    return None

def show_descriptive_statistical(df):
    # Central Tendency - mean, median
    ct1 = pd.DataFrame(df.apply(np.mean)).T
    ct2 = pd.DataFrame(df.apply(np.median)).T

    # Dispersion - std, min, max, range, skew, kurtosis
    d1 = pd.DataFrame(df.apply(np.std)).T
    d2 = pd.DataFrame(df.apply(min)).T
    d3 = pd.DataFrame(df.apply(max)).T
    d4 = pd.DataFrame(df.apply(lambda x: x.max() - x.min())).T
    d5 = pd.DataFrame(df.apply(lambda x: x.skew())).T
    d6 = pd.DataFrame(df.apply(lambda x: x.kurtosis())).T

    m = pd.concat([d2, d3, d4, ct1, ct2, d1, d5, d6]).T.reset_index()
    m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
    print(m)

def jupyter_settings():
    %matplotlib inline
    %pylab inline
    plt.style.use('ggplot')
    plt.rcParams['figure.figsize'] = [24, 9]
    plt.rcParams['font.size'] = 24
    display(HTML('<style>.container { width:100% !important; }</style>'))
    pd.options.display.max_columns = None
    pd.options.display.max_rows = None
    pd.set_option('display.expand_frame_repr', False)
    sns.set()

jupyter_settings()
Populating the interactive namespace from numpy and matplotlib
FSFAP
notebooks/c0.2-sg-feature-engineering.ipynb
sindolfoGomes/rossmann-stores-sales
0.2. Path Definition
# path
home_path = 'C:\\Users\\sindolfo\\rossmann-stores-sales\\'
raw_data_path = 'data\\raw\\'
interim_data_path = 'data\\interim\\'
figures = 'reports\\figures\\'
_____no_output_____
FSFAP
notebooks/c0.2-sg-feature-engineering.ipynb
sindolfoGomes/rossmann-stores-sales
0.3. Loading Data
## Historical data including Sales
df_sales_raw = load_csv(home_path+raw_data_path+'train.csv')

## Supplemental information about the stores
df_store_raw = load_csv(home_path+raw_data_path+'store.csv')

# Merge
df_raw = pd.merge(df_sales_raw, df_store_raw, how='left', on='Store')
_____no_output_____
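Left-joining the store table onto the sales table, as above, silently multiplies sales rows if the store table ever contains duplicate `Store` keys. A minimal sketch on toy dataframes (illustrative names and values, not the Rossmann files) shows how pandas' `validate='m:1'` guards against that:

```python
import pandas as pd

sales = pd.DataFrame({'Store': [1, 2, 2], 'Sales': [100, 200, 150]})
store = pd.DataFrame({'Store': [1, 2], 'StoreType': ['a', 'c']})

# Left join keeps every sales row; validate='m:1' raises MergeError
# if `store` had duplicate Store keys that would multiply the sales rows.
merged = pd.merge(sales, store, how='left', on='Store', validate='m:1')
```

The join preserves the three sales rows and attaches `StoreType` from the one matching store row each.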
FSFAP
notebooks/c0.2-sg-feature-engineering.ipynb
sindolfoGomes/rossmann-stores-sales
1.0. DATA DESCRIPTION
df1 = df_raw.copy()
df1.to_csv(home_path+interim_data_path+'df1.csv')
_____no_output_____
FSFAP
notebooks/c0.2-sg-feature-engineering.ipynb
sindolfoGomes/rossmann-stores-sales
Data fields

Most of the fields are self-explanatory. The following are descriptions for those that aren't.

- **Id** - an Id that represents a (Store, Date) duple within the test set
- **Store** - a unique Id for each store
- **Sales** - the turnover for any given day (this is what you are predicting)
- **Customers** - the number of customers on a given day
- **Open** - an indicator for whether the store was open: 0 = closed, 1 = open
- **StateHoliday** - indicates a state holiday. Normally all stores, with few exceptions, are closed on state holidays. Note that all schools are closed on public holidays and weekends. a = public holiday, b = Easter holiday, c = Christmas, 0 = None
- **SchoolHoliday** - indicates if the (Store, Date) was affected by the closure of public schools
- **StoreType** - differentiates between 4 different store models: a, b, c, d
- **Assortment** - describes an assortment level: a = basic, b = extra, c = extended
- **CompetitionDistance** - distance in meters to the nearest competitor store
- **CompetitionOpenSince[Month/Year]** - gives the approximate year and month of the time the nearest competitor was opened
- **Promo** - indicates whether a store is running a promo on that day
- **Promo2** - Promo2 is a continuing and consecutive promotion for some stores: 0 = store is not participating, 1 = store is participating
- **Promo2Since[Year/Week]** - describes the year and calendar week when the store started participating in Promo2
- **PromoInterval** - describes the consecutive intervals Promo2 is started, naming the months the promotion is started anew. E.g. "Feb,May,Aug,Nov" means each round starts in February, May, August, November of any given year for that store

1.1. Rename Columns
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open',
            'Promo', 'StateHoliday', 'SchoolHoliday', 'StoreType',
            'Assortment', 'CompetitionDistance', 'CompetitionOpenSinceMonth',
            'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek',
            'Promo2SinceYear', 'PromoInterval']

df1 = rename_columns(df1, cols_old)
Old columns: ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo', 'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment', 'CompetitionDistance', 'CompetitionOpenSinceMonth', 'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek', 'Promo2SinceYear', 'PromoInterval'] New columns: ['store', 'day_of_week', 'date', 'sales', 'customers', 'open', 'promo', 'state_holiday', 'school_holiday', 'store_type', 'assortment', 'competition_distance', 'competition_open_since_month', 'competition_open_since_year', 'promo2', 'promo2_since_week', 'promo2_since_year', 'promo_interval'] Index(['store', 'day_of_week', 'date', 'sales', 'customers', 'open', 'promo', 'state_holiday', 'school_holiday', 'store_type', 'assortment', 'competition_distance', 'competition_open_since_month', 'competition_open_since_year', 'promo2', 'promo2_since_week', 'promo2_since_year', 'promo_interval'], dtype='object')
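`rename_columns` is imported from a project helper module that is not shown in this notebook. A minimal sketch consistent with the output above (CamelCase headers converted to snake_case; the regex approach is an assumption) could be:

```python
import re

import pandas as pd


def rename_columns(df, cols_old):
    """Hypothetical sketch: rename CamelCase columns to snake_case."""
    snakecase = lambda s: re.sub(r'(?<!^)(?=[A-Z])', '_', s).lower()
    cols_new = [snakecase(c) for c in cols_old]
    print('Old columns:', cols_old)
    print('New columns:', cols_new)
    df = df.copy()
    df.columns = cols_new
    return df
```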
1.2. Data Dimensions
show_dimensions(df1)
Number of Rows: 1017209 Number of Columns: 18 Shape: (1017209, 18)
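`show_dimensions`, along with the `show_data_types` and `check_na` helpers used in the next sections, comes from a project module not included here. Minimal pandas sketches consistent with the printed output (the exact formatting is an assumption):

```python
import pandas as pd


def show_dimensions(df):
    # Print row/column counts and the full shape tuple.
    print('Number of Rows:', df.shape[0])
    print('Number of Columns:', df.shape[1])
    print('Shape:', df.shape)


def show_data_types(df):
    # Print the dtype of every column.
    print(df.dtypes)


def check_na(df):
    # Return the count of missing values per column.
    return df.isna().sum()
```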
1.3. Data Types
show_data_types(df1)

# 'date' is loaded as an object (string) type, which is wrong; cast it to
# datetime here. Other type changes are made in the "Type Changes" section.
df1['date'] = pd.to_datetime(df1['date'])
store int64 day_of_week int64 date object sales int64 customers int64 open int64 promo int64 state_holiday object school_holiday int64 store_type object assortment object competition_distance float64 competition_open_since_month float64 competition_open_since_year float64 promo2 int64 promo2_since_week float64 promo2_since_year float64 promo_interval object dtype: object
1.4. Check NA
check_na(df1)

# Columns with NA values:
#   competition_distance                2642
#   competition_open_since_month      323348
#   competition_open_since_year       323348
#   promo2_since_week                 508031
#   promo2_since_year                 508031
#   promo_interval                    508031
store 0 day_of_week 0 date 0 sales 0 customers 0 open 0 promo 0 state_holiday 0 school_holiday 0 store_type 0 assortment 0 competition_distance 2642 competition_open_since_month 323348 competition_open_since_year 323348 promo2 0 promo2_since_week 508031 promo2_since_year 508031 promo_interval 508031 dtype: int64
1.5. Fillout NA
# competition_distance: distance in meters to the nearest competitor store
#
# Assumption: if a row is NA in this column, there is no close competitor.
# Represent this with a number much larger than the maximum value of the
# competition_distance variable: 250000.
df1['competition_distance'] = df1['competition_distance'].apply(
    lambda x: 250000 if math.isnan(x) else x)

# competition_open_since_month: approximate month the nearest competitor opened.
#
# Assumption: keep this variable because it is important to have something
# that expresses "time since it happened". If it is NA, copy the month of
# sale of that row.
df1['competition_open_since_month'] = df1.apply(
    lambda x: x['date'].month if math.isnan(x['competition_open_since_month'])
    else x['competition_open_since_month'], axis=1)

# competition_open_since_year: same assumption, using the year of sale.
df1['competition_open_since_year'] = df1.apply(
    lambda x: x['date'].year if math.isnan(x['competition_open_since_year'])
    else x['competition_open_since_year'], axis=1)

# promo2_since_week: calendar week when the store started participating in
# Promo2. Same assumption: fall back to the week of sale.
df1['promo2_since_week'] = df1.apply(
    lambda x: x['date'].week if math.isnan(x['promo2_since_week'])
    else x['promo2_since_week'], axis=1)

# promo2_since_year: year when the store started participating in Promo2.
# Fall back to the year (not the week) of sale.
df1['promo2_since_year'] = df1.apply(
    lambda x: x['date'].year if math.isnan(x['promo2_since_year'])
    else x['promo2_since_year'], axis=1)

# promo_interval: months in which Promo2 restarts. Fill NA with 0, map the
# sale month to its name, and flag whether the sale month falls inside the
# store's promo interval.
month_map = {1: 'Jan', 2: 'Feb', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun',
             7: 'Jul', 8: 'Aug', 9: 'Sep', 10: 'Oct', 11: 'Nov', 12: 'Dec'}

df1['promo_interval'].fillna(0, inplace=True)
df1['month_map'] = df1['date'].dt.month.map(month_map)
df1['is_promo'] = df1[['promo_interval', 'month_map']].apply(
    lambda x: 0 if x['promo_interval'] == 0
    else 1 if x['month_map'] in x['promo_interval'].split(',')
    else 0, axis=1)
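The `is_promo` lambda is compact; the same rule written as a plain function makes the intent easier to check (the example values below are illustrative):

```python
def is_promo(promo_interval, month_name):
    """Return 1 if the sale month is one of the months in which Promo2
    restarts for this store, else 0. promo_interval is 0 (filled NA) or
    a comma-separated string such as 'Feb,May,Aug,Nov'."""
    if promo_interval == 0:
        return 0
    return 1 if month_name in promo_interval.split(',') else 0


print(is_promo('Feb,May,Aug,Nov', 'May'))  # 1
print(is_promo('Feb,May,Aug,Nov', 'Jun'))  # 0
print(is_promo(0, 'May'))                  # 0
```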
1.6. Type Changes
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype('int64')
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype('int64')
df1['promo2_since_week'] = df1['promo2_since_week'].astype('int64')
df1['promo2_since_year'] = df1['promo2_since_year'].astype('int64')
1.7. Descriptive Statistics
num_attributes = df1.select_dtypes(include=['int64', 'float64'])
cat_attributes = df1.select_dtypes(exclude=['int64', 'float64', 'datetime64[ns]'])
1.7.1 Numerical Attributes
show_descriptive_statistical(num_attributes)

sns.displot(df1['sales'])
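`show_descriptive_statistical` is another project helper not defined in this notebook. A sketch that builds the usual central-tendency and dispersion table with pandas (the choice and naming of the statistics are assumptions):

```python
import pandas as pd


def show_descriptive_statistical(num_attributes):
    """Hypothetical sketch: summary statistics for every numeric column."""
    return pd.DataFrame({
        'min': num_attributes.min(),
        'max': num_attributes.max(),
        'range': num_attributes.max() - num_attributes.min(),
        'mean': num_attributes.mean(),
        'median': num_attributes.median(),
        'std': num_attributes.std(),
        'skew': num_attributes.skew(),
        'kurtosis': num_attributes.kurtosis(),
    })
```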
1.7.2 Categorical Attributes
cat_attributes.apply(lambda x: x.unique().shape[0])

aux1 = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0)]

plt.subplot(1, 3, 1)
sns.boxplot(x='state_holiday', y='sales', data=aux1)

plt.subplot(1, 3, 2)
sns.boxplot(x='store_type', y='sales', data=aux1)

plt.subplot(1, 3, 3)
sns.boxplot(x='assortment', y='sales', data=aux1)
2.0. FEATURE ENGINEERING
df2 = df1.copy()
df2.to_csv(home_path + interim_data_path + 'df2.csv')
2.1. Hypothesis Mind Map
Image(home_path+figures+'mind-map-hypothesis.png')
2.2 Creating hypotheses

2.2.1 Store Hypotheses

**1.** Stores with larger staff should sell more.
**2.** Stores with more inventory should sell more.
**3.** Stores with close competitors should sell less.
**4.** Stores with a larger assortment should sell more.
**5.** Stores with more employees should sell more.
**6.** Stores with longer-term competitors should sell more.

2.2.2 Product Hypotheses

**1.** Stores with more marketing investment should sell more.
**2.** Stores with more products on display should sell more.
**3.** Stores that have cheaper products should sell more.
**4.** Stores that have more inventory should sell more.
**5.** Stores that do more promotions should sell more.
**6.** Stores with promotions active for longer should sell more.
**7.** Stores with more promotion days should sell more.
**8.** Stores with more consecutive promotions should sell more.

2.2.3 Temporal Hypotheses

**1.** Stores that have more holidays should sell less.
**2.** Stores that open within the first six months should sell more.
**3.** Stores that open on weekends should sell more.
**4.** Stores open during the Christmas holiday should sell more.
**5.** Stores should sell more over the years.
**6.** Stores should sell more after the 10th of each month.
**7.** Stores should sell more in the second half of the year.
**8.** Stores should sell less on weekends.
**9.** Stores should sell less during school holidays.

2.3. Final List of Hypotheses

**1.** Stores with close competitors should sell less.
**2.** Stores with a larger assortment should sell more.
**3.** Stores with longer-term competitors should sell more.
**4.** Stores with promotions active for longer should sell more.
**5.** Stores with more promotion days should sell more.
**6.** Stores with more consecutive promotions should sell more.
**7.** Stores open during the Christmas holiday should sell more.
**8.** Stores should sell more over the years.
**9.** Stores should sell more after the 10th of each month.
**10.** Stores should sell more in the second half of the year.
**11.** Stores should sell less on weekends.
**12.** Stores should sell less during school holidays.

2.4. Feature Engineering

Note: re-run the feature engineering separately for each variable.
# year
df2['year'] = df2['date'].dt.year

# month
df2['month'] = df2['date'].dt.month

# day
df2['day'] = df2['date'].dt.day

# week of year
df2['week_of_year'] = df2['date'].dt.isocalendar().week

# year week
df2['year_week'] = df2['date'].dt.strftime('%Y-%W')

# competition since
# Competition is measured in months and years; combine the two into a date.
df2['competition_since'] = df2.apply(
    lambda x: datetime.datetime(year=x['competition_open_since_year'],
                                month=x['competition_open_since_month'],
                                day=1), axis=1)

# competition_time_month
df2['competition_time_month'] = (
    (df2['date'] - df2['competition_since']) / 30).apply(
        lambda x: x.days).astype('int64')

# promo since
df2['promo_since'] = (df2['promo2_since_year'].astype(str) + '-'
                      + df2['promo2_since_week'].astype(str))
print(df2['promo_since'].sample())
# Append the weekday ('-1' = Monday) so strptime can resolve a full date;
# the format needs '%w' as well, otherwise strptime raises ValueError.
df2['promo_since'] = df2['promo_since'].apply(
    lambda x: datetime.datetime.strptime(x + '-1', '%Y-%W-%w')
    - datetime.timedelta(days=7))

# promo_time_week
df2['promo_time_week'] = (
    (df2['date'] - df2['promo_since']) / 7).apply(
        lambda x: x.days).astype('int64')

# assortment
df2['assortment'] = df2['assortment'].apply(
    lambda x: 'basic' if x == 'a' else 'extra' if x == 'b' else 'extended')

df2.head().T
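The year-week parsing relies on Python's `%Y-%W-%w` directives: appending `-1` pins the parsed date to the Monday of that `%W` week. A quick standalone check:

```python
import datetime

# Year 2015, week 22 (%W weeks start on Monday), weekday 1 = Monday.
d = datetime.datetime.strptime('2015-22' + '-1', '%Y-%W-%w')
print(d.date(), d.strftime('%A'))  # 2015-06-01 Monday
```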
09 Compute best-fit ellipsoid approximation of the whole fruit

We now go for macro modeling. For each fruit, a point cloud, a collection of $(x,y,z)$ coordinates in space, was defined by the centers of all its individual oil glands.

- We compute an ellipsoid that best fits this point cloud.
- To that end, we do an ordinary least squares fit to find the coefficients of the quadratic equation that best approximates the point cloud.
- The algebraic ellipsoid fit was adapted from [Li and Griffiths (2004)](https://doi.org/10.1109/GMAP.2004.1290055).
- This produces a 10-dimensional vector that algebraically defines an ellipsoid.
- See [Panou et al. (2020)](https://doi.org/10.1515/jogs-2020-0105) on how to convert this vector into geometric parameters.

Approximating a sweet orange

Approximating a sour orange
import numpy as np
import pandas as pd
import glob
import os
import tifffile as tf
from importlib import reload

import warnings
warnings.filterwarnings("ignore")

import matplotlib.pyplot as plt
%matplotlib inline

import citrus_utils as vitaminC
MIT
jupyter/09_ellipsoid_fruit_fitting.ipynb
amezqui3/vitaminC_morphology
Define the appropriate base/root name and label name

- This is where having consistent file naming pays off
tissue_src = '../data/tissue/'
oil_src = '../data/oil/'

bnames = [os.path.split(x)[-1] for x in sorted(glob.glob(oil_src + 'WR*'))]
for i in range(len(bnames)):
    print(i, '\t', bnames[i])

bname = bnames[0]
L = 3
lname = 'L{:02d}'.format(L)
rotateby = [2, 1, 0]
Load voxel-size data

- The micron size of each voxel depends on the scanning parameters
voxel_filename = '../data/citrus_voxel_size.csv'
voxel_size = pd.read_csv(voxel_filename)
voxsize = (voxel_size.loc[voxel_size.ID == bname, 'voxel_size_microns'].values)[0]
print('Each voxel is of side', voxsize, 'microns')
Each voxel is of side 57.5 microns
Load oil gland centers and align based on spine

- From the previous step, retrieve the `vh` rotation matrix to align the fruit
- The point cloud is made to have mean zero and is scaled according to its voxel size
- The scale now should be in cm
- Plot 2D projections of the oil glands to make sure the fruit is standing upright after rotation
savefig = False

filename = tissue_src + bname + '/' + lname + '/' + bname + '_' + lname + '_vh_alignment.csv'
vh = np.loadtxt(filename, delimiter=',')
print(vh)

oil_dst = oil_src + bname + '/' + lname + '/'
filename = oil_dst + bname + '_' + lname + '_glands.csv'
glands = np.loadtxt(filename, delimiter=',', dtype=float)

glands = np.matmul(glands, np.transpose(vh))
centerby = np.mean(glands, axis=0)
scaleby = 1e4 / voxsize
glands = (glands - centerby) / scaleby

dst = oil_src + bname + '/'
vitaminC.plot_3Dprojections(glands, title=bname + '_' + lname, writefig=savefig, dst=dst)
Compute the general conic parameters

Here we follow the algorithm laid out by [Li and Griffiths (2004)](https://doi.org/10.1109/GMAP.2004.1290055). A general quadratic surface is defined by the equation

$$ax^{2}+by^{2}+cz^{2}+2fxy+2gyz+2hxz+2px+2qy+2rz+d=0.\tag{1}$$

Let
$$\rho = \frac{4J-I}{a^2 + b^2 + c^2},$$
where
$$I = a+b+c,\tag{2}$$
$$J = ab+bc+ac-f^{2}-g^{2}-h^{2},\tag{3}$$
$$K=\det\begin{pmatrix} a & f & h \\ f & b & g \\ h & g & c \end{pmatrix}.\tag{4}$$

These values are invariant under rotation and translation, and equation (1) represents an ellipsoid if $J > 0$ and $IK>0$.

With our observations $\{(x_i,y_i,z_i)\}_{i=1}^{n}$, we would ideally want a vector of parameters $(a,b,c,f,g,h,p,q,r,d)$ such that

$$\begin{pmatrix}
x_1^2 & y_1^2 & z_1^2 & 2x_1y_1 & 2y_1z_1 & 2x_1z_1 & x_1 & y_1 & z_1 & 1\\
x_2^2 & y_2^2 & z_2^2 & 2x_2y_2 & 2y_2z_2 & 2x_2z_2 & x_2 & y_2 & z_2 & 1\\
\vdots& \vdots& \vdots& \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
x_n^2 & y_n^2 & z_n^2 & 2x_ny_n & 2y_nz_n & 2x_nz_n & x_n & y_n & z_n & 1
\end{pmatrix}
\begin{pmatrix} a \\ b \\ \vdots \\ d \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix},$$

or $\mathbf{D}\mathbf{v} = 0$. A solution to the system above can be obtained via Lagrange multipliers:

$$\min_{\mathbf{v}\in\mathbb{R}^{10}}\left\|\mathbf{D}\mathbf{v}\right\|^2, \quad \mathrm{s.t.}\; kJ - I^2 = 1.$$

If $k=4$, the resulting vector $\mathbf{v}$ is guaranteed to describe an ellipsoid.

- Experimental results suggest that the optimization problem also yields ellipsoids for higher $k$'s if there are enough sample points.

---

This whole procedure yields a 10-dimensional vector $(a,b,c,f,g,h,p,q,r,1)$, which is then translated to geometric parameters as shown in [Panou et al. (2020)](https://doi.org/10.1515/jogs-2020-0105).

We finally obtain a `6 x 3` matrix with all the geometric parameters:

```
[ x,y,z coordinates of ellipsoid center ]
[ semi-axes lengths                     ]
[                 |                     ]
[ --     3 x 3 rotation matrix       -- ]
[                 |                     ]
[ x,y,z rotation angles                 ]
```
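The constrained problem above is what `vitaminC.ell_algebraic_fit2` solves. As a simplified illustration only (using the plain $\|\mathbf{v}\|=1$ normalization instead of the Li-Griffiths constraint $kJ-I^2=1$), the design matrix $\mathbf{D}$ can be built and $\min\|\mathbf{D}\mathbf{v}\|$ solved with an SVD:

```python
import numpy as np


def fit_quadric_svd(pts):
    """Least-squares quadric fit: min ||D v|| s.t. ||v|| = 1.
    Column order matches the text: (a, b, c, f, g, h, p, q, r, d)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    D = np.column_stack([x * x, y * y, z * z,
                         2 * x * y, 2 * y * z, 2 * x * z,
                         x, y, z, np.ones_like(x)])
    # The right singular vector of the smallest singular value minimizes ||D v||.
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return Vt[-1]


# Sanity check: points on a sphere of radius 2 centered at the origin satisfy
# x^2 + y^2 + z^2 - 4 = 0, i.e. v is proportional to (1, 1, 1, 0, ..., 0, -4).
rng = np.random.default_rng(0)
u = rng.normal(size=(200, 3))
pts = 2.0 * u / np.linalg.norm(u, axis=1, keepdims=True)
v = fit_quadric_svd(pts)
v = v / v[0]  # normalize so the x^2 coefficient is 1
```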
bbox = (np.max(glands, axis=0) - np.min(glands, axis=0)) * .5
guess = np.argsort(np.argsort(bbox))
print(bbox)
print(guess[rotateby])

datapoints = glands.T

filename = oil_src + bname + '/' + lname + '/' + bname + '_' + lname + '_vox_v_ell.csv'
ell_v_params, flag = vitaminC.ell_algebraic_fit2(datapoints, k=4)
print(np.around(ell_v_params, 3), '\n', flag, 'ellipsoid\n')
np.savetxt(filename, ell_v_params, delimiter=',')

filename = oil_src + bname + '/' + lname + '/' + bname + '_' + lname + '_vox_m_ell.csv'
ell_params = vitaminC.get_ell_params_from_vector(ell_v_params, guess[rotateby])
np.savetxt(filename, np.vstack(tuple(ell_params.values())), delimiter=',')
print(np.vstack(tuple(ell_params.values())).shape)
ell_params
Project the oil gland centers to the best-fit ellipsoid

- The oil gland point cloud is translated to the center of the best-fit ellipsoid.
- Projection will be **geocentric**: trace a ray from the origin to the oil gland and see where it intercepts the ellipsoid.

Additionally, we can compute these projections in terms of geodetic coordinates:

- longitude $\lambda\in[-\pi,\pi]$
- latitude $\phi\in[-\frac\pi2,\frac\pi2]$
- See [Diaz-Toca _et al._ (2020)](https://doi.org/10.1016/j.cageo.2020.104551) for more details.

The geodetic coordinates are invariant with respect to ellipsoid size, as long as the ratio between its semi-axes lengths remains constant.

- These geodetic coordinates are a natural way to translate our data to the sphere
- Later, this will allow us to draw machinery from directional statistics ([Pewsey and García-Portugués, 2021](https://doi.org/10.1007/s11749-021-00759-x)).

Results are saved in an `N x 3` matrix, where `N` is the number of individual oil glands. Each row of the matrix is

```
[ longitude   latitude   residue ]
```

- The residue is the perpendicular distance from the oil gland to the ellipsoid surface.
footpoints = 'geocentric'
_, xyz = vitaminC.get_footpoints(datapoints, ell_params, footpoints)

rho = vitaminC.ell_rho(ell_params['axes'])
print(rho)

eglands = xyz - ell_params['center'].reshape(-1, 1)
eglands = eglands[rotateby]

cglands = datapoints - ell_params['center'].reshape(-1, 1)
cglands = cglands[rotateby]

eglands_params = {'center': np.zeros(len(eglands)),
                  'axes': ell_params['axes'],
                  'rotation': np.identity(len(eglands))}
geodetic, _ = vitaminC.get_footpoints(eglands, eglands_params, footpoints)

filename = oil_dst + bname + '_' + lname + '_' + footpoints + '.csv'
np.savetxt(filename, geodetic.T, delimiter=',')
print('Saved', filename)

pd.DataFrame(geodetic.T).describe()
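For an ellipsoid already centered and axis-aligned, the geocentric footpoint has a closed form: scale each point along its ray from the origin until it lands on the surface. A small sketch of that idea (the real `get_footpoints` also handles rotation and the geodetic output):

```python
import numpy as np


def geocentric_project(p, axes):
    """Scale p along the ray from the origin onto the axis-aligned
    ellipsoid (x/a)^2 + (y/b)^2 + (z/c)^2 = 1."""
    t = 1.0 / np.sqrt(np.sum((p / axes) ** 2))
    return t * p


q = geocentric_project(np.array([2.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0]))
print(q)  # [1. 0. 0.]
```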
Plot the best-fit ellipsoid sphere and the gland projections

- Visual sanity check
domain_lon = [-np.pi, np.pi]
domain_lat = [-.5 * np.pi, .5 * np.pi]
lonN = 25
latN = 25

longitude = np.linspace(*domain_lon, lonN)
latitude = np.linspace(*domain_lat, latN)
shape_lon, shape_lat = np.meshgrid(longitude, latitude)
lonlat = np.vstack((np.ravel(shape_lon), np.ravel(shape_lat)))
ecoords = vitaminC.ellipsoid(*(lonlat), *ell_params['axes'])

title = bname + '_' + lname + ' - ' + footpoints.title() + ' projection'
markersize = 2
sidestep = np.min(bbox)
alpha = .5
fs = 20

filename = oil_dst + '_'.join(np.array(title.split(' '))[[0, 2]])
vitaminC.plot_ell_comparison(cglands, eglands, ecoords, title, sidestep,
                             savefig=savefig, filename=filename)
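`vitaminC.ellipsoid` maps longitude/latitude pairs to surface points; judging from how it is called above, it presumably implements the standard parametrization $x=a\cos\phi\cos\lambda$, $y=b\cos\phi\sin\lambda$, $z=c\sin\phi$ (the signature below is an assumption):

```python
import numpy as np


def ellipsoid(lon, lat, a, b, c):
    # Map (longitude, latitude) to a point on the ellipsoid surface.
    x = a * np.cos(lat) * np.cos(lon)
    y = b * np.cos(lat) * np.sin(lon)
    z = c * np.sin(lat)
    return np.array([x, y, z])


p = ellipsoid(0.0, 0.0, 1.0, 2.0, 3.0)
print(p)  # [1. 0. 0.]
```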
LAB 4c: Create Keras Wide and Deep model.

**Learning Objectives**

1. Set CSV columns, label column, and column defaults
1. Make dataset of features and label from CSV files
1. Create input layers for raw features
1. Create feature columns for inputs
1. Create wide layer, deep dense hidden layers, and output layer
1. Create custom evaluation metric
1. Build wide and deep model tying all of the pieces together
1. Train and evaluate

Introduction

In this notebook, we'll be using Keras to create a wide and deep model to predict the weight of a baby before it is born.

We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create input layers for the raw features. Next, we'll set up feature columns for the model inputs and build a wide and deep neural network in Keras. We'll create a custom evaluation metric and build our wide and deep model. Finally, we'll train and evaluate our model.

Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4c_keras_wide_and_deep_babyweight.ipynb).

Load necessary libraries
import datetime
import os
import shutil

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

print(tf.__version__)
2.1.1
Apache-2.0
notebooks/end-to-end-structured/labs/.ipynb_checkpoints/4c_keras_wide_and_deep_babyweight-checkpoint.ipynb
jfesteban/Google-ASL
Verify CSV files exist

In the seventh lab of this series, [4a_sample_babyweight](../solutions/4a_sample_babyweight.ipynb), we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist; otherwise go back to that lab and create them.
%%bash
ls *.csv

%%bash
head -5 *.csv
==> eval.csv <== 6.3118345610599995,Unknown,35,Single(1),38 5.43659938092,Unknown,21,Multiple(2+),35 7.43839671988,Unknown,20,Single(1),40 6.37576861704,Unknown,27,Multiple(2+),34 7.62358501996,True,30,Single(1),38 ==> test.csv <== 6.9996768185,Unknown,20,Single(1),39 6.9996768185,Unknown,26,Single(1),37 7.93443680938,Unknown,25,Single(1),39 5.5005334369,Unknown,35,Single(1),34 7.31052860792,Unknown,26,Single(1),39 ==> train.csv <== 6.1883756943399995,Unknown,24,Single(1),38 9.39389698382,Unknown,25,Single(1),38 7.81318256528,Unknown,32,Single(1),41 6.75055446244,True,21,Single(1),36 8.12623897732,True,34,Single(1),40
Create Keras model

Lab Task 1: Set CSV Columns, label column, and column defaults.

Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.

* `CSV_COLUMNS` are going to be the header names of our columns. Make sure that they are in the same order as in the CSV files.
* `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age",
               "plurality", "gestation_weeks"]

# TODO: Add string name for label column
LABEL_COLUMN = "weight_pounds"

# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
Lab Task 2: Make dataset of features and label from CSV files.

Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use `tf.data.experimental.make_csv_dataset`. This will create a CSV dataset object. However, we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
def features_and_labels(row_data):
    """Splits features and labels from feature dictionary.

    Args:
        row_data: Dictionary of CSV column names and tensor values.
    Returns:
        Dictionary of feature tensors and label tensor.
    """
    label = row_data.pop(LABEL_COLUMN)

    return row_data, label  # features, label


def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
    """Loads dataset using the tf.data API from CSV files.

    Args:
        pattern: str, file pattern to glob into list of files.
        batch_size: int, the number of examples per batch.
        mode: tf.estimator.ModeKeys to determine if training or evaluating.
    Returns:
        `Dataset` object.
    """
    # TODO: Make a CSV dataset
    dataset = tf.data.experimental.make_csv_dataset(
        file_pattern=pattern,
        batch_size=batch_size,
        column_names=CSV_COLUMNS,
        column_defaults=DEFAULTS)

    # TODO: Map dataset to features and label
    dataset = dataset.map(map_func=features_and_labels)  # features, label

    # Shuffle and repeat for training
    if mode == tf.estimator.ModeKeys.TRAIN:
        dataset = dataset.shuffle(buffer_size=1000).repeat()

    # Take advantage of multi-threading; 1=AUTOTUNE
    dataset = dataset.prefetch(buffer_size=1)

    return dataset
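`features_and_labels` operates on a dictionary of tensors, but its behavior is easiest to see on a plain dictionary (scalar values stand in for tensors here; the sample row is illustrative):

```python
row = {"weight_pounds": 7.25, "is_male": "True", "mother_age": 26.0,
       "plurality": "Single(1)", "gestation_weeks": 39.0}

features = dict(row)
label = features.pop("weight_pounds")  # the same pop() used above

print(label)                        # 7.25
print("weight_pounds" in features)  # False
```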
Lab Task 3: Create input layers for raw features.

We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers ([tf.keras.layers.Input](https://www.tensorflow.org/api_docs/python/tf/keras/Input)) by defining:

* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
def create_input_layers():
    """Creates dictionary of input layers for each feature.

    Returns:
        Dictionary of `tf.keras.layers.Input` layers for each feature.
    """
    # TODO: Create dictionary of tf.keras.layers.Input for each dense feature
    deep_inputs = {
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype="float32")
        for colname in ["mother_age", "gestation_weeks"]}

    # TODO: Create dictionary of tf.keras.layers.Input for each sparse feature
    wide_inputs = {
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype="string")
        for colname in ["is_male", "plurality"]}

    inputs = {**wide_inputs, **deep_inputs}

    return inputs
Lab Task 4: Create feature columns for inputs.

Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
def categorical_fc(name, values):
    """Helper function to wrap categorical feature by indicator column.

    Args:
        name: str, name of feature.
        values: list, list of strings of categorical values.
    Returns:
        Categorical and indicator column of categorical feature.
    """
    cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
        key=name, vocabulary_list=values)
    ind_column = tf.feature_column.indicator_column(
        categorical_column=cat_column)

    return cat_column, ind_column


def create_feature_columns(nembeds):
    """Creates wide and deep dictionaries of feature columns from inputs.

    Args:
        nembeds: int, number of dimensions to embed categorical column down to.
    Returns:
        Wide and deep dictionaries of feature columns.
    """
    # TODO: Create deep feature columns for numeric features
    deep_fc = {
        colname: tf.feature_column.numeric_column(key=colname)
        for colname in ["mother_age", "gestation_weeks"]
    }

    # TODO: Create wide feature columns for categorical features
    wide_fc = {}
    is_male, wide_fc["is_male"] = categorical_fc(
        "is_male", ["True", "False", "Unknown"])
    plurality, wide_fc["plurality"] = categorical_fc(
        "plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
                      "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])

    # TODO: Bucketize the float fields. This makes them wide
    age_buckets = tf.feature_column.bucketized_column(
        source_column=deep_fc["mother_age"],
        boundaries=np.arange(15, 45, 1).tolist())
    wide_fc["age_buckets"] = tf.feature_column.indicator_column(
        categorical_column=age_buckets)

    gestation_buckets = tf.feature_column.bucketized_column(
        source_column=deep_fc["gestation_weeks"],
        boundaries=np.arange(17, 47, 1).tolist())
    wide_fc["gestation_buckets"] = tf.feature_column.indicator_column(
        categorical_column=gestation_buckets)

    # TODO: Cross all the wide cols, have to do the crossing before we one-hot
    crossed = tf.feature_column.crossed_column(
        keys=[age_buckets, gestation_buckets], hash_bucket_size=1000)

    # TODO: Embed cross and add to deep feature columns
    deep_fc["crossed_embeds"] = tf.feature_column.embedding_column(
        categorical_column=crossed, dimension=nembeds)

    return wide_fc, deep_fc
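Bucketizing turns a numeric value into a categorical index; `np.digitize` reproduces the idea with the same `mother_age` boundaries (TensorFlow's exact edge handling may differ slightly, so treat this as an illustration):

```python
import numpy as np

boundaries = np.arange(15, 45, 1)        # same boundaries as above
ages = np.array([14.0, 15.0, 29.5, 44.0, 50.0])
buckets = np.digitize(ages, boundaries)  # one bucket index per age
print(buckets)  # [ 0  1 15 30 30]

# A crossed column then combines two bucket indices into a single id,
# reduced into hash_bucket_size slots (illustrative cross, not TF's hash):
cross_id = (buckets[2] * 31 + 7) % 1000  # age bucket 15 x gestation bucket 7
print(cross_id)  # 472
```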
Lab Task 5: Create wide and deep model and output layer.

So we've figured out how to get our inputs ready for machine learning, but now we need to connect them to our desired output. Our model architecture is what links the two together. We need to create a wide and deep model now. The wide side will just be a linear regression or dense layer. For the deep side, let's create some hidden dense layers. All of this will end with a single dense output layer. This is regression, so make sure the output layer activation is correct and that the shape is right.
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
    """Creates model architecture and returns outputs.

    Args:
        wide_inputs: Dense tensor used as inputs to wide side of model.
        deep_inputs: Dense tensor used as inputs to deep side of model.
        dnn_hidden_units: List of integers where length is number of hidden
            layers and ith element is the number of neurons at ith layer.
    Returns:
        Dense tensor output from the model.
    """
    # Hidden layers for the deep side
    layers = [int(x) for x in dnn_hidden_units]
    deep = deep_inputs

    # TODO: Create DNN model for the deep side
    for layerno, numnodes in enumerate(layers):
        deep = tf.keras.layers.Dense(
            units=numnodes,
            activation="relu",
            name="dnn_{}".format(layerno + 1))(deep)
    deep_out = deep

    # TODO: Create linear model for the wide side
    wide_out = tf.keras.layers.Dense(
        units=10, activation="relu", name="linear")(wide_inputs)

    # Concatenate the two sides
    both = tf.keras.layers.concatenate(
        inputs=[deep_out, wide_out], name="both")

    # TODO: Create final output layer
    output = tf.keras.layers.Dense(
        units=1, activation="linear", name="weight")(both)

    return output
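Stripped of Keras, the wiring is: a stack of ReLU layers on the deep inputs, a single dense layer on the wide inputs, concatenation, and one linear output unit. A NumPy sketch with random weights (the input widths 71 and 5 match the DenseFeatures shapes in the model summary; everything else here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)


def wide_deep_forward(wide_x, deep_x, hidden=(64, 32)):
    # Deep side: stacked dense + ReLU layers.
    h = deep_x
    for units in hidden:
        h = np.maximum(h @ rng.normal(size=(h.shape[1], units)), 0.0)
    # Wide side: one dense layer (note the "linear" layer above uses ReLU).
    wide_out = np.maximum(wide_x @ rng.normal(size=(wide_x.shape[1], 10)), 0.0)
    # Concatenate both sides and map to a single regression output.
    both = np.concatenate([h, wide_out], axis=1)
    return both @ rng.normal(size=(both.shape[1], 1))


y = wide_deep_forward(np.ones((4, 71)), np.ones((4, 5)))
print(y.shape)  # (4, 1)
```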
Lab Task 6: Create custom evaluation metric.

We want to make sure that we have some useful way to measure model performance. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset; however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.
def rmse(y_true, y_pred):
    """Calculates RMSE evaluation metric.

    Args:
        y_true: tensor, true labels.
        y_pred: tensor, predicted labels.
    Returns:
        Tensor with value of RMSE between true and predicted labels.
    """
    # TODO: Calculate RMSE from true and predicted labels
    return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))
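As a sanity check on the formula, the same computation in plain NumPy with hand-picked values:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([2.0, 4.0, 6.0])

# errors are (1, 2, 3); MSE = (1 + 4 + 9) / 3 = 14/3
rmse_value = np.sqrt(np.mean((y_pred - y_true) ** 2))
print(rmse_value)  # ~2.1602
```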
_____no_output_____
Apache-2.0
notebooks/end-to-end-structured/labs/.ipynb_checkpoints/4c_keras_wide_and_deep_babyweight-checkpoint.ipynb
jfesteban/Google-ASL
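As a sanity check on the metric above, the same root-mean-squared error can be computed without TensorFlow. The sketch below is illustrative only (plain Python, not part of the lab); the toy labels and predictions are made up.

```python
import math

def rmse_plain(y_true, y_pred):
    """RMSE, mirroring tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))."""
    n = len(y_true)
    return math.sqrt(sum((p - t) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Hypothetical baby weights (pounds) and model predictions.
y_true = [7.0, 6.5, 8.0]
y_pred = [7.5, 6.0, 8.0]
print(round(rmse_plain(y_true, y_pred), 4))  # 0.4082
```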
Lab Task 7: Build wide and deep model tying all of the pieces together. Excellent! We've assembled all of the pieces; now we just need to tie them all together into a Keras Model. This is NOT a simple feedforward model with no branching, side inputs, etc. so we can't use Keras' Sequential Model API. We're instead going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3): """Builds wide and deep model using Keras Functional API. Returns: `tf.keras.models.Model` object. """ # Create input layers inputs = create_input_layers() # Create feature columns wide_fc, deep_fc = create_feature_columns(nembeds) # The constructor for DenseFeatures takes a list of numeric columns # The Functional API in Keras requires: LayerConstructor()(inputs) # TODO: Add wide and deep feature colummns wide_inputs = tf.keras.layers.DenseFeatures( feature_columns=wide_fc.values(), name="wide_inputs")(inputs) deep_inputs = tf.keras.layers.DenseFeatures( feature_columns=deep_fc.values(), name="deep_inputs")(inputs) # Get output of model given inputs output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units) # Build model and compile it all together model = tf.keras.models.Model(inputs=inputs, outputs=output) # TODO: Add custom eval metrics to list model.compile(optimizer="adam", loss="mse", metrics=["mse", rmse]) return model print("Here is our wide and deep architecture so far:\n") model = build_wide_deep_model() print(model.summary())
Here is our wide and deep architecture so far: Model: "model_1" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== gestation_weeks (InputLayer) [(None,)] 0 __________________________________________________________________________________________________ is_male (InputLayer) [(None,)] 0 __________________________________________________________________________________________________ mother_age (InputLayer) [(None,)] 0 __________________________________________________________________________________________________ plurality (InputLayer) [(None,)] 0 __________________________________________________________________________________________________ deep_inputs (DenseFeatures) (None, 5) 3000 gestation_weeks[0][0] is_male[0][0] mother_age[0][0] plurality[0][0] __________________________________________________________________________________________________ dnn_1 (Dense) (None, 64) 384 deep_inputs[0][0] __________________________________________________________________________________________________ wide_inputs (DenseFeatures) (None, 71) 0 gestation_weeks[0][0] is_male[0][0] mother_age[0][0] plurality[0][0] __________________________________________________________________________________________________ dnn_2 (Dense) (None, 32) 2080 dnn_1[0][0] __________________________________________________________________________________________________ linear (Dense) (None, 10) 720 wide_inputs[0][0] __________________________________________________________________________________________________ both (Concatenate) (None, 42) 0 dnn_2[0][0] linear[0][0] __________________________________________________________________________________________________ weight (Dense) (None, 1) 43 both[0][0] ================================================================================================== 
Total params: 6,227 Trainable params: 6,227 Non-trainable params: 0 __________________________________________________________________________________________________ None
Apache-2.0
notebooks/end-to-end-structured/labs/.ipynb_checkpoints/4c_keras_wide_and_deep_babyweight-checkpoint.ipynb
jfesteban/Google-ASL
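To make the data flow in the summary above concrete, here is a NumPy-only sketch of the same topology (random weights and a toy batch; the layer sizes echo the summary, but nothing here is trained): the deep side stacks relu layers, the wide side gets a single transform, and the two are concatenated before the final unit.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy dimensions standing in for the feature columns above.
wide_in = rng.normal(size=(4, 71))   # batch of 4, 71 wide (one-hot/crossed) features
deep_in = rng.normal(size=(4, 5))    # 5 dense/embedded deep features

# Deep side: stacked relu layers, like dnn_1 and dnn_2.
deep = relu(deep_in @ rng.normal(size=(5, 64)))
deep = relu(deep @ rng.normal(size=(64, 32)))

# Wide side: a single transform, like the layer named "linear".
wide = relu(wide_in @ rng.normal(size=(71, 10)))

# Concatenate both sides and map to one regression output.
both = np.concatenate([deep, wide], axis=1)   # shape (4, 42), matching "both" above
output = both @ rng.normal(size=(42, 1))      # shape (4, 1), one weight per example
print(both.shape, output.shape)
```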
We can visualize the wide and deep network using the Keras plot_model utility.
tf.keras.utils.plot_model( model=model, to_file="wd_model.png", show_shapes=False, rankdir="LR")
_____no_output_____
Apache-2.0
notebooks/end-to-end-structured/labs/.ipynb_checkpoints/4c_keras_wide_and_deep_babyweight-checkpoint.ipynb
jfesteban/Google-ASL
Run and evaluate model Lab Task 8: Train and evaluate. We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.
TRAIN_BATCH_SIZE = 32 NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around NUM_EVALS = 5 # how many times to evaluate # Enough to get a reasonable sample, but not so much that it slows down NUM_EVAL_EXAMPLES = 10000 # TODO: Load training dataset trainds = load_dataset( pattern="train*", batch_size=TRAIN_BATCH_SIZE, mode=tf.estimator.ModeKeys.TRAIN) # TODO: Load evaluation dataset evalds = load_dataset( pattern="eval*", batch_size=1000, mode=tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000) steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS) logdir = os.path.join( "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir=logdir, histogram_freq=1) # TODO: Fit model on training dataset and evaluate every so often history = model.fit( trainds, validation_data=evalds, epochs=NUM_EVALS, steps_per_epoch=steps_per_epoch, callbacks=[tensorboard_callback])
Train for 312 steps, validate for 10 steps Epoch 1/5 312/312 [==============================] - 5s 15ms/step - loss: 1.8696 - mse: 1.8696 - rmse: 1.2285 - val_loss: 1.2763 - val_mse: 1.2763 - val_rmse: 1.1294 Epoch 2/5 312/312 [==============================] - 3s 8ms/step - loss: 1.1673 - mse: 1.1673 - rmse: 1.0681 - val_loss: 1.0742 - val_mse: 1.0742 - val_rmse: 1.0363 Epoch 3/5 312/312 [==============================] - 3s 8ms/step - loss: 1.0982 - mse: 1.0982 - rmse: 1.0377 - val_loss: 1.1666 - val_mse: 1.1666 - val_rmse: 1.0801 Epoch 4/5 312/312 [==============================] - 3s 8ms/step - loss: 1.1077 - mse: 1.1077 - rmse: 1.0439 - val_loss: 1.0761 - val_mse: 1.0761 - val_rmse: 1.0371 Epoch 5/5 312/312 [==============================] - 3s 9ms/step - loss: 1.0656 - mse: 1.0656 - rmse: 1.0212 - val_loss: 1.3020 - val_mse: 1.3020 - val_rmse: 1.1409
Apache-2.0
notebooks/end-to-end-structured/labs/.ipynb_checkpoints/4c_keras_wide_and_deep_babyweight-checkpoint.ipynb
jfesteban/Google-ASL
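The step budget in the cell above is worth spelling out: because the training dataset repeats, an "epoch" here is an artificial unit sized so that NUM_EVALS evaluations cover roughly NUM_TRAIN_EXAMPLES examples. The arithmetic below reproduces the 312 steps seen in the training log.

```python
NUM_TRAIN_EXAMPLES = 10000 * 5   # example budget; the dataset repeats to cover it
TRAIN_BATCH_SIZE = 32
NUM_EVALS = 5

# One "epoch" is sized so that NUM_EVALS epochs consume the example budget.
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
total_steps = steps_per_epoch * NUM_EVALS
examples_seen = total_steps * TRAIN_BATCH_SIZE

print(steps_per_epoch)  # 312, matching the "312 steps" in the training log above
print(examples_seen)    # 49920; floor division drops the last partial block
```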
Visualize loss curve
# Plot nrows = 1 ncols = 2 fig = plt.figure(figsize=(10, 5)) for idx, key in enumerate(["loss", "rmse"]): ax = fig.add_subplot(nrows, ncols, idx+1) plt.plot(history.history[key]) plt.plot(history.history["val_{}".format(key)]) plt.title("model {}".format(key)) plt.ylabel(key) plt.xlabel("epoch") plt.legend(["train", "validation"], loc="upper left");
_____no_output_____
Apache-2.0
notebooks/end-to-end-structured/labs/.ipynb_checkpoints/4c_keras_wide_and_deep_babyweight-checkpoint.ipynb
jfesteban/Google-ASL
Save the model
OUTPUT_DIR = "babyweight_trained_wd" shutil.rmtree(OUTPUT_DIR, ignore_errors=True) EXPORT_PATH = os.path.join( OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S")) tf.saved_model.save( obj=model, export_dir=EXPORT_PATH) # with default serving function print("Exported trained model to {}".format(EXPORT_PATH)) !ls $EXPORT_PATH
assets saved_model.pb variables
Apache-2.0
notebooks/end-to-end-structured/labs/.ipynb_checkpoints/4c_keras_wide_and_deep_babyweight-checkpoint.ipynb
jfesteban/Google-ASL
__________________________
import numpy as np labels = np.load("data/frame_labels_avenue.npy") labels = np.reshape(labels,labels.shape[1]) noll = 0 ett = 0 for x in Y_test: if x == 0: noll += 1 else: ett +=1 print("Noll: ",noll) print("Ett: ",ett) from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(bilder,labels,test_size=0.2, random_state= 10) #nylabels = np.concatenate((labels,nollor)) np.save("data/frame_labels_ped2_2.npy",nylabels) bilder = bilder.reshape(bilder.shape[0],bilder.shape[1],bilder.shape[2],bilder.shape[3],1) from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() bilder = scaler.fit_transform(bilder) output = np.full((2550,1),0) ett = bilder[0,:,:,:] import tensorflow.keras as keras batch_size = 4 model = keras.Sequential() #model.add(keras.layers.Conv3D(input_shape = ,activation="relu",filters=64,kernel_size=3,padding="same")) model.add(keras.layers.Conv3D(input_shape=(240, 360, 3, 1),activation="relu",filters=64,kernel_size=3,padding="same")) model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1))) model.add(keras.layers.BatchNormalization()) model.add(keras.layers.Conv3D(activation="relu",filters=64,kernel_size=3,padding="same")) model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1))) model.add(keras.layers.Conv3D(activation="relu",filters=128,kernel_size=3,padding="same")) model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1))) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(128,activation="relu")) model.add(keras.layers.Dense(64,activation="relu")) model.add(keras.layers.Dense(10,activation="relu")) model.add(keras.layers.Dense(1,activation="sigmoid")) model.compile(optimizer="adam",loss="binary_crossentropy",metrics=["accuracy"]) model.summary() model = keras.Sequential() model.add(keras.layers.Conv3D(input_shape =(240, 360, 3, 1),activation="relu",filters=64,kernel_size=3,padding="same")) model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1)))
model.add(keras.layers.BatchNormalization()) model.add(keras.layers.Conv3D(activation="relu",filters=128,kernel_size=3,padding="same")) model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1))) model.add(keras.layers.Conv3D(activation="relu",filters=128,kernel_size=2,padding="same")) model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1))) model.add(keras.layers.Dense(64,activation="relu")) #model.add(keras.layers.GlobalAveragePooling3D()) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(256,activation="relu")) model.add(keras.layers.Dense(64,activation="relu")) model.add(keras.layers.Dense(10,activation="relu")) model.add(keras.layers.Dense(1,activation="sigmoid")) from tensorflow.keras.layers import Dense,Conv3D,MaxPooling3D,BatchNormalization,Flatten,Input, Add from tensorflow.keras.models import Model input = Input((240,360,3,1)) x = Conv3D(64,3,padding="same")(input) x = MaxPooling3D(pool_size=(3,3,3))(x) x = Flatten()(x) x = Dense(128)(x) #y = Dense(128)(input) y = Flatten()(input) y = Dense(128)(y) y = Dense(128)(y) x = Add()([x,y]) x = Dense(10)(x) x = Dense(1)(x) model = Model(inputs = input,outputs = x) model.compile() model.summary() from tensorflow.keras.utils import plot_model plot_model(model,show_shapes=True) from tensorflow.keras.utils import plot_model plot_model(model,show_shapes=True) with open('data//UCFCrime2Local//UCFCrime2Local//Train_split_AD.txt') as f: lines = f.readlines() import cv2 import numpy as np import os from pathlib import * path = "data/UFC" films = list() files = (x for x in Path(path).iterdir() if x.is_file()) for file in files: #print(str(file.name).split("_")[0], "is a file!") films.append(str(file.name).split("_")[0]) for x in range(len(lines)): if lines[x].strip() != films[x]: print(lines[x]) break import cv2 import numpy as np import os from pathlib import * path = "data//UCFCrime2Local//UCFCrime2Local//Txt annotations" files = (x for x in Path(path).iterdir() if x.is_file()) for file in files: films = list() 
name = file.name.split(".")[0] with open(file) as f: lines = f.readlines() for line in lines: lost = int(line.split(" ")[6]) if lost == 0: lost = 1 else: lost = 0 films.append(lost) films = np.array(films) np.save(os.path.join("data//UFC//training",name + ".npy"),films) #print(str(file.name).split("_")[0], "is a file!") #films.append(str(file.name).split(" ")[6]) import cv2 import numpy as np import os from pathlib import * file = "data//UCFCrime2Local//UCFCrime2Local//Txt annotations//Burglary099.txt" films = list() name = "Burglary099" with open(file) as f: lines = f.readlines() for line in lines: lost = int(line.split(" ")[6]) if lost == 0: lost = 1 else: lost = 0 films.append(lost) films = np.array(films) np.save(os.path.join("data//UFC//testing",name + ".npy"),films) import numpy as np assult = np.load("data//UFC//testing//NormalVideos004.npy") sub = os.listdir("data//UFC//training//frames") sub = os.listdir("data//UFC//testing//frames") import numpy as np for name in sub: if "Normal" in name: files = os.listdir(os.path.join("data//UFC//training//frames",name)) name = name.split("_")[0:2] name = name[0] + name[1] tom = np.zeros((len(files),),np.int8) np.save(os.path.join("data//UFC//training",name),tom) import tensorflow.keras as keras keras.models.load_model("flow_inception_i3d_kinetics_only_tf_dim_ordering_tf_kernels_no_top.h5") import math import tensorflow.keras as keras import cv2 import os import numpy as np from sklearn.model_selection import train_test_split from tensorflow import config from utils import Dataloader from sklearn.metrics import roc_auc_score , roc_curve from pathlib import * gpus = config.experimental.list_physical_devices('GPU') config.experimental.set_memory_growth(gpus[0], True) test_bilder = list() for folder in os.listdir("data//UFC//testing//frames"): path = os.path.join("data//UFC//testing//frames",folder) #bildmappar.append(folder) for img in os.listdir(path): bild = os.path.join(path,img) test_bilder.append(bild)
test_etiketter = list() path = "data//UFC//testing" testnings_ettiketter = (x for x in Path(path).iterdir() if x.is_file()) for ettiket in testnings_ettiketter: test_etiketter.append(np.load(ettiket)) test_etiketter = np.concatenate(test_etiketter,axis=0) batch_size = 16 test_gen = Dataloader(test_bilder,test_etiketter,batch_size) reconstructed_model = keras.models.load_model("modelUFC3D_4-ep004-loss0.367-val_loss0.421.tf") validation_steps = math.floor( len(test_bilder) / batch_size) y_score = reconstructed_model.predict(test_gen,verbose=1) auc = roc_auc_score(test_etiketter,y_score=y_score) print('AUC: ', auc*100, '%') with open('y_score.npy', 'wb') as f: np.save(f, y_score) from sklearn.metrics import RocCurveDisplay import matplotlib.pyplot as plt RocCurveDisplay.from_predictions(test_etiketter,y_score) plt.figure(figsize=(18, 6)) plt.get_figlabels() plt.show()
_____no_output_____
MIT
experiemnt1.ipynb
evinus/My-appproch-One
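The `roc_auc_score` call in the notebook above reduces to a rank statistic: the probability that a randomly chosen positive frame scores above a randomly chosen negative one. A minimal stdlib-only version (illustrative; ties count as half a win, and the toy labels and scores below are made up) looks like this:

```python
def auc_from_scores(labels, scores):
    """AUC as P(score of a positive > score of a negative); ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc_from_scores(labels, scores))  # 0.75, same as sklearn's roc_auc_score here
```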
Neural Network Example Build a two-hidden-layer fully connected neural network (a.k.a. multilayer perceptron) with TensorFlow. - Author: Aymeric Damien - Project: https://github.com/aymericdamien/TensorFlow-Examples/ Neural Network Overview MNIST Dataset Overview This example uses the MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28). ![MNIST Dataset](http://neuralnetworksanddeeplearning.com/images/mnist_100_digits.png) More info: http://yann.lecun.com/exdb/mnist/
from __future__ import print_function # Import MNIST data from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("/tmp/data/", one_hot=True) import tensorflow as tf # Parameters learning_rate = 0.1 num_steps = 500 batch_size = 128 display_step = 100 # Network Parameters n_hidden_1 = 256 # 1st layer number of neurons n_hidden_2 = 256 # 2nd layer number of neurons num_input = 784 # MNIST data input (img shape: 28*28) num_classes = 10 # MNIST total classes (0-9 digits) # tf Graph input X = tf.placeholder("float", [None, num_input]) Y = tf.placeholder("float", [None, num_classes]) # Store layers weight & bias weights = { 'h1': tf.Variable(tf.random_normal([num_input, n_hidden_1])), 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_hidden_2, num_classes])) } biases = { 'b1': tf.Variable(tf.random_normal([n_hidden_1])), 'b2': tf.Variable(tf.random_normal([n_hidden_2])), 'out': tf.Variable(tf.random_normal([num_classes])) } # Create model def neural_net(x): # Hidden fully connected layer with 256 neurons layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1']) # Hidden fully connected layer with 256 neurons layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']) # Output fully connected layer with a neuron for each class out_layer = tf.matmul(layer_2, weights['out']) + biases['out'] return out_layer # Construct model logits = neural_net(X) # Define loss and optimizer loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( logits=logits, labels=Y)) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) train_op = optimizer.minimize(loss_op) # Evaluate model (with test logits, for dropout to be disabled) correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(Y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value) init = tf.global_variables_initializer() # Start training with tf.Session() as sess: # Run the initializer sess.run(init) for step in range(1, num_steps+1): batch_x, batch_y = mnist.train.next_batch(batch_size) # Run optimization op (backprop) sess.run(train_op, feed_dict={X: batch_x, Y: batch_y}) if step % display_step == 0 or step == 1: # Calculate batch loss and accuracy loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x, Y: batch_y}) print("Step " + str(step) + ", Minibatch Loss= " + \ "{:.4f}".format(loss) + ", Training Accuracy= " + \ "{:.3f}".format(acc)) print("Optimization Finished!") # Calculate accuracy for MNIST test images print("Testing Accuracy:", \ sess.run(accuracy, feed_dict={X: mnist.test.images, Y: mnist.test.labels}))
Step 1, Minibatch Loss= 13208.1406, Training Accuracy= 0.266 Step 100, Minibatch Loss= 462.8610, Training Accuracy= 0.867 Step 200, Minibatch Loss= 232.8298, Training Accuracy= 0.844 Step 300, Minibatch Loss= 85.2141, Training Accuracy= 0.891 Step 400, Minibatch Loss= 38.0552, Training Accuracy= 0.883 Step 500, Minibatch Loss= 55.3689, Training Accuracy= 0.867 Optimization Finished! Testing Accuracy: 0.8729
MIT
TensorFlow-Examples/notebooks/3_NeuralNetworks/neural_network_raw.ipynb
elitej13/project-neural-ersatz
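Each hidden layer in `neural_net` above is just a matrix multiply plus a bias: `tf.add(tf.matmul(x, W), b)`. The NumPy sketch below reproduces one such layer on a toy batch to make the shapes concrete (random weights; nothing is learned here).

```python
import numpy as np

rng = np.random.default_rng(42)
batch, num_input, n_hidden_1 = 3, 784, 256

x = rng.normal(size=(batch, num_input))
W1 = rng.normal(size=(num_input, n_hidden_1))
b1 = rng.normal(size=(n_hidden_1,))

# Equivalent of layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = x @ W1 + b1
print(layer_1.shape)  # (3, 256): one 256-wide activation vector per example
```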
T81-558: Applications of Deep Neural Networks**Module 10: Time Series in Keras*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 10 Material* Part 10.1: Time Series Data Encoding for Deep Learning [[Video]](https://www.youtube.com/watch?v=dMUmHsktl04&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_1_timeseries.ipynb)* Part 10.2: Programming LSTM with Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=wY0dyFgNCgY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_2_lstm.ipynb)* **Part 10.3: Text Generation with Keras and TensorFlow** [[Video]](https://www.youtube.com/watch?v=6ORnRAz3gnA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_3_text_generation.ipynb)* Part 10.4: Image Captioning with Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=NmoW_AYWkb4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_4_captioning.ipynb)* Part 10.5: Temporal CNN in Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=i390g8acZwk&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_5_temporal_cnn.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow.
try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False
_____no_output_____
Apache-2.0
t81_558_class_10_3_text_generation.ipynb
tenyi257/t81_558_deep_learning
Part 10.3: Text Generation with LSTM Recurrent neural networks are also known for their ability to generate text. As a result, the output of the neural network can be free-form text. In this section, we will see how an LSTM can be trained on a textual document, such as classic literature, and learn to output new text that appears to be of the same form as the training material. If you train your LSTM on [Shakespeare](https://en.wikipedia.org/wiki/William_Shakespeare), it will learn to crank out new prose similar to what Shakespeare had written. Don't get your hopes up. You are not going to teach your deep neural network to write the next [Pulitzer Prize for Fiction](https://en.wikipedia.org/wiki/Pulitzer_Prize_for_Fiction). The prose generated by your neural network will be nonsensical. However, it will usually be nearly grammatically correct and of a similar style as the source training documents. A neural network generating nonsensical text based on literature may not seem useful at first glance. However, this technology gets so much interest because it forms the foundation for many more advanced technologies. The fact that the LSTM will typically learn human grammar from the source document opens a wide range of possibilities. You can use similar technology to complete sentences when a user is entering text. The ability to output free-form text becomes the foundation of many other technologies. In the next part, we will use this technique to create a neural network that can write captions for images to describe what is going on in the picture.
Additional Information The following are some of the articles that I found useful in putting this section together.* [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)* [Keras LSTM Generation Example](https://keras.io/examples/lstm_text_generation/) Character-Level Text Generation There are several different approaches to teaching a neural network to output free-form text. The most basic question is whether you wish the neural network to learn at the word or character level. In many ways, learning at the character level is the more interesting of the two. The LSTM is learning to construct its own words without even being shown what a word is. We will begin with character-level text generation. In the next module, we will see how we can use nearly the same technique to operate at the word level. We will implement word-level automatic captioning in the next module. We begin by importing the needed Python packages and defining the sequence length, named **maxlen**. Time-series neural networks always accept their input as a fixed-length array. Because you might not use all of the sequence elements, it is common to fill extra elements with zeros. You will divide the text into sequences of this length, and the neural network will train to predict what comes after this sequence.
from tensorflow.keras.callbacks import LambdaCallback from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.layers import LSTM from tensorflow.keras.optimizers import RMSprop from tensorflow.keras.utils import get_file import numpy as np import random import sys import io import requests import re
_____no_output_____
Apache-2.0
t81_558_class_10_3_text_generation.ipynb
tenyi257/t81_558_deep_learning
For this simple example, we will train the neural network on the classic children's book [Treasure Island](https://en.wikipedia.org/wiki/Treasure_Island). We begin by loading this text into a Python string and displaying the first 1,000 characters.
r = requests.get("https://data.heatonresearch.com/data/t81-558/text/"\ "treasure_island.txt") raw_text = r.text print(raw_text[0:1000])
The Project Gutenberg EBook of Treasure Island, by Robert Louis Stevenson This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.net Title: Treasure Island Author: Robert Louis Stevenson Illustrator: Milo Winter Release Date: January 12, 2009 [EBook #27780] Language: English *** START OF THIS PROJECT GUTENBERG EBOOK TREASURE ISLAND *** Produced by Juliet Sutherland, Stephen Blundell and the Online Distributed Proofreading Team at http://www.pgdp.net THE ILLUSTRATED CHILDREN'S LIBRARY _Treasure Island_ Robert Louis Stevenson _Illustrated by_ Milo Winter [Illustration] GRAMERCY BOOKS NEW YORK Foreword copyright © 1986 by Random House V
Apache-2.0
t81_558_class_10_3_text_generation.ipynb
tenyi257/t81_558_deep_learning
We will extract all unique characters from the text and sort them. This technique allows us to assign a unique ID to each character. Because we sorted the characters, these IDs should remain the same. If we add new characters to the original text, then the IDs would change. We build two dictionaries. The first, **char_indices**, is used to convert a character into its ID. The second, **indices_char**, converts an ID back into its character.
processed_text = raw_text.lower() processed_text = re.sub(r'[^\x00-\x7f]',r'', processed_text) print('corpus length:', len(processed_text)) chars = sorted(list(set(processed_text))) print('total chars:', len(chars)) char_indices = dict((c, i) for i, c in enumerate(chars)) indices_char = dict((i, c) for i, c in enumerate(chars))
corpus length: 397400 total chars: 60
Apache-2.0
t81_558_class_10_3_text_generation.ipynb
tenyi257/t81_558_deep_learning
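A tiny round trip shows why the two dictionaries built above are enough to move between text and integer IDs. This toy string stands in for the full corpus; the construction is the same as `char_indices` and `indices_char`.

```python
text = "treasure island"
chars = sorted(set(text))
char_indices = {c: i for i, c in enumerate(chars)}
indices_char = {i: c for i, c in enumerate(chars)}

encoded = [char_indices[c] for c in text]
decoded = "".join(indices_char[i] for i in encoded)
print(decoded == text)  # True: the two dictionaries are exact inverses
```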
We are now ready to build the actual sequences. Just like previous neural networks, there will be an $x$ and $y$. However, for the LSTM, $x$ and $y$ will both be sequences. The $x$ input will specify the sequences where $y$ are the expected output. The following code generates all possible sequences.
# cut the text in semi-redundant sequences of maxlen characters maxlen = 40 step = 3 sentences = [] next_chars = [] for i in range(0, len(processed_text) - maxlen, step): sentences.append(processed_text[i: i + maxlen]) next_chars.append(processed_text[i + maxlen]) print('nb sequences:', len(sentences)) sentences print('Vectorization...') x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool) y = np.zeros((len(sentences), len(chars)), dtype=np.bool) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): x[i, t, char_indices[char]] = 1 y[i, char_indices[next_chars[i]]] = 1 x.shape y.shape
_____no_output_____
Apache-2.0
t81_558_class_10_3_text_generation.ipynb
tenyi257/t81_558_deep_learning
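The windowing loop above is easy to check on a toy string: with `maxlen=10` and `step=3`, a 43-character sentence yields `len(range(0, 33, 3)) = 11` training windows, each labeled with the character that immediately follows it.

```python
text = "the quick brown fox jumps over the lazy dog"
maxlen, step = 10, 3

sentences, next_chars = [], []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i:i + maxlen])
    next_chars.append(text[i + maxlen])

print(len(sentences))                       # 11 windows from 43 characters
print(repr(sentences[0]), "->", next_chars[0])
```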
The dummy variables for $y$ are shown below.
y[0:10]
_____no_output_____
Apache-2.0
t81_558_class_10_3_text_generation.ipynb
tenyi257/t81_558_deep_learning
Next, we create the neural network. This neural network's primary feature is the LSTM layer, which allows the sequences to be processed.
# build the model: a single LSTM print('Build model...') model = Sequential() model.add(LSTM(128, input_shape=(maxlen, len(chars)))) model.add(Dense(len(chars), activation='softmax')) optimizer = RMSprop(lr=0.01) model.compile(loss='categorical_crossentropy', optimizer=optimizer) model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= lstm (LSTM) (None, 128) 96768 _________________________________________________________________ dense (Dense) (None, 60) 7740 ================================================================= Total params: 104,508 Trainable params: 104,508 Non-trainable params: 0 _________________________________________________________________
Apache-2.0
t81_558_class_10_3_text_generation.ipynb
tenyi257/t81_558_deep_learning
The LSTM will produce new text character by character. We will need to sample the correct letter from the LSTM predictions each time. The **sample** function accepts the following two parameters: * **preds** - The output neurons. * **temperature** - Controls how adventurous the sampling is: values near 0.0 are the most conservative, sticking to the most likely characters, while values of 1.0 and higher are riskier (willing to make spelling and other errors). The sample function below essentially performs a softmax on the neural network predictions. This causes each output neuron to become a probability of its particular letter.
def sample(preds, temperature=1.0): # helper function to sample an index from a probability array preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas)
_____no_output_____
Apache-2.0
t81_558_class_10_3_text_generation.ipynb
tenyi257/t81_558_deep_learning
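The deterministic heart of `sample()` is the temperature reweighting, separate from the random draw. The stdlib-only sketch below isolates that step on a made-up three-character distribution: temperature 1.0 leaves the probabilities unchanged, low temperatures sharpen them toward the most likely character, and high temperatures flatten them toward uniform.

```python
import math

def scale_with_temperature(preds, temperature):
    """The reweighting inside sample(): renormalized exp(log(p) / T)."""
    logs = [math.log(p) / temperature for p in preds]
    exps = [math.exp(l) for l in logs]
    total = sum(exps)
    return [e / total for e in exps]

preds = [0.5, 0.3, 0.2]
print(scale_with_temperature(preds, 1.0))  # unchanged (up to float error)
print(scale_with_temperature(preds, 0.2))  # sharpened toward the argmax
print(scale_with_temperature(preds, 2.0))  # flattened toward uniform
```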
Keras calls the following function at the end of each training epoch. The code generates sample text that visually demonstrates the neural network getting better at text generation. As the neural network trains, the generations should look more realistic.
def on_epoch_end(epoch, _): # Function invoked at end of each epoch. Prints generated text. print("******************************************************") print('----- Generating text after Epoch: %d' % epoch) start_index = random.randint(0, len(processed_text) - maxlen - 1) for temperature in [0.2, 0.5, 1.0, 1.2]: print('----- temperature:', temperature) generated = '' sentence = processed_text[start_index: start_index + maxlen] generated += sentence print('----- Generating with seed: "' + sentence + '"') sys.stdout.write(generated) for i in range(400): x_pred = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(sentence): x_pred[0, t, char_indices[char]] = 1. preds = model.predict(x_pred, verbose=0)[0] next_index = sample(preds, temperature) next_char = indices_char[next_index] generated += next_char sentence = sentence[1:] + next_char sys.stdout.write(next_char) sys.stdout.flush() print()
_____no_output_____
Apache-2.0
t81_558_class_10_3_text_generation.ipynb
tenyi257/t81_558_deep_learning
We are now ready to train. It can take up to an hour to train this network, depending on how fast your computer is. If you have a GPU available, please make sure to use it.
# Ignore useless W0819 warnings generated by TensorFlow 2.0. Hopefully can remove this ignore in the future. # See https://github.com/tensorflow/tensorflow/issues/31308 import logging, os logging.disable(logging.WARNING) os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" # Fit the model print_callback = LambdaCallback(on_epoch_end=on_epoch_end) model.fit(x, y, batch_size=128, epochs=60, callbacks=[print_callback])
Train on 132454 samples Epoch 1/60 128/132454 [..............................] - ETA: 35:39****************************************************** ----- Generating text after Epoch: 0 ----- temperature: 0.2 ----- Generating with seed: "im shouting. but you may suppose i pa" im shouting. but you may suppose i pa
Apache-2.0
t81_558_class_10_3_text_generation.ipynb
tenyi257/t81_558_deep_learning
Define a couple of helper functions
def get_within_between_distances(map_df, dm, col): filtered_dm, filtered_map = filter_dm_and_map(dm, map_df) groups = [] distances = [] map_dict = filtered_map[col].to_dict() for id_1, id_2 in itertools.combinations(filtered_map.index.tolist(), 2): row = [] if map_dict[id_1] == map_dict[id_2]: groups.append('Within') else: groups.append('Between') distances.append(filtered_dm[(id_1, id_2)]) groups = zip(groups, distances) distances_df = pd.DataFrame(data=list(groups), columns=['Groups', 'Distance']) return distances_df def filter_dm_and_map(dm, map_df): ids_to_exclude = set(dm.ids) - set(map_df.index.values) ids_to_keep = set(dm.ids) - ids_to_exclude filtered_dm = dm.filter(ids_to_keep) filtered_map = map_df.loc[ids_to_keep] return filtered_dm, filtered_map colors = sns.color_palette("YlGnBu", 100) sns.palplot(colors)
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
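To see what `get_within_between_distances` is doing, here is a toy version of the same within/between labeling over all sample pairs, using plain Python dicts (the IDs and distances are made up for illustration):

```python
import itertools

group_of = {'s1': 'A', 's2': 'A', 's3': 'B'}  # hypothetical sample -> group map
dist = {('s1', 's2'): 0.1, ('s1', 's3'): 0.9, ('s2', 's3'): 0.8}  # hypothetical distances

labeled = []
for id_1, id_2 in itertools.combinations(sorted(group_of), 2):
    # Pairs that share a group are 'Within'; all other pairs are 'Between'.
    label = 'Within' if group_of[id_1] == group_of[id_2] else 'Between'
    labeled.append((label, dist[(id_1, id_2)]))

print(labeled)  # [('Within', 0.1), ('Between', 0.9), ('Between', 0.8)]
```

The real helper does the same thing, but pulls groups from a mapping-file column and distances from a `skbio.DistanceMatrix`.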
Load mapping file and munge it
------------------------------
home = '/home/office-microbe-files'
map_fp = join(home, 'master_map_150908.txt')
sample_md = pd.read_csv(map_fp, sep='\t', index_col=0, dtype=str)
sample_md = sample_md[sample_md['16SITS'] == 'ITS']
sample_md = sample_md[sample_md['OfficeSample'] == 'yes']

replicate_ids = '''F2F.2.Ce.021
F2F.2.Ce.022
F2F.3.Ce.021
F2F.3.Ce.022
F2W.2.Ca.021
F2W.2.Ca.022
F2W.2.Ce.021
F2W.2.Ce.022
F3W.2.Ce.021
F3W.2.Ce.022
F1F.3.Ca.021
F1F.3.Ca.022
F1C.3.Ca.021
F1C.3.Ca.022
F1W.2.Ce.021
F1W.2.Ce.022
F1W.3.Dr.021
F1W.3.Dr.022
F1C.3.Dr.021
F1C.3.Dr.022
F2W.3.Dr.059
F3F.2.Ce.078'''.split('\n')

reps = sample_md[sample_md['Description'].isin(replicate_ids)]
reps = reps.drop(reps.drop_duplicates('Description').index).index
sample_md.drop(reps, inplace=True)
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Load alpha diversity
--------------------
alpha_div_fp = '/home/johnchase/office-project/office-microbes/notebooks/UNITE-analysis/core_div/core_div_open/arare_max999/alpha_div_collated/observed_species.txt'
alpha_div = pd.read_csv(alpha_div_fp, sep='\t', index_col=0)
alpha_div = alpha_div.T.drop(['sequences per sample', 'iteration'])
alpha_cols = [e for e in alpha_div.columns if '990' in e]
alpha_div = alpha_div[alpha_cols]

sample_md = pd.concat([sample_md, alpha_div], axis=1, join='inner')
sample_md['MeanAlpha'] = sample_md[alpha_cols].mean(axis=1)
sample_md['MedianAlpha'] = sample_md[alpha_cols].median(axis=1)

alpha_div = pd.read_csv(alpha_div_fp, sep='\t', index_col=0)
alpha_div = alpha_div.T.drop(['sequences per sample', 'iteration'])
alpha_cols = [e for e in alpha_div.columns if '990' in e]
alpha_div = alpha_div[alpha_cols]
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Add alpha diversity to map
--------------------------
sample_md = pd.concat([sample_md, alpha_div], axis=1, join='inner')
sample_md['MeanAlpha'] = sample_md[alpha_cols].mean(axis=1)
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Filter the samples so that only corresponding row 2, 3 samples are included
---------------------------------------------------------------------------
sample_md['NoRow'] = sample_md['Description'].apply(lambda x: x[:3] + x[5:])
row_df = sample_md[sample_md.duplicated('NoRow', keep=False)].copy()
row_df['SampleType'] = 'All Row 2/3 Pairs (n={0})'.format(int(len(row_df)/2))
plot_row_df = row_df[['Row', 'MeanAlpha', 'SampleType']]

sample_md_wall = row_df[row_df['PlateLocation'] != 'floor'].copy()
sample_md_wall['SampleType'] = 'Wall and Ceiling Pairs (n={0})'.format(int(len(sample_md_wall)/2))
plot_sample_md_wall = sample_md_wall[['Row', 'MeanAlpha', 'SampleType']]

sample_md_floor = row_df[row_df['PlateLocation'] == 'floor'].copy()
sample_md_floor['SampleType'] = 'Floor Pairs (n={0})'.format(int(len(sample_md_floor)/2))
plot_sample_md_floor = sample_md_floor[['Row', 'MeanAlpha', 'SampleType']]

plot_df = pd.concat([plot_row_df, plot_sample_md_wall, plot_sample_md_floor])

with plt.rc_context(dict(sns.axes_style("darkgrid"),
                         **sns.plotting_context("notebook", font_scale=2.5))):
    plt.figure(figsize=(20, 11))
    ax = sns.violinplot(x='SampleType', y='MeanAlpha', data=plot_df, hue='Row',
                        hue_order=['3', '2'], palette="YlGnBu")
    ax.set_xlabel('')
    handles, labels = ax.get_legend_handles_labels()
    ax.set_ylabel('OTU Counts')
    ax.set_title('OTU Counts')
    ax.legend(handles, ['Frequent', 'Infrequent'], title='Sampling Frequency')
    ax.get_legend().get_title().set_fontsize('15')
    plt.savefig('figure-3-its-A.svg', dpi=300)

row_2_values = list(row_df[(row_df['Row'] == '2')]['MeanAlpha'])
row_3_values = list(row_df[(row_df['Row'] == '3')]['MeanAlpha'])
obs_t, param_p_val, perm_t_stats, nonparam_p_val = mc_t_two_sample(row_2_values, row_3_values)
obs_t, param_p_val
print((obs_t, param_p_val), "row 2 mean: {0}, row 1 mean: {1}".format(np.mean(row_2_values), np.mean(row_3_values)))

row_2_values = list(sample_md_wall[(sample_md_wall['Row'] == '2')]['MeanAlpha'])
row_3_values = list(sample_md_wall[(sample_md_wall['Row'] == '3')]['MeanAlpha'])
obs_t, param_p_val, perm_t_stats, nonparam_p_val = mc_t_two_sample(row_2_values, row_3_values)
print((obs_t, param_p_val), "row 2 mean: {0}, row 1 mean: {1}".format(np.mean(row_2_values), np.mean(row_3_values)))

row_2_values = list(sample_md_floor[(sample_md_floor['Row'] == '2')]['MeanAlpha'])
row_3_values = list(sample_md_floor[(sample_md_floor['Row'] == '3')]['MeanAlpha'])
obs_t, param_p_val, perm_t_stats, nonparam_p_val = mc_t_two_sample(row_2_values, row_3_values)
print((obs_t, param_p_val), "row 2 mean: {0}, row 1 mean: {1}".format(np.mean(row_2_values), np.mean(row_3_values)))
(5.8449136803810715, 1.7233641180780523e-08) row 2 mean: 176.48000000000002, row 1 mean: 123.44017094017092
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
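The `NoRow` trick in the cell above pairs samples by deleting the row field from the sample Description, so row 2 and row 3 samples from the same location collapse onto the same key and can then be found with `duplicated()`. A quick illustration with one hypothetical pair of IDs:

```python
def no_row(description):
    # Drop characters 3-4 (the '.2' / '.3' row field) from e.g. 'F2F.2.Ce.021'.
    return description[:3] + description[5:]

print(no_row('F2F.2.Ce.021'))  # F2F.Ce.021
print(no_row('F2F.3.Ce.021'))  # F2F.Ce.021 -- same key, so the two rows pair up
```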
Beta Diversity!
---------------

Create beta diversity boxplots of within- and between-group distances for row. It may not make much sense to do this for all samples, as the location and/or city effect may drown out the row effect.

Load the distance matrix
------------------------
dm = skbio.DistanceMatrix.read(join(home, '/home/johnchase/office-project/office-microbes/notebooks/UNITE-analysis/core_div/core_div_open/bdiv_even999/binary_jaccard_dm.txt'))
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Run PERMANOVA and record within/between values on various categories
--------------------------------------------------------------------

All of these will be based on the row 2, 3 paired samples, though they may be filtered to avoid confounding variables.

Row distances
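PERMANOVA (called below via `permanova`) assesses group differences with a permutation p-value: the observed test statistic is compared against the same statistic recomputed under many random relabelings of the samples. As a stripped-down, framework-free sketch of that permutation logic (illustrative only — the real statistic is a pseudo-F computed on the distance matrix, not a mean difference):

```python
import random

def perm_p_value(values, labels, stat, n_perm=999, seed=0):
    rng = random.Random(seed)
    observed = stat(values, labels)
    hits = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        # Count permutations at least as extreme as the observed labeling.
        if stat(values, shuffled) >= observed:
            hits += 1
    # +1 correction counts the observed labeling itself.
    return (hits + 1) / (n_perm + 1)

def mean_diff(values, labels):
    a = [v for v, l in zip(values, labels) if l == 'A']
    b = [v for v, l in zip(values, labels) if l == 'B']
    return abs(sum(a) / len(a) - sum(b) / len(b))

vals = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
labs = ['A', 'A', 'A', 'B', 'B', 'B']
p = perm_p_value(vals, labs, mean_diff)
print(0 < p <= 1)
```

With `permutations=999` below, `permanova` reports a p-value built the same way, with a minimum attainable value of 1/1000.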
filt_map = row_df[(row_df['City'] == 'flagstaff') & (row_df['Run'] == '2')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
row_dists = get_within_between_distances(filt_map, filt_dm, 'Row')
row_dists['Category'] = 'Row (n=198)'
permanova(filt_dm, filt_map, column='Row', permutations=999)
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Plate location

We can use the same samples for this as the previous test.
plate_dists = get_within_between_distances(filt_map, filt_dm, 'PlateLocation')
plate_dists['Category'] = 'Plate Location (n=198)'
permanova(filt_dm, filt_map, column='PlateLocation', permutations=999)
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Run
filt_map = row_df[(row_df['City'] == 'flagstaff')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
run_dists = get_within_between_distances(filt_map, filt_dm, 'Run')
run_dists['Category'] = 'Run (n=357)'
permanova(filt_dm, filt_map, column='Run', permutations=999)
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Material
filt_map = row_df[(row_df['City'] == 'flagstaff') & (row_df['Run'] == '2')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
material_dists = get_within_between_distances(filt_map, filt_dm, 'Material')
material_dists['Category'] = 'Material (n=198)'
permanova(filt_dm, filt_map, column='Material', permutations=999)

all_dists = material_dists.append(row_dists).append(plate_dists).append(run_dists)

with plt.rc_context(dict(sns.axes_style("darkgrid"),
                         **sns.plotting_context("notebook", font_scale=1.8))):
    plt.figure(figsize=(20, 11))
    ax = sns.boxplot(x="Category", y="Distance", hue="Groups",
                     hue_order=['Within', 'Between'], data=all_dists,
                     palette=sns.color_palette(['#f1fabb', '#2259a6']))
    ax.set_ylim([0.9, 1.02])
    ax.set_xlabel('')
    ax.set_title('Binary-Jaccard')
    plt.legend(loc='upper right')
    plt.savefig('figure-3-its-B.svg', dpi=300)

dm = skbio.DistanceMatrix.read(join(home, '/home/johnchase/office-project/office-microbes/notebooks/UNITE-analysis/core_div/core_div_open/bdiv_even999/bray_curtis_dm.txt'))
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Row Distances
filt_map = row_df[(row_df['City'] == 'flagstaff') & (row_df['Run'] == '2')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
row_dists = get_within_between_distances(filt_map, filt_dm, 'Row')
row_dists['Category'] = 'Row (n=198)'
permanova(filt_dm, filt_map, column='Row', permutations=999)
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Plate Location
plate_dists = get_within_between_distances(filt_map, filt_dm, 'PlateLocation')
plate_dists['Category'] = 'Plate Location (n=198)'
permanova(filt_dm, filt_map, column='PlateLocation', permutations=999)
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Run
filt_map = row_df[(row_df['City'] == 'flagstaff')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
run_dists = get_within_between_distances(filt_map, filt_dm, 'Run')
run_dists['Category'] = 'Run (n=357)'
permanova(filt_dm, filt_map, column='Run', permutations=999)
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Material
filt_map = row_df[(row_df['City'] == 'flagstaff') & (row_df['Run'] == '2')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
material_dists = get_within_between_distances(filt_map, filt_dm, 'Material')
material_dists['Category'] = 'Material (n=198)'
permanova(filt_dm, filt_map, column='Material', permutations=999)

all_dists = material_dists.append(row_dists).append(plate_dists).append(run_dists)

with plt.rc_context(dict(sns.axes_style("darkgrid"),
                         **sns.plotting_context("notebook", font_scale=1.8))):
    plt.figure(figsize=(20, 11))
    ax = sns.boxplot(x="Category", y="Distance", hue="Groups",
                     hue_order=['Within', 'Between'], data=all_dists,
                     palette=sns.color_palette(['#f1fabb', '#2259a6']))
    ax.set_ylim([0.9, 1.02])
    ax.set_xlabel('')
    ax.set_title('Bray-Curtis')
    plt.legend(loc='upper right')
    plt.savefig('figure-3-its-C.svg', dpi=300)
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
ANCOM
-----
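The cell below applies `multiplicative_replacement` before running ANCOM because log-ratio methods cannot handle zero counts. The idea is to replace each zero with a small value δ and shrink the nonzero entries multiplicatively so each composition still sums to 1. A toy sketch of that idea (not skbio's implementation; the δ here is chosen arbitrarily for illustration):

```python
def mult_replace(composition, delta=0.01):
    # composition: proportions summing to 1, possibly containing zeros.
    n_zeros = sum(1 for x in composition if x == 0)
    # Zeros become delta; nonzeros are scaled down so the row still sums to 1.
    return [delta if x == 0 else x * (1 - n_zeros * delta) for x in composition]

row = mult_replace([0.5, 0.5, 0.0, 0.0])
print(row, abs(sum(row) - 1.0) < 1e-9)
```

After replacement every entry is strictly positive, so the log-ratios ANCOM relies on are defined.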
table_fp = join(home, 'core_div_out/table_even1000.txt')
table = pd.read_csv(table_fp, sep='\t', skiprows=1, index_col=0).T
table.index = table.index.astype(str)

table_ancom = table.loc[:, table[:3].sum(axis=0) > 0]
table_ancom = pd.DataFrame(multiplicative_replacement(table_ancom),
                           index=table_ancom.index,
                           columns=table_ancom.columns)
table_ancom.dropna(axis=0, inplace=True)

intersect_ids = set(row_md.index).intersection(set(table_ancom.index))
row_md_ancom = row_md.loc[intersect_ids, ]
table_ancom = table_ancom.loc[intersect_ids, ]

%time results = ancom(table_ancom, row_md_ancom['Row'])

sigs = results[results['reject'] == True]

tax_fp = '/home/office-microbe-files/pick_otus_out_97/uclust_assigned_taxonomy/rep_set_tax_assignments.txt'
taxa_map = pd.read_csv(tax_fp, sep='\t', index_col=0, names=['Taxa', 'none', 'none'])
taxa_map.drop('none', axis=1, inplace=True)
taxa_map.index = taxa_map.index.astype(str)

taxa_map.loc[sigs.sort_values('W').index.astype(str)]

pd.options.display.max_colwidth = 200
sigs

np.mean(w_dm.data)
np.median(w_dm.data)

w_dm = skbio.DistanceMatrix.read(join(home, '/home/johnchase/office-project/office-microbes/notebooks/UNITE-analysis/core_div/core_div_closed/bdiv_even1000/bray_curtis_dm.txt'))
np.mean(w_dm.data)
np.median(w_dm.data)

4980239/22783729
_____no_output_____
BSD-3-Clause
Final/Figure-3/figure-3-its.ipynb
gregcaporaso/office-microbes
Training with Chainer

[VGG](https://arxiv.org/pdf/1409.1556v6.pdf) is an architecture for deep convolution networks. In this example, we train a convolutional network to perform image classification using the CIFAR-10 dataset. CIFAR-10 consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. We'll train a model on SageMaker, deploy it to Amazon SageMaker hosting, and then classify images using the deployed model.

The Chainer script runs inside of a Docker container running on SageMaker. For more information about the Chainer container, see the sagemaker-chainer-containers repository and the sagemaker-python-sdk repository:

* https://github.com/aws/sagemaker-chainer-containers
* https://github.com/aws/sagemaker-python-sdk

For more on Chainer, please visit the Chainer repository:

* https://github.com/chainer/chainer

This notebook is adapted from the [CIFAR-10](https://github.com/chainer/chainer/tree/master/examples/cifar) example in the Chainer repository.
# Setup
from sagemaker import get_execution_role
import sagemaker

sagemaker_session = sagemaker.Session()

# This role retrieves the SageMaker-compatible role used by this Notebook Instance.
role = get_execution_role()
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples
Downloading training and test data

We use helper functions provided by `chainer` to download and preprocess the CIFAR10 data.
import chainer
from chainer.datasets import get_cifar10

train, test = get_cifar10()
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples
Uploading the data

We save the preprocessed data to the local filesystem, and then use the `sagemaker.Session.upload_data` function to upload our datasets to an S3 location. The return value `inputs` identifies the S3 location, which we will use when we start the Training Job.
import os
import shutil

import numpy as np

train_data = [element[0] for element in train]
train_labels = [element[1] for element in train]

test_data = [element[0] for element in test]
test_labels = [element[1] for element in test]

try:
    os.makedirs("/tmp/data/train_cifar")
    os.makedirs("/tmp/data/test_cifar")
    np.savez("/tmp/data/train_cifar/train.npz", data=train_data, labels=train_labels)
    np.savez("/tmp/data/test_cifar/test.npz", data=test_data, labels=test_labels)
    train_input = sagemaker_session.upload_data(
        path=os.path.join("/tmp", "data", "train_cifar"),
        key_prefix="notebook/chainer_cifar/train")
    test_input = sagemaker_session.upload_data(
        path=os.path.join("/tmp", "data", "test_cifar"),
        key_prefix="notebook/chainer_cifar/test")
finally:
    shutil.rmtree("/tmp/data")

print("training data at %s" % train_input)
print("test data at %s" % test_input)
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples
Writing the Chainer script to run on Amazon SageMaker Training

We need to provide a training script that can run on the SageMaker platform. The training script is very similar to a training script you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, such as:

* `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to. These artifacts are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.
* `SM_OUTPUT_DIR`: A string representing the filesystem path to write output artifacts to. Output artifacts may include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed and uploaded to S3 to the same S3 prefix as the model artifacts.

Supposing two input channels, 'train' and 'test', were used in the call to the Chainer estimator's ``fit()`` method, the following will be set, following the format `SM_CHANNEL_[channel_name]`:

* `SM_CHANNEL_TRAIN`: A string representing the path to the directory containing data in the 'train' channel.
* `SM_CHANNEL_TEST`: Same as above, but for the 'test' channel.

A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to `model_dir` so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an `argparse.ArgumentParser` instance.
For example, the script run by this notebook starts with the following:

```python
import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # retrieve the hyperparameters we set from the client (with some defaults)
    parser.add_argument('--epochs', type=int, default=50)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)

    # Data, model, and output directories. These are required.
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    num_gpus = int(os.environ['SM_NUM_GPUS'])

    # ... load from args.train and args.test, train a model, write model to args.model_dir.
```

Because the Chainer container imports your training script, you should always put your training code in a main guard (`if __name__ == '__main__':`) so that the container does not inadvertently run your training code at the wrong point in execution. For more information about training environment variables, please visit https://github.com/aws/sagemaker-containers.

Hosting and Inference

We use a single script to train and host the Chainer model. You can also write separate scripts for training and hosting.
In contrast with the training script, the hosting script requires you to implement functions with particular function signatures (or rely on defaults for those functions). These functions load your model, deserialize data sent by a client, obtain inferences from your hosted model, and serialize predictions back to a client:

* **`model_fn(model_dir)` (always required for hosting)**: This function is invoked to load model artifacts from those that were written into `model_dir` during training.

The script that this notebook runs uses the following `model_fn` function for hosting:

```python
def model_fn(model_dir):
    chainer.config.train = False
    model = L.Classifier(net.VGG(10))
    serializers.load_npz(os.path.join(model_dir, 'model.npz'), model)
    return model.predictor
```

* `input_fn(input_data, content_type)`: This function is invoked to deserialize prediction data when a prediction request is made. The return value is passed to `predict_fn`. `input_data` is the serialized input data in the body of the prediction request, and `content_type` is the MIME type of the data.

* `predict_fn(input_data, model)`: This function accepts the return value of `input_fn` as the `input_data` parameter and the return value of `model_fn` as the `model` parameter and returns inferences obtained from the model.

* `output_fn(prediction, accept)`: This function is invoked to serialize the return value from `predict_fn`, which is passed in as the `prediction` parameter, back to the SageMaker client in response to prediction requests.

`model_fn` is always required, but default implementations exist for the remaining functions. These default implementations deserialize a NumPy array, invoke the model's `__call__` method on the input data, and serialize the resulting NumPy array back to the client. This notebook relies on the default `input_fn`, `predict_fn`, and `output_fn` implementations.
See the Chainer sentiment analysis notebook for an example of how one can implement these hosting functions.

Please examine the script below. Training occurs behind the main guard, which prevents the function from being run when the script is imported, and `model_fn` loads the model saved into `model_dir` during training.

For more on writing Chainer scripts to run on SageMaker, or for more on the Chainer container itself, please see the following repositories:

* For writing Chainer scripts to run on SageMaker: https://github.com/aws/sagemaker-python-sdk
* For more on the Chainer container and default hosting functions: https://github.com/aws/sagemaker-chainer-containers
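To make the hosting call sequence concrete, here is a toy, framework-free sketch of how a serving container might chain `input_fn`, `predict_fn`, and `output_fn` for a JSON content type. This is illustrative only — the actual defaults in sagemaker-chainer-containers handle NPY-serialized NumPy arrays, and the stand-in "model" below replaces what `model_fn` would really load:

```python
import json

def input_fn(input_data, content_type):
    assert content_type == 'application/json'
    return json.loads(input_data)          # deserialize the request body

def predict_fn(input_data, model):
    return [model(x) for x in input_data]  # 'model' is any callable here

def output_fn(prediction, accept):
    assert accept == 'application/json'
    return json.dumps(prediction)          # serialize the response body

# A stand-in "model": the real one would come from model_fn(model_dir).
model = lambda x: x * 2

body = input_fn('[1, 2, 3]', 'application/json')
resp = output_fn(predict_fn(body, model), 'application/json')
print(resp)  # [2, 4, 6]
```

The container wires these together per request: deserialize, predict, serialize — exactly the contract described above.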
!pygmentize 'src/chainer_cifar_vgg_single_machine.py'
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples
Running the training script on SageMaker

To train a model with a Chainer script, we construct a ```Chainer``` estimator using the [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk). We pass in an `entry_point`, the name of a script that contains a couple of functions with certain signatures (`train` and `model_fn`), and a `source_dir`, a directory containing all code to run inside the Chainer container. This script will be run on SageMaker in a container that invokes these functions to train and load Chainer models.

The ```Chainer``` class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, and the training instance type. In this case we will run our training job on one `ml.p2.xlarge` instance.
from sagemaker.chainer.estimator import Chainer

chainer_estimator = Chainer(
    entry_point="chainer_cifar_vgg_single_machine.py",
    source_dir="src",
    role=role,
    sagemaker_session=sagemaker_session,
    train_instance_count=1,
    train_instance_type="ml.p2.xlarge",
    hyperparameters={"epochs": 50, "batch-size": 64},
)

chainer_estimator.fit({"train": train_input, "test": test_input})
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples